Under the new pricing scheme of Google App Engine, I got a surprising pricing table, as shown below.
The culprit is a huge increase in "Datastore Read Operations" within just a few hours, even though there were fewer than 50 calls to my DownloadServlet.
DownloadServlet simply reads a blob (usually less than 1 MB) from the datastore and returns it to the user. Is there anything I can do to optimize my code so that I don't hit the free quota limit so quickly?
You're doing a lot of reads because you've broken your files up into 1MB chunks in the datastore. As a result, you have to do one read per chunk, and because you're not using key names or IDs, you're also doing a query for each, further depleting your quota.
Store your data in the blobstore instead.
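For example, a minimal sketch of what a Blobstore-backed download servlet could look like on App Engine (Java). The request parameter name "blob-key" is an assumption for illustration; the key point is that BlobstoreService.serve() streams the blob to the client without consuming datastore read operations for the file contents.

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.google.appengine.api.blobstore.BlobKey;
import com.google.appengine.api.blobstore.BlobstoreService;
import com.google.appengine.api.blobstore.BlobstoreServiceFactory;

// Sketch of a download servlet that serves a file stored in the Blobstore
// by its BlobKey instead of reading chunks from the datastore.
public class DownloadServlet extends HttpServlet {
    private final BlobstoreService blobstoreService =
            BlobstoreServiceFactory.getBlobstoreService();

    @Override
    public void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Assumption: the blob key is passed as a request parameter named "blob-key".
        BlobKey blobKey = new BlobKey(req.getParameter("blob-key"));
        // serve() streams the blob directly to the client, so the servlet
        // never loads the file into memory or touches the datastore.
        blobstoreService.serve(blobKey, resp);
    }
}
```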
If the data that you read from the datastore is relatively static (e.g. text for a blog entry), you could consider caching the data in the memcache.
There is no guarantee how long the data will remain in memcache, so you need to be prepared to re-fetch it from the datastore whenever the cached entry has been evicted, but the savings in datastore read ops would be quite considerable. A sketch of this read-through pattern follows below.
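Here is a minimal sketch of that read-through caching pattern using App Engine's MemcacheService (Java). The class name, the "content" property name, and the one-hour expiration are assumptions for illustration; only a cache miss costs a datastore read operation.

```java
import com.google.appengine.api.datastore.Blob;
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.EntityNotFoundException;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.memcache.Expiration;
import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;

// Hypothetical helper: caches blob bytes in memcache and falls back to a
// single get-by-key datastore read on a cache miss.
public class CachedBlobLoader {
    private final DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
    private final MemcacheService memcache = MemcacheServiceFactory.getMemcacheService();

    public byte[] loadBlob(Key key) throws EntityNotFoundException {
        String cacheKey = "blob:" + key.toString();
        byte[] cached = (byte[]) memcache.get(cacheKey);
        if (cached != null) {
            return cached;  // served from memcache: no datastore read op
        }
        // Cache miss: exactly one datastore read operation (get by key, no query).
        Entity entity = datastore.get(key);
        // Assumption: the blob bytes are stored in a property named "content".
        byte[] data = ((Blob) entity.getProperty("content")).getBytes();
        // Cache for an hour; memcache may evict the entry earlier, so callers
        // must always be prepared to fall back to the datastore.
        memcache.put(cacheKey, data, Expiration.byDeltaSeconds(3600));
        return data;
    }
}
```

Fetching by key as above also avoids the per-chunk query mentioned earlier, since a get-by-key is cheaper than a query followed by a fetch.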