Is it possible to Bulk Insert using Google Cloud Datastore

We are migrating some data from our production database and would like to archive most of this data in the Cloud Datastore.

Eventually we would move all our data there; initially, however, we are focusing on the archived data as a test.

Our language of choice is Python, and we have been able to transfer data from MySQL to the Datastore row by row.

We have approximately 120 million rows to transfer, and a one-row-at-a-time approach will take a very long time.

Has anyone found any documentation or examples on how to bulk insert data into Cloud Datastore using Python?

Any comments or suggestions are appreciated. Thank you in advance.

asked Aug 31 '15 by ADL



1 Answer

There is no "bulk-loading" feature for Cloud Datastore that I know of today, so if you're expecting something like "upload a file with all your data and it'll appear in Datastore", I don't think you'll find anything.

You could always write a quick script using a local queue that parallelizes the work.

The basic gist would be:

  • Queuing script pulls data out of your MySQL instance and puts it on a queue.
  • (Many) Workers pull from this queue, and try to write the item to Datastore.
  • On failure, push the item back on the queue.

Datastore is massively parallelizable, so if you can write a script that will send off thousands of writes per second, it should work just fine. Further, your big bottleneck here will be network IO (after you send a request, you have to wait a bit to get a response), so lots of threads should get a pretty good overall write rate. However, it'll be up to you to make sure you split the work up appropriately among those threads.
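To make that concrete, below is a minimal sketch of the queue/worker approach, assuming the google-cloud-datastore client library and Python's standard queue and threading modules. The Kind name ("ArchivedRow"), the "id"/"payload" fields, and the fetch_rows() helper are hypothetical placeholders for your own schema and MySQL cursor loop:

    import queue
    import threading

    from google.cloud import datastore

    client = datastore.Client()          # project/credentials come from the environment
    work_queue = queue.Queue(maxsize=10000)
    NUM_WORKERS = 32                     # tune this; the bottleneck is network IO


    def worker():
        while True:
            row = work_queue.get()
            if row is None:              # sentinel: this worker is done
                return
            entity = datastore.Entity(
                key=client.key("ArchivedRow", row["id"]),
                exclude_from_indexes=("payload",),   # keep the value column unindexed
            )
            entity["payload"] = row["payload"]
            try:
                # For higher throughput, collect entities into lists of up to
                # ~500 and write them with client.put_multi() instead.
                client.put(entity)
            except Exception:
                work_queue.put(row)      # on failure, push the item back on the queue


    threads = [threading.Thread(target=worker, daemon=True) for _ in range(NUM_WORKERS)]
    for t in threads:
        t.start()

    # Queuing side: fetch_rows() stands in for your own MySQL cursor loop,
    # e.g. yielding dicts like {"id": 123, "payload": "...json..."}.
    for row in fetch_rows():
        work_queue.put(row)

    for _ in threads:
        work_queue.put(None)             # one sentinel per worker so each can exit
    for t in threads:
        t.join()

Note that a row which fails permanently will be retried forever in this sketch; in practice you would cap the retries or divert such rows to a dead-letter list.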


Now, that said, you should investigate whether Cloud Datastore is the right fit for your data and durability/availability needs. If you're taking 120m rows and loading them into Cloud Datastore for key-value style querying (i.e., you have a key and an unindexed value property which is just JSON data), then this might make sense, but loading your data will cost you ~$70 in this case (120m * $0.06/100k).

If you have properties (which will be indexed by default), this cost goes up substantially.

The cost of operations is $0.06 per 100k, but a single "write" may contain several "operations". For example, let's assume you have 120m rows in a table that has 5 columns (which equates to one Kind with 5 properties).

A single "new entity write" is equivalent to:

  • + 2 (1 x 2 write ops fixed cost per new entity)
  • + 10 (5 x 2 write ops per indexed property)
  • = 12 "operations" per entity.

So your actual cost to load this data is:

120m entities * 12 ops/entity * ($0.06/100k ops) = $864.00
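As a quick sanity check, here is the same arithmetic in Python (a sketch using the $0.06 per 100k operations price quoted above; current pricing may differ):

    ROWS = 120_000_000
    PRICE_PER_100K_OPS = 0.06

    def load_cost(ops_per_entity):
        return ROWS * ops_per_entity / 100_000 * PRICE_PER_100K_OPS

    # Key/value style, roughly one operation per entity (the ~$70 estimate above)
    print(load_cost(1))          # 72.0

    # 5 indexed properties: 2 fixed write ops + 5 * 2 per indexed property = 12 ops
    print(load_cost(2 + 5 * 2))  # 864.0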

answered Oct 11 '22 by JJ Geewax