
MongoDB to BigQuery

What is the best way to export data from MongoDB hosted on mLab to Google BigQuery?

Initially, I am trying to do a one-time load from MongoDB to BigQuery, and later on I am thinking of using Pub/Sub for real-time data flow to BigQuery.

I need help with the first one-time load from MongoDB to BigQuery.

asked Feb 10 '17 by sam


People also ask

What is BigQuery not good for?

You need to understand that BigQuery cannot be used as a substitute for a relational database; it is oriented toward running analytical queries, not simple CRUD operations and queries.

Is BigQuery a SQL or NoSQL?

BigQuery is a fully managed, serverless SQL data warehouse that allows for speedy SQL queries and interactive analysis of large datasets (on the order of terabytes or petabytes).

Does BigQuery support NoSQL?

BigQuery is a hybrid system that lets you store data in columns, but it reaches into the NoSQL world with additional features, such as the RECORD type and nested fields. You can also have a 2 MB STRING column in which you can store a raw document, such as a JSON document.
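
For example, once a table with a repeated RECORD column is loaded, nested fields can be flattened with UNNEST in standard SQL. This is only a rough sketch; the dataset, table, and field names follow the mongodb_sample table built in the answer below and are placeholders:

# Sketch: query nested/repeated fields with standard SQL
# (YOUR_DATASET.mongodb_sample and the statuses column are assumptions
# taken from the answer below)
bq query --use_legacy_sql=false \
    'SELECT _id, s.status, s.score
     FROM YOUR_DATASET.mongodb_sample, UNNEST(statuses) AS s
     LIMIT 10'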


1 Answer

In my opinion, the best practice is to build your own extractor. That can be done in the language of your choice, and you can extract to either CSV or JSON; a rough sketch is shown below.
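
As a minimal sketch of such an extractor (not the exact approach used in the rest of this answer), the mongo shell itself can stream documents out as newline-delimited JSON, which is the format BigQuery expects for JSON loads. The host, database, and collection names below follow the example in this answer and are placeholders; it assumes simple field types like the sample document (string _id, dates, numbers):

# Minimal "roll your own" extractor sketch (placeholder host/db/collection):
# prints one JSON document per line so BigQuery can ingest it as
# NEWLINE_DELIMITED_JSON. Assumes simple field types as in the sample document.
mongo --quiet "db-01:27017/sample" --eval '
    db.status.find().forEach(function (doc) {
        print(JSON.stringify(doc));   // Date fields become ISO-8601 strings
    });
' > ~/export.ndjson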

But if you are looking for a fast way, and your data is not huge and can fit on one server, then I recommend using mongoexport. Let's assume you have a simple document structure such as the one below:

{
    "_id" : "tdfMXH0En5of2rZXSQ2wpzVhZ",
    "statuses" : [ 
        {
            "status" : "dc9e5511-466c-4146-888a-574918cc2534",
            "score" : 53.24388894
        }
    ],
    "stored_at" : ISODate("2017-04-12T07:04:23.545Z")
}

Then you need to define your BigQuery Schema (mongodb_schema.json) such as:

$ cat > mongodb_schema.json <<EOF
[
    { "name": "_id", "type": "STRING" },
    { "name": "stored_at", "type": "RECORD", "fields": [
        { "name": "date", "type": "STRING" }
    ]},
    { "name": "statuses", "type": "RECORD", "mode": "REPEATED", "fields": [
        { "name": "status", "type": "STRING" },
        { "name": "score", "type": "FLOAT" }
    ]}
]
EOF

Now the fun part starts :-) extracting data as JSON from your MongoDB. Let's assume you have a cluster with a replica set named statuses, your database is sample, and your collection is status.

mongoexport \
    --host statuses/db-01:27017,db-02:27017,db-03:27017 \
    -vv \
    --db "sample" \
    --collection "status" \
    --type "json" \
    --limit 100000 \
    --out ~/sample.json

As you can see above, I limit the output to 100k records, because I recommend you run a sample export and load it to BigQuery before doing it for all your data. After running the above command you should have your sample data in sample.json, BUT there is a $date field which will cause an error when loading to BigQuery. To fix that, we can use sed to replace it with a simple field name:

# Fix Date field to make it compatible with BQ
sed -i 's/"\$date"/"date"/g' sample.json

Now you can compress it, upload it to Google Cloud Storage (GCS), and then load it into BigQuery using the following commands:

# Compress for faster load
gzip sample.json

# Move to GCloud
gsutil mv ./sample.json.gz gs://your-bucket/sample/sample.json.gz

# Load to BQ
bq load \
    --source_format=NEWLINE_DELIMITED_JSON \
    --max_bad_records=999999 \
    --ignore_unknown_values=true \
    --encoding=UTF-8 \
    --replace \
    "YOUR_DATASET.mongodb_sample" \
    "gs://your-bucket/sample/*.json.gz" \
    "mongodb_schema.json"

If everything was okay, go back, remove --limit 100000 from the mongoexport command, and re-run the commands above to load everything instead of the 100k sample.
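
Before the full run, it may be worth sanity-checking the loaded sample, for example by counting the rows in the target table (dataset and table names as used above):

# Quick sanity check on the loaded sample table
bq query --use_legacy_sql=false \
    'SELECT COUNT(*) AS total_rows FROM YOUR_DATASET.mongodb_sample'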

ALTERNATIVE SOLUTION:

If you want more flexibility and performance is not your concern, then you can use the mongo CLI tool as well. This way you can write your extract logic in JavaScript, execute it against your data, and then send the output to BigQuery. Here is what I did for the same process, but using JavaScript to output CSV so I can load it into BigQuery more easily:

# Export Logic in JavaScript
cat > export-csv.js <<EOF
// Page through the collection in chunks of "size" documents and print one
// CSV line per nested status entry. Increase maxCount to export more pages.
var size = 100000;
var maxCount = 1;
for (var x = 0; x < maxCount; x = x + 1) {
    var recToSkip = x * size;
    db.status.find().skip(recToSkip).limit(size).forEach(function (record) {
        var row = record._id + "," + record.stored_at.toISOString();
        record.statuses.forEach(function (l) {
            print(row + "," + l.status + "," + l.score);
        });
    });
}
EOF

# Execute on Mongo CLI
_MONGO_HOSTS="db-01:27017,db-02:27017,db-03:27017/sample?replicaSet=statuses"
mongo --quiet \
    "${_MONGO_HOSTS}" \
    export-csv.js \
    | split -l 500000 --filter='gzip > $FILE.csv.gz' - sample_

# Move all split files to Google Cloud Storage
gsutil -m mv ./sample_* gs://your-bucket/sample/

# Load files to BigQuery
bq load \
    --source_format=CSV \
    --max_bad_records=999999 \
    --ignore_unknown_values=true \
    --encoding=UTF-8 \
    --replace \
    "YOUR_DATASET.mongodb_sample" \
    "gs://your-bucket/sample/sample_*.csv.gz" \
    "ID,StoredDate:DATETIME,Status,Score:FLOAT"

TIP: In the above script I did a small trick by piping the output to split, so the output is written to multiple files with the sample_ prefix. During the split it also gzips the output, so it is easier to load to GCS.

answered Sep 30 '22 by Qorbani