Sync MongoDB Via ssh

Tags:

shell

ssh

mongodb

Unlike MySQL, I found it quite challenging to sync MongoDB files -
they cannot simply be piped back, since (if I understand correctly)
the dump tools don't send the data to stdout.

So, I'm trying to find another way, that doesn't involve two ssh calls.
What needs to be done is this:

  • Log into the ssh server
  • Export all MongoDB files
  • Compress them to gzip
  • Send them back to the local machine
  • Extract and import

The key thing here, though, is leaving no trace behind -
I don't want the compressed files to stay on the remote machine,
which would usually require another ssh login to clean up.
So something along the lines of "move files into archive" is the ideal solution,
if that could be later piped back to the local machine seamlessly.

I realize mongodump can connect to a remote server with credentials, but the port is closed at the moment, so I need the SSH method. Any other ideas would be welcome, BTW.
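(An aside on newer tooling, which wasn't available when this was asked: since MongoDB 3.2, mongodump can stream a gzipped archive to stdout and mongorestore can read one from stdin, so the five-step list above collapses into a single ssh pipe with nothing written to the remote disk. A sketch - SERVER1 and DATABASENAME are placeholders, and the function name is mine:)

```shell
#!/bin/sh
# Single-pipe sync: the dump streams over ssh, nothing touches the remote disk.
# Requires MongoDB tools 3.2+ on both ends.
sync_mongo() {
    remote="$1"; db="$2"
    # Dump on the remote straight to stdout, restore locally from stdin
    ssh "$remote" "mongodump --archive --gzip --db $db" \
        | mongorestore --archive --gzip --drop
}

# Example (substitute your own ssh host/alias and database name):
# sync_mongo SERVER1 DATABASENAME
```

Note --drop replaces the local copy of each collection, mirroring the "extract and import" step.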

Edit - 11.06.14

Since this question seems to be somewhat popular, I'd like to share a script that evolved from this question's answers, and other resources, during the last year (credit where credit is due).
The script basically manages a sync from/to a remote server, for any supported type of db (postgres, mysql and mongo for the time being).
It does make a few assumptions, like the root user having no password for the db, but that can be changed according to need.

The script can be found here: https://github.com/iwfmp/zsh/blob/master/scripts/db/db-sync

Devon Ville asked May 18 '13
1 Answer

You can accomplish this with SSH Tunneling, setting up your remote MongoDB instance to run on one of your local ports. By default, MongoDB runs on 27017, so in the example below, I've chosen to map my remote MongoDB instance to my local 27018 port.

If you're trying to copy a database from SERVER1 to LOCALHOST, you could run this command on your LOCALHOST:

ssh -L27018:localhost:27017 SERVER1

(Obviously replace SERVER1 with your actual server or ssh alias)

This opens an SSH connection to SERVER1, but also maps the port 27018 on LOCALHOST to the remote port 27017 on SERVER1. Don't close that SSH connection, and now try to connect to MongoDB on your localhost machine with port 27018, like so:

mongo --port 27018

You'll notice this is now the data on SERVER1, except you're accessing it from your local machine.
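(A side note, not from the original answer: if you'd rather not hold an interactive shell open for the tunnel, ssh can background it itself - -f backgrounds ssh after authentication and -N skips running any remote command. SERVER1 is still a placeholder:)

```shell
# -f backgrounds ssh after auth; -N means "no remote command", tunnel only
ssh -f -N -L 27018:localhost:27017 SERVER1

# The tunnel now lives in the background; connect through it as before
mongo --port 27018

# When done, kill the backgrounded tunnel
pkill -f '27018:localhost:27017'
```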

Just running MongoDB normally:

mongo (or mongo --port 27017)

will be your local machine.

Now, since you technically have (on your LOCALHOST, where you ran the SSH tunnel):

  • MongoDB (LOCALHOST) on 27017
  • MongoDB (SERVER1) on 27018

You can just use the db.copyDatabase() function inside MongoDB (LOCALHOST) to copy over data.

FROM LOCALHOST ON PORT 27017 (Executing on live will DROP YOUR DATA)

// Use the right DB
use DATABASENAME;
// Drop the existing data on LOCALHOST
db.dropDatabase();
// Copies the entire database from 27018
db.copyDatabase("DATABASENAME", "DATABASENAME", "localhost:27018");

You should be able to wrap this all up into a shell script that can execute all of these commands for you. I have one myself, but it actually has a few extra steps that would probably make it a bit more confusing :)

Doing this, and using MongoDB's native db.copyDatabase() function will prevent you from having to dump/zip/restore. Of course, if you still want to go that route, it wouldn't be too hard to run mongodump, export the data, tar/gzip it, then use scp TARGETSERVER:/path/to/file /local/path/to/file to pull it down and run a mongorestore on it.

Just seems like more work!
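(For completeness, the dump/tar/scp/restore route just mentioned might look roughly like this - a sketch, where TARGETSERVER, DATABASENAME, the /tmp paths and the function name are all placeholders of mine. It also deletes the remote copies, per the "leave no trace" requirement from the question:)

```shell
#!/bin/sh
# The dump -> compress -> scp -> restore route, with remote cleanup.
pull_dump_via_scp() {
    host="$1"; db="$2"
    # Dump and compress on the remote
    ssh "$host" "mongodump --db $db --out /tmp/dump && tar czf /tmp/dump.tar.gz -C /tmp dump"
    # Copy the archive down to the local machine
    scp "$host:/tmp/dump.tar.gz" .
    # Remove the remote copies (the no-trace requirement)
    ssh "$host" "rm -rf /tmp/dump /tmp/dump.tar.gz"
    # Extract and restore locally, replacing existing data
    tar xzf dump.tar.gz
    mongorestore --drop "dump/$db"
}

# Example: pull_dump_via_scp TARGETSERVER DATABASENAME
```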

Edit - Here are an SH file and a JS file that together make a shell script you can run. Run these on your LOCALHOST - don't run them on live, or the db.dropDatabase() will happen on live. Put the two files in the same folder, replace YOURSERVERNAME in pull-db.sh with your domain/ip/ssh alias, and in pull-db.js change DBNAMEHERE to whatever your database name is.

I normally create a folder called scripts in my projects, and using Textmate, I just have to hit ⌘+R while having pull-db.sh open to edit in order to execute it.

pull-db.sh

ssh -L27018:localhost:27017 YOURSERVERNAME '
    echo "Connected on Remote End, sleeping for 10";
    sleep 10;
    exit' &
echo "Waiting 5 sec on local";
sleep 5;
echo "Connecting to Mongo and piping in script";
cat pull-db.js | mongo

pull-db.js

use DBNAMEHERE;
db.dropDatabase();
use DBNAMEHERE;
db.copyDatabase("DBNAMEHERE","DBNAMEHERE","localhost:27018");

I added some extra code to the shell script to echo out what it's doing (sorta). The sleep timers in the script are just to give the SSH connections time to get connected before the next line is run. Basically, here's what happens:

  1. First line of the code creates the tunnel on your machine, and sends the ECHO, SLEEP, then EXIT to the remote SSH session.
  2. It then waits 5 seconds, which allows the SSH session in step 1 to connect.
  3. Then we pipe the pull-db.js file into the local mongo shell. (Step #1 should be done within 5 sec...)
  4. The pull-db.js should be running in mongo now, and the SSH session from Step #1 has probably run for 10 seconds after its connection opened, so the EXIT is sent to its session. The command is issued, HOWEVER, the SSH session will actually stay open until the activity from Step #3 is complete.
  5. As soon as your pull-db.js script finishes pulling all of your data from the remote server, the EXIT command issued in Step #1 on the remote server is finally allowed to close the connection, unbinding 27018 on your localhost.

You should now have all of the data from your remote database in your localhost.
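(Incidentally - an aside, not part of the original script - the sleep-based handshake can be tightened with ssh's -f flag: ssh only goes to the background after the forwarding is actually established, which removes the first race, and it will stay alive past the sleep for as long as forwarded connections are still in use:)

```shell
# -f backgrounds ssh only once the tunnel is up, so no "wait 5 sec" guess;
# ssh exits after "sleep 10" AND once all forwarded connections have closed
ssh -f -L 27018:localhost:27017 YOURSERVERNAME sleep 10
mongo < pull-db.js
```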

Jesta answered Oct 06 '22