Unlike MySQL, I found it quite challenging to sync MongoDB files -
they cannot be piped back, since the dumps don't go to stdout
(if I understand correctly).
So I'm trying to find another way, one that doesn't involve two SSH calls.
What needs to be done is this:
The key thing here, though, is leaving no trace behind -
I don't want the compressed files to stay on the remote machine,
since cleaning them up would usually require another SSH login.
So something along the lines of "move files into archive" is the ideal solution,
if it could then be piped back to the local machine seamlessly.
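The "move files into archive and pipe it back" idea can be done in one pass with tar: archive to stdout, extract on the other end, then delete the source. Below is a minimal local simulation of that step (all paths are made up for the demo; on the real remote machine the archiving half would run inside a single ssh call, e.g. `ssh SERVER1 'tar czf - -C /remote/path dump'`):

```shell
set -e
# Stand-ins for the remote dump directory and the local destination.
rm -rf /tmp/demo-remote /tmp/demo-local
mkdir -p /tmp/demo-remote/dump /tmp/demo-local
echo "some data" > /tmp/demo-remote/dump/collection.bson

# Archive to stdout and extract on the "other side" in one pipe --
# nothing compressed is ever written at the source.
tar czf - -C /tmp/demo-remote dump | tar xzf - -C /tmp/demo-local

# Remove the originals so no trace is left behind.
rm -rf /tmp/demo-remote
```

Over SSH, the same pipeline needs only one login, because the cleanup (`rm -rf`) can be chained after the tar inside the same remote command.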
I realize mongodump can connect to a remote server with credentials, but the port is closed at the moment, so I need the SSH method. Any other ideas would be welcome, BTW.
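For completeness: if the mongodump/mongorestore tools on both ends are version 3.2 or newer, the whole transfer can be done in a single SSH call with nothing written to disk on the remote side, because mongodump's --archive flag streams to stdout when no file name is given. SERVER1 and DATABASENAME are placeholders; this is a sketch, not a drop-in command:

```shell
ssh SERVER1 "mongodump --archive --gzip --db DATABASENAME" \
  | mongorestore --archive --gzip --drop
```

The --drop flag replaces the local copy of the database, so double-check which machine you are running this on before using it.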
Since this question seems to be somewhat popular, I'd like to share a script that evolved from this question's answers and other resources over the last year (credit where credit is due).
The script basically manages a sync from/to a remote server for any of the supported database types (PostgreSQL, MySQL, and MongoDB for the time being).
It does make a few assumptions, such as the root user having no password for the db, but that can be changed as needed.
The script can be found here: https://github.com/iwfmp/zsh/blob/master/scripts/db/db-sync
You can accomplish this with SSH Tunneling, setting up your remote MongoDB instance to run on one of your local ports. By default, MongoDB runs on 27017, so in the example below, I've chosen to map my remote MongoDB instance to my local 27018 port.
If you're trying to copy a database from SERVER1 to LOCALHOST, you could run this command on your LOCALHOST:
ssh -L27018:localhost:27017 SERVER1
(Obviously replace SERVER1 with your actual server or ssh alias)
This opens an SSH connection to SERVER1, but also maps the port 27018 on LOCALHOST to the remote port 27017 on SERVER1. Don't close that SSH connection, and now try to connect to MongoDB on your localhost machine with port 27018, like so:
mongo --port 27018
You'll notice this is now the data on SERVER1, except you're accessing it from your local machine.
Just running MongoDB normally:
mongo
(or mongo --port 27017)
will connect to your local machine.
Now, on your LOCALHOST (where you ran the SSH tunnel), you technically have two MongoDB instances reachable: your local one on port 27017 and the remote one on port 27018.
You can just use the db.copyDatabase() function inside MongoDB (on LOCALHOST) to copy over data.
FROM LOCALHOST ON PORT 27017 (executing this on the live server will DROP YOUR DATA):
// Use the right DB
use DATABASENAME;
// Drop the existing data on LOCALHOST
db.dropDatabase();
// Copy the entire database from 27018
db.copyDatabase("DATABASENAME", "DATABASENAME", "localhost:27018");
You should be able to wrap this all up into a shell script that can execute all of these commands for you. I have one myself, but it actually has a few extra steps that would probably make it a bit more confusing :)
Doing this, and using MongoDB's native db.copyDatabase() function, will save you from having to dump/zip/restore. Of course, if you still want to go that route, it wouldn't be too hard to run mongodump, export the data, tar/gzip it, then use scp TARGETSERVER:/path/to/file /local/path/to/file to pull it down and run a mongorestore on it.
Just seems like more work!
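For reference, that longer route might look something like the sketch below (hostnames, paths, and the database name are all placeholders). Note that it needs a second SSH call just to clean up the remote archive, which is exactly what the question wanted to avoid:

```shell
# Dump, compress, and remove the dump directory on the remote host
ssh TARGETSERVER "mongodump --db DATABASENAME --out /tmp/dump \
  && tar czf /tmp/dump.tar.gz -C /tmp dump \
  && rm -rf /tmp/dump"

# Pull the archive down, then delete it from the remote host
scp TARGETSERVER:/tmp/dump.tar.gz /tmp/dump.tar.gz
ssh TARGETSERVER "rm /tmp/dump.tar.gz"

# Unpack and restore locally (--drop replaces existing local data)
tar xzf /tmp/dump.tar.gz -C /tmp
mongorestore --drop /tmp/dump
```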
Edit - here's an SH and a JS file that together make a shell script you can run this with. Run these on your LOCALHOST; don't run them on live, or it'll do the db.dropDatabase() on live. Put these two files in the same folder, replace YOURSERVERNAME in pull-db.sh with your domain/IP/SSH alias, and then in pull-db.js change DBNAMEHERE to whatever your database name is.
I normally create a folder called scripts in my projects, and using TextMate I just have to hit ⌘+R while pull-db.sh is open in the editor to execute it.
pull-db.sh
ssh -L27018:localhost:27017 YOURSERVERNAME '
  echo "Connected on Remote End, sleeping for 10";
  sleep 10;
  exit' &
echo "Waiting 5 sec on local";
sleep 5;
echo "Connecting to Mongo and piping in script";
cat pull-db.js | mongo
pull-db.js
use DBNAMEHERE;
db.dropDatabase();
use DBNAMEHERE;
db.copyDatabase("DBNAMEHERE", "DBNAMEHERE", "localhost:27018");
I added some extra code to the shell script to echo out what it's doing (sorta). The sleep timers in the script are just to give the SSH connection time to get established before the next line is run. Basically, here's what happens:
1. The SSH tunnel to YOURSERVERNAME is opened in the background and kept alive for 10 seconds.
2. The local side waits 5 seconds for the tunnel to come up.
3. pull-db.js is piped into the local mongo shell, which drops the local database and copies the remote one over the tunnel.
You should now have all of the data from your remote database in your localhost.