"s3cmd get" rewrites local files

I'm trying to download an S3 directory to my local machine with s3cmd, using the command:

s3cmd sync --skip-existing s3://bucket_name/remote_dir ~/local_dir

But if I restart the download after an interruption, s3cmd doesn't skip the local files it downloaded earlier and rewrites them instead. What is wrong with the command?

asked Apr 12 '12 by art.zhitnik

2 Answers

I had the same problem and found the solution in comment #38 from William Denniss at http://s3tools.org/s3cmd-sync

If you have:

$ s3cmd sync --verbose s3://mybucket myfolder

Change it to:

$ s3cmd sync --verbose s3://mybucket/ myfolder/   # note the trailing slash

Then the MD5 hashes are compared and everything works correctly. --skip-existing works as well.

To recap: neither --skip-existing nor the MD5 check happens if you use the first command, and both work if you use the second. (I made a mistake in my previous post, as I was testing with two different directories.)
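
Applied to the command in the question, the fix would look like this (a sketch reusing bucket_name, remote_dir and local_dir from the question; note that with the trailing slash the contents of remote_dir are synced into local_dir rather than a remote_dir subdirectory being created inside it):

s3cmd sync --skip-existing s3://bucket_name/remote_dir/ ~/local_dir/   # trailing slashes enable the MD5 comparison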

answered Sep 18 '22 by Alex F


Use boto-rsync instead. https://github.com/seedifferently/boto_rsync

It correctly syncs only new/changed files from S3 to the local directory.
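
A minimal invocation might look like the following (a sketch assuming the bucket and paths from the question, and that the package installs from PyPI under the name boto_rsync; the source/destination argument order follows the project's README):

pip install boto_rsync
boto-rsync s3://bucket_name/remote_dir/ ~/local_dir/   # copies only files that are new or changed locally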

answered Sep 18 '22 by Joe Van Dyk