 

Downloading an entire S3 bucket?

I noticed that there does not seem to be an option to download an entire s3 bucket from the AWS Management Console.

Is there an easy way to grab everything in one of my buckets? I was thinking about making the root folder public, using wget to grab it all, and then making it private again but I don't know if there's an easier way.

asked Dec 28 '11 by rugbert

People also ask

How do I download a whole S3 bucket?

To download an entire bucket to your local file system, use the AWS CLI sync command, passing it the S3 bucket as the source and a directory on your file system as the destination, e.g. aws s3 sync s3://YOUR_BUCKET . (the trailing dot is the current directory). The sync command recursively copies the contents of the source to the destination.

How do I download an entire aws folder?

If you only want to download the bucket from AWS, first install the AWS CLI on your machine. In a terminal, change to the directory where you want the files downloaded and run: aws s3 sync s3://bucket-name .

Can I download multiple files from S3?

The S3 service has no meaningful limit on simultaneous downloads (several hundred downloads at a time are easily possible), and there is no policy setting related to this, but the S3 console only allows you to select one file for downloading at a time.
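
If you need to grab several specific objects at once without the console, a few lines of boto3 (the successor to the boto library mentioned in the answer below) will do it. This is only a minimal sketch: the bucket name and keys are placeholders, and it assumes your AWS credentials are already configured.

import boto3

s3 = boto3.client("s3")

bucket = "mybucket"                        # placeholder bucket name
keys = ["reports/a.csv", "reports/b.csv"]  # placeholder object keys

for key in keys:
    local_name = key.split("/")[-1]        # save each object under its base file name
    s3.download_file(bucket, key, local_name)
    print(f"downloaded s3://{bucket}/{key} to {local_name}")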


2 Answers

AWS CLI

See the "AWS CLI Command Reference" for more information.

AWS recently released their Command Line Tools, which work much like boto and can be installed using

sudo easy_install awscli 

or

sudo pip install awscli 

Once installed, you can then simply run:

aws s3 sync s3://<source_bucket> <local_destination> 

For example:

aws s3 sync s3://mybucket . 

will download all the objects in mybucket to the current directory.

And will output:

download: s3://mybucket/test.txt to test.txt
download: s3://mybucket/test2.txt to test2.txt

This will download all of your files using a one-way sync. It will not delete any existing files in your current directory unless you specify --delete, and it won't change or delete any files on S3.

You can also do S3 bucket to S3 bucket, or local to S3 bucket sync.

Check out the documentation and other examples.
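
Since the CLI works much like boto, here is a rough boto3 equivalent that downloads a whole bucket programmatically. It is only a sketch (a plain copy, not an incremental sync); the bucket name and destination directory are placeholders, and it assumes your credentials are already configured.

import os
import boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("mybucket")       # placeholder bucket name
dest = "."                           # local destination directory

for obj in bucket.objects.all():     # iterates (and paginates) over every key in the bucket
    target = os.path.join(dest, obj.key)
    os.makedirs(os.path.dirname(target) or ".", exist_ok=True)
    if not obj.key.endswith("/"):    # skip zero-byte "folder" placeholder keys
        bucket.download_file(obj.key, target)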

While the above example shows how to download a full bucket, you can also download a folder recursively by running:

aws s3 cp s3://BUCKETNAME/PATH/TO/FOLDER LocalFolderName --recursive 

This instructs the CLI to recursively download all files and folder keys under the PATH/TO/FOLDER prefix in the BUCKETNAME bucket.
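
If you would rather do the folder-only download in code, boto3 can filter on the key prefix. Again just a sketch, reusing the placeholder names from the command above:

import os
import boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("BUCKETNAME")                  # placeholder, as in the command above
prefix = "PATH/TO/FOLDER/"                        # trailing slash avoids matching similarly named prefixes

for obj in bucket.objects.filter(Prefix=prefix):  # only keys under the prefix
    target = os.path.join("LocalFolderName", os.path.relpath(obj.key, prefix))
    os.makedirs(os.path.dirname(target) or ".", exist_ok=True)
    if not obj.key.endswith("/"):                 # skip zero-byte "folder" placeholder keys
        bucket.download_file(obj.key, target)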

answered Oct 08 '22 by Layke


You can use s3cmd to download your bucket:

s3cmd --configure
s3cmd sync s3://bucketnamehere/folder /destination/folder

Another tool you can use is rclone. Here is a code sample from the Rclone documentation:

rclone sync /home/local/directory remote:bucket 

answered Oct 08 '22 by Phil M.