 

Automatically sync two Amazon S3 buckets, besides s3cmd?

Is there another automated way of syncing two Amazon S3 buckets besides using s3cmd? Maybe Amazon has this as an option? The environment is Linux, and every day I would like to sync new and deleted files to another bucket. I hate the thought of keeping all my eggs in one basket.

asked Sep 19 '11 14:09 by novice18

People also ask

What is the difference between S3 sync and S3 copy?

aws s3 cp will copy all files, even if they already exist in the destination. It also will not delete files from your destination when they are deleted from the source. aws s3 sync looks at the destination before copying and only copies over files that are new or updated.
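To make the distinction concrete, a hedged sketch; the bucket names are placeholders, and both commands need working AWS credentials:

```shell
# Copies everything recursively, even objects already present
# at the destination:
aws s3 cp s3://source-bucket s3://backup-bucket --recursive

# Copies only new or changed objects; --delete additionally removes
# objects from the destination that were deleted from the source:
aws s3 sync s3://source-bucket s3://backup-bucket --delete
```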

What enables automatic asynchronous copying of objects across buckets in different AWS regions?

Replication enables automatic, asynchronous copying of objects across Amazon S3 buckets.

How can a company configure automatic asynchronous copying of objects in Amazon S3 buckets across regions?

With Amazon S3 Replication, you can configure Amazon S3 to automatically replicate S3 objects across different AWS Regions by using S3 Cross-Region Replication (CRR) or between buckets in the same AWS Region by using S3 Same-Region Replication (SRR).
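Replication is configured per bucket: both buckets must have versioning enabled, and S3 needs an IAM role it may assume to replicate on your behalf. A minimal sketch, with the role ARN and bucket names as placeholders:

```shell
# replication.json -- a minimal rule replicating all objects
# to the backup bucket (placeholder account ID, role, and bucket):
# {
#   "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
#   "Rules": [{
#     "Status": "Enabled",
#     "Priority": 1,
#     "Filter": {},
#     "DeleteMarkerReplication": {"Status": "Disabled"},
#     "Destination": {"Bucket": "arn:aws:s3:::backup-bucket"}
#   }]
# }

# Attach the rule to the source bucket:
aws s3api put-bucket-replication \
    --bucket source-bucket \
    --replication-configuration file://replication.json
```

If the destination bucket is in another Region this acts as Cross-Region Replication; if it is in the same Region, Same-Region Replication.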


2 Answers

You can use the standard AWS CLI to do the sync. Something like:

aws s3 sync s3://bucket1/folder1 s3://bucket2/folder2

http://aws.amazon.com/cli/
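Since the question asks for a daily sync that also propagates deletions, the `--delete` flag plus a cron entry covers it; bucket names and the CLI path below are placeholders for your setup:

```shell
# crontab entry (crontab -e): run every day at 02:00.
# --delete removes objects from the backup bucket that no longer
# exist in the source bucket.
0 2 * * * /usr/local/bin/aws s3 sync s3://source-bucket s3://backup-bucket --delete
```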

answered Oct 04 '22 03:10 by Franz Fahrenkrog Petermann

S3 buckets != baskets

From their site:

Data Durability and Reliability

Amazon S3 provides a highly durable storage infrastructure designed for mission-critical and primary data storage. Objects are redundantly stored on multiple devices across multiple facilities in an Amazon S3 Region. To help ensure durability, Amazon S3 PUT and COPY operations synchronously store your data across multiple facilities before returning SUCCESS. Once stored, Amazon S3 maintains the durability of your objects by quickly detecting and repairing any lost redundancy. Amazon S3 also regularly verifies the integrity of data stored using checksums. If corruption is detected, it is repaired using redundant data. In addition, Amazon S3 calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data.

Amazon S3’s standard storage is:

  • Backed with the Amazon S3 Service Level Agreement.
  • Designed to provide 99.999999999% durability and 99.99% availability of objects over a given year.
  • Designed to sustain the concurrent loss of data in two facilities.

Amazon S3 provides further protection via Versioning. You can use Versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. This allows you to easily recover from both unintended user actions and application failures. By default, requests will retrieve the most recently written version. Older versions of an object can be retrieved by specifying a version in the request. Storage rates apply for every version stored.
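Versioning is a per-bucket switch. A sketch with a placeholder bucket name and version ID:

```shell
# Turn on versioning so overwrites and deletes keep prior versions:
aws s3api put-bucket-versioning \
    --bucket my-bucket \
    --versioning-configuration Status=Enabled

# Later, retrieve a specific older version of an object:
aws s3api get-object --bucket my-bucket --key file.txt \
    --version-id VERSION_ID file.txt
```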

That's very reliable.

answered Oct 04 '22 03:10 by blahdiblah