I'm doing some work for a client that has 2 separate AWS accounts. We need to move all the files in an S3 bucket in one account to a new bucket in the 2nd account.
We thought that s3cmd would allow this, using the format:
s3cmd cp s3://bucket1 s3://bucket2 --recursive
However, this only allows me to use the keys of one account; I can't specify the credentials of the 2nd account.
Is there a way to do this without downloading the files and re-uploading them to the 2nd account?
Amazon S3 uses a global bucket namespace: each bucket name must be unique across all AWS accounts in all the AWS Regions within a partition, so a single set of credentials can address buckets in both accounts by name.
If an object you can't copy between buckets is owned by another account, the object owner can grant the bucket owner full control of it. Once the bucket owner owns the object, the bucket policy applies to the object.
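As a sketch of that hand-over step: the object owner can apply a bucket-owner-full-control ACL to the object with the AWS CLI's `s3api put-object-acl`. The bucket and key names below are placeholders, and the script only prints the command (a dry run) rather than calling AWS:

```shell
# Placeholder bucket/key names; run with the object owner's credentials.
BUCKET="bucket2"
KEY="path/to/object"

# Applying this canned ACL grants the bucket owner full control of the
# object, after which the destination bucket's policy applies to it.
CMD="aws s3api put-object-acl --bucket ${BUCKET} --key ${KEY} --acl bucket-owner-full-control"

# Dry run: print the command instead of executing it.
echo "$CMD"
```

Substitute real names and drop the `echo` to actually apply the ACL.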
You don't have to open permissions to everyone. Use the bucket policies below on the source and destination buckets to copy from a bucket in one account to another using an IAM user.
Bucket to Copy from: SourceBucket
Bucket to Copy to: DestinationBucket
Source AWS Account ID: XXXX-XXXX-XXXX
Source IAM User: src-iam-user
The policy below means the IAM user XXXX-XXXX-XXXX:src-iam-user has:
- s3:ListBucket and s3:GetObject privileges on SourceBucket/*
- s3:ListBucket and s3:PutObject privileges on DestinationBucket/*
On the SourceBucket the policy should be like:
{
  "Id": "Policy1357935677554",
  "Statement": [
    {
      "Sid": "Stmt1357935647218",
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::SourceBucket",
      "Principal": {"AWS": "arn:aws:iam::XXXXXXXXXXXX:user/src-iam-user"}
    },
    {
      "Sid": "Stmt1357935676138",
      "Action": ["s3:GetObject"],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::SourceBucket/*",
      "Principal": {"AWS": "arn:aws:iam::XXXXXXXXXXXX:user/src-iam-user"}
    }
  ]
}
On the DestinationBucket the policy should be:
{
  "Id": "Policy1357935677555",
  "Statement": [
    {
      "Sid": "Stmt1357935647218",
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::DestinationBucket",
      "Principal": {"AWS": "arn:aws:iam::XXXXXXXXXXXX:user/src-iam-user"}
    },
    {
      "Sid": "Stmt1357935676138",
      "Action": ["s3:PutObject"],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::DestinationBucket/*",
      "Principal": {"AWS": "arn:aws:iam::XXXXXXXXXXXX:user/src-iam-user"}
    }
  ]
}
The command to run is then: s3cmd cp s3://SourceBucket/File1 s3://DestinationBucket/File1
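To copy the whole bucket rather than one file, the same setup works with s3cmd's --recursive flag, run as the IAM user both bucket policies grant access to. The bucket names below are the placeholders from the policies above, and the script only prints the command (a dry run) so you can check it before running:

```shell
# Placeholder bucket names from the policies above; substitute your own.
SRC_BUCKET="SourceBucket"
DST_BUCKET="DestinationBucket"

# s3cmd reads src-iam-user's keys from ~/.s3cfg (set up via `s3cmd --configure`);
# --recursive copies every key from source to destination bucket-to-bucket.
CMD="s3cmd cp --recursive s3://${SRC_BUCKET}/ s3://${DST_BUCKET}/"

# Dry run: print the command instead of executing it.
echo "$CMD"
```

Drop the `echo` once the policies are in place and the keys are configured.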
Bandwidth inside AWS is not billed, so you could save some money and time by doing it all from a machine inside AWS, as long as the buckets are in the same region.
As for doing it without the files touching down on a computer somewhere: probably not. One exception: since AWS does bulk uploads from hard drives you mail to them, they might do the same for a bucket-to-bucket transfer.