 

How to change permission recursively to folder with AWS s3 or AWS s3api

I am trying to grant permissions to an existing account in s3.

The bucket is owned by the account, but the data was copied from another account's bucket.

When I try to grant permissions with the command:

aws s3api put-object-acl --bucket <bucket_name> --key <folder_name> --profile <original_account_profile> --grant-full-control emailaddress=<destination_account_email>

I receive the error:

An error occurred (NoSuchKey) when calling the PutObjectAcl operation: The specified key does not exist.

whereas if I run it on a single file, the command succeeds.

How can I make it work for a full folder?

asked Oct 04 '17 by gc5

People also ask

How do I grant access to a specific directory in an S3 bucket?

If the IAM user and S3 bucket belong to the same AWS account, then you can grant the user access to a specific bucket folder using an IAM policy. As long as the bucket policy doesn't explicitly deny the user access to the folder, you don't need to update the bucket policy if access is granted by the IAM policy.
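For illustration, a minimal sketch of such an IAM policy, assuming a hypothetical bucket named example-bucket and a folder prefix folder-name/ (substitute your own names):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::example-bucket",
      "Condition": {"StringLike": {"s3:prefix": "folder-name/*"}}
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::example-bucket/folder-name/*"
    }
  ]
}

The first statement allows listing only under the prefix; the second allows reading and writing objects under it.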

How do I change permissions on S3 bucket?

To set ACL permissions for a bucket: Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/ . In the Buckets list, choose the name of the bucket that you want to set permissions for. Choose Permissions. Under Access control list, choose Edit.
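The same can be done from the CLI; a hedged sketch using put-bucket-acl, where the bucket name and grantee email are placeholders:

aws s3api get-bucket-acl --bucket <bucket_name>
aws s3api put-bucket-acl --bucket <bucket_name> --grant-full-control emailaddress=<grantee_email>

Note this sets the ACL on the bucket itself, not on the objects inside it.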

Why can't I access a specific folder or Amazon S3 bucket?

Check the following permissions for any settings that are denying your access to the prefix or object: Ownership of the prefix or object. Restrictions in the bucket policy. Restrictions in your AWS Identity and Access Management (IAM) user policy.
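If you prefer the CLI for these checks, a few commands that can help (bucket and key are placeholders); this is a sketch, not an exhaustive audit:

aws s3api get-bucket-policy --bucket <bucket_name>            # bucket policy restrictions
aws s3api get-bucket-acl --bucket <bucket_name>               # bucket-level grants
aws s3api get-object-acl --bucket <bucket_name> --key <key>   # object ownership and grants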


3 Answers

This can only be achieved by using pipes. Try:

aws s3 ls s3://bucket/path/ --recursive | awk '{cmd="aws s3api put-object-acl --acl bucket-owner-full-control --bucket bucket --key "$4; system(cmd)}'
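Note that awk's $4 will truncate keys that contain spaces. If that is a concern, one possible variation (same placeholder bucket and path as above) is to list the keys with s3api instead:

aws s3api list-objects-v2 --bucket bucket --prefix path/ --query 'Contents[].Key' --output text \
  | tr '\t' '\n' \
  | while IFS= read -r key; do
      # apply the ACL to each key, spaces and all
      aws s3api put-object-acl --acl bucket-owner-full-control --bucket bucket --key "$key"
    done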
answered Oct 09 '22 by Saswata Chakravarty


The other answers are ok, but the FASTEST way to do this is to use the aws s3 cp command with the option --metadata-directive REPLACE, like this:

aws s3 cp --recursive --acl bucket-owner-full-control s3://bucket/folder s3://bucket/folder --metadata-directive REPLACE

This gives copy speeds of between 50 MiB/s and 80 MiB/s.

John R's suggestion in the comments was to use a 'dummy' option such as --storage-class STANDARD. While this works, it only gave me copy speeds between 5 MiB/s and 11 MiB/s.

The inspiration for trying this came from AWS's support article on the subject: https://aws.amazon.com/premiumsupport/knowledge-center/s3-object-change-anonymous-ownership/

NOTE: If you encounter 'access denied' for some of your objects, this is likely because you are using AWS credentials for the bucket-owning account, whereas you need to use credentials for the account the files were copied from.
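To spot-check the result, you can inspect a single object's ACL afterwards; a quick sketch with placeholder names:

aws s3api get-object-acl --bucket <bucket_name> --key <folder_name>/<file_name>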

answered Oct 09 '22 by Stretch


You will need to run the command individually for every object.

You might be able to short-cut the process by using:

aws s3 cp --acl bucket-owner-full-control --metadata Key=Value --profile <original_account_profile> s3://bucket/path s3://bucket/path

That is, you copy the files to themselves, but with the added ACL that grants permissions to the bucket owner.

If you have sub-directories, then add --recursive.
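For example, the recursive form of the command above would look roughly like this (same placeholder bucket, path, and profile):

aws s3 cp --recursive --acl bucket-owner-full-control --metadata Key=Value --profile <original_account_profile> s3://bucket/path s3://bucket/path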

answered Oct 09 '22 by John Rotenstein