No, an AWS::S3::BucketPolicy can only have one PolicyDocument.
A bucket policy is a resource-based AWS Identity and Access Management (IAM) policy. You add a bucket policy to a bucket to grant other AWS accounts or IAM users access permissions for the bucket and the objects in it. Object permissions apply only to the objects that the bucket owner creates.
By default, all Amazon S3 buckets and objects are private. Only the resource owner, which is the AWS account that created the bucket, can access it. The resource owner can, however, choose to grant access permissions to other resources and users.
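As a concrete sketch of why one document is rarely a limitation: a single PolicyDocument can still hold multiple Statement entries. The bucket name, account ID, and statement names below are placeholders, not from the original question. Applying the policy with aws-cli might look like:
# One policy document, two statements (names and ARNs are illustrative)
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*"
    },
    {
      "Sid": "AllowListFromOtherAccount",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::mybucket"
    }
  ]
}
EOF
# Attach the single document to the bucket
aws s3api put-bucket-policy --bucket mybucket --policy file://policy.json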
There are now three ways to get this done: via the AWS Console, via aws-cli, or via the s3cmd command-line tool.
Using the AWS Console is now the recommended solution. It is straightforward, but it can take some time.
(thanks to @biplob - please give him some love below)
Originally, when I wrote this, bucket policies were a no-go, so I figured out how to do it using aws-cli, and it is pretty slick. While researching, I couldn't find any examples in the wild, so I thought I would post some of my solutions to help those in need.
NOTE: By default, aws-cli only copies a file's current metadata, EVEN IF YOU SPECIFY NEW METADATA.
To use the metadata that is specified on the command line, you need to add the --metadata-directive REPLACE flag. Here are some examples.
For a single file
aws s3 cp s3://mybucket/file.txt s3://mybucket/file.txt --metadata-directive REPLACE \
--expires 2034-01-01T00:00:00Z --acl public-read --cache-control max-age=2592000,public
For an entire bucket (note the --recursive flag):
aws s3 cp s3://mybucket/ s3://mybucket/ --recursive --metadata-directive REPLACE \
--expires 2034-01-01T00:00:00Z --acl public-read --cache-control max-age=2592000,public
A little gotcha I found: if you only want to apply it to a specific file type, you need to exclude all files first, then include the ones you want.
Only jpgs and pngs:
aws s3 cp s3://mybucket/ s3://mybucket/ --exclude "*" --include "*.jpg" --include "*.png" \
--recursive --metadata-directive REPLACE --expires 2034-01-01T00:00:00Z --acl public-read \
--cache-control max-age=2592000,public
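If you're not sure which files those filters will actually match, aws s3 cp supports a --dryrun flag that prints the operations without performing them; a quick sanity check before the real run:
aws s3 cp s3://mybucket/ s3://mybucket/ --exclude "*" --include "*.jpg" --include "*.png" \
--recursive --dryrun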
If you need more info, the aws s3 cp reference in the manual covers these flags: https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
Known Issues:
"Unknown options: --metadata-directive, REPLACE"
This can be caused by an out-of-date awscli; see @eliotRosewater's answer below.
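If you hit that error, check the installed version first and upgrade; a quick check, assuming a pip-managed install (adjust for your package manager):
# Show the installed version
aws --version
# Upgrade (pip installs only; use your OS package manager otherwise)
pip install --upgrade awscli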
S3cmd is a "command line tool for managing Amazon S3 and CloudFront services". While this solution requires a git clone, it might be a simpler and more comprehensive solution.
For full instructions, see @ashishyadaveee11's post below
Hope it helps!
Now it can be changed easily from the AWS console. How long it takes depends on how many files are in your bucket. Redo from the beginning if you accidentally close the browser.
steps
1. git clone https://github.com/s3tools/s3cmd
2. Run s3cmd --configure (You will be asked for the two keys; copy and paste them from your confirmation email or from your Amazon account page. Be careful when copying them! They are case sensitive and must be entered accurately, or you'll keep getting errors about invalid signatures or similar. Remember to add s3:ListAllMyBuckets permissions to the keys, or you will get an AccessDenied error while testing access.)
3. ./s3cmd --recursive modify --add-header="Cache-Control: public, max-age=31536000" s3://your_bucket_name/
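To spot-check that the header stuck, s3cmd has an info command that prints an object's metadata (the object path here is just an example):
./s3cmd info s3://your_bucket_name/path/to/image.jpg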
Were my reputation score >50, I'd just comment. But it's not (yet), so here's another full answer.
I'd been banging my head on this problem for a while, until I found and read the docs. Sharing that here in case it helps anyone else:
What ended up reliably working for me was this command. I chose a 1-second expiration time for testing to verify the expected results:
aws s3 cp \
--metadata-directive REPLACE \
--cache-control max-age=1,s-maxage=1 \
s3://bucket/path/file \
s3://bucket/path/file
--metadata-directive REPLACE is required when cp is used to modify metadata on an existing file in S3
max-age sets the browser caching age, in seconds
s-maxage sets the CloudFront caching age, in seconds
Likewise, if setting these Cache-Control header values on a file while uploading to S3, the command would look like:
aws s3 cp \
--cache-control max-age=1,s-maxage=1 \
/local/path/file \
s3://bucket/path/file
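Either way, you can confirm the header was actually written with s3api's head-object, which returns the stored CacheControl value (paths match the examples above):
aws s3api head-object --bucket bucket --key path/file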
I don't think you can specify this at the bucket level, but there are a few workarounds for you.
1. Copy the object to itself on S3, setting the appropriate cache-control headers for the copy operation.
2. Specify response headers in the URL to the files. You need to use pre-signed URLs for this to work, but you can specify certain response headers in the querystring, including cache-control and expires. For a full list of the available options see: http://docs.amazonwebservices.com/AmazonS3/latest/API/RESTObjectGET.html?r=5225
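As an illustration of the second workaround, the same overrides are exposed on aws s3api get-object as --response-cache-control and --response-expires; pre-signed URLs carry them as querystring parameters instead, which usually means generating the URL through an SDK. The bucket and key below are placeholders:
aws s3api get-object --bucket mybucket --key path/file \
--response-cache-control "max-age=60" /tmp/file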