
Set cache-control for entire S3 bucket automatically (using bucket policies?)

Tags:

amazon-s3

s3fs



There are now 3 ways to get this done: via the AWS Console, via the command line, or via the s3cmd command line tool.


AWS Console Instructions

This is now the recommended solution. It is straightforward, but it can take some time.

  • Log in to AWS Management Console
  • Go into S3 bucket
  • Select all files by route (path)
  • Choose "More" from the menu
  • Select "Change metadata"
  • In the "Key" field, select "Cache-Control" from the drop-down menu
  • Enter "max-age=604800" (7 days) for "Value"
  • Press the "Save" button

(thanks to @biplob - please give him some love below)
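
If you want to confirm the change took effect, you can check an object's metadata from the command line. A minimal check, using a hypothetical bucket "mybucket" and key "file.txt":

aws s3api head-object --bucket mybucket --key file.txt

Look for the "CacheControl" field in the output.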


AWS Command Line Solution

Originally, when I wrote this, bucket policies were a no-go, so I figured out how to do it using aws-cli, and it is pretty slick. While researching, I couldn't find any examples in the wild, so I thought I would post some of my solutions to help those in need.

NOTE: By default, aws-cli only copies a file's current metadata, EVEN IF YOU SPECIFY NEW METADATA.

To use the metadata that is specified on the command line, you need to add the '--metadata-directive REPLACE' flag. Here are some examples.

For a single file

aws s3 cp s3://mybucket/file.txt s3://mybucket/file.txt --metadata-directive REPLACE \
--expires 2034-01-01T00:00:00Z --acl public-read --cache-control max-age=2592000,public

For an entire bucket (note the --recursive flag):

aws s3 cp s3://mybucket/ s3://mybucket/ --recursive --metadata-directive REPLACE \
--expires 2034-01-01T00:00:00Z --acl public-read --cache-control max-age=2592000,public

A little gotcha I found: if you only want to apply it to a specific file type, you need to exclude all files first, then include the ones you want.

Only jpgs and pngs:

aws s3 cp s3://mybucket/ s3://mybucket/ --exclude "*" --include "*.jpg" --include "*.png" \
--recursive --metadata-directive REPLACE --expires 2034-01-01T00:00:00Z --acl public-read \
--cache-control max-age=2592000,public
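
Since a recursive copy rewrites every matched object, it can be worth previewing the operation first. A sketch using cp's --dryrun flag with the same hypothetical bucket:

aws s3 cp s3://mybucket/ s3://mybucket/ --exclude "*" --include "*.jpg" --include "*.png" \
--recursive --dryrun --metadata-directive REPLACE --cache-control max-age=2592000,public

This prints the copy operations that would be performed without actually running them.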

Here are some links to the manual if you need more info:

  • http://docs.aws.amazon.com/cli/latest/userguide/using-s3-commands.html
  • http://docs.aws.amazon.com/cli/latest/reference/s3/cp.html#options

Known Issues:

"Unknown options: --metadata-directive, REPLACE"

This can be caused by an out-of-date awscli - see @eliotRosewater's answer below
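
A quick way to check your version and upgrade (assuming your awscli came from pip; if it came from a system package manager, upgrade through that instead):

aws --version
pip install --upgrade awscli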


S3cmd tool

S3cmd is a "Command line tool for managing Amazon S3 and CloudFront services". While this solution requires a git pull, it might be a simpler and more comprehensive option.

For full instructions, see @ashishyadaveee11's post below


Hope it helps!


Now, it can be changed easily from the AWS console.

  • Log in to AWS Management Console
  • Go into S3 bucket
  • Select all files by route (path)
  • Choose "More" from the menu
  • Select "Change metadata"
  • In the "Key" field, select "Cache-Control" from the drop-down menu
  • Enter "max-age=604800" (7 days) for "Value"
  • Press the "Save" button

It takes time to execute, depending on how many files are in your bucket. Redo it from the beginning if you accidentally close the browser.


Steps

  1. git clone https://github.com/s3tools/s3cmd
  2. Run s3cmd --configure (You will be asked for the two keys - copy and paste them from your confirmation email or from your Amazon account page. Be careful when copying them! They are case sensitive and must be entered accurately or you'll keep getting errors about invalid signatures or similar. Remember to add s3:ListAllMyBuckets permissions to the keys or you will get an AccessDenied error while testing access.)
  3. ./s3cmd --recursive modify --add-header="Cache-Control: public, max-age=31536000" s3://your_bucket_name/
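
To spot-check the result, request one of the files and look at the response headers. A minimal check with curl, assuming the object (hypothetical name here) is publicly readable:

curl -I https://your_bucket_name.s3.amazonaws.com/some_file.jpg

The Cache-Control line in the response should show the value you just set.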

If my reputation score were >50, I'd just comment. But it's not (yet), so here's another full answer.


I'd been banging my head against this problem for a while, until I found and read the docs. Sharing them here in case it helps anyone else:

  • Amazon CloudFront Documentation: Specifying How Long Objects Stay in a CloudFront Edge Cache (Expiration)

What ended up reliably working for me was this command. I chose a 1-second expiration time for testing, to verify the expected results:

aws s3 cp \
  --metadata-directive REPLACE \
  --cache-control max-age=1,s-maxage=1 \
  s3://bucket/path/file \
  s3://bucket/path/file
  • --metadata-directive REPLACE is required when using "cp" to modify metadata on an existing file in S3
  • max-age sets the browser caching age, in seconds
  • s-maxage sets the CloudFront caching age, in seconds

Likewise, if setting these Cache-Control header values on a file while uploading to S3, the command would look like:

aws s3 cp \
  --cache-control max-age=1,s-maxage=1 \
  /local/path/file \
  s3://bucket/path/file
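
And if you are uploading a whole directory rather than a single file, the same flag works with the sync command too. A sketch, with the local directory name being hypothetical:

aws s3 sync \
  --cache-control max-age=1,s-maxage=1 \
  /local/path/ \
  s3://bucket/path/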

I don't think you can specify this at the bucket level, but there are a few workarounds for you.

  1. Copy the object to itself on S3, setting the appropriate cache-control headers for the copy operation.

  2. Specify response headers in the URL to the files. You need to use pre-signed URLs for this to work, but you can specify certain response headers in the query string, including cache-control and expires. For a full list of the available options see: http://docs.amazonwebservices.com/AmazonS3/latest/API/RESTObjectGET.html?r=5225