I have an Amazon S3 bucket with about 300K objects in it and need to set the Cache-Control header on all of them. Unfortunately, it seems like the only way to do this, other than one object at a time, is by copying the objects onto themselves and setting the cache control header as part of that copy.
The documentation for the Amazon S3 CLI copy command describes the relevant options, but I have been unsuccessful in setting the cache control header with it. Does anyone have an example command that would work for this? I am trying to set Cache-Control to max-age=1814400.
Some background material:
aws s3 cp will copy all files, even if they already exist in the destination. It also will not delete files from the destination when they are deleted from the source. aws s3 sync looks at the destination before copying and only copies files that are new or have been updated.
Using aws s3 cp from the AWS Command-Line Interface (CLI) requires the --recursive parameter to copy multiple files, while aws s3 sync copies a whole directory by default and only copies new or modified files. Experiment to get the result you want.
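For illustration, here is a side-by-side sketch of the two commands; the bucket name (mybucket) and prefixes are placeholders, not anything from the question:
# cp copies every object under the prefix, whether or not it already exists at the destination
aws s3 cp s3://mybucket/assets/ s3://mybucket/assets-copy/ --recursive
# sync compares source and destination first and copies only new or changed objects;
# add --delete if objects removed from the source should also be removed from the destination
aws s3 sync s3://mybucket/assets/ s3://mybucket/assets-copy/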
To copy an object only under certain conditions, such as whether the ETag matches or whether the object was modified before or after a specified date, use the copy request parameters x-amz-copy-source-if-match, x-amz-copy-source-if-none-match, x-amz-copy-source-if-modified-since, and x-amz-copy-source-if-unmodified-since.
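Those are the underlying REST request headers; if you need that level of control from the CLI, the lower-level aws s3api copy-object command exposes them as options. This is only a sketch, and the bucket, key, and ETag values are made up:
aws s3api copy-object --bucket mybucket --key file.txt \
  --copy-source mybucket/file.txt \
  --copy-source-if-match "d41d8cd98f00b204e9800998ecf8427e" \
  --metadata-directive REPLACE \
  --cache-control max-age=2592000,public \
  --content-type text/plain
# --content-type is repeated here because with REPLACE the existing Content-Type may not be carried over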
By default, the AWS CLI copies a file's existing metadata, EVEN IF YOU SPECIFY NEW METADATA.
To use the metadata that is specified on the command line, you need to add the '--metadata-directive REPLACE' flag. Here are some examples.
For a single file:
aws s3 cp s3://mybucket/file.txt s3://mybucket/file.txt --metadata-directive REPLACE \
--expires 2100-01-01T00:00:00Z --acl public-read --cache-control max-age=2592000,public
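To check that the header actually stuck on an object (same placeholder names as above), you can inspect its metadata with head-object:
aws s3api head-object --bucket mybucket --key file.txt
# the JSON response should include "CacheControl": "max-age=2592000,public"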
For an entire bucket:
aws s3 cp s3://mybucket/ s3://mybucket/ --recursive --metadata-directive REPLACE \
--expires 2100-01-01T00:00:00Z --acl public-read --cache-control max-age=2592000,public
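Adapted to the value from the question (max-age=1814400), and dropping the ACL and Expires options if all you want to change is Cache-Control, that would be something like:
aws s3 cp s3://mybucket/ s3://mybucket/ --recursive --metadata-directive REPLACE \
  --cache-control max-age=1814400
Keep in mind this issues one copy request per object, so with roughly 300K objects it will take a while and will incur request charges.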
A little gotcha I found: if you only want to apply this to specific file types, you need to exclude all files first, then include the ones you want.
Only JPGs and PNGs:
aws s3 cp s3://mybucket/ s3://mybucket/ --exclude "*" --include "*.jpg" --include "*.png" \
--recursive --metadata-directive REPLACE --expires 2100-01-01T00:00:00Z --acl public-read \
--cache-control max-age=2592000,public
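If you want to preview exactly which objects a filtered command will touch before rewriting anything, aws s3 cp accepts a --dryrun flag that prints the operations without performing them:
aws s3 cp s3://mybucket/ s3://mybucket/ --exclude "*" --include "*.jpg" --include "*.png" \
  --recursive --metadata-directive REPLACE --cache-control max-age=2592000,public --dryrun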
The AWS CLI manual pages for aws s3 cp and aws s3 sync have more details if you need them.