Following the instructions in this guide, I've managed to get uploads working via signed URLs. It looks something like this:
```javascript
const s3 = new aws.S3();

const s3Params = {
  Bucket: S3_BUCKET,
  Key: fileName,
  Expires: 60,
  ContentType: fileType,
  ACL: 'public-read',
  CacheControl: 'public, max-age=31536000',
};

s3.getSignedUrl('putObject', s3Params, (err, data) => {
  // ...
});
```
...except my `CacheControl` param (which I added myself; it isn't in the guide) does not seem to take effect. When I use the above code to generate a signed URL and upload something to it, the resulting object in S3 is served with no `Cache-Control` header.

What am I doing wrong?
You must send the `Cache-Control` header in the upload request itself, regardless of what you set during the signed URL generation.

Whether this is a bug or intentional behaviour is questionable and beyond my ability to answer. The `Cache-Control` header, as you noticed, is part of the signed URL, but for whatever reason the information is completely ignored during the file upload, i.e. not specifying a `CacheControl` property in the `getSignedUrl()` call still allows the client to set the `Cache-Control` header to whatever value they choose.

If you need to have control over the `Cache-Control` header, then `getSignedUrl()` is most likely not appropriate for your use case.
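To make the fix above concrete, here is a minimal sketch of the upload side, assuming a browser or Node 18+ environment where `fetch` is available globally. The signed URL is the one produced by the question's `getSignedUrl('putObject', ...)` call; the helper names `buildUploadHeaders` and `uploadToSignedUrl` are mine, not part of the SDK. If the URL was signed with a `CacheControl` value, the header sent here generally has to match it.

```javascript
// Build the headers for the PUT to the signed URL. The Cache-Control
// value sent with the upload itself is what ends up stored on the S3
// object and later served back to clients.
function buildUploadHeaders(fileType) {
  return {
    'Content-Type': fileType,
    'Cache-Control': 'public, max-age=31536000',
  };
}

// Upload a file (e.g. a browser File/Blob or a Node Buffer) to the
// signed putObject URL, sending the Cache-Control header explicitly.
async function uploadToSignedUrl(signedUrl, file, fileType) {
  const res = await fetch(signedUrl, {
    method: 'PUT',
    headers: buildUploadHeaders(fileType),
    body: file,
  });
  if (!res.ok) throw new Error(`Upload failed with status ${res.status}`);
}
```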
AWS now supports a newer signature scheme, AWS Signature Version 4, which allows full control over what the upload request may or may not contain, including which headers are sent and with what values. The JavaScript SDK supports this signature version via `createPresignedPost()`.

A detailed example of how to generate this pre-signed POST policy and what the upload form should look like can be found directly in AWS's documentation. Even though the example demonstrates the file upload via a standard HTTP upload `<form>` element, the principles can be applied to any client capable of performing HTTP communication.
For completeness, here is the example (taken from the AWS documentation page linked above) of what a pre-signed POST policy looks like:

```json
{
  "expiration": "2015-12-30T12:00:00.000Z",
  "conditions": [
    {"bucket": "sigv4examplebucket"},
    ["starts-with", "$key", "user/user1/"],
    {"acl": "public-read"},
    {"success_action_redirect": "http://sigv4examplebucket.s3.amazonaws.com/successful_upload.html"},
    ["starts-with", "$Content-Type", "image/"],
    {"x-amz-meta-uuid": "14365123651274"},
    {"x-amz-server-side-encryption": "AES256"},
    ["starts-with", "$x-amz-meta-tag", ""],
    {"x-amz-credential": "AKIAIOSFODNN7EXAMPLE/20151229/us-east-1/s3/aws4_request"},
    {"x-amz-algorithm": "AWS4-HMAC-SHA256"},
    {"x-amz-date": "20151229T000000Z"}
  ]
}
```
This POST policy sets the following conditions on the request:
- The bucket must be `sigv4examplebucket`. The bucket must be in the region that you specified in the credential scope (the `x-amz-credential` form parameter), because the signature you provided is valid only within this scope.
- The key must start with `user/user1/`. For example, `user/user1/MyPhoto.jpg`.
- The ACL must be set to `public-read`.
- On success, the browser is redirected to `http://sigv4examplebucket.s3.amazonaws.com/successful_upload.html`.
- The `x-amz-meta-uuid` tag must be set to `14365123651274`.
- The `x-amz-meta-tag` can contain any value.

Note that the list of conditions in this example is not exhaustive, and `Cache-Control` is supported. See the "Creating a POST Policy" documentation for what you can do with this.
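As a sketch of how this looks with the SDK, here is how the params for `createPresignedPost()` could be built so that `Cache-Control` is pinned server-side. This assumes the aws-sdk v2 JavaScript SDK (the same SDK style as the question); the bucket, key, and header values are placeholders. With `createPresignedPost()`, entries in `Fields` are signed into the policy, so the client cannot upload with a different `Cache-Control` value.

```javascript
// Build createPresignedPost() params that lock Cache-Control to a fixed
// value. The helper name buildPresignedPostParams is mine, for clarity.
function buildPresignedPostParams(bucket) {
  return {
    Bucket: bucket,
    Expires: 60, // policy validity in seconds
    Fields: {
      key: 'user/user1/MyPhoto.jpg',
      acl: 'public-read',
      // Signed into the policy; the upload must send exactly this value:
      'Cache-Control': 'public, max-age=31536000',
    },
    Conditions: [
      // Extra condition: only allow image uploads.
      ['starts-with', '$Content-Type', 'image/'],
    ],
  };
}

// Usage (requires aws-sdk v2 and valid credentials):
// const aws = require('aws-sdk');
// new aws.S3().createPresignedPost(
//   buildPresignedPostParams('my-example-bucket'),
//   (err, data) => {
//     // data.url is the form action; data.fields must accompany the file
//     // in the multipart/form-data POST.
//   }
// );
```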
Contrary to what the accepted answer says, you can add `Cache-Control` to the URL obtained from `getSignedUrl`, but only for the `getObject` operation. I hope this helps people who came here for `getObject`.
This is what I have done:

```javascript
const params = {
  Bucket: <YOUR_S3_BUCKET>,
  Key: <YOUR_KEY>,
  ResponseCacheControl: 'public, max-age=900, immutable',
  Expires: 900, // default value
};

return this.S3.getSignedUrl('getObject', params);
```
The resulting URL looks like this:

`https://<YOUR_S3_BUCKET>.s3.us-east-2.amazonaws.com/<YOUR_KEY>?...&response-cache-control=public%2C%20max-age%3D900%2C%20immutable`

and the response is served with the header `Cache-Control: public, max-age=900, immutable`.

Check out the doc here and notice the params in the last example; you can find the `ResponseCacheControl` and `Expires` params there.
If you look at the docs, you can see that `putObject` accepts a `CacheControl` param, but adding that to the params does not do anything here.