I am generating signed URLs in my web app (Node.js) using the knox Node.js library. The issue is that for every request I need to generate a unique signed GET URL for the current user, which takes the browser's cache control out of the game.
I've searched the web without success: browsers seem to use the full URL as the caching key, so I am really curious how, under the given circumstances (Node.js, knox library), I can solve this and use cache control while still generating signed URLs for each and every request, as I need to verify the user's access rights.
I cannot believe there's no solution to this, though.
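For reference, here is roughly how I generate the URLs (a minimal sketch; the credentials, bucket, path, and the signedUrlFor helper are placeholders):
const knox = require('knox');

const client = knox.createClient({
    key: '<aws-access-key>',    // placeholder
    secret: '<aws-secret>',     // placeholder
    bucket: 'my-bucket',        // placeholder
});

// Called on every request: the expiration changes each time, so the signed
// query string (and therefore the full URL, the browser's cache key) is
// never the same twice.
function signedUrlFor(filePath) {
    const expiration = new Date(Date.now() + 15 * 60 * 1000); // 15 minutes
    return client.signedUrl(filePath, expiration);
}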
An S3 pre-signed URL temporarily grants restricted access to a single S3 object for a single operation (either PUT or GET) within a predefined time limit. To break it down: it is secure because the URL is signed using an AWS access key.
You can use signed URLs or signed cookies for any CloudFront distribution, regardless of whether the origin is an Amazon S3 bucket or an HTTP server.
A signed URL is safe because: it is valid only for a limited time period that you specify; it is valid only for the Amazon S3 object that you specify; and it cannot be used to retrieve a different object, nor can the time period be modified, because either change would invalidate the signature.
With a signed URL a user gets access only to a single file whereas with a signed cookie a user can access multiple files.
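To illustrate the difference, here is a minimal sketch using the aws-sdk CloudFront Signer; the key pair ID, key file, and distribution domain are placeholders:
const AWS = require('aws-sdk');
const fs = require('fs');

// A CloudFront key pair: the public key is registered with CloudFront and
// the private key signs the URLs/cookies. Both values are placeholders.
const signer = new AWS.CloudFront.Signer(
    'APKAEXAMPLEKEYID',
    fs.readFileSync('./cloudfront-private-key.pem', 'utf8')
);

const inOneHour = Math.floor(Date.now() / 1000) + 60 * 60; // epoch seconds

// Signed URL: grants access to exactly one object.
const url = signer.getSignedUrl({
    url: 'https://d111111abcdef8.cloudfront.net/private/report.pdf',
    expires: inOneHour,
});

// Signed cookies: grant access to everything the policy's Resource matches,
// e.g. a whole path prefix via a wildcard.
const cookies = signer.getSignedCookie({
    policy: JSON.stringify({
        Statement: [{
            Resource: 'https://d111111abcdef8.cloudfront.net/private/*',
            Condition: {
                DateLessThan: { 'AWS:EpochTime': inOneHour },
            },
        }],
    }),
});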
I am working with the Java AmazonS3 client, but the process should be the same.
There is a strategy that can be used to handle this situation: use a fixed date and time as the expiration date. I set this date to tomorrow at 12 pm.
Now every time you generate a URL, it will be the same throughout the day, until 00:00 when "tomorrow at 12 pm" becomes a different instant. That way browser caching can be used to some extent.
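A minimal sketch of the same idea in Node.js with the official aws-sdk (the original answer used the Java client; the function names here are illustrative). Note that this only yields a byte-identical URL under Signature Version 2, where the signature depends solely on the expiration instant; under Signature Version 4 the signing time itself is embedded in the URL, which is what the timekeeper answer further down works around:
const AWS = require('aws-sdk');

// Signature Version 2: the pre-signed URL only encodes the expiration
// epoch, so a fixed expiration instant yields a stable URL.
const s3 = new AWS.S3({ signatureVersion: 'v2' });

// Tomorrow at 12:00, a fixed instant for the whole of today.
function noonTomorrow() {
    const d = new Date();
    d.setDate(d.getDate() + 1);
    d.setHours(12, 0, 0, 0);
    return d;
}

function cacheFriendlySignedUrl(bucket, key) {
    // Expires is given in seconds from now; since the target instant is
    // fixed, every call today lands on (almost) the same expiration epoch.
    const seconds = Math.floor((noonTomorrow().getTime() - Date.now()) / 1000);
    return s3.getSignedUrl('getObject', { Bucket: bucket, Key: key, Expires: seconds });
}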
If you use CloudFront with S3, you can use a custom policy. If you restrict each URL to the user's IP address and give it a reasonably long timeout, then when the same user requests the same content again they will get the same URL, so their browser can cache the content, but the URL will not work for someone else (on a different IP). See the sketch after the link below.
(see: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-creating-signed-url-custom-policy.html)
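A hedged sketch of that approach with the aws-sdk CloudFront Signer; the key pair ID, key file, domain, and the end-of-day expiry choice are all assumptions, not part of the linked docs:
const AWS = require('aws-sdk');
const fs = require('fs');

const signer = new AWS.CloudFront.Signer(
    'APKAEXAMPLEKEYID',                                      // placeholder
    fs.readFileSync('./cloudfront-private-key.pem', 'utf8')  // placeholder
);

function signedUrlForIp(path, clientIp) {
    // Pin the expiry to the end of the current day, so repeated requests
    // from the same user yield the same URL (and the same cache key).
    const endOfDay = new Date();
    endOfDay.setHours(23, 59, 59, 0);

    const url = `https://d111111abcdef8.cloudfront.net${path}`;
    const policy = JSON.stringify({
        Statement: [{
            Resource: url,
            Condition: {
                // Only this client's IP may use the URL.
                IpAddress: { 'AWS:SourceIp': `${clientIp}/32` },
                DateLessThan: { 'AWS:EpochTime': Math.floor(endOfDay.getTime() / 1000) },
            },
        }],
    });

    return signer.getSignedUrl({ url, policy });
}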
Expanding on @semir-deljić's answer.
Every time we call the getSignedUrl function, it generates new URLs. This results in images not being cached even if a Cache-Control header is present.
Thus, we use the timekeeper library to freeze time. Now when the function is called, it thinks that no time has passed, and it returns the same URL.
const moment = require('moment');
const tk = require('timekeeper');

// awsBucket is assumed to be defined in the surrounding scope, like
// awsAccessId / awsSecretKey in the initialisation below.
function url4download(awsPath, awsKey) {
    // Freeze time at the start of the current week, so every call within
    // the same week signs with the same timestamp and yields the same URL.
    function getFrozenDate() {
        return moment().startOf('week').toDate();
    }

    // Parameters for the getSignedUrl function
    const params = {
        // Ref: https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html
        // Ref: https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html
        Bucket: awsBucket,
        Key: `${awsPath}/${awsKey}`,
        // 604800 == 7 days
        ResponseCacheControl: 'public, max-age=604800, immutable',
        Expires: 604800, // 7 days is the max
    };

    const url = tk.withFreeze(getFrozenDate(), () => {
        return S3.getSignedUrl('getObject', params);
    });
    return url;
}
Note: moment().toDate() is used because timekeeper requires a native Date object.
Even though the question is about the knox library, my answer uses the official AWS library.
// This is how AWS & S3 are initialised.
const AWS = require('aws-sdk');
const S3 = new AWS.S3({
    accessKeyId: awsAccessId,
    secretAccessKey: awsSecretKey,
    region: 'ap-south-1',
    apiVersion: '2006-03-01',
    signatureVersion: 'v4',
});
Inspiration: https://advancedweb.hu/cacheable-s3-signed-urls/