I ran into a problem setting the header "X-Content-Type-Options: nosniff" on static files (js, css, jpg, gif, ...) in my Amazon S3 bucket.
When I try to add it, I get the error: "User-defined metadata keys must start with x-amz-meta-."
How can I do this? Should I use "x-amz-meta-X-Content-Type-Options" instead?
Thanks in advance!
User-defined metadata indeed must start with x-amz-meta-* -- but this won't help you, because it is also returned as x-amz-meta-* headers when the object is fetched, and x-amz-meta-X-Content-Type-Options will not be recognized by browsers.
S3 has very limited support for headers that don't begin with x-amz-meta-*. Content-Type, Content-Disposition, and Content-Encoding are valid, but most others are not.
As this support forum post indicates (and testing confirms), if such headers are added to the upload (when working directly with the S3 API), they are simply ignored: they're not stored, and not returned with the response.
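To illustrate the distinction, here's a minimal sketch using the AWS SDK for JavaScript (the bucket and key are placeholders): ContentType survives the round trip as a real Content-Type response header, while anything under Metadata comes back prefixed with x-amz-meta-.
// Sketch with the AWS SDK for JavaScript; bucket and key are placeholders.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

s3.putObject({
    Bucket: 'my-example-bucket',
    Key: 'js/app.js',
    Body: 'console.log("hello");',
    // Whitelisted: stored and returned as the Content-Type response header.
    ContentType: 'application/javascript',
    Metadata: {
        // The SDK prefixes this key, so it is stored and returned as
        // x-amz-meta-x-content-type-options -- not recognized by browsers.
        'x-content-type-options': 'nosniff'
    }
}, (err, data) => {
    if (err) console.error(err);
    else console.log('uploaded, ETag:', data.ETag);
});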
One known but undocumented exception is X-Robots-Tag, which S3 does accept and will return with the response, although the AWS console won't let you edit it if you add it using the API.
One possible workaround that should be available soon is Lambda@Edge, an integration between Lambda and CloudFront where the Lambda function runs within the CloudFront network and can modify request and response headers on their way into and out of CloudFront. Since CloudFront integrates well with S3, this could be a viable option once Lambda@Edge is generally available.
I tested this. (I signed up for the Lambda@Edge preview; I haven't officially heard back that I was granted access, but it seems to be working.)
Using this Lambda function code:
'use strict';

// Lambda@Edge handler: adds X-Content-Type-Options to the response
// before CloudFront returns it to the viewer.
exports.handler = (event, context, callback) => {
    const response = event.Records[0].cf.response;
    const headers = response.headers;

    // Preview event format: header name maps to an array of values.
    headers['X-Content-Type-Options'] = ['nosniff'];

    callback(null, response);
};
...gives this response...
$ curl -v http://dxxxexample.cloudfront.net/robots.txt
* Hostname was NOT found in DNS cache
* Trying x.x.x.x...
* Connected to dxxxexample.cloudfront.net (x.x.x.x) port 80 (#0)
> GET /robots.txt HTTP/1.1
> User-Agent: curl/7.35.0
> Host: dxxxexample.cloudfront.net
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: text/plain
< Content-Length: 324
< Connection: keep-alive
< Date: Tue, 10 Jan 2017 20:38:33 GMT
< Last-Modified: Tue, 10 Jan 2017 17:13:36 GMT
< ETag: "dbe2f9a267e8ef192f0fdf0c888da01c"
< Cache-Control: no-cache
< Accept-Ranges: bytes
* Server AmazonS3 is not blacklisted
< Server: AmazonS3
< Via: 1.1 xxxxxxxxxx.cloudfront.net (CloudFront)
< X-Content-Type-Options: nosniff
< X-Cache: Miss from cloudfront
< X-Amz-Cf-Id: xxxxx
<
User-agent: *
Disallow: /
...so that seems to be a viable workaround.
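(Update: the headers syntax above is the preview event format. In the generally available version of Lambda@Edge, response headers are instead keyed by lowercase name, and each value is an array of { key, value } objects, so the same assignment becomes the following.)
// GA Lambda@Edge event format for the same header:
headers['x-content-type-options'] = [{
    key: 'X-Content-Type-Options',
    value: 'nosniff'
}];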
I configured this function to trigger on "Viewer Response" (that trigger fires just before the response is returned from CloudFront to the browser), but it could probably trigger on "Origin Response" instead, which would let the added header be cached along with the object so the function runs less frequently -- assuming, unlike in the example above, you aren't also serving the object with Cache-Control: no-cache, as I did in my test. I used /robots.txt simply because I happened to already have it set up in a bucket along with CloudFront and Lambda; obviously this file isn't a particularly interesting application for X-Content-Type-Options, but as you can see, this does work.
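For reference, the Lambda@Edge trigger is attached to a cache behavior of the CloudFront distribution. A minimal sketch of that association in the distribution config, using "Origin Response" (the function ARN is a placeholder, and it must reference a published function version, not $LATEST):
"LambdaFunctionAssociations": {
    "Quantity": 1,
    "Items": [{
        "EventType": "origin-response",
        "LambdaFunctionARN": "arn:aws:lambda:us-east-1:123456789012:function:add-nosniff:1"
    }]
}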
I don't know when Lambda@Edge will be released from preview.
If you want to submit this as a feature request for S3 itself, you might contact your AWS account representative if you have one, or post about it in the AWS support forums. (I am not affiliated with AWS).