Fortunately, Amazon CloudFront can serve both static and dynamic content, reducing latency, protecting your architecture, and optimizing costs. In this post, we demonstrate how to deliver both kinds of content for a website or web application through a single CloudFront distribution.
Open the CloudFront console. Choose Create Distribution. Under Origin, for Origin domain, choose your S3 bucket's REST API endpoint from the dropdown list. Or, enter your S3 bucket's website endpoint.
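For reference, the two kinds of S3 endpoints differ only in their host name format; the bucket name and Region below are placeholders:
your-bucket.s3.us-east-1.amazonaws.com            (REST API endpoint)
your-bucket.s3-website-us-east-1.amazonaws.com    (website endpoint)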
If the origin returns a compressed object to CloudFront (indicated by a Content-Encoding header in the HTTP response), CloudFront doesn't compress the object again. If the origin returns an uncompressed object (no Content-Encoding header), CloudFront determines whether the object is compressible.
UPDATE: Amazon now supports gzip compression, so this is no longer needed. Amazon Announcement
Original answer:
The answer is to gzip the CSS and JavaScript files. Yes, you read that right.
gzip -9 production.min.css
This will produce production.min.css.gz. Remove the .gz extension, upload the file to S3 (or whatever origin server you're using), and explicitly set the Content-Encoding header for the file to gzip.
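A minimal sketch of that workflow using the AWS CLI (the bucket name is a placeholder, and the file name follows the example above):
gzip -9 production.min.css                      # produces production.min.css.gz
mv production.min.css.gz production.min.css     # drop the .gz so the URL stays the same
aws s3 cp production.min.css s3://your-bucket/production.min.css --content-encoding gzip --content-type text/css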
It's not on-the-fly gzipping, but you can very easily wrap it up into your build/deployment scripts. The advantages are that your origin spends no CPU compressing files at request time, and the files are compressed once, up front, at the highest compression level (gzip -9).
Assuming that your CSS/JavaScript files are (a) minified and (b) large enough to justify the CPU required to decompress on the user's machine, you can get significant performance gains here.
Just remember: if you make a change to a file that is cached in CloudFront, make sure you invalidate that object after uploading the new version.
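For example, an invalidation with the AWS CLI looks roughly like this (the distribution ID and path are placeholders):
aws cloudfront create-invalidation --distribution-id EDFDVBD6EXAMPLE --paths "/production.min.css"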
My answer is a take-off on this: http://blog.kenweiner.com/2009/08/serving-gzipped-javascript-files-from.html
Building off skyler's answer, you can upload a gzipped and a non-gzipped version of the CSS and JS. Be careful with naming, and test in Safari, because Safari won't handle .css.gz or .js.gz files. That's why we used site.js and site.js.jgz, and site.css and site.gz.css (you'll need to set the Content-Encoding header to gzip, along with the correct Content-Type, to get these to serve right). A sketch of the uploads is below.
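As a rough sketch, the paired files might be produced and uploaded like this (the bucket name is a placeholder; the .js.jgz and .gz.css names follow the convention above):
gzip -9 -c site.js  > site.js.jgz
gzip -9 -c site.css > site.gz.css
aws s3 cp site.js      s3://your-bucket/site.js      --content-type application/javascript
aws s3 cp site.css     s3://your-bucket/site.css     --content-type text/css
aws s3 cp site.js.jgz  s3://your-bucket/site.js.jgz  --content-encoding gzip --content-type application/javascript
aws s3 cp site.gz.css  s3://your-bucket/site.gz.css  --content-encoding gzip --content-type text/css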
Then, in your page, put:
<script type="text/javascript">var sr_gzipEnabled = false;</script>
<script type="text/javascript" src="http://d2ft4b0ve1aur1.cloudfront.net/js-050/sr.gzipcheck.js.jgz"></script>
<noscript>
<link type="text/css" rel="stylesheet" href="http://d2ft4b0ve1aur1.cloudfront.net/css-050/sr-br-min.css">
</noscript>
<script type="text/javascript">
    (function () {
        var sr_css_file = 'http://d2ft4b0ve1aur1.cloudfront.net/css-050/sr-br-min.css';
        if (sr_gzipEnabled) {
            sr_css_file = 'http://d2ft4b0ve1aur1.cloudfront.net/css-050/sr-br-min.css.gz';
        }
        var head = document.getElementsByTagName("head")[0];
        if (head) {
            var scriptStyles = document.createElement("link");
            scriptStyles.rel = "stylesheet";
            scriptStyles.type = "text/css";
            scriptStyles.href = sr_css_file;
            head.appendChild(scriptStyles);
            //alert('adding css to header:'+sr_css_file);
        }
    }());
</script>
gzipcheck.js.jgz just contains sr_gzipEnabled = true;
This tests whether the browser can handle the gzipped code and provides a fallback if it can't.
Then do something similar in the footer, assuming all of your JS is in one file and can go in the footer.
<div id="sr_js"></div>
<script type="text/javascript">
    (function () {
        var sr_js_file = 'http://d2ft4b0ve1aur1.cloudfront.net/js-050/sr-br-min.js';
        if (sr_gzipEnabled) {
            sr_js_file = 'http://d2ft4b0ve1aur1.cloudfront.net/js-050/sr-br-min.js.jgz';
        }
        var sr_script_tag = document.getElementById("sr_js");
        if (sr_script_tag) {
            var scriptStyles = document.createElement("script");
            scriptStyles.type = "text/javascript";
            scriptStyles.src = sr_js_file;
            sr_script_tag.appendChild(scriptStyles);
            //alert('adding js to footer:'+sr_js_file);
        }
    }());
</script>
UPDATE: Amazon now supports gzip compression, so this is no longer needed. Amazon Announcement
CloudFront supports gzipping.
CloudFront connects to your server via HTTP 1.0. By default, some web servers, including nginx, don't serve gzipped content over HTTP 1.0 connections, but you can tell nginx to do so by adding:
gzip_http_version 1.0
to your nginx config. The equivalent setting can be applied to whichever web server you're using.
This does have the side effect of disabling keep-alive for HTTP 1.0 connections, but as the benefits of compression are huge, it's definitely worth the trade-off.
Taken from http://www.cdnplanet.com/blog/gzip-nginx-cloudfront/
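For reference, a minimal nginx compression block along those lines might look like the following (the compression level and gzip_types list are assumptions; adjust them to the content types you actually serve):
gzip              on;
gzip_http_version 1.0;   # allow gzip on the HTTP 1.0 connections CloudFront makes
gzip_comp_level   6;
gzip_types        text/css application/javascript application/json;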
Edit
Serving content that is gzipped on the fly through Amazon CloudFront is dangerous and probably shouldn't be done. Basically, if your web server gzips the content on the fly, it will not set a Content-Length header and will instead send the data chunked.
If the connection between CloudFront and your server is interrupted and prematurely severed, CloudFront still caches the partial result and serves that as the cached version until it expires.
The accepted answer of gzipping the files on disk first and then serving the gzipped versions is a better idea, as nginx will be able to set the Content-Length header, and CloudFront will discard truncated versions.
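If you want to check which of the two your origin is doing, a quick probe like the following (with your own origin host name substituted) shows whether a gzipped response carries a Content-Length header or Transfer-Encoding: chunked:
# simulate CloudFront's HTTP 1.0 fetch and ask for gzip
curl -sI --http1.0 -H "Accept-Encoding: gzip" http://origin.example.com/production.min.css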
We've made a few optimisations for uSwitch.com recently to compress some of the static assets on our site. Although we set up a whole nginx proxy to do this, I've also put together a little Heroku app that proxies between CloudFront and S3 to compress content: http://dfl8.co
Given that publicly accessible S3 objects can be accessed using a simple URL structure, http://dfl8.co just uses the same structure. That is, the following URLs are equivalent:
http://pingles-example.s3.amazonaws.com/sample.css
http://pingles-example.dfl8.co/sample.css
http://d1a4f3qx63eykc.cloudfront.net/sample.css
Yesterday Amazon announced a new feature: you can now enable gzip compression on your distribution.
It works with S3 without adding .gz files yourself. I tried the new feature today and it works great (you do need to invalidate your current objects, though).
More info
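Once compression is enabled on the distribution, one quick way to confirm it is working is to request an object with an Accept-Encoding header and look for Content-Encoding: gzip in the response (the domain and path here are just placeholders):
curl -sI -H "Accept-Encoding: gzip" https://dxxxxxxxxxxxx.cloudfront.net/production.min.css | grep -i content-encoding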