First, I found some resources online (here and here) saying roughly the same thing:
For a normal/soft reload, the browser will re-validate the cache, checking to see if the files are modified.
I tested it in Chrome. I have a webpage, index.html, which loads a few JavaScript files at the end of the body. When I hit the refresh button (soft/normal), the Network panel showed index.html returning 304 Not Modified, which was good. However, all the JavaScript files were loaded from the memory cache with status code 200. No revalidation!
Then I tried modifying one of the JavaScript files and did a soft reload. And guess what? That file was still loaded from the memory cache!
Why does Chrome do this? Doesn't that defeat the purpose of the refresh button?
Here is more information about Chrome's memory cache.
This is relatively new behaviour, introduced by the Chrome browser in 2017.
The well-known behaviour of browsers is to revalidate cached resources when the user refreshes the page (either with the CTRL+R combination or the dedicated refresh button) by sending an If-Modified-Since or If-None-Match header. This applies to all resources fetched with a GET request: stylesheets, scripts, HTML documents, etc. It leads to tons of HTTP requests that, in the majority of cases, end with 304 Not Modified responses.
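To make that revalidation flow concrete, here is a minimal sketch (not from the original answer; the body, port, and hashing choice are illustrative) of a Node/TypeScript server that issues an ETag and answers a conditional GET with 304 Not Modified:

```typescript
// Minimal sketch (illustrative names/port): a Node server that revalidates
// with ETag / If-None-Match and answers 304 Not Modified when nothing changed.
import { createServer } from "node:http";
import { createHash } from "node:crypto";

const body = "<html><body>Hello</body></html>";
// Derive the ETag from the current content so it changes whenever the body does.
const etag = `"${createHash("sha1").update(body).digest("hex")}"`;

createServer((req, res) => {
  if (req.headers["if-none-match"] === etag) {
    // Content unchanged: send no body, the browser keeps its cached copy.
    res.writeHead(304, { ETag: etag });
    res.end();
    return;
  }
  res.writeHead(200, { ETag: etag, "Content-Type": "text/html" });
  res.end(body);
}).listen(8080);
```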
The most popular websites are the ones with constantly changing content, so their users tend to refresh them habitually to get the latest news, tweets, videos and posts. It's not hard to imagine how many unnecessary requests were being made every second, and since the best request is the one never made, Facebook decided to address this problem and asked Chrome and Firefox to find a solution together.
Chrome came up with the described solution.
Instead of revalidating each subresource, it only checks whether the HTML document changed. If it didn't, it is very likely that nothing else was modified either, so everything is served from the browser's cache. This works best when each resource has a content-addressed URL, for example a URL containing a hash of the file's contents. Users can always override this behaviour by performing a hard refresh.
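As a rough illustration of a content-addressed URL (the input file name and hash length here are just examples), a build step might derive the asset's name from a hash of its bytes:

```typescript
// Illustrative sketch: derive a content-addressed file name, so the URL
// changes whenever the file's bytes change ("app.js" is a made-up input).
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

const source = readFileSync("app.js");
const hash = createHash("sha256").update(source).digest("hex").slice(0, 8);

// e.g. "app.3f2b91ac.js": a new hash means a new URL, so a stale cached
// copy can never be served under the new name.
const hashedName = `app.${hash}.js`;
console.log(hashedName);
```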
Firefox's solution gives more control to developers, and it is on a good path to being implemented by all browser vendors: the new Cache-Control directive, immutable.
You can find more information about it here: https://developer.mozilla.org/pl/docs/Web/HTTP/Headers/Cache-Control#Revalidation_and_reloading
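As a hedged sketch of how immutable might be applied (the hashed-file-name pattern below is an assumption, not something from the answer), a server could add the directive only for content-addressed assets:

```typescript
// Hedged sketch: add the immutable directive only for content-addressed
// assets, identified here by an assumed hashed-file-name convention.
import { createServer } from "node:http";

createServer((req, res) => {
  const url = req.url ?? "/";
  if (/\.[0-9a-f]{8}\.(js|css)$/.test(url)) {
    // The hash in the name changes with the content, so the cached copy
    // can be treated as immutable for a full year and never revalidated.
    res.setHeader("Cache-Control", "public, max-age=31536000, immutable");
  }
  res.end("...asset or document body...");
}).listen(8080);
```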
Browser caches are a little more complex than the simple 200s and 304s they once dealt with, and they pay attention to server-side directives in headers that tell them how to handle caching for each specific site.
We can adjust a browser's caching profile using various headers (such as Cache-Control). By setting the time before a resource expires, you can tell the browser to use its local copy instead of requesting a fresh one. This can be quite aggressive for content you really don't expect to change (e.g. a company's logo), by doing something like Cache-Control: public, max-age=31536000.
Additionally, you can set the Expires header, which lets you do almost the same as Cache-Control but with a little less control. It simply sets the point in time after which the browser considers an asset stale and re-requests it. Even with a re-request, we could still get a cached result if the server sends back a 304 Not Modified response code.
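For illustration only (the one-day lifetime is an arbitrary choice), note that Expires carries an absolute HTTP date rather than a relative age:

```typescript
// Sketch: Expires carries an absolute HTTP date rather than a relative
// max-age; here the response is considered fresh for one day (arbitrary).
import { createServer } from "node:http";

const ONE_DAY_MS = 24 * 60 * 60 * 1000;

createServer((req, res) => {
  res.setHeader("Expires", new Date(Date.now() + ONE_DAY_MS).toUTCString());
  res.end("cached for roughly a day");
}).listen(8080);
```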
A lot of web servers ship with settings that cache certain asset files (JS, images, CSS) more aggressively and content files less aggressively.
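A rough sketch of that idea (the extensions and lifetimes below are picked arbitrarily): choose a Cache-Control value from the file extension, so static assets are cached aggressively while documents are always revalidated.

```typescript
// Rough sketch: choose a Cache-Control policy from the file extension,
// caching static assets aggressively and revalidating documents every time.
import { extname } from "node:path";

const POLICIES: Record<string, string> = {
  ".js": "public, max-age=31536000",
  ".css": "public, max-age=31536000",
  ".png": "public, max-age=31536000",
  ".html": "no-cache", // always revalidate the document itself
};

function cacheControlFor(path: string): string {
  return POLICIES[extname(path)] ?? "no-cache";
}

console.log(cacheControlFor("/static/logo.png")); // public, max-age=31536000
```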