In one of the Service Worker examples by Google ("Cache and return requests"), the fetch handler clones the request before fetching:
self.addEventListener('fetch', function(event) {
  event.respondWith(
    caches.match(event.request)
      .then(function(response) {
        // Cache hit - return response
        if (response) {
          return response;
        }

        // IMPORTANT: Clone the request. A request is a stream and
        // can only be consumed once. Since we are consuming this
        // once by cache and once by the browser for fetch, we need
        // to clone the response.
        var fetchRequest = event.request.clone();

        return fetch(fetchRequest).then(
          function(response) {
            // Check if we received a valid response
            if (!response || response.status !== 200 || response.type !== 'basic') {
              return response;
            }

            // IMPORTANT: Clone the response. A response is a stream
            // and because we want the browser to consume the response
            // as well as the cache consuming the response, we need
            // to clone it so we have two streams.
            var responseToCache = response.clone();

            caches.open(CACHE_NAME)
              .then(function(cache) {
                cache.put(event.request, responseToCache);
              });

            return response;
          }
        );
      })
  );
});
On the other hand, the example provided by MDN in Using Service Workers does not clone the request:
this.addEventListener('fetch', function(event) {
  event.respondWith(
    caches.match(event.request).then(function(resp) {
      return resp || fetch(event.request).then(function(response) {
        caches.open('v1').then(function(cache) {
          cache.put(event.request, response.clone());
        });
        return response;
      });
    }).catch(function() {
      return caches.match('/sw-test/gallery/myLittleVader.jpg');
    })
  );
});
So, in the case of a cache miss in the Google example: I understand why we have to clone the response, because it's consumed by cache.put and we still want to return the response to the webpage that requested it.
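(A minimal sketch of that single-use behaviour, with a placeholder URL: reading the body marks the response as used, so a second consumer needs a clone taken beforehand.)

fetch('/example.json').then(function(response) {
  var responseToCache = response.clone();   // clone before anything reads the body
  response.json().then(function(data) {
    console.log(response.bodyUsed);         // true - the original body is now spent
    return responseToCache.json();          // the clone still has an unread body
  }).then(function(dataAgain) {
    console.log('read a second time via the clone:', dataAgain);
  });
});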
But why does one have to clone the request? The comment says it's consumed by the cache and by the browser for fetch. What does that mean exactly? Is the request consumed by cache.put? If so, why doesn't caches.match consume the request?
The comment seems to me to say quite clearly why the author of that code thought cloning was necessary:
A request is a stream and can only be consumed once. Since we are consuming this once by cache and once by the browser for fetch, we need to clone the response.
Remember that the body of a request can be a ReadableStream. If caches.match had to read the stream (or partially read the stream) to know whether a cache entry was a match, a subsequent read by fetch would continue that read, missing any data that caches.match had already read.
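A minimal sketch of that one-shot behaviour (using a POST with a string body; even then the body is wrapped in a stream and can only be read once):

var request = new Request('/submit', { method: 'POST', body: 'payload' });
var copy = request.clone();            // clone while the body is still unread
request.text().then(function(body) {
  console.log(request.bodyUsed);       // true - this request's stream has been consumed
  return copy.text();                  // the clone carries its own copy of the stream
}).then(function(bodyAgain) {
  console.log('clone still readable:', bodyAgain);
});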
I wouldn't be surprised if it only mattered in limited situations (unless the code in the Google example is just plain wrong and it's not necessary), and so failing to do it probably works in many test cases (for instance, where the body is null or a string, not a stream). Remember that MDN is very good, but it is community-edited, and errors and poor examples do creep in periodically. (I've had to fix several blatant errors in it over the years.) Usually the community spots them and fixes them.
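For completeness, here is a sketch of the MDN handler with the request cloned before it is handed to fetch; it keeps MDN's 'v1' cache name and fallback image, and only the clone() call is new:

this.addEventListener('fetch', function(event) {
  event.respondWith(
    caches.match(event.request).then(function(resp) {
      // Clone before fetch, in case the request body is a stream that the
      // cache machinery might also need to read.
      return resp || fetch(event.request.clone()).then(function(response) {
        caches.open('v1').then(function(cache) {
          cache.put(event.request, response.clone());
        });
        return response;
      });
    }).catch(function() {
      return caches.match('/sw-test/gallery/myLittleVader.jpg');
    })
  );
});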