Fetch is the new Promise-based API for making network requests:
fetch('https://www.everythingisawesome.com/')
.then(response => console.log('status: ', response.status));
This makes sense to me - when we initiate a network call, fetch returns a Promise, which lets our thread carry on with other business. When the response is available, the callback passed to .then() executes.
However, if I'm interested in the payload of the response, I have to access it via methods of the response, not properties:
fetch('https://www.everythingisawesome.com/') // IO bound
  .then(response => response.json()) // we now have the response, so this operation is CPU bound - isn't it?
  .then(entity => console.log(entity.name));
These methods return promises, and I'm unclear as to why.
Why would processing the response's payload return a promise? It's unclear to me why it should be an async operation.
Why are these fetch methods asynchronous?
The naïve answer is "because the specification says so"
- The arrayBuffer() method, when invoked, must return the result of running consume body with ArrayBuffer.
- The blob() method, when invoked, must return the result of running consume body with Blob.
- The formData() method, when invoked, must return the result of running consume body with FormData.
- The json() method, when invoked, must return the result of running consume body with JSON.
- The text() method, when invoked, must return the result of running consume body with text.
Of course, that doesn't really answer the question, because it leaves open the follow-up: "Why does the spec say so?"
And this is where it gets complicated, because I'm certain of the reasoning, but I have no evidence from an official source to prove it. I'm going to attempt to explain the rationale to the best of my understanding, but be aware that everything from here on should be treated largely as opinion.
When you request data from a resource using the fetch API, you have to wait for the resource to finish downloading before you can use it. This should be reasonably obvious. JavaScript uses asynchronous APIs to handle this behavior so that the work involved doesn't block other scripts, and—more importantly—the UI.
When the resource has finished downloading, the data might be enormous. There's nothing that prevents you from requesting a monolithic JSON object that exceeds 50MB.
What do you think would happen if you attempted to parse 50MB of JSON synchronously? It would block other scripts, and—more importantly—the UI.
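To make that concrete, here is a small sketch (the URL and the payload size are hypothetical) showing that once the body has been pulled out as text, a plain JSON.parse runs synchronously on the main thread and nothing else can run until it returns:
async function parseSynchronously() {
  const response = await fetch('https://example.com/huge.json'); // imagine a ~50MB payload
  const text = await response.text();       // asynchronous: waits for the body to finish downloading
  const before = performance.now();
  const entity = JSON.parse(text);           // synchronous: the main thread is blocked here
  console.log('JSON.parse blocked for', performance.now() - before, 'ms');
  return entity;
}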
Other programmers have already solved the problem of handling large amounts of data in a performant manner: streams. In JavaScript, streams are implemented with an asynchronous API so that they don't block, and if you read the consume body details, it's clear that streams are being used to parse the data:
Let stream be body's stream if body is non-null, or an empty ReadableStream object otherwise.
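For illustration, here is a rough sketch of consuming that stream yourself instead of calling json(); each read() resolves asynchronously as chunks arrive, so even a very large body never blocks the main thread for long:
async function countBytes(url) {
  const response = await fetch(url);
  const reader = response.body.getReader();       // ReadableStream of Uint8Array chunks
  let received = 0;
  while (true) {
    const { done, value } = await reader.read();  // resolves asynchronously with the next chunk
    if (done) break;
    received += value.length;
  }
  console.log('received', received, 'bytes');
}
countBytes('https://www.everythingisawesome.com/');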
Now, it's certainly possible that the spec could have defined two ways of accessing the data: one synchronous API meant for smaller amounts of data, and one asynchronous API for larger amounts of data, but this would lead to confusion and duplication.
Besides, Ya Ain't Gonna Need It. Everything that can be expressed using synchronous code can be expressed in asynchronous code; the reverse is not true. Because of this, a single asynchronous API was created that could handle all use cases.
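Assuming a hypothetical /movies endpoint, that single asynchronous API looks the same whether the payload is tiny or enormous:
async function fetchMovies() {
  const response = await fetch('/movies');  // resolves once the headers have arrived
  const movies = await response.json();     // resolves once the body has been read and parsed
  return movies;
}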
Because the content is not transferred until you start reading it. The headers come first.
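You can see this in the shape of the API itself: the Response promise resolves as soon as the headers are available, while the body is only consumed when you ask for it (the URL here is just the one from the question):
fetch('https://www.everythingisawesome.com/')
  .then(response => {
    console.log(response.status);                       // the headers are already available
    console.log(response.headers.get('content-type'));
    return response.json();                             // the body is only read and parsed here
  })
  .then(entity => console.log(entity));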