I learned that under HTTP/1.1, the default maximum number of simultaneous persistent connections per host name (origin?) is 6, at least for Chrome. I am not asking about the exact number, since I know it varies from browser to browser. I am more curious about when the browser opens a new connection for a new request: does it somehow reuse an existing TCP connection, or does it always start a new TCP connection as long as it hasn't reached the limit of concurrent connections?
Let's say we are using HTTP/1.1 with Connection: Keep-Alive, and in the HTML we have
<script src="https://foo/foo1.js"></script>
<script src="https://foo/foo2.js"></script>
<script src="https://foo/foo3.js"></script>
<script src="https://foo/foo4.js"></script>
<script src="https://foo/foo5.js"></script>
<script src="https://foo/foo6.js"></script>
<script src="https://foo/foo7.js"></script>
will each one of these scripts result in a new TCP connection being established, or will the subsequent requests reuse the TCP connection established by the first script tag? And if each of these scripts results in a new TCP connection, given the browser's limit of 6 concurrent connections, does the 7th request have to wait for one of the first 6 requests to finish before its connection can be established?
The above example is about initiating requests from HTML tags. What about API calls made from JavaScript? Let's say in our JavaScript we have
const result1 = apiCall1()
const result2 = apiCall2()
const result3 = apiCall3()
const result4 = apiCall4()
const result5 = apiCall5()
const result6 = apiCall6()
const result7 = apiCall7()
And assume those API calls all hit the same endpoint, api.foo.com/v1/tasks. My questions are, again: will each API call result in a new TCP connection being established, or will the subsequent requests reuse the TCP connection established by the first call? And if each call results in a new TCP connection, given the browser's limit of 6 concurrent connections, does the 7th request have to wait for one of the first 6 requests to finish before its connection can be established?
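For concreteness, here is a rough sketch of what those calls might look like, assuming each apiCallN is just a thin wrapper around fetch against the same api.foo.com origin and that the calls are fired concurrently rather than awaited one by one:

// Hypothetical wrapper: every call is a plain fetch to the same origin.
const apiCall = () => fetch('https://api.foo.com/v1/tasks').then(r => r.json());

// Fired concurrently, so all seven requests can be in flight at once and
// compete for the browser's per-origin connection pool.
const results = await Promise.all(Array.from({ length: 7 }, () => apiCall()));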
My last question is: compared to HTTP/1.1, does HTTP/2 address this problem by allowing many requests to be sent at the same time over a single TCP connection?
Multiplexing: HTTP/1.1 loads resources one after the other, so if one resource cannot be loaded, it blocks all the other resources behind it. In contrast, HTTP/2 is able to use a single TCP connection to send multiple streams of data at once so that no one resource blocks any other resource.
One caveat: with HTTP/2 a server can receive requests in large batches rather than in smaller, more spread-out batches, and because the requests are all sent together, instead of staggered as they were with HTTP/1.1, their start times are closer together, which means they are all more likely to time out at the same time.
It doesn't mean backwards supported - it just means it is not a breaking change, and HTTP/1.1 browsers can still connect and continue to use HTTP/1.1 without any change. HTTP/2 has a number of ways to detect whether it is supported by both browser and server, uses it only if both do, and continues with HTTP/1.1 if they don't.
The primary advantage of HTTP/2 is its improved speed over HTTP/1.1. This is achieved thanks to a combination of elements: HTTP/2 is binary instead of textual, meaning it is more compact, travels faster 'on the wire', and is less susceptible to errors. HTTP/2 is also fully multiplexed.
HTTP stands for Hypertext Transfer Protocol, and it is used in client-server communication: by using HTTP, the user sends a request to the server and the server sends a response back. There have been several stages in the development of HTTP, but we will focus mainly on HTTP/1.1, which was created in 1997, and the newer HTTP/2, which was created in 2015.
Note: HTTP/1.1 does have the concept of pipelining, which also allows multiple requests to be sent off at once. However, the responses still had to be returned in the order they were requested, each in its entirety, so it is nowhere near as good as HTTP/2, even if conceptually it's similar.
For example, let's say you are visiting a website at the domain www.example.com. When you navigate to this URL, the web browser on your computer sends an HTTP request in the form of a text-based message, similar to the one shown below. This request uses the GET method, which asks for data from the host server listed after Host:.
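A minimal request of that form looks something like this:

GET / HTTP/1.1
Host: www.example.com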
Multiplexing is the most advanced feature of the HTTP/2 protocol because it allows web files to be downloaded asynchronously from one server over a single connection. Most modern browsers limit the number of TCP connections to one server, so this reduces the additional round-trip time (RTT), making your website load faster without any optimization, and makes domain sharding unnecessary.
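Domain sharding here means spreading assets across extra host names (the shard names below are made up) purely so that an HTTP/1.1 browser opens more parallel connections; with HTTP/2 multiplexing everything over one connection, that trick no longer buys anything:

<script src="https://static1.foo.com/foo1.js"></script>
<script src="https://static2.foo.com/foo2.js"></script>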
will each one of these scripts result in a new TCP connection being established, or will the subsequent requests reuse the TCP connection established by the first script tag?
Yes, it would download them one by one, opening up more TCP connections to do that, up to the maximum of 6. The 7th request would have to wait for one of those connections to free up before it could be downloaded.
But in reality, the first request may have finished by the time the later TCP connections are opened, so it might not quite reach the limit of 6 for only 6 or 7 requests.
What about API calls made from JavaScript?
Exact same thing. Limit of 6 per origin. Though one thing to note is that certain CORS requests sent without credentials effectively count as another origin (even though it's the same actual origin) and so get another 6 connections.
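As a rough illustration of that last point (the endpoint is the one from the question; the pooling behaviour described in the comments is browser-dependent):

// Credentialed request: shares the normal per-origin pool of 6 connections.
fetch('https://api.foo.com/v1/tasks', { credentials: 'include' });

// Anonymous CORS request: some browsers pool these connections separately,
// effectively giving them their own set of 6.
fetch('https://api.foo.com/v1/tasks', { mode: 'cors', credentials: 'omit' });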
My last question is: compared to HTTP/1.1, does HTTP/2 address this problem by allowing many requests to be sent at the same time over a single TCP connection?
Basically yes. Not quite at the same time due to the way TCP works, but as near as possible. See my answer here: What does multiplexing mean in HTTP/2
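If you want to see which protocol a given browser actually negotiated for these requests, one way (a sketch, not part of the answer above) is the Resource Timing API:

// Logs e.g. 'http/1.1' or 'h2' for every resource the page has loaded.
for (const entry of performance.getEntriesByType('resource')) {
  console.log(entry.name, entry.nextHopProtocol);
}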
The process is simple: if you use keep-alive, the connection is kept open and reused, so a user can make many requests without having to re-open a costly secure connection each time.
There will always be a SYN/ACK handshake to open a connection with the server. Without keep-alive, a new connection is needed for the server to respond to every item your user requested, and each connection is closed once the request is served. You can bypass this a little with caching, which helps your bandwidth and lessens the number of requests that reach the server.
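As a concrete server-side example (a Node.js sketch, assuming that environment since this answer mentions the event loop), keep-alive mostly comes down to how long idle connections are held open:

const http = require('http');

const server = http.createServer((req, res) => {
  res.end('ok');
});

// How long an idle keep-alive connection is held open before Node closes it.
server.keepAliveTimeout = 5000; // milliseconds (Node's default)
server.listen(8080);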
So in a scenario where 100 browsers want to hit your site and each one requests 1.js, 2.js, and so on, the output should come back in order, but this can depend greatly on a lot of things: the language you're coding in server-side, how requests are handled and served, and whether you manage any queues. If you make a request that requires longer processing (will get back to you in the future), other requests can still go ahead as long as you're not blocking the event loop (this comes down to your server).
Establishing a connection to the server involves the TCP handshake (and, for HTTPS, the TLS handshake), and this is engaged for each and every new connection. The cost of TLS can be improved, but the initial request is expensive.