 

HTTP requests optimization: What is the limit?

It is common knowledge now to combine stylesheets and scripts in an effort to reduce HTTP requests. I have two questions:

  1. How expensive are they, really?
  2. When is a request so big that it should be split?

I cannot find the answers to these two questions in any of the online reading I have done, such as Yahoo!'s Best Practices, which states a number of times that HTTP requests are expensive but never cites why or how.

Thanks in advance.

asked Aug 11 '09 by syaz

People also ask

How many HTTP requests is too many?

You should strive to keep the number of HTTP requests under 50; if you can get below 25, you're doing amazingly well. By their nature, HTTP requests are not bad.

How do I limit HTTP requests?

In the limit middleware function, we call the global limiter's Allow() method each time the middleware receives an HTTP request. If there are no tokens left in the bucket, Allow() returns false and we send the user a 429 Too Many Requests response.
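
Not from the answer itself, but here is a minimal sketch of that pattern in Go using the golang.org/x/time/rate token-bucket limiter; the rate, burst, and port values are arbitrary assumptions:

```go
package main

import (
	"net/http"

	"golang.org/x/time/rate"
)

// Global token-bucket limiter: refills at 2 tokens per second and
// allows bursts of up to 4 requests. Both numbers are arbitrary.
var limiter = rate.NewLimiter(2, 4)

// limit rejects a request with 429 once the bucket is empty.
func limit(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !limiter.Allow() {
			http.Error(w, http.StatusText(http.StatusTooManyRequests),
				http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("OK\n"))
	})
	http.ListenAndServe(":4000", limit(mux))
}
```

Note this is a single global limiter; a per-client limiter keyed by IP address is the more common production setup.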

What does too many HTTP requests mean?

The HTTP 429 Too Many Requests response status code indicates that the user has sent too many requests in a given amount of time ("rate limiting"). A Retry-After header may be included with this response, indicating how long to wait before making a new request.
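
On the client side, a minimal sketch of honouring that header might look like the following; the URL (matching the hypothetical server above) and the one-retry policy are assumptions for illustration, and a real client would also accept HTTP-date values for Retry-After:

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// fetchWithRetry retries once when the server answers 429,
// honouring a Retry-After value given in seconds.
func fetchWithRetry(url string) (*http.Response, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	if resp.StatusCode != http.StatusTooManyRequests {
		return resp, nil
	}
	resp.Body.Close()

	delay := time.Second // fallback when Retry-After is missing
	if s := resp.Header.Get("Retry-After"); s != "" {
		if secs, err := strconv.Atoi(s); err == nil {
			delay = time.Duration(secs) * time.Second
		}
	}
	fmt.Printf("rate limited, retrying in %v\n", delay)
	time.Sleep(delay)
	return http.Get(url)
}

func main() {
	// Hypothetical endpoint, e.g. the rate-limited server sketched above.
	resp, err := fetchWithRetry("http://localhost:4000/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```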


2 Answers

An HTTP request requires a TCP/IP connection to be established (think three-way handshake) before the HTTP request itself can be handled.

This involves, at a minimum, the delay of sending the SYN message to the server and getting the SYN/ACK back (the client then sends the ACK to open the socket).

So, say the one-way delay between the client and server is a uniform 50 ms: that results in a 100 ms delay before the client can even send the HTTP request. It is then another 100 ms before it starts getting the actual response back (the client sends the request, then the server replies).
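
Not from the answer itself, but these delays can be observed directly with Go's standard net/http/httptrace hooks. A small sketch; example.com is only a placeholder host, and keep-alives are disabled so a fresh handshake actually occurs:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptrace"
	"time"
)

func main() {
	var connStart, connDone time.Time

	// Hooks fire as the connection is set up and the response arrives.
	trace := &httptrace.ClientTrace{
		ConnectStart: func(network, addr string) { connStart = time.Now() },
		ConnectDone: func(network, addr string, err error) {
			connDone = time.Now()
			fmt.Printf("TCP handshake: %v\n", connDone.Sub(connStart))
		},
		GotFirstResponseByte: func() {
			fmt.Printf("connect to first response byte: %v\n", time.Since(connDone))
		},
	}

	req, err := http.NewRequest("GET", "http://example.com/", nil)
	if err != nil {
		panic(err)
	}
	req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))

	// Disable keep-alives so each request pays for its own handshake.
	client := &http.Client{Transport: &http.Transport{DisableKeepAlives: true}}
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
}
```

On an HTTPS URL the TLS handshake would add further round trips on top of the TCP figures printed here.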

Of course, you also need to take into consideration that a standard web browser limits the number of concurrent HTTP requests it processes at the same time. If your requests have to wait, you don't get that handshake time for free (so to speak), since you need to wait for another connection to finish first. Servers play a role as well, depending on how they serve the requests.
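
Not part of the original answer, but that queueing can be emulated outside a browser: the sketch below caps a Go client at two connections per host (an assumption loosely mirroring older browsers' per-host limits) and fires six concurrent requests at a placeholder host, so the later ones must wait for a slot:

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

func main() {
	// Allow at most 2 connections per host; keep-alives are off so
	// each request also pays its own handshake.
	client := &http.Client{Transport: &http.Transport{
		MaxConnsPerHost:   2,
		DisableKeepAlives: true,
	}}

	var wg sync.WaitGroup
	start := time.Now()
	for i := 0; i < 6; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			resp, err := client.Get("http://example.com/") // placeholder host
			if err != nil {
				fmt.Println("request", n, "failed:", err)
				return
			}
			resp.Body.Close()
			fmt.Printf("request %d done after %v\n", n, time.Since(start))
		}(i)
	}
	wg.Wait()
}
```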

answered Oct 02 '22 by Dan McGrath


  1. Whenever a request is made, it is subjected to the harsh realities of network reliability. Two requests made in rapid succession from the same location might take entirely different routes, so with each request you're adding an element of unpredictability in terms of performance. A single consolidated request can help to mitigate that risk. @Dan McGrath made a sound point about the TCP handshake overhead.
  2. HTTP does not care about request size, as it is an application-layer protocol in the Internet Protocol Suite; that is for TCP/IP to worry about. What should concern the publisher is keeping document/file sizes as small as possible, and small enough that the application stays performant (a small consolidation sketch follows below).
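
Not from the answer itself, but here is a minimal sketch in Go of the consolidation the question describes: concatenating several stylesheets into one bundle so the page makes one request instead of three. The file names are hypothetical.

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// Concatenate several stylesheets into one bundle so the page needs
// a single request instead of one per file.
func main() {
	sources := []string{"reset.css", "layout.css", "theme.css"} // hypothetical files

	out, err := os.Create("bundle.css")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	for _, name := range sources {
		in, err := os.Open(name)
		if err != nil {
			panic(err)
		}
		// A comment marker makes it easier to trace rules back to
		// their original file when debugging the combined bundle.
		fmt.Fprintf(out, "/* --- %s --- */\n", name)
		if _, err := io.Copy(out, in); err != nil {
			panic(err)
		}
		in.Close()
		fmt.Fprintln(out)
	}
}
```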

Hope that makes sense.

answered Oct 02 '22 by karim79