
What's the "average" requests per second for a production web application?

Tags: optimization

People also ask

How many requests per second should a website handle?

With a single CPU core, a web server can handle around 250 concurrent requests, so with 2 CPU cores your server can handle roughly 500 visitors at the same time. Getting the balance right between performance and cost is crucial as your site grows in popularity.

How do you calculate number of requests per second?

In input/output systems, the number of requests per second is bounded by the memory available. To calculate requests per second, divide the available memory by the memory required per instance, then multiply by the inverse of the time taken to complete one instance.
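
As a rough sketch of that formula (every number below is made up purely for illustration):

    # Memory-bound capacity estimate, per the formula above:
    #   instances           = available memory / memory per instance
    #   requests per second = instances * (1 / task time)
    available_memory_mb = 4096    # assumption: 4 GB usable for workers
    memory_per_instance_mb = 64   # assumption: each worker needs 64 MB
    task_time_s = 0.2             # assumption: 200 ms to complete one request

    instances = available_memory_mb / memory_per_instance_mb  # 64 workers
    requests_per_second = instances * (1 / task_time_s)       # 64 * 5 = 320
    print(f"{instances:.0f} instances -> {requests_per_second:.0f} requests/second")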

How many requests can Apache handle per second?

With this number of instantiated workers, Apache can handle almost 160 requests per second without increasing the number of workers. Assuming the number of requests and the CPU time are linearly dependent, this leads to CPU consumption of about 30%.
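
To make the linearity assumption concrete: if CPU time grows linearly with the request rate, utilization is just rate × CPU time per request. The per-request CPU time below is not from the quote; it is back-derived so the numbers line up:

    # utilization = requests_per_second * cpu_seconds_per_request,
    # assuming CPU consumption scales linearly with request rate.
    requests_per_second = 160
    cpu_seconds_per_request = 0.001875  # assumed: ~1.9 ms of CPU per request

    utilization = requests_per_second * cpu_seconds_per_request
    print(f"CPU utilization: {utilization:.0%}")  # -> 30%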


OpenStreetMap seems to get 10-20 requests per second.

Wikipedia seems to get 30,000 to 70,000 requests per second spread over 300 servers (100 to 200 requests per second per machine, most of which are served from caches).

Geograph is getting 7000 images per week (roughly one upload every 86 seconds).
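
(That parenthetical is just the seconds in a week divided by the upload count:)

    # 7,000 uploads/week -> average seconds between uploads
    seconds_per_week = 7 * 24 * 3600   # 604,800
    print(seconds_per_week / 7000)     # ~86.4 seconds per upload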


Not sure anyone is still interested, but this information was posted about Twitter:

The Stats

  • Over 350,000 users. The actual numbers are, as always, very super super top secret.
  • 600 requests per second.
  • Average 200-300 connections per second. Spiking to 800 connections per second.
  • MySQL handled 2,400 requests per second.
  • 180 Rails instances. Uses Mongrel as the "web" server.
  • 1 MySQL Server (one big 8 core box) and 1 slave. Slave is read only for statistics and reporting.
  • 30+ processes for handling odd jobs.
  • 8 Sun X4100s.
  • Process a request in 200 milliseconds in Rails.
  • Average time spent in the database is 50-100 milliseconds.
  • Over 16 GB of memcached.
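
A quick sanity check on those figures, treating each Mongrel-backed Rails instance as serving one request at a time (Rails was single-threaded back then): the theoretical ceiling comfortably covers the reported 600 requests per second.

    # 180 single-threaded instances * (1 request / 0.2 s) = 900 req/s ceiling
    instances = 180
    seconds_per_request = 0.200
    ceiling = instances / seconds_per_request
    print(f"theoretical ceiling: {ceiling:.0f} req/s")  # 900, vs 600 reported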

When I go to the control panel of my webhost, open up phpMyAdmin, and click on "Show MySQL runtime information", I get:

This MySQL server has been running for 53 days, 15 hours, 28 minutes and 53 seconds. It started up on Oct 24, 2008 at 04:03 AM.

Query statistics: Since its startup, 3,444,378,344 queries have been sent to the server.

Total 3,444 M
per hour 2.68 M
per minute 44.59 k
per second 743.13

That's an average of 743 MySQL queries every single second for the past 53 days!

I don't know about you, but to me that's fast! Very fast!!
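
Those phpMyAdmin figures follow directly from the raw totals it reports; a quick check of the arithmetic:

    # 53 days, 15 hours, 28 minutes, 53 seconds of uptime
    uptime_seconds = 53 * 86400 + 15 * 3600 + 28 * 60 + 53  # 4,634,933 s
    total_queries = 3_444_378_344

    print(total_queries / uptime_seconds)       # ~743.13 queries/second
    print(total_queries / uptime_seconds * 60)  # ~44.59 k per minute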


Personally, I like seeing both analyses every time: requests/second and average time/request, and I love seeing the max request time on top of that as well. It is easy to flip between them: if you have 61 requests/second, you can just flip it to 1000 ms / 61 requests.
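
That flip is just inverting the rate (strictly speaking, it gives the average service time of a single serial request pipeline):

    # 61 requests/second  ->  1000 ms / 61 requests
    requests_per_second = 61
    ms_per_request = 1000 / requests_per_second
    print(f"{ms_per_request:.1f} ms/request")  # ~16.4 ms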

To answer your question: we have been doing a huge load test ourselves and found that throughput varies across the Amazon hardware we use (best value was the 32-bit medium-CPU instance when it came down to $$ / event / second); our rates ranged from 29 requests/second/node up to 150 requests/second/node.

Better hardware of course gives better results, but not the best ROI. Anyway, this post was great, as I was looking for some parallels to see whether my numbers were in the ballpark, and I shared mine as well in case someone else is looking. Mine are measured with the system loaded as high as it can go.

NOTE: thanks to the requests/second analysis (not ms/request), we found a major Linux issue that we are trying to resolve, where Linux (we tested servers written in C and Java) freezes all calls into the socket libraries when under too much load, which seems very odd. The full post can be found here: http://ubuntuforums.org/showthread.php?p=11202389

We are still trying to resolve that, as fixing it gives us a huge performance boost: our test goes from 2 minutes 42 seconds down to 1 minute 35 seconds, so we see a 33% performance improvement. Not to mention, the worse the DoS attack is, the longer these pauses are, to the point that all CPUs drop to zero and stop processing. In my opinion, server processing should continue in the face of a DoS, but for some reason it freezes up every once in a while during the DoS, sometimes for up to 30 seconds!!

ADDITION: We found out it was actually a JDK race-condition bug. It was hard to isolate on big clusters, but when we ran 10 setups of 1 server and 1 data node each, we could reproduce it every time and just look at the server/datanode it occurred on. Switching the JDK to an earlier release fixed the issue. We were on jdk1.6.0_26, I believe.