
How do you maximize server performance?

I have been trying to get my head around performance and scalability, and I would like to know what developers/sysadmins are doing to juice their systems. To standardize the answers, it would help if you could take your best shot at responding to any of the following:

  1. Profile - Magazine publication on Joomla; Jobs board on CodeIgniter + OpenId + AJAX
    • Performance - Maximum requests per second per server?
    • Hardware - Server, router, disk, LAN?
    • Software - Lighttpd, Memcache, Varnish, Nginx, Squid, Pound, LVS, eAccelerator, etc.
    • Services - Amazon S3, Akamai, Google compute, etc.
    • Configuration - Static hashing, upstream module, Memcache for x minutes after n requests (see the memcached sketch after this list), disabling logging for image requests, etc.
    • Other - Anything else? (For example, normalized tables can be bad for sites with lots of reads.)
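
To make the "Memcache for x minutes after n requests" idea concrete, here is a minimal read-through cache sketch using the PHP Memcached extension. The key scheme, the n = 10 threshold, the 5-minute TTL, and the localhost server are illustrative assumptions, not a recommendation:

    // Cache a rendered page, but only after it has proven hot.
    $mc = new Memcached();
    $mc->addServer('127.0.0.1', 11211);

    function fetchPage(Memcached $mc, string $url, callable $render)
    {
        $page = $mc->get('page:' . $url);
        if ($page !== false) {
            return $page;                          // cache hit
        }
        // Count requests so we only start caching once the page is hot.
        $hits = $mc->increment('hits:' . $url);
        if ($hits === false) {                     // counter doesn't exist yet
            $mc->set('hits:' . $url, 1);
            $hits = 1;
        }
        $page = $render();
        if ($hits >= 10) {                         // n = 10 requests...
            $mc->set('page:' . $url, $page, 300);  // ...then cache for x = 5 minutes
        }
        return $page;
    }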

Edit: Please reconsider before closing this question, as it is important for web developers to seek out this kind of information. A programmer could tweak the semicolons out of his/her code and still lose out to a bad coder who writes for memcached or manages to put together a CDN via Google App Engine.

asked Jan 11 '09 by aleemb

3 Answers

Our system: I can't tell you much about it, but it's a large SaaS application serving many paying customers.


Every piece of performance/capacity work we do is done very carefully - we can't just try things to see if they work.

Initially there would be some analysis of current performance and capacity, and of whether we could simply continue operating as-is.

If possible, we'd reproduce the performance problems on a non-production system where we could profile the code and make experimental changes. We can't always use the exact same hardware as production (production has a large number of very high spec servers; dev has only a few production-spec dedicated performance test boxes).

If the problem can't be analysed meaningfully in a non-production environment, we'd ship some instrumentation in our production code (after careful testing to ensure the instrumentation doesn't impact the system itself). This instrumentation would ship switched "off" and be turned on selectively to gather enough data.
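
A minimal sketch of what "shipped off" instrumentation can look like in PHP; the environment-variable switch and the names below are assumptions for illustration, not MarkR's actual mechanism:

    // Instrumentation ships disabled; flip INSTRUMENTATION=1 in the
    // environment (a hypothetical switch) to turn it on selectively.
    function instrument(string $label, callable $fn)
    {
        if (getenv('INSTRUMENTATION') !== '1') {
            return $fn();                  // fast path: no overhead when off
        }
        $start  = microtime(true);
        $result = $fn();
        error_log(sprintf('[instr] %s: %.2f ms', $label,
            (microtime(true) - $start) * 1000));
        return $result;
    }

    // Usage: wrap a suspect call site without changing its behaviour.
    $total = instrument('sum_demo', function () {
        return array_sum(range(1, 100000)); // stand-in for real work
    });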

Once we'd got an accurate analysis of a problem, we'd look at possible solutions, and maybe develop prototypes - these could be tested for functional correctness.

We normally go for the least risky option if there are several.

The normal release process would then be followed - lots of testing, code reviews etc.

If relevant, the change might be shipped with a "revert switch" which allowed it to be turned off in production quickly if there was a problem.
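
As a rough illustration, a revert switch can be as simple as branching on a runtime flag; the flag name and the two functions below are hypothetical stand-ins:

    // Hypothetical stand-ins for the old and new implementations.
    function fetchResultsLegacy(array $criteria): array    { return []; }
    function fetchResultsOptimized(array $criteria): array { return []; }

    $criteria = ['status' => 'active'];

    // The revert switch: flip USE_NEW_QUERY_PATH off in production to
    // fall back to the known-good path without a redeploy.
    if (getenv('USE_NEW_QUERY_PATH') === '1') {
        $results = fetchResultsOptimized($criteria); // new, riskier path
    } else {
        $results = fetchResultsLegacy($criteria);    // known-good fallback
    }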

There are many potential performance improvements we've identified, most of which we will not develop further until a problem occurs (unless we're doing an unrelated refactoring of that piece of software anyway).

answered Oct 11 '22 by MarkR

There is no concrete master plan for performance optimisation (such as "always start with software xyz first").

General approach:

  1. Identify (measure!) the most improvable part of the system, judged by the ratio of improvement to invested time
  2. Optimise it
  3. Repeat
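
For step 1, the measuring can start very simply. A rough PHP timing harness, where the candidate names and bodies are placeholders for real code paths:

    // Time candidate hot spots before optimising anything.
    $candidates = [
        'render_template' => function () { /* call the real code here */ },
        'load_articles'   => function () { /* call the real code here */ },
    ];

    foreach ($candidates as $name => $fn) {
        $start = microtime(true);
        for ($i = 0; $i < 100; $i++) {
            $fn(); // repeat to smooth out per-call noise
        }
        // total seconds * 1000 ms / 100 iterations = ms per call
        printf("%s: %.3f ms/call\n", $name, (microtime(true) - $start) * 10);
    }
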
answered Oct 12 '22 by Karsten


I don't have the time to answer your question bullet by bullet. =) But I can recommend a general strategy: separate concerns, and don't couple server resources when there's no immediate need for it. mod_proxy (and its equivalents) is your friend, because it makes it easy to throw hardware at the performance problems that show up. Of course, you don't have to factor the system perfectly from the start (since it's really hard to anticipate where the real bottlenecks will show up). But when you do encounter problems, remember your friend.
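
As a rough sketch of what that looks like with Apache's mod_proxy (plus mod_proxy_balancer), assuming hypothetical backend addresses; adding capacity later is mostly a matter of adding BalancerMember lines:

    # Requires mod_proxy, mod_proxy_http, mod_proxy_balancer and a
    # load-balancing method module (e.g. mod_lbmethod_byrequests).
    <Proxy "balancer://appcluster">
        BalancerMember "http://10.0.0.2:8080"
        BalancerMember "http://10.0.0.3:8080"
    </Proxy>
    ProxyPass        "/" "balancer://appcluster/"
    ProxyPassReverse "/" "balancer://appcluster/"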

answered Oct 11 '22 by PEZ