
Performance analysis strategies

I am assigned to a performance-tuning-debugging-troubleshooting task.

Scenario: a multi-application environment running on several networked machines using databases. OS is Unix, DB is Oracle. Business logic is implemented across applications using synchronous/asynchronous communication. Applications are multi-user with several hundred call center users at peak time. User interfaces are web-based.

The applications are third party; I can get access to the developers and the source code. I only have the production system and a functional test environment - no load test environment.

Problem: bad performance! I need fast results. Management is going crazy.

I have symptom examples like these: user interface actions taking minutes to complete. Searching for a customer usually takes 6 seconds, but an immediate subsequent search with the same parameters may take 6 minutes.

What would be your strategy for finding root causes?

Asked Oct 25 '22 by Bernd


1 Answer

If this is an 11th-hour type scenario, and this is a system you're walking up to without prior knowledge, here's how I'd go about it - specific instructions below are for the unix newb, but the general principles are sound for any system triage:

  1. Create a text file with the name of every single one of your production hosts in it. Let's call it prodhosts
  2. Get your public ssh key into ~/.ssh/authorized_keys on every one of the hosts in prodhosts. If you're not familiar with ssh agents and how to make logins everywhere fast, take 10 minutes and read up on them, or use a script that handles it for you.
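
    A minimal sketch of that setup (assuming ssh-copy-id is available, password logins still work on the hosts, and your key lives at the default ~/.ssh/id_rsa):

    # start an agent and load your key once, so the loop below doesn't prompt per host
    eval "$(ssh-agent -s)"
    ssh-add ~/.ssh/id_rsa
    # push your public key to every host listed in prodhosts
    for i in $(cat prodhosts) ; do ssh-copy-id "$i" ; done
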
  3. Check system load on all servers

    for i in $(cat prodhosts) ; do echo "$i" ; ssh "$i" uptime ; done
    

    High load averages (very generally speaking, more than the number of cores you have) indicate problem servers. Make a note of them - you'll look at them soon.
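
    If you want the loop to do the noticing for you, something like this works (just a rough sketch; it assumes nproc exists on the remote machines and that uptime prints the usual "load average:" wording):

    # flag hosts whose 1-minute load average exceeds their core count
    for i in $(cat prodhosts) ; do
        load=$(ssh "$i" uptime | sed 's/.*load average[s]*: //' | cut -d, -f1)
        cores=$(ssh "$i" nproc)
        echo "$i: load=$load, cores=$cores"
        awk -v l="$load" -v c="$cores" 'BEGIN { if (l + 0 > c + 0) print "  -> possible problem host" }'
    done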

  4. Check for full disks - these are very common

    for i in $(cat prodhosts) ; do echo "$i" ; ssh "$i" df -h ; done
    

    Any host that's at or near 100% disk usage is going to be a problem. Make a note of any problem servers you find in this way.
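
    A quick way to pick those out automatically (assuming your df supports the POSIX -P flag and reports usage in the fifth column):

    # print any filesystem that is 90% full or more, prefixed with its host
    for i in $(cat prodhosts) ; do
        ssh "$i" df -hP | awk -v host="$i" 'NR > 1 && int($5) >= 90 { print host, $6, $5 }'
    done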

  5. Check for swap activity - swapping is the most common cause of bad performance (and it's usually paired with the above indicator of a high load average).

    for i in $(cat prodhosts) ; do echo "$i" ; ssh "$i" free -m ; done
    

    That'll tell you how much memory all of your boxes have, and how much they're each swapping. Here's what a healthy system with around 16GB of RAM might look like:

                 total       used       free     shared    buffers     cached
    Mem:         15884      15766        117          0         61      14928
    -/+ buffers/cache:        776      15107
    Swap:        31743          0      31743
    

    It's likely that your problem boxes will have a high number in the used column for Swap. That's the amount of memory your applications are trying to use that your machine doesn't have.
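
    To pick out the swapping hosts without eyeballing every box (again assuming a Linux-style free, as in the sample above):

    # print any host with swap in use, along with how many MB are swapped out
    for i in $(cat prodhosts) ; do
        ssh "$i" free -m | awk -v host="$i" '/^Swap:/ && $3 > 0 { print host, "swap used:", $3, "MB" }'
    done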

  6. Armed with that information, you should have a better idea of where the bottleneck is in 95% of all systems (the remaining 5% would be slowed down by remote network resources or gremlins). Now you do standard triage. Start at the bottom of the stack - i.e. if you have high load and crappy performance everywhere, start with your database, because it's likely that its problems are cascading out everywhere else (if your DB is humming along fine, obviously look elsewhere first - but always be suspicious of databases when performance is on the line):

    • Database - get a log of all queries that take over, say, 400ms, over as large a sample period as you can afford (ideally these logs already exist; otherwise set them up and let the data collect for an hour or so). Hack together some scripts that normalize the queries and figure out which ones take up the most total time on your system (also be on the lookout for crappy one-off queries that take way too long and slow everything else down) - a crude normalization sketch follows this list. You'll want to analyze those queries with an explain plan and figure out how to get them to hit indexes better, or how to remove them from your system altogether if possible. Ask your DBA for help if you have one, and use an off-the-shelf query log analyzer if you can.
    • Application - look through the logs and watch out for anything crazy. Apps and logging vary wildly, so this is very system-dependent.
    • Operating System (use this on any box) - look at the output of dmesg on your box - does it have any warnings? Look through the logs in /var/log - see anything interesting? Any logs that are bursting at the seams? Those are your problem points.
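
    The query normalization can start out as crude as this (a sketch only: it assumes a hypothetical slow_queries.log with one SQL statement per line and GNU sed for the \b word boundaries, and it ranks query shapes by how often they appear - weighting by logged duration takes a bit more awk):

    # collapse string and numeric literals so identical query shapes group together,
    # then rank the shapes by how often they show up
    sed -e "s/'[^']*'/'?'/g" -e 's/\b[0-9][0-9]*\b/?/g' slow_queries.log \
        | sort | uniq -c | sort -rn | head -20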

After you've done the fast and loose hacking to get the system back to a stable state, sit down and talk to "management" about monitoring, log analysis, and all of the standard tools of the sysadmin trade that should help prevent scenarios like the one you're in from occurring. Read up on Nagios, Munin, rsyslog, etc, etc, or hire someone who can automate your datacenter and its monitoring for you. Also, if the app's third party, talk to them about how they expect you to handle this type of situation - if this is an off-the-shelf product, they should have guidelines for the requirements necessary to run their app successfully. If it's something you hired a random contracting company to build, consider recommending to management that they hire people who know what they're doing.

Answered Nov 17 '22 by bpo