 

How to interpret slow query log information generated by MySQL?

So my understanding of the slow query log is that it logs information about every query whose execution time meets or exceeds the threshold (in seconds) that we have set in the my.cnf file.
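For reference, a minimal my.cnf sketch that enables the slow query log; the file path and threshold here are illustrative assumptions, not values from my actual setup:

```ini
[mysqld]
# Enable the slow query log (MySQL 5.6+ option names)
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log

# Log any query running longer than this many seconds
# (fractional values down to microseconds are allowed)
long_query_time     = 10
```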

Now let's take 3 cases of 3 different SELECT queries (against tables with the InnoDB engine):

QUERY I: Query_time: 32.937667 Lock_time: 0.000081 Rows_sent: 343 Rows_examined: 12714043

QUERY II: Query_time: 12.937667 Lock_time: 0.000081 Rows_sent: 43 Rows_examined: 714043

QUERY III: Query_time: 42.937667 Lock_time: 0.000081 Rows_sent: 18 Rows_examined: 483

To me both QUERY I and QUERY II look like possible cases of a bad query, poor or missing indexing, fragmented table data, etc. (anything else that I might have missed?) that a user may look at to improve query execution time.

But QUERY III I am not able to get my head around: what could really be wrong with the DB that it takes 42 secs to examine just 483 rows and send back 18 of them (with a negligible lock time)? This becomes even more confusing when I see it happening intermittently.
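As a quick sanity check on the numbers above, the slow-log header lines can be parsed mechanically. A small sketch (the regex follows the standard slow-log header format; the derived ratios are just my own rough "wasted work" indicators):

```python
import re

# Matches the per-query header line written by the slow query log
HEADER = re.compile(
    r"Query_time:\s*(?P<qt>[\d.]+)\s+"
    r"Lock_time:\s*(?P<lt>[\d.]+)\s+"
    r"Rows_sent:\s*(?P<sent>\d+)\s+"
    r"Rows_examined:\s*(?P<exam>\d+)"
)

def parse_header(line):
    """Return (query_time, lock_time, rows_sent, rows_examined) or None."""
    m = HEADER.search(line)
    if not m:
        return None
    return (float(m.group("qt")), float(m.group("lt")),
            int(m.group("sent")), int(m.group("exam")))

lines = [
    "# Query_time: 32.937667 Lock_time: 0.000081 Rows_sent: 343 Rows_examined: 12714043",
    "# Query_time: 12.937667 Lock_time: 0.000081 Rows_sent: 43 Rows_examined: 714043",
    "# Query_time: 42.937667 Lock_time: 0.000081 Rows_sent: 18 Rows_examined: 483",
]

for qt, lt, sent, exam in map(parse_header, lines):
    # examined/sent: how many rows were scanned per row returned;
    # qt/exam: average time spent per examined row
    print(f"qt={qt:>9.3f}s  examined/sent={exam / sent:>10.1f}  "
          f"sec/row_examined={qt / exam:.6f}")
```

For QUERY III this works out to roughly 0.089 seconds per examined row, which is exactly what makes it look so anomalous next to the other two.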

So what I really want to ask here is:

  • how should I interpret the lock time info? Does it mean the query had to wait that many secs before it actually started executing? If yes, then in my example did QUERY III actually take 42 secs to examine 483 rows and send back 18 of them?
  • if lock time is negligible but query time is still huge with just a few hundred rows being examined and sent back, where should I start looking for issues?
  • could it be that the query is spending too much time in some background I/O activity, say logging or bin-logging?
  • how badly could the size of a table affect query performance? E.g. can we say MySQL is good enough to handle a table with 200+ million rows?
  • is there any better tool or way to monitor DB activity, especially to figure out the background activity of the DB? In short, to check where that query is spending most of its time.

There might be many other factors affecting such slow queries, so if you feel that you need some more information from my side to be able to help me, then please do let me know.

asked Sep 26 '13 by sactiw

People also ask

What is MySQL slow query log?

The slow query log consists of SQL statements that take more than long_query_time seconds to execute and require at least min_examined_row_limit rows to be examined. The slow query log can be used to find queries that take a long time to execute and are therefore candidates for optimization.

What is the default long_query_time value when working with slow query logs?

The minimum and default values of long_query_time are 0 and 10 seconds, respectively. The value can be specified to a resolution of microseconds.


1 Answer

  • Lock time is the time spent before the query starts executing, i.e., time spent waiting for other threads to give up their locks on the data the current query needs to lock.

  • Query time is the time to execute the query. This may involve waiting for I/O if the rows aren't already in the buffer pool. Repeating the same query for the same data may be faster, after the data is loaded into the buffer pool.

    If your query sorts on disk (a filesort that spills past the sort buffer), it will be slower even if it examines few rows.

    If your I/O system is overtaxed, you can get intermittent slowness. This can also happen with virtualized I/O (for example, cheap AWS instances). Or, if your disks are starting to fail, they may return errors intermittently.

    Monitor iostat and watch the queue length, average wait, and service time. See if there are periods of slowness, or if performance and throughput are more or less consistent.

  • Rows examined does not reflect multiple I/O's needed to fetch a given row. For example, if the row has a lot of big BLOB/TEXT/VARCHAR columns stored on overflow pages. Or if the transaction needs to visit the rollback segment to fetch old versions of some rows, if they have been modified since this transaction began.

    Rows examined also does not tell us how complex the expressions in your query are. You could be calculating Fibonacci sequences in stored functions, or something equally expensive.

    Without seeing the queries and their EXPLAIN report, it's hard to make a generalization to explain the slowness, given only those numbers from the slow query log.

MySQL certainly can store 200 million rows in a table, but at that scale, you do start to get performance problems even when an index can reduce the search to 483 rows examined. That is because the depth of the B-tree index and size of an indexed column directly relate to the number of I/O operations needed to look up those 483 rows. The more I/O's, the longer it takes, and this is not reflected by rows examined. The query time includes I/O time, but does not make clear how much of the query time is due to I/O.
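To make the B-tree point concrete, here is a back-of-the-envelope sketch. The page size matches InnoDB's 16 KiB default, but the per-entry key width (and therefore the fanout) is an assumption for illustration, not a measurement of any real table:

```python
import math

def btree_height(rows, page_size=16 * 1024, entry_size=32):
    """Rough height of a B+tree index over `rows` entries.

    entry_size approximates key bytes plus page-pointer bytes per
    index entry; InnoDB's default page size is 16 KiB.
    """
    fanout = page_size // entry_size          # ~512 entries per page here
    return max(1, math.ceil(math.log(rows, fanout)))

for rows in (10_000, 1_000_000, 200_000_000):
    h = btree_height(rows)
    # Each level is potentially one page read (one I/O) if nothing is cached
    print(f"{rows:>12,} rows -> ~{h} B-tree levels -> up to {h} page reads per lookup")
```

Under these assumptions a 200-million-row index is about four levels deep, so a lookup that ultimately examines a few hundred rows can still cost on the order of a thousand page reads when the cache is cold, and all of that I/O time lands in Query_time without showing up in Rows_examined.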

A few other places to look for more detailed diagnostics are:

  • MySQL Performance Schema

  • MySQL Query Profiler (but note this is deprecated in favor of the Performance Schema)

  • Enhanced verbosity slow query log in Percona Server reports the time a query spent waiting for I/O.

answered Oct 15 '22 by Bill Karwin