optimize mysql count query

Is there a way to optimize this further, or should I just be satisfied that it takes 9 seconds to count 11M rows?

devuser@xcmst > mysql --user=user --password=pass -D marctoxctransformation -e "desc record_updates"                                                                    
+--------------+----------+------+-----+---------+-------+
| Field        | Type     | Null | Key | Default | Extra |
+--------------+----------+------+-----+---------+-------+
| record_id    | int(11)  | YES  | MUL | NULL    |       | 
| date_updated | datetime | YES  | MUL | NULL    |       | 
+--------------+----------+------+-----+---------+-------+
devuser@xcmst > date; mysql --user=user --password=pass -D marctoxctransformation -e "select count(*) from record_updates where date_updated > '2009-10-11 15:33:22' "; date                         
Thu Dec  9 11:13:17 EST 2010
+----------+
| count(*) |
+----------+
| 11772117 | 
+----------+
Thu Dec  9 11:13:26 EST 2010
devuser@xcmst > mysql --user=user --password=pass -D marctoxctransformation -e "explain select count(*) from record_updates where date_updated > '2009-10-11 15:33:22' "      
+----+-------------+----------------+-------+--------------------------------------------------------+--------------------------------------------------------+---------+------+----------+--------------------------+
| id | select_type | table          | type  | possible_keys                                          | key                                                    | key_len | ref  | rows     | Extra                    |
+----+-------------+----------------+-------+--------------------------------------------------------+--------------------------------------------------------+---------+------+----------+--------------------------+
|  1 | SIMPLE      | record_updates | index | idx_marctoxctransformation_record_updates_date_updated | idx_marctoxctransformation_record_updates_date_updated | 9       | NULL | 11772117 | Using where; Using index | 
+----+-------------+----------------+-------+--------------------------------------------------------+--------------------------------------------------------+---------+------+----------+--------------------------+
devuser@xcmst > mysql --user=user --password=pass -D marctoxctransformation -e "show keys from record_updates"
+----------------+------------+--------------------------------------------------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+
| Table          | Non_unique | Key_name                                               | Seq_in_index | Column_name  | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+----------------+------------+--------------------------------------------------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+
| record_updates |          1 | idx_marctoxctransformation_record_updates_date_updated |            1 | date_updated | A         |        2416 |     NULL | NULL   | YES  | BTREE      |         | 
| record_updates |          1 | idx_marctoxctransformation_record_updates_record_id    |            1 | record_id    | A         |    11772117 |     NULL | NULL   | YES  | BTREE      |         | 
+----------------+------------+--------------------------------------------------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+
asked Dec 09 '10 by andersonbd1


2 Answers

If MySQL has to count 11M rows, there really isn't much of a way to speed up a simple count, at least not to get it below one second. You should rethink how you do your count. A few ideas:

  1. Add an auto-increment field to the table. It looks like you don't delete from the table, so you can use simple math to find the record count: select the minimum auto-increment value for the earlier date and the maximum for the later date, then subtract one from the other. For example:

    -- lowest id at the start of the range, highest id at the end of the range
    SELECT min(incr_id) min_id FROM record_updates WHERE date_updated BETWEEN '2009-10-11 15:33:22' AND '2009-10-12 23:59:59';
    SELECT max(incr_id) max_id FROM record_updates WHERE date_updated > DATE_SUB(NOW(), INTERVAL 2 DAY);
    -- record count = max_id - min_id (incr_id is the hypothetical auto-increment column; valid only if rows are never deleted)
    
  2. Create another table summarizing the record count for each day. Then you can query that table for the total record count. There would only be 365 rows per year. If you need more fine-grained times, query the summary table for the full days and the main table for just the partial start and end days, then add the counts together (a sketch follows below).

If the data isn't changing, and it doesn't seem like it is, then a summary table will be easy to maintain and update, and it will significantly speed things up.
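
A minimal sketch of that summary-table idea; the record_updates_daily table and its column names are hypothetical, not part of the original schema:

    -- Hypothetical daily summary table: one row per day
    CREATE TABLE record_updates_daily (
        update_day   DATE PRIMARY KEY,
        record_count INT UNSIGNED NOT NULL
    );

    -- Populate or refresh it from the main table
    INSERT INTO record_updates_daily (update_day, record_count)
    SELECT DATE(date_updated), COUNT(*)
    FROM record_updates
    GROUP BY DATE(date_updated)
    ON DUPLICATE KEY UPDATE record_count = VALUES(record_count);

    -- Count since a timestamp: whole days from the summary table,
    -- plus the partial first day from the main table
    SELECT
      (SELECT COALESCE(SUM(record_count), 0)
         FROM record_updates_daily
        WHERE update_day > '2009-10-11')
      +
      (SELECT COUNT(*)
         FROM record_updates
        WHERE date_updated > '2009-10-11 15:33:22'
          AND date_updated < '2009-10-12 00:00:00') AS total;

The last query only has to scan a single partial day from the big table, so it stays fast no matter how wide the date range gets.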

answered by Brent Baisley


Since > '2009-10-11 15:33:22' matches most of the records, I would suggest counting the complement instead, i.e. < '2009-10-11 15:33:22' (MySQL has to work through far fewer rows), and subtracting that from the total row count:

-- note: TABLE_ROWS is exact for MyISAM but only an estimate for InnoDB
select 
  TABLE_ROWS -
  (select count(*) from record_updates where date_updated < "2009-10-11 15:33:22") 
from information_schema.tables 
where table_schema = "marctoxctransformation" and table_name = "record_updates";

You can combine this with a programming language (such as a bash shell script) to make the calculation a bit smarter, for example by running the execution plan first to work out which comparison touches fewer rows (a rough sketch follows below).
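
A rough sketch of that idea as a bash script. The credentials and cutoff are the placeholders from the question, and reading the estimated row count from the 9th column of the tab-separated EXPLAIN output is an assumption about the client's batch format:

#!/bin/bash
# Count whichever side of the cutoff the optimizer estimates as cheaper.
CUTOFF="2009-10-11 15:33:22"

run_sql() {
    mysql --user=user --password=pass -D marctoxctransformation -N -B -e "$1"
}

# In tab-separated EXPLAIN output, column 9 holds the estimated row count.
rows_above=$(run_sql "explain select count(*) from record_updates where date_updated > '$CUTOFF'" | awk -F'\t' '{print $9}')
rows_below=$(run_sql "explain select count(*) from record_updates where date_updated < '$CUTOFF'" | awk -F'\t' '{print $9}')

if [ "$rows_below" -lt "$rows_above" ]; then
    # Fewer rows below the cutoff: count those and subtract from the table total.
    run_sql "select TABLE_ROWS - (select count(*) from record_updates where date_updated < '$CUTOFF')
             from information_schema.tables
             where table_schema = 'marctoxctransformation' and table_name = 'record_updates'"
else
    run_sql "select count(*) from record_updates where date_updated > '$CUTOFF'"
fi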

From my testing (around 10M records), the normal comparison takes around 3s, while the reversed one cuts that down to around 0.25s.

answered by ajreal