 

Very slow query on mysql table with 35 million rows

Tags:

mysql

I am trying to figure out why a query is so slow on my MySQL database. I've read various content about MySQL performance and various SO questions, but this one remains a riddle to me.

  1. I am using MySQL 5.6.23-log - MySQL Community Server (GPL)
  2. I have a table with roughly 35 million rows.
  3. This table receives roughly 5 inserts per second
  4. The table structure is shown in the EDIT below (the original screenshot is unavailable)

  5. I have indexes on all the columns except for answer_text

The query I'm running is:

SELECT answer_id, COUNT(1) 
FROM answers_onsite a 
WHERE a.screen_id=384 
 AND a.timestamp BETWEEN 1462670000000 AND 1463374800000 
GROUP BY a.answer_id

This query takes roughly 20-30 seconds to return its result set.

Any insights?

EDIT

As requested, the output of SHOW CREATE TABLE:

CREATE TABLE `answers_onsite` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `device_id` bigint(20) unsigned NOT NULL,
  `survey_id` bigint(20) unsigned NOT NULL,
  `answer_set_group` varchar(255) NOT NULL,
  `timestamp` bigint(20) unsigned NOT NULL,
  `screen_id` bigint(20) unsigned NOT NULL,
  `answer_id` bigint(20) unsigned NOT NULL DEFAULT '0',
  `answer_text` text,
  PRIMARY KEY (`id`),
  KEY `device_id` (`device_id`),
  KEY `survey_id` (`survey_id`),
  KEY `answer_set_group` (`answer_set_group`),
  KEY `timestamp` (`timestamp`),
  KEY `screen_id` (`screen_id`),
  KEY `answer_id` (`answer_id`)
) ENGINE=InnoDB AUTO_INCREMENT=35716605 DEFAULT CHARSET=utf8
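For reference, EXPLAIN shows which index the optimizer picks for this query (the notes below describe the expected plan given only single-column indexes; actual output will vary):

```sql
-- Illustrative diagnostic: see which index MySQL chooses.
EXPLAIN
SELECT answer_id, COUNT(1)
FROM answers_onsite a
WHERE a.screen_id = 384
  AND a.timestamp BETWEEN 1462670000000 AND 1463374800000
GROUP BY a.answer_id;
-- With only single-column indexes, MySQL can use at most one of them
-- (e.g. key = screen_id or timestamp), then must fetch and filter the
-- remaining rows; expect "Using where; Using temporary; Using filesort"
-- in the Extra column, which explains the 20-30 second runtime.
```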
Asked by ThaiKov on May 16 '16.


2 Answers

ALTER TABLE answers_onsite ADD KEY complex_index (screen_id, `timestamp`, answer_id);
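Briefly, why this index order: equality column first (screen_id), range column second (timestamp), grouping column last (answer_id), so the entire query can be answered from the index alone. A sketch of how to verify (the Extra values noted are what one would expect, not captured output):

```sql
-- After adding complex_index, re-run EXPLAIN on the original query.
EXPLAIN
SELECT answer_id, COUNT(1)
FROM answers_onsite a
WHERE a.screen_id = 384
  AND a.timestamp BETWEEN 1462670000000 AND 1463374800000
GROUP BY a.answer_id;
-- Expect key = complex_index and "Using index" in Extra: the index is
-- covering (no lookups into the clustered rows are needed).
```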
Answered by Alex on Oct 06 '22.


You can use MySQL partitioning like this:

alter table answers_onsite drop primary key, add primary key (id, `timestamp`);
alter table answers_onsite partition by HASH(id) partitions 500;

Running the above may take a while depending on the size of your table.
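Since the query filters on a timestamp range, a hedged alternative sketch is RANGE partitioning on `timestamp`, which lets MySQL prune partitions outside the BETWEEN window (the partition names and cutoff values below are hypothetical; timestamps are assumed to be epoch milliseconds, as in the question):

```sql
-- Hypothetical sketch: the PK must include the partitioning column,
-- and each BETWEEN query then only scans the matching partitions.
ALTER TABLE answers_onsite DROP PRIMARY KEY, ADD PRIMARY KEY (id, `timestamp`);
ALTER TABLE answers_onsite
  PARTITION BY RANGE (`timestamp`) (
    PARTITION p2015 VALUES LESS THAN (1420070400000),  -- before 2015-01-01 (epoch ms)
    PARTITION p2016 VALUES LESS THAN (1451606400000),  -- before 2016-01-01
    PARTITION pmax  VALUES LESS THAN MAXVALUE
  );
```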

Answered by Gogo on Oct 07 '22.