 

MySQL select on indexed column slows down on large tables

I have two tables: A with 301 columns (the 1st, a1, is an int(11) primary key; the 2nd through 301st are all double(15,11)) and B with 33 columns (the 1st, b1, is an int(11) unique key; the 2nd, b2, is a varchar(100) primary key; ...; the 33rd, b33, is an int(11) with a non-unique (MUL) index).

Both A & B have ~ 13,500,000 records.
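
For reference, the schemas look roughly like this (a simplified sketch: the elided columns are shown as comments, and options beyond the keys described above, including the InnoDB engine, which the answer below assumes, are approximations):

CREATE TABLE A (
  a1   INT(11) NOT NULL PRIMARY KEY,
  a2   DOUBLE(15,11),
  -- ... a3 through a300, all DOUBLE(15,11) ...
  a301 DOUBLE(15,11)
) ENGINE=InnoDB;

CREATE TABLE B (
  b1  INT(11) NOT NULL,
  b2  VARCHAR(100) NOT NULL,
  -- ... b3 through b32 ...
  b33 INT(11),
  PRIMARY KEY (b2),
  UNIQUE KEY (b1),
  KEY (b33)   -- the MUL (non-unique) index
) ENGINE=InnoDB;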

My MySQL query, run once for every value of pos in the set (1, 1000, 2000, ..., 13500000), i.e. stepping through b33 in batches of 1000:

select A.*, b2, b5, b7, b8, b10, b13, b33 from A join B on a1=b1 where b33 >= pos and b33 < pos+1000;
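
(For context, here is a minimal sketch of that loop as a MySQL stored procedure; the name scan_batches is made up, and in my actual code the batches are issued one at a time from the application:)

DELIMITER //
CREATE PROCEDURE scan_batches()
BEGIN
  DECLARE pos INT DEFAULT 1;
  WHILE pos <= 13500000 DO
    SELECT A.*, b2, b5, b7, b8, b10, b13, b33
      FROM A JOIN B ON a1 = b1
     WHERE b33 >= pos AND b33 < pos + 1000;
    -- first batch starts at pos = 1, then continue in steps of 1000
    SET pos = IF(pos = 1, 1000, pos + 1000);
  END WHILE;
END //
DELIMITER ;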

The query takes 1-5 seconds for values of b33 <= 600,000. After that it starts taking 20-30 seconds, and once b33 >= 8,000,000 it takes 60-70 seconds. I can't understand why the slowdown is happening: b33 is indexed, and the join is on a column that is the primary key in one table and a unique key in the other. Is there a workaround for this? It is really hampering the speed of the code, and I will have to split tables A and B into several smaller ones if nothing else works. I really hope I don't have to do that! Please help!

EDIT: Here is the output of EXPLAIN:

************* 1. row *************
id: 1
select_type: SIMPLE
table: B
type: range
possible_keys: b1,b33
key: b33
key_len: 4
ref: NULL
rows: 981
Extra: Using where
************* 2. row *************
id: 1
select_type: SIMPLE
table: A
type: eq_ref
possible_keys: PRIMARY
key: PRIMARY
key_len: 4
ref: DBName.B.b1
rows: 1
Extra:
2 rows in set (0.00 sec)

asked Nov 14 '22 by ayesha129


1 Answer

Since your database has several million records, are you doing anything to keep your DB in good health?

Running the following command nightly might help with general responsiveness if your data changes frequently (lots of inserts, maybe?):

mysqlcheck --check --analyze --auto-repair --all-databases --silent

Though I would suggest reading up a bit on mysqlcheck before running the command, just so you know what it's doing.
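
If you'd rather target just these two tables from the mysql client instead of sweeping every database, the rough per-table SQL equivalents are:

CHECK TABLE A, B;
ANALYZE TABLE A, B;   -- refreshes the index statistics the optimizer relies on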

You should also look at optimizing your InnoDB configuration, especially innodb_buffer_pool_size (the more memory you can give it, the better). I was experiencing a similar slowness on a date-based field (which, of course, we immediately indexed) in a similarly sized table, and increasing the buffer pool size from the default 8 megabytes to several gigabytes made a very noticeable difference.
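
To check what you're currently running with, and (on MySQL 5.7.5+, where the variable became dynamic) resize it without a restart; the 4 GB figure below is only an example, size it to your available RAM:

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SET GLOBAL innodb_buffer_pool_size = 4294967296;   -- 4 GB; on older versions, set it in my.cnf and restart mysqld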

If you're deleting many rows from any table involved in the join, you might consider running OPTIMIZE TABLE as well.
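
For example (note that on InnoDB, OPTIMIZE TABLE is mapped to a full table rebuild plus an ANALYZE, so schedule it in a quiet window for tables this size):

OPTIMIZE TABLE A, B;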

answered Dec 19 '22 by Dave