Have a huge MySQL table with around 300,000 records and want to page through the records in PHP (not the point here, though) with a query in this manner:
SELECT * FROM `table` LIMIT 250000, 100
It gets majorly slow for the latter part of the records, especially near the end of the table (when the LIMIT offset is very large). My guess is MySQL has to count all the way up to exactly 250000 before scooping up the results for me?
So how do I work around this, or is there any other approach to paging that could be much faster? Thanks!
Some applications fetch the entire dataset from the database server and then paginate on the application/web server. If you perform the pagination on the database server instead, you can get a real performance improvement.
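For instance (a sketch only; the page size of 100 mirrors the question, and the id ORDER BY column is an assumption), you let the server do the slicing instead of pulling all 300,000 rows into PHP:
-- page 1: fetch only the 100 rows needed, sorted on an indexed column
SELECT * FROM `table` ORDER BY id LIMIT 0, 100;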
INSERT INTO myTable (col1, col2) VALUES ('a1','b1'), ('a2','b2'), ('a3','b3');
In the above example, 3 rows are inserted with one statement. Typically, for speed, it is best to insert around 500 to 1000 rows at a time (not 3). How far you can push that depends on the resulting statement size, which in turn depends on your schema.
Millions of rows are fine, tens of millions of rows are fine - provided you've got an even remotely decent server, i.e. a few GBs of RAM and plenty of disk space. You will need to learn about indexes for fast retrieval, but in terms of MySQL being able to handle it, no problem.
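A minimal sketch of adding such an index (the table and column names here are placeholders for whatever your queries actually filter or sort on):
-- index the column your WHERE or ORDER BY uses for fast retrieval
ALTER TABLE myTable ADD INDEX idx_col1 (col1);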
Make sure you're using an index, otherwise MySQL is doing a full table scan. You can look at the execution plan to verify this, or force the issue by using an ORDER BY clause (on an indexed column).
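For example (assuming an indexed id column), prefix the query with EXPLAIN and check the output: the key column should name an index, and type should not be ALL, which would indicate a full table scan:
-- verify the plan uses the index instead of scanning the whole table
EXPLAIN SELECT * FROM `table` ORDER BY id LIMIT 250000, 100;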
Your table isn't that large at 300k rows. There are performance issues with getting near the end of the table, however. The only real solution for this is to fake the LIMIT clause. Have an auto-increment field that numbers the rows from 1 to 300,000 and then do:
SELECT *
FROM mytable
WHERE field BETWEEN 250001 AND 250100
or similar. That might be problematic or impossible if you're frequently deleting rows, but I tend to find that older data changes less, so you could somewhat optimize by using LIMIT for the first 100,000 rows and the surrogate paging column beyond that.
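If the table doesn't already have such a field, something like this could add it (a sketch; rownum is a made-up column name, and since MySQL allows only one AUTO_INCREMENT column per table, this only works if the table has none yet):
-- add a surrogate paging column; an AUTO_INCREMENT column must be indexed,
-- and MySQL backfills 1..N for the existing rows
ALTER TABLE mytable ADD COLUMN rownum INT NOT NULL AUTO_INCREMENT, ADD UNIQUE KEY (rownum);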
You are correct: MySQL has to scan 250000 useless rows before reading the ones you want. There is really no workaround for this save splitting a table into multiple ones or having a hack such as:
SELECT * FROM table WHERE id BETWEEN 250000 AND 250000 + 100 - 1
or
SELECT * FROM table WHERE id > 250000 ORDER BY id ASC LIMIT 100
But this still doesn't accurately emulate the behavior of the LIMIT clause on complex queries; it's a trade-off between speed and functionality.
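In practice, the second form works best as keyset paging: instead of computing the start id from a page number, seek past the last row the user actually saw, which also tolerates gaps left by deleted rows (a sketch; :last_id is a placeholder your application binds to the id of the final row on the previous page):
-- first page
SELECT * FROM `table` ORDER BY id ASC LIMIT 100;
-- every following page: seek past the last id already shown
SELECT * FROM `table` WHERE id > :last_id ORDER BY id ASC LIMIT 100;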