
MySQL Query with LARGE number of records gets Killed

I run the following query from my shell:

    mysql -h my-host.net -u myuser -p -e "SELECT component_id, parent_component_id FROM myschema.components comp INNER JOIN my_second_schema.component_parents related_comp ON comp.id = related_comp.component_id ORDER BY component_id;" > /tmp/IT_component_parents.txt

The query runs for a LONG time and then gets KILLED.

However, if I add LIMIT 1000, the query runs to completion and the output is written to the file.

I investigated further and found (using COUNT(*)) that the total number of records that would be returned is 239,553,163.

Some information about my server:

MySQL 5.5.27

    +----------------------------+----------+
    | Variable_name              | Value    |
    +----------------------------+----------+
    | connect_timeout            | 10       |
    | delayed_insert_timeout     | 300      |
    | innodb_lock_wait_timeout   | 50       |
    | innodb_rollback_on_timeout | OFF      |
    | interactive_timeout        | 28800    |
    | lock_wait_timeout          | 31536000 |
    | net_read_timeout           | 30       |
    | net_write_timeout          | 60       |
    | slave_net_timeout          | 3600     |
    | wait_timeout               | 28800    |
    +----------------------------+----------+

Here's the STATE of the query as I monitored it:

    copying to tmp table on disk
    sorting results
    sending data
    writing to net
    sending data
    writing to net
    sending data
    writing to net
    sending data ...
    KILLED
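
(For reference, the STATE values above are what SHOW FULL PROCESSLIST reports; the question does not say exactly how they were captured, but on MySQL 5.5 either of these would show them from a second session:)

    -- Watch the running query's STATE from another session:
    SHOW FULL PROCESSLIST;
    -- Or query it, so the output can be filtered or scripted:
    SELECT id, state, time, info FROM INFORMATION_SCHEMA.PROCESSLIST;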

Any guesses what's wrong here?

asked Aug 07 '13 by Manmohan Bishnoi


2 Answers

The mysql client probably runs out of memory.

Use the --quick option so that results are not buffered in memory but printed row by row as they are received.
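
For example, the original command with --quick added (host, user, query, and output path are carried over from the question):

    mysql --quick -h my-host.net -u myuser -p -e "SELECT component_id, parent_component_id FROM myschema.components comp INNER JOIN my_second_schema.component_parents related_comp ON comp.id = related_comp.component_id ORDER BY component_id;" > /tmp/IT_component_parents.txt

With --quick the client prints each row as it arrives instead of caching the whole result set, so its memory use stays flat regardless of how many rows come back.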

answered Sep 27 '22 by JT.


What is wrong is that you are returning 239,553,163 rows of data! Don't be surprised if it takes a lot of time to process. Actually, the longest part might very well be sending the result set back to your client: at even ~20 bytes per row, that is roughly 5 GB to push over the network.

Reduce the result set (do you really need all these rows?). Or try to output the data in smaller batches:

    mysql -h my-host.net -u myuser -p -e "SELECT ... LIMIT 0, 10000" >> dump.txt
    mysql -h my-host.net -u myuser -p -e "SELECT ... LIMIT 10000, 10000" >> dump.txt
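
A minimal sketch of scripting those batches, assuming the same host/user and the row count from the question (note that -p would prompt for a password on every iteration, so credentials are better kept in an option file such as ~/.my.cnf):

    #!/bin/bash
    # Sketch only: export the result set 10,000 rows at a time.
    # QUERY is the SELECT from the question, without the LIMIT clause.
    BATCH=10000
    TOTAL=239553163   # from the COUNT(*) mentioned in the question
    QUERY="SELECT component_id, parent_component_id FROM myschema.components comp INNER JOIN my_second_schema.component_parents related_comp ON comp.id = related_comp.component_id ORDER BY component_id"

    for ((offset = 0; offset < TOTAL; offset += BATCH)); do
        # LIMIT is "offset, row_count"; --quick keeps client memory flat.
        mysql --quick -h my-host.net -u myuser \
            -e "$QUERY LIMIT $offset, $BATCH;" >> dump.txt
    done

Keep in mind that OFFSET-based paging re-scans the skipped rows on every batch, so later batches get progressively slower; for a result this large, keyset pagination (WHERE component_id > last-seen-value) would scale better.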
answered Sep 27 '22 by RandomSeed