
"The total number of locks exceeds the lock table size" Deleting 267 Records

I'm trying to delete 267 records out of about 40 million. The query looks like:

delete from pricedata
where
pricedate > '20120413'

pricedate is a char(8) field.

I know about adjusting innodb_buffer_pool_size, but if I can do

select * from pricedata
where
pricedate > '20120413'

and get 267 records (and that's all there are), no errors, why does it choke on the delete?

And if adjusting innodb_buffer_pool_size doesn't work, what should I do?

asked Apr 20 '12 by davej

People also ask

Does SQL DELETE lock table?

By default, a DELETE statement always acquires an exclusive (X) lock on the table it modifies and holds that lock until the transaction completes.

How do I stop a MySQL table from locking?

One option to prevent table locks with a MySQL database is to use AUTOCOMMIT at the database level. This prevents table locks from being held unintentionally during long-running work, since every statement is committed as soon as it executes, without additional commit commands.
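A minimal sketch of checking and enabling it in a MySQL session (autocommit is a standard server variable):

    -- 1 means every statement commits as soon as it finishes
    SELECT @@autocommit;

    -- Enable it for the current session
    SET autocommit = 1;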

How do I lock a table in MySQL?

LOCK TABLES works as follows: Sort all tables to be locked in an internally defined order (from the user standpoint the order is undefined). If a table is locked with a read and a write lock, put the write lock before the read lock. Lock one table at a time until the thread gets all locks.
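A minimal example of the explicit form (the second table name is illustrative):

    -- Both locks are requested in one statement, taken in the server's internal order
    LOCK TABLES pricedata WRITE, otherdata READ;
    -- ... statements that touch only the locked tables ...
    UNLOCK TABLES;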

What is row level locking in MySQL?

Row-level locking systems can lock entire tables if the WHERE clause of a statement cannot use an index. For example, UPDATES that cannot use an index lock the entire table. Row-level locking systems can lock entire tables if a high number of single-row locks would be less efficient than a single table-level lock.
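Before running a large UPDATE or DELETE, you can check whether its WHERE clause can use an index; a quick sketch against the question's table:

    -- If the key column of the plan is NULL, no index is used and the
    -- statement may lock every row it examines, not just the matches
    EXPLAIN SELECT * FROM pricedata WHERE pricedate > '20120413';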


3 Answers

What worked: changing innodb_buffer_pool_size to 256M (see comments under Quassnoi's original comment).
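For reference, InnoDB allocates its lock table inside the buffer pool, which is why enlarging the pool fixes this error. On the MySQL versions current at the time the setting had to go in the [mysqld] section of my.cnf followed by a server restart (since MySQL 5.7.5 it can also be changed at runtime with SET GLOBAL):

    [mysqld]
    # A larger buffer pool leaves room for InnoDB's lock table
    innodb_buffer_pool_size = 256M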

answered by davej


(A late answer, but always good to have when people find this issue through Google.)

A solution that does not require altering innodb_buffer_pool_size or creating an index is to limit the number of rows deleted per statement.

So, in your case: DELETE FROM pricedata WHERE pricedate > '20120413' LIMIT 100; for example. This removes 100 rows and leaves 167 behind, so you can run the same query again to delete another 100. The last 67 are tricky: when the number of matching rows left is less than the given limit, you will again end up with the error about the number of locks, probably because the server keeps searching for more matching rows to fill the limit of 100. In this case, use LIMIT 67 to delete the last part. (Of course you could also use LIMIT 267 right from the beginning.)
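In plain SQL the pattern is simply to rerun the statement until the matching rows are gone:

    -- Each execution deletes at most 100 matching rows
    DELETE FROM pricedata WHERE pricedate > '20120413' LIMIT 100;
    -- Per the note above, size the final batch to the exact remainder
    DELETE FROM pricedata WHERE pricedate > '20120413' LIMIT 67;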

And for those who like to script, here is an example I used in a bash script to clean up old data (the connection variables at the top are placeholders to adjust):

   # Connection settings -- placeholders, adjust for your environment
   MYSQL=mysql
   MYSQL_USER=dbuser
   MYSQL_PWD=dbpassword
   DB=mydatabase
   TABLE=pricedata
   LOGGER="logger -t cleanup"   # or use echo for console output

   # Count the number of rows left to be deleted
   QUERY="select count(*) from ${TABLE} where pricedate > '20120413';"
   AMOUNT=$(${MYSQL} -u ${MYSQL_USER} -p${MYSQL_PWD} -e "${QUERY}" ${DB} | tail -1)
   ERROR=0
   while [ ${AMOUNT} -gt 0 -a ${ERROR} -eq 0 ]
   do
      ${LOGGER} "   ${AMOUNT} rows left to delete"
      # Delete at most 1000 rows per statement; the final batch uses the
      # exact remaining count to avoid re-triggering the lock table error
      if [ ${AMOUNT} -lt 1000 ]
      then
         LIMIT=${AMOUNT}
      else
         LIMIT=1000
      fi
      QUERY="delete low_priority from ${TABLE} where pricedate > '20120413' limit ${LIMIT};"
      ${MYSQL} -u ${MYSQL_USER} -p${MYSQL_PWD} -e "${QUERY}" ${DB}
      STATUS=$?
      if [ ${STATUS} -ne 0 ]
      then
         ${LOGGER} "Cleanup failed for ${TABLE}"
         ERROR=1
      fi
      # Recount before the next iteration
      QUERY="select count(*) from ${TABLE} where pricedate > '20120413';"
      AMOUNT=$(${MYSQL} -u ${MYSQL_USER} -p${MYSQL_PWD} -e "${QUERY}" ${DB} | tail -1)
   done
answered by user3472174


It seems that you don't have an index on pricedate (or MySQL does not use this index for some reason).

With REPEATABLE READ (the default transaction isolation level), InnoDB places shared locks on the records read and filtered out by the query and it seems you don't have enough space for 40M locks.

To work around this problem, use any of these solutions (a combined SQL sketch follows the list):

  1. Create the index on pricedate if it's not there (may take time)

  2. Break your query into smaller chunks:

    DELETE
    FROM    pricedata
    WHERE   pricedate > '20120413'
            AND id BETWEEN 1 AND 1000000;

    DELETE
    FROM    pricedata
    WHERE   pricedate > '20120413'
            AND id BETWEEN 1000001 AND 2000000;
    

    etc. (change the id ranges as needed). Note that each statement should be run in its own transaction (don't forget to commit after each statement if AUTOCOMMIT is off).

  3. Run the DELETE query with READ COMMITTED transaction isolation level. It will make InnoDB lift locks from the records as soon as they are read. This will not work if you are using binary log in statement mode and don't allow binlog-unsafe queries (this is the default setting).
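Put together as SQL (the chunked variant assumes, as above, an integer primary key named id):

    -- Option 1: an index lets InnoDB lock only the matching rows
    CREATE INDEX idx_pricedate ON pricedata (pricedate);

    -- Option 2: one committed transaction per id range
    DELETE FROM pricedata WHERE pricedate > '20120413' AND id BETWEEN 1 AND 1000000;
    COMMIT;

    -- Option 3: READ COMMITTED releases locks on non-matching rows as they are read
    SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
    DELETE FROM pricedata WHERE pricedate > '20120413';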

answered by Quassnoi