
Why is MySQL InnoDB insert so slow?

I am using large random numbers as keys (coming in from another system). Inserts and updates on fairly-small (as in a few million rows) tables are taking much longer than I think is reasonable.

I have distilled a very simple test to illustrate. In the test table I've tried to make it as simple as possible; my real code does not have such a simple layout and has relations and additional indices and such. However, a simpler setup shows equivalent performance.

Here are the results:

creating the MyISAM table took 0.000 seconds
creating 1024000 rows of test data took 1.243 seconds
inserting the test data took 6.335 seconds
selecting 1023742 rows of test data took 1.435 seconds
fetching 1023742 batches of test data took 0.037 seconds
dropping the table took 0.089 seconds

creating the InnoDB table took 0.276 seconds
creating 1024000 rows of test data took 1.165 seconds
inserting the test data took 3433.268 seconds
selecting 1023748 rows of test data took 4.220 seconds
fetching 1023748 batches of test data took 0.037 seconds
dropping the table took 0.288 seconds

Inserting 1M rows into MyISAM takes 6 seconds; into InnoDB takes 3433 seconds!

What am I doing wrong? What is misconfigured? (MySQL is a normal Ubuntu installation with defaults)

Here's the test code:

import sys, time, random
import MySQLdb as db

# usage: python script db_username db_password database_name

db = db.connect(host="127.0.0.1", port=3306, user=sys.argv[1], passwd=sys.argv[2], db=sys.argv[3]).cursor()

def test(engine):

    start = time.time()  # fine for this purpose
    db.execute("""CREATE TEMPORARY TABLE Testing123 (
        k INTEGER PRIMARY KEY NOT NULL,
        v VARCHAR(255) NOT NULL
    ) ENGINE=%s;""" % engine)
    duration = time.time() - start
    print "creating the %s table took %0.3f seconds" % (engine, duration)

    start = time.time()
    # 1 million rows in 100 chunks of 10K
    data = [[(str(random.getrandbits(48)) if a & 1 else int(random.getrandbits(31)))
             for a in xrange(10 * 1024 * 2)] for b in xrange(100)]
    duration = time.time() - start
    print "creating %d rows of test data took %0.3f seconds" % (sum(len(rows) / 2 for rows in data), duration)

    sql = "REPLACE INTO Testing123 (k,v) VALUES %s;" % ("(%s,%s)," * (10 * 1024))[:-1]
    start = time.time()
    for rows in data:
        db.execute(sql, rows)
    duration = time.time() - start
    print "inserting the test data took %0.3f seconds" % duration

    # execute the query
    start = time.time()
    query = db.execute("SELECT k,v FROM Testing123;")
    duration = time.time() - start
    print "selecting %d rows of test data took %0.3f seconds" % (query, duration)

    # get the rows in chunks of 10K
    rows = 0
    start = time.time()
    while query:
        batch = min(query, 10 * 1024)
        query -= batch
        rows += len(db.fetchmany(batch))
    duration = time.time() - start
    print "fetching %d batches of test data took %0.3f seconds" % (rows, duration)

    # drop the table
    start = time.time()
    db.execute("DROP TABLE Testing123;")
    duration = time.time() - start
    print "dropping the table took %0.3f seconds" % duration

test("MyISAM")
test("InnoDB")
asked Mar 22 '12 by Will


People also ask

Why is insert so slow MySQL?

Remove existing indexes. Inserting data into a MySQL table slows down as you add more and more indexes. Therefore, if you're loading data into a new table, it's best to load it into a table without any indexes, and only create the indexes once the data has been loaded.

How do you make MySQL insert faster?

To optimize insert speed, combine many small operations into a single large operation. Ideally, you make a single connection, send the data for many new rows at once, and delay all index updates and consistency checking until the very end.
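The question's test script already does this with 10K-row REPLACE statements. As a general sketch, a multi-row statement can be built like this (the helper name is mine, not from any library):

```python
def build_bulk_insert(table, columns, row_count):
    """Build a single multi-row INSERT using DB-API 'format'
    (%s) placeholders, suitable for cursor.execute() with a
    flat parameter list of row_count * len(columns) values."""
    placeholders = "(" + ",".join(["%s"] * len(columns)) + ")"
    values = ",".join([placeholders] * row_count)
    return "INSERT INTO %s (%s) VALUES %s;" % (table, ",".join(columns), values)

sql = build_bulk_insert("Testing123", ["k", "v"], 3)
# One round-trip now inserts three rows instead of three round-trips of one row each.
```

One large statement amortizes network round-trips, parsing, and (on InnoDB with autocommit) per-statement commit overhead across many rows.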

How do I speed up InnoDB?

If you run maintenance operations on InnoDB tables that contain FULLTEXT indexes, consider turning the innodb_optimize_fulltext_only variable to ON – after this variable is enabled, the OPTIMIZE TABLE query should run faster because it will skip the reorganization of data in the table.

Why is InnoDB slower than MyISAM?

The InnoDB buffer pool caches both data and index pages. MyISAM's key cache holds only index pages; it is not designed to cache data at all, and instead relies on the operating system's file cache for data pages.
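Because InnoDB depends on its buffer pool, a stock install (where the pool is small) is a common cause of slow InnoDB workloads. A my.cnf fragment along these lines can help; the values below are illustrative, not from the question, and should be tuned to your RAM and durability needs:

```ini
# illustrative InnoDB tuning -- adjust to your hardware
[mysqld]
# Cache for data and index pages; often sized to a large share of available RAM
innodb_buffer_pool_size = 1G
# 1 = flush the redo log to disk at every commit (fully durable, slowest)
# 2 = write at commit but flush ~once per second (much faster bulk loads,
#     up to ~1 second of transactions at risk on an OS crash)
innodb_flush_log_at_trx_commit = 2
```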


2 Answers

InnoDB has transaction support. Since you're not using explicit transactions, InnoDB has to commit after each statement, which "performs a log flush to disk for every insert".

Execute this command before your loop:

START TRANSACTION 

and this after you loop

COMMIT 
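Applied to the question's script, that means bracketing the insert loop with those two statements. A minimal sketch (using a stand-in cursor object here so it runs without a MySQL server; with a real connection you'd pass the MySQLdb cursor instead):

```python
def bulk_insert(cursor, sql, batches):
    """Run all batch inserts inside one explicit transaction, so
    InnoDB flushes the redo log once at COMMIT instead of once per
    statement."""
    cursor.execute("START TRANSACTION")
    for rows in batches:
        cursor.execute(sql, rows)
    cursor.execute("COMMIT")

# Stand-in cursor that just records what was executed (no real MySQL needed).
class FakeCursor:
    def __init__(self):
        self.log = []
    def execute(self, sql, params=None):
        self.log.append(sql)

cur = FakeCursor()
bulk_insert(cur, "REPLACE INTO Testing123 (k,v) VALUES (%s,%s);", [[1, "a"], [2, "b"]])
# cur.log is now ["START TRANSACTION", <insert>, <insert>, "COMMIT"]
```

With a real MySQLdb connection you can equivalently call `connection.commit()` after the loop, since MySQLdb disables autocommit on connect.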
answered Sep 21 '22 by flo


InnoDB doesn't cope well with 'random' primary keys, because rows are stored in a clustered index ordered by that key, so each random key lands on a random page. Try a sequential key or auto-increment, and I believe you'll see better performance. Your 'real' key field could still be indexed, but for a bulk insert you might be better off dropping and recreating that index in one hit after the insert is complete. I'd be interested to see your benchmarks for that!
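If the random keys can't be changed, a cheap mitigation is to sort each batch by key before inserting, so every multi-row INSERT walks the clustered index in ascending order instead of jumping to a random page per row. A sketch (the helper name is mine):

```python
def key_ordered_batches(pairs, batch_size):
    """Yield (k, v) rows in batches, globally sorted by key, so each
    multi-row INSERT touches InnoDB's clustered index in ascending
    key order rather than at a random page per row."""
    pairs = sorted(pairs)  # sort by key (first tuple element)
    for i in range(0, len(pairs), batch_size):
        yield pairs[i:i + batch_size]

import random
rows = [(random.getrandbits(31), "v") for _ in range(10)]
for batch in key_ordered_batches(rows, 4):
    keys = [k for k, _ in batch]
    assert keys == sorted(keys)  # each batch arrives in key order
```

This doesn't remove the page-split cost of random keys entirely, but it turns per-row random page access into mostly sequential access within each batch.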

Some related questions

  • Slow INSERT into InnoDB table with random PRIMARY KEY column's value
  • Why do MySQL InnoDB inserts / updates on large tables get very slow when there are a few indexes?
  • InnoDB inserts very slow and slowing down
answered Sep 20 '22 by Paul Dixon