 

MySQL: How many UPDATES per second can an average box support?

We've got a constant stream of simple updates to a single MySQL table (storing user activity information). Let's say we group these into batch updates each second.

I want a ballpark idea of when MySQL on a typical 4-core, 8 GB box will start having trouble keeping up with the updates coming in each second. E.g., how many rows of updates can I apply in each one-second batch?

This is a thought exercise to decide whether I should start with MySQL in the early days of our application's release (to simplify development), or whether MySQL is likely to bomb so soon as to make it not worth even venturing down that path.

asked Oct 11 '12 by David Parks

2 Answers

The only way you can get a decent figure is through benchmarking your specific use case. There are just too many variables and there is no way around that.

It shouldn't take too long either: knock up a bash script or a small demo app, hammer the database with JMeter, and that will give you a good idea.

I used JMeter when benchmarking a similar use case. The difference was that I was measuring write throughput in INSERTs. The most useful thing that came out while I was experimenting was the innodb_flush_log_at_trx_commit parameter. If you are using InnoDB and don't need ACID compliance for your use case, try changing it to 0. This makes a huge difference to INSERT throughput and will likely do the same in your UPDATE use case. Note, though, that with this setting changes only get flushed to disk once per second, so if your server loses power you could lose a second's worth of data.
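
For reference, a minimal sketch of how to flip that setting. The dynamic form takes effect immediately but reverts on restart; persisting it means editing my.cnf:

-- Requires admin privileges; reverts when the server restarts.
SET GLOBAL innodb_flush_log_at_trx_commit = 0;

-- Verify the current value:
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';

-- To persist it across restarts, set it under [mysqld] in my.cnf:
--   innodb_flush_log_at_trx_commit = 0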

On my quad-core, 8 GB machine, for my use case:

  • innodb_flush_log_at_trx_commit=1 resulted in 80 INSERTS per second
  • innodb_flush_log_at_trx_commit=0 resulted in 2000 INSERTS per second

These figures will probably bear no relevance to your use case - which is why you need to benchmark it yourself.

answered Sep 24 '22 by theon


A lot of it depends on the quality of the code you use to push the data to the DB.

If you write your batch so that each INSERT request carries a single value, i.e.:

INSERT INTO table (field) VALUES (value_1);
INSERT INTO table (field) VALUES (value_2);
...
INSERT INTO table (field) VALUES (value_n);

your performance will crash and burn.

If you insert multiple values using a single INSERT, i.e.:

INSERT INTO table (field) VALUES (value_1), (value_2), ..., (value_n);

you'll find that you can easily insert many records per second.
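
Since the question is about UPDATEs, it's worth noting the analogous batched pattern on the update side: MySQL's INSERT ... ON DUPLICATE KEY UPDATE lets a single statement update many rows at once. A minimal sketch, assuming a hypothetical user_activity table with a PRIMARY KEY (or UNIQUE index) on user_id:

-- One statement upserts three rows; existing rows are updated in place.
INSERT INTO user_activity (user_id, last_seen, event_count)
VALUES (1, NOW(), 1),
       (2, NOW(), 1),
       (3, NOW(), 1)
ON DUPLICATE KEY UPDATE
    last_seen = VALUES(last_seen),
    event_count = event_count + VALUES(event_count);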

As an example, I wrote a quick app that needed to add the details of a request for an LDAP account to a holding DB. Inserting one field at a time (i.e., LDAP_field, LDAP_value), the whole script took tens of seconds to execute. When I concatenated the values into a single INSERT request, the script's execution time went down to about 2 seconds from start to finish, including the overhead of starting and committing a transaction.
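
A minimal sketch of that pattern, with hypothetical table and column names, wrapping the one batched INSERT in an explicit transaction so the whole batch costs a single commit:

START TRANSACTION;
INSERT INTO ldap_requests (ldap_field, ldap_value)
VALUES ('uid', 'jsmith'),
       ('mail', 'jsmith@example.com'),
       ('cn', 'John Smith');
COMMIT;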

Hope this helps

answered Sep 22 '22 by DaveyBoy