 

DB with best inserts/sec performance? [closed]

We deploy an (AJAX-based) instant messenger which is serviced by a Comet server. We need to store the sent messages in a DB for long-term archival in order to meet legal retention requirements.

Which DB engine provides the best performance for this write-once, read-never (with rare exceptions) requirement?

We need at least 5,000 inserts/sec. I am assuming neither MySQL nor PostgreSQL can meet this requirement.

Any proposals for a higher performance solution? HamsterDB, SQLite, MongoDB ...?

asked Aug 19 '10 by Nenad

People also ask

Which database is good for high writes?

Apache Cassandra is great at write operations, thanks to its unique persistence model. Some claim that it writes about 20 times faster than it reads, but I believe it really depends on your usage profile. Read about it in their FAQ and in various blog posts.

Is insertion faster in NoSQL?

For insertions, we usually collect all metadata for an object and then insert it (just like we collect the values of all attributes and then insert the row). Since a document contains all the data for an object, we just need to add a new document, so insertions are faster in a NoSQL database.
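For example, a minimal sketch in Python with pymongo (the "chat" database and the field names are made up for illustration): the whole object, metadata included, goes in as one document write.

    # Minimal sketch in Python with pymongo; the "chat" database and the
    # field names are made up for illustration.
    from pymongo import MongoClient

    messages = MongoClient("mongodb://localhost:27017").chat.messages

    # One write captures the message and all of its metadata; a normalized
    # relational schema might spread this over several tables.
    messages.insert_one({
        "id": 1,
        "status": 0,
        "message": "hello",
        "meta": {"sender": "alice", "recipient": "bob",
                 "sent_at": "2010-08-19T08:08:00Z"},
    })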

How many inserts per second can Postgres handle?

At 200 million rows the insert rate in PostgreSQL is an average of 30K rows per second and only gets worse; at 1 billion rows, it's averaging 5K rows per second. On the other hand, TimescaleDB sustains an average insert rate of 111K rows per second through 1 billion rows of data, a 20x improvement.

How many inserts can MySQL handle per second?

Where the Transaction method had a maximum throughput of 1,588 inserts per second, Load Data allowed MySQL to process a staggering 28,108 inserts per second.
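The gap comes from batching: LOAD DATA bypasses per-statement parsing and round-trip overhead. A rough sketch of both methods in Python with mysql-connector (connection details, table name, and batch sizes are assumptions, not from the quoted benchmark):

    # Rough sketch of both methods in Python with mysql-connector; the
    # connection details, table, and batch size are assumptions.
    import csv
    import mysql.connector

    conn = mysql.connector.connect(
        host="localhost", user="bench", password="bench", database="bench",
        allow_local_infile=True,  # the server must also have local_infile=1
    )
    cur = conn.cursor()

    # Transaction method: many INSERTs committed as one transaction.
    rows = [(i, 0, "x" * 140) for i in range(100_000)]
    cur.executemany(
        "INSERT INTO messages (id, status, message) VALUES (%s, %s, %s)", rows)
    conn.commit()

    # Load Data method: dump to CSV, then bulk-load in a single statement.
    more_rows = [(i, 0, "x" * 140) for i in range(100_000, 200_000)]
    with open("/tmp/messages.csv", "w", newline="") as f:
        csv.writer(f).writerows(more_rows)
    cur.execute(
        "LOAD DATA LOCAL INFILE '/tmp/messages.csv' INTO TABLE messages "
        "FIELDS TERMINATED BY ',' (id, status, message)")
    conn.commit()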


1 Answer

Please ignore the benchmark above; we had a bug in it.

We inserted 1M records with the following columns: id (int), status (int), message (140 chars, random). All tests were done with the C++ driver on a desktop PC with an i5 and a 500 GB SATA disk.

Benchmark with MongoDB:

1M Records Insert without Index

time: 23s, insert/s: 43478

1M Records Insert with Index on Id

time: 50s, insert/s: 20000

Next we added another 1M records to the same collection, which already had the index and 1M records:

time: 78s, insert/s: 12820

All of that resulted in close to 4 GB of files on the filesystem.
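For reference, a sketch of what such a test loop looks like, in Python with pymongo; the numbers above came from the C++ driver, so treat this as a shape sketch, not the original benchmark code:

    # Shape of the MongoDB test in Python with pymongo; the original numbers
    # came from the C++ driver, so treat this as a sketch, not the benchmark.
    import random
    import string
    import time
    from pymongo import MongoClient

    coll = MongoClient("mongodb://localhost:27017").bench.messages
    coll.create_index("id")  # drop this line for the "without index" run

    def doc(i):
        msg = "".join(random.choices(string.ascii_letters, k=140))
        return {"id": i, "status": random.randint(0, 3), "message": msg}

    start = time.time()
    for lo in range(0, 1_000_000, 10_000):  # batches cut network round trips
        coll.insert_many([doc(i) for i in range(lo, lo + 10_000)])
    elapsed = time.time() - start
    print(f"time: {elapsed:.0f}s, insert/s: {1_000_000 / elapsed:.0f}")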

Benchmark with MySQL:

1M Records Insert without Index

time: 49s, insert/s: 20408

1M Records Insert with Index

time: 56s, insert/s: 17857

Next we added another 1M records to the same table, which already had the index and 1M records:

time: 56s, insert/s: 17857

Exactly the same performance; MySQL showed no loss as the table grew.
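The MySQL side of the test has the same shape; again a Python sketch with an assumed schema and connection details, not the original C++ code:

    # MySQL counterpart of the sketch above, using mysql-connector; the
    # schema and connection details are assumptions.
    import random
    import string
    import time
    import mysql.connector

    conn = mysql.connector.connect(host="localhost", user="bench",
                                   password="bench", database="bench")
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS messages ("
                "id INT, status INT, message CHAR(140), INDEX (id)) "
                "ENGINE=InnoDB")

    def row(i):
        msg = "".join(random.choices(string.ascii_letters, k=140))
        return (i, random.randint(0, 3), msg)

    start = time.time()
    for lo in range(0, 1_000_000, 10_000):  # batched multi-row inserts
        cur.executemany(
            "INSERT INTO messages (id, status, message) VALUES (%s, %s, %s)",
            [row(i) for i in range(lo, lo + 10_000)])
        conn.commit()
    elapsed = time.time() - start
    print(f"time: {elapsed:.0f}s, insert/s: {1_000_000 / elapsed:.0f}")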

We saw that Mongo ate around 384 MB of RAM during this test and loaded 3 CPU cores; MySQL was happy with 14 MB and loaded only 1 core.

Edorian was on the right track with his proposal. I will do some more benchmarks, and I'm sure we can reach 50K inserts/sec on a 2x quad-core server.

I think MySQL will be the right way to go.

answered Sep 20 '22 by Nenad