I have a utility in my application where I need to perform a bulk load of INSERT, UPDATE, and DELETE operations. I am trying to wrap these in a transaction so that once the system is invoked and the data is fed to it, the changes are either all committed to the database or none are.
My concern is the boundary conditions here: how many INSERT, UPDATE, and DELETE statements can I have in one transaction? Is the transaction size configurable?
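For illustration, here is a minimal sketch of the all-or-none pattern I mean, using Python and psycopg2 (the connection string, table name, and row data are hypothetical):

```python
import psycopg2

conn = psycopg2.connect("dbname=app user=app_user")  # hypothetical DSN

try:
    with conn:  # commits on clean exit, rolls back on any exception
        with conn.cursor() as cur:
            # All three statement types run inside one transaction.
            cur.executemany(
                "INSERT INTO items (id, name) VALUES (%s, %s)",
                [(1, "alpha"), (2, "beta")],
            )
            cur.execute("UPDATE items SET name = %s WHERE id = %s", ("gamma", 1))
            cur.execute("DELETE FROM items WHERE id = %s", (2,))
    # Reaching this point means every statement above committed together;
    # an exception anywhere above means none of them took effect.
finally:
    conn.close()
```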
In one published benchmark, PostgreSQL processed more than 20 thousand transactions per second while MongoDB did not reach 2 thousand. PostgreSQL latencies were under 50 ms at the 99th percentile, and in some cases below 1 millisecond.
If you're simply filtering the data and the data fits in memory, Postgres is capable of parsing roughly 5-10 million rows per second (assuming a reasonable row size of, say, 100 bytes). If you're aggregating, then you're at about 1-2 million rows per second.
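As a rough way to check numbers like these on your own hardware, you could time a full scan with EXPLAIN (ANALYZE). This sketch assumes a hypothetical table big_table and the same psycopg2 connection style as above; note it runs the count twice, once for timing and once for the row count:

```python
import psycopg2

conn = psycopg2.connect("dbname=app user=app_user")  # hypothetical DSN
with conn, conn.cursor() as cur:
    # EXPLAIN (ANALYZE) actually executes the query and reports timing.
    cur.execute("EXPLAIN (ANALYZE) SELECT count(*) FROM big_table")
    plan = [row[0] for row in cur.fetchall()]
    # The last plan line looks like: "Execution Time: 1234.567 ms".
    exec_ms = float(plan[-1].split()[-2])
    cur.execute("SELECT count(*) FROM big_table")
    rows = cur.fetchone()[0]
    print(f"~{rows / (exec_ms / 1000.0):,.0f} rows/second scanned")
conn.close()
```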
As commercial database vendors brag about their capabilities, we decided to push PostgreSQL to the next level and exceed 1 billion rows per second to show what can be done with open source. To those who need even more: 1 billion rows per second is by far not the limit; a lot more is possible. Watch and see how we did it.
I don't think there's a maximum amount of work that can be performed in a transaction. Data keeps getting added to the table files, and eventually the transaction either commits or rolls back; as I understand it, the commit/abort status is recorded in pg_clog, and if the transaction rolls back, the space will eventually be reclaimed by vacuum. So it's not as if the ongoing transaction's work is held in memory and flushed at commit time, for instance.
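To see this in practice, a single transaction can write tens of millions of rows and then throw them all away. A minimal sketch (table name and row count are arbitrary; psycopg2 opens a transaction implicitly on the first statement):

```python
import psycopg2

conn = psycopg2.connect("dbname=app user=app_user")  # hypothetical DSN
cur = conn.cursor()

# Everything below runs inside one open transaction.
cur.execute("CREATE TABLE scratch (n int)")
cur.execute("INSERT INTO scratch SELECT generate_series(1, 10000000)")
cur.execute("SELECT count(*) FROM scratch")
print(cur.fetchone()[0])  # 10000000 -- visible inside the transaction

# All 10 million rows (and the table itself) vanish atomically. The rows
# were written to the table files under an uncommitted transaction ID,
# not buffered in memory; here the rollback drops the whole file with
# the table, and in the UPDATE/DELETE case vacuum reclaims the space.
conn.rollback()
conn.close()
```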