
Maximum transaction size in PostgreSQL

I have a utility in my application where I need to perform a bulk load of INSERT, UPDATE & DELETE operations. I am trying to wrap this in a transaction so that once the system is invoked and the data is fed to it, it is guaranteed that either all of it or none of it is added to the database.

My concern is: what are the boundary conditions here? How many INSERT, UPDATE & DELETE statements can I have in one transaction? Is the transaction size configurable?
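For context, a rough sketch of the kind of batch I have in mind, wrapped in a single transaction (the table and column names here are just placeholders):

    BEGIN;

    -- all of the statements below form one atomic unit of work
    INSERT INTO staging_items (id, status) VALUES (1, 'new'), (2, 'new');
    UPDATE staging_items SET status = 'processed' WHERE status = 'pending';
    DELETE FROM staging_items WHERE status = 'cancelled';

    -- on any error I would issue ROLLBACK instead, so nothing is applied
    COMMIT;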

asked Apr 02 '09 by Salman A. Kagzi


1 Answer

I don't think there's a maximum amount of work that can be performed in a transaction. Data keeps getting added to the table files, and eventually the transaction either commits or rolls back; AIUI the commit/abort status gets stored in pg_clog, and if the transaction rolls back, the space will eventually be reclaimed by vacuum. So it's not as if the ongoing transaction's work is held in memory and flushed at commit time, for instance.
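For illustration, a rough way to see this in practice (the table here is made up purely for the demonstration): load a large batch inside one transaction and then abandon it.

    -- throwaway table used only for this demonstration
    CREATE TABLE bulk_test (id int, payload text);

    -- write a million rows inside a single transaction, then abandon it
    BEGIN;
    INSERT INTO bulk_test
        SELECT g, repeat('x', 100) FROM generate_series(1, 1000000) AS g;
    ROLLBACK;

    -- the rows were written to the table files but never became visible;
    -- VACUUM (or autovacuum) later reclaims that dead space
    VACUUM bulk_test;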

answered Sep 19 '22 by araqnid