Connection pool using pg-promise

I'm using Node.js and PostgreSQL, and I'm trying to be as efficient as possible in my connection handling.
I saw that pg-promise is built on top of node-postgres and node-postgres uses pg-pool to manage pooling.
I also read that "more than 100 clients at a time is a very bad thing" (node-postgres).

I'm using pg-promise and wanted to know:

  1. What is the recommended poolSize for a very heavy data load?
  2. What happens if poolSize = 100 and the application gets 101 requests simultaneously (or even more)? Does Postgres handle the ordering and make the 101st request wait until it can run it?
michalDia asked Apr 06 '17



2 Answers

I'm the author of pg-promise.

I'm using Node.js and PostgreSQL, and I'm trying to be as efficient as possible in my connection handling.

There are several levels of optimization for database communications. The most important of them is to minimize the number of queries per HTTP request, because IO is expensive, and so is acquiring a connection from the pool.

  • If you have to execute more than one query per HTTP request, always use tasks, via method task.
  • If your task requires a transaction, execute it as a transaction, via method tx.
  • If you need to do multiple inserts or updates, always use multi-row operations. See Multi-row insert with pg-promise and PostgreSQL multi-row updates in Node.js.

I saw that pg-promise is built on top of node-postgres and node-postgres uses pg-pool to manage pooling.

node-postgres started using pg-pool from version 6.x, while pg-promise remains on version 5.x which uses the internal connection pool implementation. Here's the reason why.

I also read that "more than 100 clients at a time is a very bad thing"

My long practice in this area suggests: if you cannot fit your service into a pool of 20 connections, adding more connections will not save you; you will need to fix your implementation instead. Also, going above 20 puts additional strain on the CPU, which translates into further slow-down.
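As a sketch of what that cap looks like in practice (connection details hypothetical; the exact option depends on the library version):

```javascript
const pgp = require('pg-promise')();

// Hypothetical connection settings; only the pool cap matters here.
const db = pgp({
    host: 'localhost',
    port: 5432,
    database: 'mydb',
    user: 'appuser',
    max: 20  // pool cap on pg-pool based versions of the driver
});

// On pg-promise 5.x (internal node-postgres pool), the cap was set
// globally instead:
// pgp.pg.defaults.poolSize = 20;
```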

What is the recommended poolSize for a very heavy data load?

The size of the data has nothing to do with the size of the pool. You typically use just one connection for a single download or upload, no matter how large the data is. If your implementation ends up using more than one connection for it, that is something to fix if you want your app to be scalable.

What happens if poolSize = 100 and the application gets 101 requests simultaneously?

It will wait for the next available connection.


See also:

  • Chaining Queries
  • Performance Boost
vitaly-t answered Oct 17 '22

  2. What happens if poolSize = 100 and the application gets 101 requests simultaneously (or even more)? Does Postgres handle the ordering and make the 101st request wait until it can run it?

Right, the request will be queued. But it's not handled by Postgres itself; it's handled by your app (pg-pool). So whenever you run out of free connections, the app waits for a connection to be released, and then the next pending request is performed. That's what pools are for.
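The queueing behaviour can be illustrated with a minimal sketch (this is not pg-pool's actual code, just the idea it implements):

```javascript
// Minimal sketch of how a connection pool queues callers once all
// connections are checked out.
class TinyPool {
    constructor(size) {
        this.free = size;   // connections not currently in use
        this.waiting = [];  // callers queued for a connection
    }
    acquire() {
        if (this.free > 0) {
            this.free--;
            return Promise.resolve();
        }
        // No free connection: the caller waits, in FIFO order
        return new Promise(resolve => this.waiting.push(resolve));
    }
    release() {
        const next = this.waiting.shift();
        if (next) next();   // hand the connection straight to the next waiter
        else this.free++;
    }
}
```

With `size = 100`, the 101st `acquire()` simply sits in the queue until some earlier caller releases a connection; nothing is rejected and Postgres never sees the extra request.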

  1. What is the recommended poolSize for a very heavy data load?

It really depends on many factors, and no one can tell you the exact number. Why not test your app under heavy load, see how it performs in practice, and find the bottlenecks?


Also, I find the node-postgres documentation quite confusing and misleading on this matter:

Once you get >100 simultaneous requests your web server will attempt to open 100 connections to the PostgreSQL backend and 💥 you'll run out of memory on the PostgreSQL server, your database will become unresponsive, your app will seem to hang, and everything will break. Boooo!

https://github.com/brianc/node-postgres

It's not quite true. If you reach the connection limit on the Postgres side, you simply won't be able to establish a new connection until a previous one closes. Nothing will break, as long as you handle this situation in your node app.
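For example, a hypothetical helper that maps such errors to an HTTP status, so the app degrades gracefully instead of hanging (the function name is mine, not part of any library):

```javascript
// Map database errors to HTTP statuses so the app responds cleanly
// when Postgres refuses or drops connections.
function statusForDbError(err) {
    // '53300' is the standard Postgres code for too_many_connections
    if (err && err.code === '53300') {
        return 503;  // Service Unavailable - client should retry later
    }
    if (err && err.code === 'ECONNREFUSED') {
        return 503;  // database itself is not reachable
    }
    return 500;      // any other database failure
}
```

In an Express-style handler you would catch the query error and respond with `statusForDbError(err)` instead of letting the request hang.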

AlexM answered Oct 17 '22