 

Limiting the total memory usage of PostgreSQL

Tags:

postgresql

I know that in postgresql.conf I can limit the memory usage of specific parts of PostgreSQL, like shared memory or per-worker memory, but is there a way to limit the total memory usage of the PostgreSQL server? I couldn't find a configuration property that does that...

BTW, I'm going to need to run it on a Windows Server 2008 machine.

Idan Arye asked Feb 25 '13


2 Answers

What do you mean by "total" memory? It's the sum of what you allocate to the various pools.

So, in short: don't set the various pools/buffers too big, and that's your total memory limit.
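For example, a minimal postgresql.conf sketch along these lines (the numbers are purely illustrative, not recommendations) caps the main pools and therefore the bulk of the server's footprint:

    # postgresql.conf -- illustrative values only
    shared_buffers = 512MB        # shared cache, allocated once at startup
    work_mem = 8MB                # per sort/hash operation, per backend
    maintenance_work_mem = 64MB   # per maintenance operation (VACUUM, CREATE INDEX, ...)
    max_connections = 50          # each backend multiplies the per-backend costs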

Tyler Eaves answered Nov 09 '22


PostgreSQL does not have a global memory limit that you can set. Instead you configure shared_buffers (usually around 25%-50% of the total RAM you intend to grant to PostgreSQL), max_connections (how many parallel client connections you need; try to keep this as low as possible, perhaps by using PgPool or pgbouncer) and work_mem. The actual memory usage is roughly shared_buffers + max_connections * work_mem * N, where N depends on your average query complexity. If you only do simple selects, N will be 1. If you do lots of sorted subqueries and complex multi-table joins, you should assume N is somewhere between 4 and 10.

The default value for work_mem is 4 MB and you shouldn't reduce it unless you know you cannot spare even that much. In my experience, if you need to set work_mem higher than 16-32 MB, you should probably improve your queries instead.
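As a rough worked example of that estimate (all figures are made up for illustration, assuming about 1 GB granted to PostgreSQL):

    # Illustrative only: roughly 1 GB earmarked for PostgreSQL
    shared_buffers = 256MB     # ~25% of the 1 GB granted to PostgreSQL
    work_mem = 4MB             # the default
    max_connections = 50
    # Rough upper estimate with N = 4 (moderately complex queries):
    #   256 MB + 50 * 4 MB * 4 = 256 MB + 800 MB, i.e. about 1 GB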

If you have lots of parallel serializable transactions you might need to push max_connections to an artificially high number to allow enough row locks to be taken for good performance. PostgreSQL scales the number of locks according to max_connections, and if your transactions need a lot of locks the default scaling might not be enough: PostgreSQL will start converting row locks to table-level locks, which will cause more serializable transactions to roll back. However, do not mindlessly raise max_connections too high, because PostgreSQL constantly reserves memory for the maximum lock count it has been configured to support. Expect to see performance problems if you need more than 4 million parallel locks (that is, overlapping transactions reading or writing over 4 million rows in total). Note that sequential scans over big tables have to read lots of rows, so you want proper indexes!
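A hedged sketch of that in postgresql.conf; the numbers are placeholders, and max_pred_locks_per_transaction is mentioned only as a version-dependent alternative (9.1+) that the answer itself does not rely on:

    # Illustrative only: raising max_connections mainly to widen the lock tables
    max_connections = 400                  # lock-table capacity scales with this setting
    # On 9.1+ you may be able to raise the predicate-lock budget directly instead:
    #max_pred_locks_per_transaction = 256  # SIRead locks used by serializable transactions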

Note that if you need to conserve memory, setting shared_buffers and work_mem to low values will allow PostgreSQL to work within the limits you need, but performance will suffer. I think the minimum possible is shared_buffers = 128 KB and work_mem = 64 KB. However, since PostgreSQL runs each backend as a separate OS-level process, you usually end up spending at least 1-4 MB per backend in any case, so if you want to stay under about 4 MB per connection you really need to limit parallel client connections; pgbouncer, for example, would be one good option. In practice, I wouldn't recommend setting shared_buffers below 12 MB or work_mem below 1 MB. If you truly cannot afford that, you might need a fully custom design that doesn't use an SQL database at all, or try using e.g. SQLite.
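A conservative low-memory sketch matching the figures above (again illustrative, not a tested recommendation):

    # Illustrative low-memory settings; expect reduced performance
    shared_buffers = 12MB    # around the practical floor suggested above
    work_mem = 1MB           # per sort/hash operation
    max_connections = 10     # keep backends few; pool with pgbouncer in front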

Mikko Rantalainen answered Nov 10 '22