 

Need to load the whole PostgreSQL database into RAM


How do I put my whole PostgreSQL database into RAM for faster access? I have 8 GB of memory and I want to dedicate 2 GB to the DB. I have read about the shared_buffers setting, but it only caches the most frequently accessed fragments of the database. I need a solution where the whole DB is kept in RAM, every read is served from the RAM copy, and every write goes to the RAM copy first and then to the DB on the hard drive (something like the default fsync = on combined with shared_buffers in the PostgreSQL configuration settings).

asked Jan 02 '09 by Bharath


People also ask

How much RAM is needed for PostgreSQL?

As a baseline, 2 GB is a common recommendation for the memory you can dedicate to PostgreSQL itself, on top of what the operating system needs.

Does Postgres use RAM?

PostgreSQL uses shared memory and per-process memory. Shared memory is a chunk of memory used primarily as the data page cache; the shared_buffers parameter configures its size. This shared memory is used by all the PostgreSQL processes.

What happens when Postgres runs out of memory?

The most common out-of-memory issue happens when PostgreSQL is unable to allocate the memory required for a query to run. This is governed by the work_mem parameter, which sets the maximum amount of memory that a query operation may use before writing to temporary disk files.
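Both settings can be inspected on a live server. A minimal sketch, assuming the psycopg2 driver and hypothetical connection details:

```python
# Sketch only: database name and user are hypothetical.
import psycopg2

conn = psycopg2.connect(dbname="mydb", user="postgres")
with conn.cursor() as cur:
    cur.execute("SHOW shared_buffers;")  # size of the shared data-page cache
    print("shared_buffers:", cur.fetchone()[0])
    cur.execute("SHOW work_mem;")        # per-operation memory before spilling to disk
    print("work_mem:", cur.fetchone()[0])
conn.close()
```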


2 Answers

I have asked myself the same question for a while. One of the disadvantages of PostgreSQL is that it does not seem to support an IN MEMORY storage engine the way MySQL does...

Anyway, I ran into an article a couple of weeks ago describing how this could be done, although it only seems to work on Linux. I really can't vouch for it, for I have not tried it myself, but it does seem to make sense, since a PostgreSQL tablespace is indeed just a directory on a mounted filesystem, so you can point one at a RAM-backed mount.
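For the curious, the idea boils down to pointing a tablespace at a tmpfs mount. A minimal sketch, assuming psycopg2, superuser access, and a tmpfs already mounted at the hypothetical path /mnt/pg_ram and owned by the postgres user; note that anything placed there disappears on reboot:

```python
# Sketch only: mount point and table definition are hypothetical, and
# tables placed on tmpfs are lost (and left broken) after a reboot.
import psycopg2

conn = psycopg2.connect(dbname="mydb", user="postgres")
conn.autocommit = True  # CREATE TABLESPACE cannot run inside a transaction block
with conn.cursor() as cur:
    # Point a tablespace at the RAM-backed mount...
    cur.execute("CREATE TABLESPACE ramspace LOCATION '/mnt/pg_ram'")
    # ...then place a table on it; its pages now live in RAM.
    cur.execute(
        "CREATE TABLE hot_data (id int PRIMARY KEY, payload text) "
        "TABLESPACE ramspace"
    )
conn.close()
```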

However, even with this approach, I am not sure you could put your index(es) into RAM as well; I do not think MySQL forces HASH index use with its IN MEMORY tables for nothing...

I also wanted to do a similar thing to improve performance, since I am also working with huge data sets. I am using Python; it has a dictionary data type, which is basically a hash table of {key: value} pairs. Using these is very efficient and effective. Basically, to get my PostgreSQL table into RAM, I load it into such a Python dictionary, work with it, and persist it back to the DB once in a while; it's worth it if it is used well (see the sketch below).

If you are not using Python, I am pretty sure there is a similar dictionary-style mapping data structure in your language.
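A minimal sketch of that dictionary approach, assuming psycopg2 and a hypothetical hot_data(id, payload) table (the upsert syntax requires PostgreSQL 9.5+):

```python
# Sketch only: table name, columns, and connection details are hypothetical.
import psycopg2

conn = psycopg2.connect(dbname="mydb", user="postgres")

def load_table():
    # Pull the whole table into a {key: value} dict living in RAM.
    with conn.cursor() as cur:
        cur.execute("SELECT id, payload FROM hot_data")
        return dict(cur.fetchall())

def persist(cache):
    # Push the in-RAM dict back to PostgreSQL "once in a while".
    with conn.cursor() as cur:
        for key, value in cache.items():
            cur.execute(
                "INSERT INTO hot_data (id, payload) VALUES (%s, %s) "
                "ON CONFLICT (id) DO UPDATE SET payload = EXCLUDED.payload",
                (key, value),
            )
    conn.commit()

cache = load_table()       # reads are now plain dict lookups in RAM
cache[42] = "new value"    # writes hit RAM first...
persist(cache)             # ...and reach the disk-backed DB periodically
```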

Hope this helps!

answered Oct 12 '22 by Nicholas Leonard


If you are pulling data by id, use memcached (http://www.danga.com/memcached/) in front of PostgreSQL.
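A minimal read-through sketch of that combination, assuming the pymemcache client library and the same hypothetical hot_data(id, payload) table as above:

```python
# Sketch only: library choice, table, and connection details are assumptions.
import psycopg2
from pymemcache.client.base import Client

mc = Client(("localhost", 11211))
conn = psycopg2.connect(dbname="mydb", user="postgres")

def get_payload(row_id):
    key = f"hot_data:{row_id}"
    cached = mc.get(key)
    if cached is not None:
        return cached.decode()  # hit: served straight from RAM
    with conn.cursor() as cur:  # miss: fall back to PostgreSQL
        cur.execute("SELECT payload FROM hot_data WHERE id = %s", (row_id,))
        row = cur.fetchone()
    if row is not None:
        mc.set(key, row[0])     # populate the cache for next time
        return row[0]
    return None
```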

answered Oct 12 '22 by l_39217_l