Running PostgreSQL in memory only


(Moving my answer from Using in-memory PostgreSQL and generalizing it):

You can't run Pg in-process, in-memory

I can't figure out how to run an in-memory Postgres database for testing. Is it possible?

No, it is not possible. PostgreSQL is implemented in C and compiled to platform code. Unlike H2 or Derby you can't just load the jar and fire it up as a throwaway in-memory DB.

Unlike SQLite, which is also written in C and compiled to platform code, PostgreSQL can't be loaded in-process either. It requires multiple processes (one per connection) because it's a multiprocessing, not a multithreading, architecture. The multiprocessing requirement means you must launch the postmaster as a standalone process.

Instead: preconfigure a connection

I suggest simply writing your tests to expect a particular hostname/username/password to work, and having the test harness CREATE DATABASE a throwaway database, then DROP DATABASE at the end of the run. Get the database connection details from a properties file, build target properties, environment variable, etc.
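
For example, a minimal harness along those lines might look like the sketch below. Everything in it (the environment variable names, the database name) is illustrative, not something prescribed by this answer:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch only: reads connection details from the environment (PGTEST_URL,
// PGTEST_USER and PGTEST_PASSWORD are made-up names) and wraps the test run
// in CREATE DATABASE / DROP DATABASE.
public class ThrowawayDatabaseHarness {

    public static void main(String[] args) throws Exception {
        String url  = System.getenv("PGTEST_URL");      // e.g. jdbc:postgresql://localhost:5432/postgres
        String user = System.getenv("PGTEST_USER");     // a CREATEDB user, not a superuser
        String pass = System.getenv("PGTEST_PASSWORD");

        try (Connection admin = DriverManager.getConnection(url, user, pass);
             Statement st = admin.createStatement()) {
            st.execute("CREATE DATABASE throwaway_testdb");
            try {
                // ... run the tests against throwaway_testdb here ...
            } finally {
                st.execute("DROP DATABASE throwaway_testdb");
            }
        }
    }
}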

It's safe to use an existing PostgreSQL instance that already contains databases you care about, so long as the user you supply to your unit tests is not a superuser, only a user with CREATEDB rights. At worst you'll create performance issues in the other databases. For that reason, I prefer to run a completely isolated PostgreSQL install for testing.

Instead: Launch a throwaway PostgreSQL instance for testing

Alternately, if you're really keen you could have your test harness locate the initdb and postgres binaries, run initdb to create a database cluster, modify pg_hba.conf to use trust authentication, run postgres to start it on a random port, create a user, create a DB, and run the tests. You could even bundle the PostgreSQL binaries for multiple architectures in a jar and unpack the ones for the current architecture to a temporary directory before running the tests.
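
If you do go down that road, the launcher might look roughly like the sketch below. The binary path, port, and flags are assumptions about a typical Linux install, not something from this answer (passing --auth=trust to initdb saves you from editing pg_hba.conf afterwards):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Rough sketch of driving a throwaway cluster from a test harness.
// PG_BIN is an assumption about where the binaries live on the test machine.
public class ThrowawayCluster {

    private static final Path PG_BIN = Paths.get("/usr/lib/postgresql/15/bin");

    public static Process start() throws IOException, InterruptedException {
        Path dataDir = Files.createTempDirectory("pg_test_data");

        // initdb creates the cluster; --auth=trust means the tests need no passwords
        new ProcessBuilder(PG_BIN.resolve("initdb").toString(),
                "-D", dataDir.toString(), "--auth=trust", "-U", "postgres")
                .inheritIO().start().waitFor();

        // start postgres on a non-default port; the harness should wait until it
        // accepts connections before creating a user and database and running tests
        return new ProcessBuilder(PG_BIN.resolve("postgres").toString(),
                "-D", dataDir.toString(), "-p", "54321")
                .inheritIO().start();
    }
}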

Personally I think that's a major pain that should be avoided; it's way easier to just have a test DB configured. However, it's become a little easier with the advent of include_dir support in postgresql.conf; now you can just append one line, then write a generated config file for all the rest.

Faster testing with PostgreSQL

For more information about how to safely improve the performance of PostgreSQL for testing purposes, see a detailed answer I wrote on this topic earlier: Optimise PostgreSQL for fast testing

H2's PostgreSQL dialect is not a true substitute

Some people instead use the H2 database in PostgreSQL dialect mode to run tests. I think that's almost as bad as the Rails people using SQLite for testing and PostgreSQL for production deployment.

H2 supports some PostgreSQL extensions and emulates the PostgreSQL dialect. However, it's just that - an emulation. You'll find areas where H2 accepts a query but PostgreSQL doesn't, where behaviour differs, etc. You'll also find plenty of places where PostgreSQL supports doing something that H2 just can't - like window functions, at the time of writing.

If you understand the limitations of this approach and your database access is simple, H2 might be OK. But in that case you're probably a better candidate for an ORM that abstracts the database because you're not using its interesting features anyway - and in that case, you don't have to care about database compatibility as much anymore.

Tablespaces are not the answer!

Do not use a tablespace to create an "in-memory" database. Not only is it unnecessary (it won't help performance significantly anyway), it's also a great way to disrupt access to any other databases you might care about in the same PostgreSQL install. The 9.4 documentation now contains the following warning:

WARNING

Even though located outside the main PostgreSQL data directory, tablespaces are an integral part of the database cluster and cannot be treated as an autonomous collection of data files. They are dependent on metadata contained in the main data directory, and therefore cannot be attached to a different database cluster or backed up individually. Similarly, if you lose a tablespace (file deletion, disk failure, etc), the database cluster might become unreadable or unable to start. Placing a tablespace on a temporary file system like a ramdisk risks the reliability of the entire cluster.

That warning was added because I noticed too many people were doing this and running into trouble.

(If you've done this, you can mkdir the missing tablespace directory to get PostgreSQL to start again, then DROP the missing databases, tables, etc. It's better to just not do it.)


Or you could create a TABLESPACE in a ramfs / tmpfs and create all your objects there.
I was recently pointed to an article about doing exactly that on Linux. The original link is dead, but it was archived (provided by Arsinclair):

  • https://web.archive.org/web/20160319031016/http://magazine.redhat.com/2007/12/12/tip-from-an-rhce-memory-storage-on-postgresql/

Warning

This can endanger the integrity of your whole database cluster.
Read the added warning in the manual.
So this is only an option for expendable data.

For unit-testing it should work just fine. If you are running other databases on the same machine, be sure to use a separate database cluster (which has its own port) to be safe.
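
For reference, the SQL involved is small. A minimal sketch (assuming a tmpfs is already mounted at /mnt/pg_ram, empty and owned by the postgres OS user, and that you connect as a superuser since CREATE TABLESPACE requires one; all names and credentials here are made up):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch: put expendable test objects in a tablespace backed by a tmpfs mount.
public class RamTablespaceSetup {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/testdb", "postgres", "postgres");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLESPACE ram_ts LOCATION '/mnt/pg_ram'");
            // Only expendable data belongs here; see the warning above.
            st.execute("CREATE TABLE scratch (id integer) TABLESPACE ram_ts");
        }
    }
}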


This is not possible with Postgres. It does not offer an in-process/in-memory engine like HSQLDB or MySQL.

If you want to create a self-contained environment you can put the Postgres binaries into SVN (but it's more than just a single executable).

You will need to run initdb to set up your test database before you can do anything with this. This can be done from a batch file or by using Runtime.exec(). But note that initdb is not fast, so you definitely won't want to run it for each test. You might get away with running it once before your test suite, though.

However, while this can be done, I'd recommend having a dedicated Postgres installation where you simply recreate your test database before running your tests.

You can re-create the test database by using a template database, which makes creating it quite fast (a lot faster than running initdb for each test run).
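
A minimal sketch of that recreation step (the database and template names here are made up):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch: drop and recreate the test database from a prepared template before a run.
// Requires that nothing else is connected to either database while it runs.
public class RecreateTestDatabase {
    public static void main(String[] args) throws Exception {
        try (Connection admin = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/postgres", "test", "test");
             Statement st = admin.createStatement()) {
            st.execute("DROP DATABASE IF EXISTS testdb");
            st.execute("CREATE DATABASE testdb TEMPLATE testdb_template");
        }
    }
}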


Now it is possible to run an in-memory instance of PostgreSQL in your JUnit tests via the Embedded PostgreSQL Component from OpenTable: https://github.com/opentable/otj-pg-embedded.

By adding a dependency on the otj-pg-embedded library (https://mvnrepository.com/artifact/com.opentable.components/otj-pg-embedded) you can start and stop your own instance of PostgreSQL in your @Before and @After hooks:

EmbeddedPostgres pg = EmbeddedPostgres.start();
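From that instance you can hand a connection to your test code. A minimal sketch, assuming the getPostgresDatabase() DataSource accessor and the close() behaviour described in the project README (class and package names are those published by the otj-pg-embedded artifact):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

import com.opentable.db.postgres.embedded.EmbeddedPostgres;

public class EmbeddedPgSmokeTest {
    public static void main(String[] args) throws Exception {
        // start() boots a throwaway PostgreSQL server; close() shuts it down again.
        try (EmbeddedPostgres pg = EmbeddedPostgres.start();
             Connection conn = pg.getPostgresDatabase().getConnection();
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1")) {
            rs.next();
        }
    }
}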

They even offer a JUnit rule that automatically starts and stops the PostgreSQL database server for you:

@Rule
public SingleInstancePostgresRule pg = EmbeddedPostgresRules.singleInstance();

You could use TestContainers to spin up a PostgreSQL docker container for tests: http://testcontainers.viewdocs.io/testcontainers-java/usage/database_containers/

TestContainers provides a JUnit @Rule/@ClassRule: this mode starts a database inside a container before your tests and tears it down afterwards.

Example:

import static org.junit.Assert.assertEquals;

import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import org.junit.Rule;
import org.junit.Test;
import org.testcontainers.containers.PostgreSQLContainer;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class SimplePostgreSQLTest {

    // Starts a PostgreSQL container before each test and stops it afterwards
    @Rule
    public PostgreSQLContainer postgres = new PostgreSQLContainer();

    @Test
    public void testSimple() throws SQLException {
        // Point a connection pool at the container's JDBC URL and credentials
        HikariConfig hikariConfig = new HikariConfig();
        hikariConfig.setJdbcUrl(postgres.getJdbcUrl());
        hikariConfig.setUsername(postgres.getUsername());
        hikariConfig.setPassword(postgres.getPassword());

        HikariDataSource ds = new HikariDataSource(hikariConfig);
        Statement statement = ds.getConnection().createStatement();
        statement.execute("SELECT 1");
        ResultSet resultSet = statement.getResultSet();

        resultSet.next();
        int resultSetInt = resultSet.getInt(1);
        assertEquals("A basic SELECT query succeeds", 1, resultSetInt);
    }
}

There is now an in-memory version of PostgreSQL from the Russian search company Yandex: https://github.com/yandex-qatools/postgresql-embedded

It's based on Flapdoodle OSS's embed process.

Example usage (from the GitHub page):

// starting Postgres
final EmbeddedPostgres postgres = new EmbeddedPostgres(V9_6);
// predefined data directory
// final EmbeddedPostgres postgres = new EmbeddedPostgres(V9_6, "/path/to/predefined/data/directory");
final String url = postgres.start("localhost", 5432, "dbName", "userName", "password");

// connecting to a running Postgres and feeding up the database
final Connection conn = DriverManager.getConnection(url);
conn.createStatement().execute("CREATE TABLE films (code char(5));");

I've been using it for some time and it works well.

UPDATE: this project is no longer actively maintained.

Please be advised that the main maintainer of this project has successfully migrated to the TestContainers project. That is the best possible alternative nowadays.