
How to use a PostgreSQL container with existing data?

I am trying to set up a PostgreSQL container (https://hub.docker.com/_/postgres/). I have data from an existing PostgreSQL instance: I copied it from /var/lib/postgresql/data and want to mount it as a volume in the PostgreSQL container.

The PostgreSQL part of my docker-compose.yml file:

db:
    image: postgres:9.4
    ports:
        - 5432:5432
    environment:
        POSTGRES_PASSWORD: postgres
        POSTGRES_USER: postgres
        PGDATA: /var/lib/postgresql/data
    volumes:
        - /projects/own/docker_php/pgdata:/var/lib/postgresql/data

When I run docker-compose up I get this message:

db_1  | initdb: directory "/var/lib/postgresql/data" exists but is not empty
db_1  | If you want to create a new database system, either remove or empty
db_1  | the directory "/var/lib/postgresql/data" or run initdb
db_1  | with an argument other than "/var/lib/postgresql/data".

I also tried to build my own image on top of the container's, so my Dockerfile is:

FROM postgres:9.4
COPY pgdata /var/lib/postgresql/data

But I got the same error. What am I doing wrong?

Update

I dumped the SQL using pg_dumpall and put the file in /docker-entrypoint-initdb.d, but this file is executed every time I run docker-compose up.
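(For reference: the official postgres image runs scripts in /docker-entrypoint-initdb.d only while initializing an empty data directory, so if the dump is re-imported on every docker-compose up, the data directory is probably not persisting between runs. A hedged sketch that pairs the dump with a persistent named volume, using the version 2 Compose format; the file and volume names here are illustrative, not from the question:)

```yaml
# Sketch, not a drop-in fix: dump.sql and the "pgdata" volume name are assumptions.
version: '2'
services:
  db:
    image: postgres:9.4
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
    volumes:
      # named volume keeps the data directory between runs,
      # so the init script below is only imported on first startup
      - pgdata:/var/lib/postgresql/data
      - ./dump.sql:/docker-entrypoint-initdb.d/dump.sql
volumes:
  pgdata: {}
```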

Asked Feb 28 '16 by Albert Tobac


2 Answers

To build on irakli's answer, here's an updated solution:

  • uses the newer version 2 Compose file format
  • declares a separate top-level volumes section
  • drops the extra settings

docker-compose.yml

version: '2'

services:
  postgres9:
    image: postgres:9.4
    expose:
      - 5432
    volumes:
      - data:/var/lib/postgresql/data

volumes:
  data: {}

demo

Start the Postgres database server:

$ docker-compose up 

Show all tables in the database. In another terminal, talk to the container's Postgres:

$ docker exec -it $(docker-compose ps -q postgres9) psql -Upostgres -c '\z'

It'll show nothing, as the database is blank. Create a table:

$ docker exec -it $(docker-compose ps -q postgres9) psql -Upostgres -c 'create table beer()'

List the newly-created table:

$ docker exec -it $(docker-compose ps -q postgres9) psql -Upostgres -c '\z'
                          Access privileges
 Schema |   Name    | Type  | Access privileges | Column access privileges
--------+-----------+-------+-------------------+--------------------------
 public | beer      | table |                   |

Yay! We've now started a Postgres database using a shared storage volume and stored some data in it. The next step is to check that the data actually sticks around after the server stops.

Now, kill the Postgres server container:

$ docker-compose stop 

Start up the Postgres container again:

$ docker-compose up 

We expect that the database server will re-use the storage, so our very important data is still there. Check:

$ docker exec -it $(docker-compose ps -q postgres9) psql -Upostgres -c '\z'
                          Access privileges
 Schema |   Name    | Type  | Access privileges | Column access privileges
--------+-----------+-------+-------------------+--------------------------
 public | beer      | table |                   |

We've successfully used a new-style Docker Compose file to run a Postgres database with an external data volume, and verified that it keeps our data safe and sound.

storing data

First, make a backup, storing our data on the host:

$ docker exec -it $(docker-compose ps -q postgres9) pg_dump -Upostgres > backup.sql

Zap our data from the guest database:

$ docker exec -it $(docker-compose ps -q postgres9) psql -Upostgres -c 'drop table beer'

Restore our backup (stored on the host) into the Postgres container.

Note: use "exec -i", not "-it", otherwise you'll get an "input device is not a TTY" error.

$ docker exec -i $(docker-compose ps -q postgres9) psql -Upostgres < backup.sql

List the tables to verify the restore worked:

$ docker exec -it $(docker-compose ps -q postgres9) psql -Upostgres -c '\z'
                          Access privileges
 Schema |   Name    | Type  | Access privileges | Column access privileges
--------+-----------+-------+-------------------+--------------------------
 public | beer      | table |                   |

To sum up, we've verified that we can start a database, that its data persists across a restart, and that we can restore a backup into it from the host.

Thanks Tomasz!

Answered Sep 29 '22 by johntellsall


It looks like the PostgreSQL image is having issues with mounted volumes. FWIW, it is probably more of a PostgreSQL issue than Docker's, but that doesn't matter, because mounting host directories is not a recommended way of persisting database files anyway.

You should be creating data-only Docker containers, instead. Like this:

postgres9:
  image: postgres:9.4
  ports:
    - 5432:5432
  volumes_from:
    - pg_data
  environment:
    POSTGRES_PASSWORD: postgres
    POSTGRES_USER: postgres
    PGDATA: /var/lib/postgresql/data/pgdata

pg_data:
  image: alpine:latest
  volumes:
    - /var/lib/postgresql/data/pgdata
  command: "true"

which I tested, and it worked fine. You can read more about data-only containers here: Why Docker Data Containers (Volumes!) are Good

As for how to import the initial data, you can either:

  1. docker cp it into the data-only container of this setup, or
  2. use an SQL dump of the data instead of moving binary files around (which is what I would do).
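(A hedged sketch of option 1, assuming the data-only container setup above and the host path from the question; docker cp also works while the pg_data container is stopped, though the exact docker-compose ps behavior may vary by version:)

```shell
# Copy the existing PostgreSQL data files into the data-only container's
# volume. The host path is the one from the question; adjust to yours.
docker cp /projects/own/docker_php/pgdata/. \
  "$(docker-compose ps -q pg_data)":/var/lib/postgresql/data/pgdata
```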
Answered Sep 29 '22 by irakli