I would like to create a MySQL Docker image with data already populated.
I want to create 3 layers like this:
        |---------------------|---------------------|
Layer 3 | Customer 1 Database | Customer 2 Database |
        |---------------------|---------------------|
Layer 2 |  Database image with tables but no data   |
        |-------------------------------------------|
Layer 1 |               mysql:5.6.26                |
        |-------------------------------------------|
My question is: how do I write correct Dockerfiles for layers 2 and 3, so that my empty_with_tables.sql file is loaded into the layer 2 image and customer1.sql and customer2.sql are loaded into the two layer 3 images? I read about putting SQL files into /docker-entrypoint-initdb.d, but those are only executed when a container is started for the first time. That is not what I want: I want the data to already be present in the image (for example, so it is quickly available in testing).
I could start the mysql container, load the data from the command line and do a docker commit, but that is not reproducible: I would have to repeat it by hand every time the data in the SQL files changes.
How can this be done?
I ran into the same problem this week and found a working solution without the need for --volumes-from.
The problem, as already stated, is that /var/lib/mysql is declared as a volume, and since Docker is not going to support an UNVOLUME instruction in its Dockerfile format in the near future (https://github.com/docker/docker/issues/18287), you can't use this location for your database storage. That's why I overwrite /etc/mysql/my.cnf, giving mysql a new datadir.
Together with pwes's answer, you can create a Dockerfile like this:
FROM mysql:5.6
ENV MYSQL_DATABASE db
ENV MYSQL_ROOT_PASSWORD pass
COPY db.sql /docker-entrypoint-initdb.d/db.sql
COPY my.cnf /etc/mysql/my.cnf
RUN /entrypoint.sh mysqld & sleep 30 && killall mysqld
RUN rm /docker-entrypoint-initdb.d/db.sql
The only change in my.cnf is the location of the datadir:
....
[mysqld]
skip-host-cache
skip-name-resolve
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql2  # <-- can be anything except /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
explicit_defaults_for_timestamp
....
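For Layer 3, the same trick cannot be repeated verbatim: the scripts in /docker-entrypoint-initdb.d only run when the datadir is empty, and after the Layer 2 build it is already initialized. A sketch of a Layer 3 Dockerfile that feeds the customer dump through the mysql client instead (the image tag my-tables-image is a hypothetical name for the Layer 2 image, and pass/db match the environment variables above):

```dockerfile
FROM my-tables-image
COPY customer1.sql /tmp/customer1.sql
# The datadir is already initialized, so start mysqld directly,
# load the dump through the client, then shut down cleanly.
RUN mysqld --user=mysql & sleep 20 \
    && mysql -uroot -ppass db < /tmp/customer1.sql \
    && mysqladmin -uroot -ppass shutdown \
    && rm /tmp/customer1.sql
```

A second Dockerfile that differs only in copying customer2.sql gives the other Layer 3 image.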
This cannot be done cleanly in exactly the way you want, at least when basing on the official mysql image, because you need to communicate with the server to import the data, and the server is not run and initialized (from mysql's docker-entrypoint.sh) until the container is run, which only happens after the image is already built.
The not-so-clean way is to run the server during the build, using the /entrypoint.sh script from the mysql image, but you must take care of all the settings required by the entrypoint (like $MYSQL_ROOT_PASSWORD) as well as a clean way to stop the daemon just after importing the data. Something like:
FROM mysql:5.6
ADD data.sql /docker-entrypoint-initdb.d/00-import-data.sql
ENV MYSQL_ROOT_PASSWORD somepassword
ENV MYSQL_DATABASE db1
RUN /entrypoint.sh mysqld & sleep 30 && killall mysqld
is a hackish way that results in a pre-initialized DB, but... it doesn't work. The reason is that /var/lib/mysql is declared as a volume in mysql's Dockerfile, and any changes to this directory during the build process are lost once the build step is done. This can be observed with the following Dockerfile (the second ls will no longer show some-file):
FROM mysql:5.6
RUN touch /var/lib/mysql/some-file && ls /var/lib/mysql
RUN touch /var/lib/mysql/some-file2 && ls /var/lib/mysql
So I suggest going with the docker commit way you described. The end result is the same as the one you want to achieve, with the possible exception of Layer 2.
UPDATE: As the OP commented below, the commit doesn't contain volumes either. So the only options seem to be to either edit MySQL's Dockerfile and remove the VOLUME instruction so the data stays inside the container, or to manage the volumes separately from the containers.
MegaWubs's answer is great, except for the "sleep 30", which forces you to guess how long initdb's execution will take. To avoid this, I put a small shell script in /docker-entrypoint-initdb.d that is executed after all the others:
/docker-entrypoint-initdb.d/
|- 01_my_data1.sql
|- 02_my_data2.sql
...
|- 99_last_processed_file.sh
With 99_last_processed_file.sh:
#!/usr/bin/env bash
touch /tmp/server_can_shutdown.txt
--
In parallel, in the Dockerfile, I run another script in place of Mortenn's "sleep && killall":
# Dockerfile
# ...
COPY wait_then_shutdown.sh /tmp/wait_then_shutdown.sh
RUN /entrypoint.sh mysqld & /tmp/wait_then_shutdown.sh # <--
RUN rm /docker-entrypoint-initdb.d/*
With wait_then_shutdown.sh:
#!/usr/bin/env bash
while [ ! -f /tmp/server_can_shutdown.txt ] # <-- created by 99_last_processed_file.sh
do
sleep 2
done
kill $(pidof mysqld)
--
And now mysqld stops only when all the other files in /docker-entrypoint-initdb.d have been processed.
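As a further refinement (my own sketch, not part of the answer above), the polling loop can be given a timeout, so that a broken init script fails the build instead of hanging it forever. The helper below is plain bash; the flag path and the 300-second timeout are assumptions to tune for your dataset:

```shell
#!/usr/bin/env bash
# wait_for_flag FLAG TIMEOUT_SECONDS
# Polls for FLAG once per second; returns 0 when it appears,
# 1 if TIMEOUT_SECONDS elapse first.
wait_for_flag() {
  local flag="$1" timeout="$2" elapsed=0
  while [ ! -f "$flag" ]; do
    if [ "$elapsed" -ge "$timeout" ]; then
      return 1
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  return 0
}

# In wait_then_shutdown.sh you would then replace the loop with:
#   wait_for_flag /tmp/server_can_shutdown.txt 300 && kill "$(pidof mysqld)"
```

With this, a build whose init scripts never finish exits nonzero after the timeout, which makes the failure visible in CI instead of stalling the docker build step.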