I am trying to distribute a set of connected applications running in several linked containers, which includes a mongo database that is required to be pre-seeded with data. Ideally the data will also be persisted in a linked data volume container.
I can get the data into the mongo container using a mongo base instance that doesn't mount any volumes (Docker Hub image: psychemedia/mongo_nomount; this is essentially the base mongo Dockerfile without the VOLUME /data/db statement) and a Dockerfile config along the lines of:
    ADD . /files
    WORKDIR /files
    RUN mkdir -p /data/db && mongod --fork --logpath=/tmp/mongodb.log && sleep 20 && \
        mongoimport --db testdb --collection testcoll --type csv --headerline --file ./testdata.csv #&& mongod --shutdown
where ./testdata.csv is in the same directory (./mongo-with-data) as the Dockerfile.
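(For reference, because mongoimport is called with --type csv --headerline, the first row of testdata.csv is taken as the field names. The actual file isn't shown here; a minimal stand-in with purely illustrative column names would look something like:)

    name,value
    alpha,1
    beta,2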
My docker-compose config file includes the following:
    mongo:
      #image: mongo
      build: ./mongo-with-data
      ports:
        - "27017:27017"
      #Ideally we should be able to mount this against a host directory
      #volumes:
      #  - ./db/mongo/:/data/db
      #volumes_from:
      #  - devmongodata

    #devmongodata:
    #  command: echo created
    #  image: busybox
    #  volumes:
    #    - /data/db
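For what it's worth, one quick way to check whether the seeded data survives a given volume configuration is something along these lines (using the database and collection names from the Dockerfile above, and the legacy mongo shell that ships in the image):

    docker-compose build mongo
    docker-compose up -d mongo
    # count documents in the seeded collection inside the running container
    docker exec $(docker-compose ps -q mongo) mongo testdb --eval "db.testcoll.count()"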
Whenever I try to mount a VOLUME it seems as if the original seeded data - which is stored in /data/db - is deleted. I guess that when a volume is mounted to /data/db it replaces whatever is there currently.
That said, the Docker user guide suggests that: "Volumes are initialized when a container is created. If the container's base image contains data at the specified mount point, that existing data is copied into the new volume upon volume initialization." So I expected the data to persist if I placed the VOLUME command after the seeding RUN command.
So what am I doing wrong?
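For concreteness, the two mounting styles being weighed up here look like this when run directly with docker run (assuming the image built from ./mongo-with-data is tagged mongo-with-data). As far as I understand it, the copy-on-initialization behaviour quoted above applies to Docker-managed volumes, but not to a host directory bind-mounted over the same path:

    # Docker-managed (anonymous) volume at /data/db: image content is copied into the new volume
    docker run -d --name mongo-anon -v /data/db mongo-with-data

    # Host directory bind-mounted over /data/db: the host directory hides the image's content
    docker run -d --name mongo-bind -v "$(pwd)/db/mongo:/data/db" mongo-with-data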
The long view is that I want to automate the build of several linked containers, and then distribute a Vagrantfile/docker-compose YAML file that will fire up a set of linked apps, including a pre-seeded mongo database with a (partially pre-populated) persistent data container.
I do this using another docker container whose only purpose is to seed mongo, then exit. I suspect this is the same idea as ebaxt's, but when I was looking for an answer to this, I just wanted to see a quick-and-dirty, yet straightforward, example. So here is mine:
docker-compose.yml
    mongodb:
      image: mongo
      ports:
        - "27017:27017"

    mongo-seed:
      build: ./mongo-seed
      links:
        - mongodb

    # my webserver which uses mongo (not shown in example)
    webserver:
      build: ./webserver
      ports:
        - "80:80"
      links:
        - mongodb
mongo-seed/Dockerfile
    FROM mongo
    COPY init.json /init.json
    CMD mongoimport --host mongodb --db reach-engine --collection MyDummyCollection --type json --file /init.json --jsonArray
mongo-seed/init.json
[ { "name": "Joe Smith", "email": "[email protected]", "age": 40, "admin": false }, { "name": "Jen Ford", "email": "[email protected]", "age": 45, "admin": true } ]