I use the official Elasticsearch Docker image and wonder how I can build a custom index into the image, so that the index already exists when I start the container.
My attempt was to add the following line to my Dockerfile:
RUN curl -XPUT 'http://127.0.0.1:9200/myindex' -d @index.json
I get the following error:
curl: (7) Failed to connect to 127.0.0.1 port 9200: Connection refused
Can I reach Elasticsearch with an API call like this during the build, or is there a completely different way to implement this?
I've had a similar problem.
I wanted to create a Docker container with preloaded data (via some scripts and JSON files in the repo). The data inside Elasticsearch was not going to change during execution, and I wanted as few build steps as possible (ideally only docker-compose up -d).
One option would be to do it manually once and store Elasticsearch's data folder (via a Docker volume) in the repository. But then I would have duplicate data, and I would have to manually check in a new version of the data folder every time the data changes.
Instead, I initialized the data during the image build. First, configure Elasticsearch to keep its data in a dedicated folder by adding this to the Dockerfile:

RUN mkdir /data && chown -R elasticsearch:elasticsearch /data && echo 'es.path.data: /data' >> config/elasticsearch.yml && echo 'path.data: /data' >> config/elasticsearch.yml

(the folder needs to be created with the right permissions)

Then add the wait-for-it script:
ADD https://raw.githubusercontent.com/vishnubob/wait-for-it/e1f115e4ca285c3c24e847c4dd4be955e0ed51c2/wait-for-it.sh /utils/wait-for-it.sh
This script waits until Elasticsearch is up before running our insert commands.
RUN /docker-entrypoint.sh elasticsearch -p /tmp/epid & /bin/bash /utils/wait-for-it.sh -t 0 localhost:9200 -- path/to/insert/script.sh; kill $(cat /tmp/epid) && wait $(cat /tmp/epid); exit 0;
This command starts Elasticsearch during the build process, inserts the data, and shuts it down, all in one RUN command. The container is left as it was, except that Elasticsearch's data folder has now been properly initialized.
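The answer leaves path/to/insert/script.sh unspecified. A minimal sketch of what such a script could look like; the index name, the file names index.json and bulk.json, and the ES_URL variable are all assumptions, not part of the original answer:

```shell
#!/bin/bash
# Hypothetical sketch of path/to/insert/script.sh -- the original answer does not show it.
# Assumes index.json (index settings/mappings) and bulk.json (documents in
# Elasticsearch bulk format) were copied into the image next to this script.
ES_URL="${ES_URL:-http://localhost:9200}"

create_index() {
  # Create the index, taking settings and mappings from a JSON file
  curl -s -XPUT "$ES_URL/$1" -H 'Content-Type: application/json' -d @"$2"
}

bulk_load() {
  # Insert documents prepared in the _bulk API's newline-delimited format
  curl -s -XPOST "$ES_URL/$1/_bulk" -H 'Content-Type: application/json' --data-binary @"$2"
}

create_index myindex index.json || true   # || true: don't abort the build on a transient curl error
bulk_load myindex bulk.json || true
```

Note that very old Elasticsearch images (like the one this answer targets) accept requests without a Content-Type header, but Elasticsearch 6.0 and later require it, so it is included here.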
The complete Dockerfile:

FROM elasticsearch

RUN mkdir /data && chown -R elasticsearch:elasticsearch /data && echo 'es.path.data: /data' >> config/elasticsearch.yml && echo 'path.data: /data' >> config/elasticsearch.yml

ADD https://raw.githubusercontent.com/vishnubob/wait-for-it/e1f115e4ca285c3c24e847c4dd4be955e0ed51c2/wait-for-it.sh /utils/wait-for-it.sh

# Copy the files you may need and your insert script

RUN /docker-entrypoint.sh elasticsearch -p /tmp/epid & /bin/bash /utils/wait-for-it.sh -t 0 localhost:9200 -- path/to/insert/script.sh; kill $(cat /tmp/epid) && wait $(cat /tmp/epid); exit 0;
And that's it! When you run this image, the database will have preloaded data, indexes, etc...
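Since the goal was a single docker-compose up -d, the image can be wired into a compose file. A minimal sketch, assuming the Dockerfile above sits in the current directory and the default HTTP port is exposed (the service name is an arbitrary choice):

```yaml
# Hypothetical docker-compose.yml for the preloaded image
services:
  elasticsearch:
    build: .          # the Dockerfile above; data is baked in at build time
    ports:
      - "9200:9200"   # Elasticsearch HTTP API
```

Because the data lives in the image itself rather than in a volume, every docker-compose up -d starts from the same preloaded state.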