I'm attempting to run an ELK stack using Docker. I found docker-elk, which has already set up the config for me using docker-compose.
I'd like to store the Elasticsearch data on the host machine instead of in a container. As per docker-elk's README, I added a volumes line to the elasticsearch section of docker-compose.yml:
elasticsearch:
  image: elasticsearch:latest
  command: elasticsearch -Des.network.host=0.0.0.0
  ports:
    - "9200"
    - "9300"
  volumes:
    - ../../env/elasticsearch:/usr/share/elasticsearch/data
However, when I run docker-compose up I get:
$ docker-compose up
Starting dev_elasticsearch_1
Starting dev_logstash_1
Starting dev_kibana_1
Attaching to dev_elasticsearch_1, dev_logstash_1, dev_kibana_1
kibana_1 | Stalling for Elasticsearch
elasticsearch_1 | [2016-03-09 00:23:35,193][WARN ][bootstrap ] unable to install syscall filter: seccomp unavailable: your kernel is buggy and you should upgrade
elasticsearch_1 | Exception in thread "main" java.lang.IllegalStateException: Unable to access 'path.data' (/usr/share/elasticsearch/data/elasticsearch)
elasticsearch_1 | Likely root cause: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/elasticsearch
elasticsearch_1 | at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
elasticsearch_1 | at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
elasticsearch_1 | at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
... etc ...
Looking in ../../env, the elasticsearch directory was indeed created, but it was empty. If I create ../../env/elasticsearch/elasticsearch, I then get an access error for /usr/share/elasticsearch/data/elasticsearch/nodes. If I create /nodes, I get an error for /nodes/0, and so on.
In short, it appears that the container doesn't have write permissions on the directory.
How do I give it write permissions? I tried chmod a+wx ../../env/elasticsearch, and then it manages to create the next directory, but that directory has permissions drwxr-xr-x and it gets stuck again.
I don't like the idea of having to run this as root.
In your Dockerfile, you can simply use RUN chown <your user> <location> to give that user ownership of the data location.
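A minimal sketch of that chown approach, assuming the official elasticsearch image (which already contains an elasticsearch user) and the data path from the compose file above:

```dockerfile
FROM elasticsearch:latest
# Assumption: /usr/share/elasticsearch/data is the path mounted in docker-compose.yml.
RUN mkdir -p /usr/share/elasticsearch/data \
 && chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/data
```

Note that this only bakes ownership into the image; a host bind mount will shadow that directory at run time with the host directory's ownership, which is what the entrypoint-based answer further down works around.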
Another option: find out the name of the volume with docker volume list, shut down all running containers that the volume is attached to, then run docker run -it --rm --mount source=[NAME OF VOLUME],target=/volume busybox. A shell will open, in which you can inspect and fix the permissions of the volume's contents.
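The fix you would apply in that shell is an ownership/permission change on the mount point. A sketch of just that step, using a throwaway local directory as a stand-in for the mounted volume (UID 1000 and the paths are illustrative assumptions):

```shell
#!/bin/sh
# Stand-in for the mounted volume. With a real named volume you would instead
# run inside the busybox container something like:  chown -R 1000:1000 /volume
# (1000 is an assumption -- use the UID the elasticsearch process runs as.)
volume=$(mktemp -d)
mkdir -p "$volume/nodes/0"       # mimic the directories elasticsearch creates
chmod -R a+rwX "$volume"         # make every level writable and traversable
stat -c '%a' "$volume/nodes"     # prints 777: subdirectories are writable too
```

The -R flag matters: as the question shows, fixing only the top-level directory leaves each freshly created subdirectory (drwxr-xr-x) unwritable again.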
Docker doesn't tend to worry about these things in its base images because it expects you to use volumes or volume containers; mounting to the host gets second-class support. But as long as the UID that owns the directory is not zero (and it seems it's not, based on our comment exchange), you should be able to get away with running Elasticsearch as the user who already owns the directory. You could try removing and re-adding the elasticsearch user inside the container, specifying its UID.
You would need to do this at entrypoint time, so your best bet is to build a custom image. Create a file called my-entrypoint with these contents:
#!/bin/bash
# Allow running arbitrary one-off commands
[[ $1 && $1 != elasticsearch ]] && exec "$@"
# Otherwise, fix perms and then delegate the rest to vanilla
target_uid=$(stat -c %u /usr/share/elasticsearch/data)
userdel elasticsearch
useradd -u "$target_uid" elasticsearch
. /docker-entrypoint "$@"
Make sure it's executable. Then create a Dockerfile with these contents:
FROM elasticsearch
COPY my-entrypoint /
ENTRYPOINT ["/my-entrypoint"]
And finally update your docker-compose.yml file:
elasticsearch:
  build: .
  command: elasticsearch -Des.network.host=0.0.0.0
  ports:
    - "9200"
    - "9300"
  volumes:
    - ../../env/elasticsearch:/usr/share/elasticsearch/data
Now when you run docker-compose up, it should build an Elasticsearch image with your changes and start a container from it.
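If the fix took, the files the container creates should end up owned by a non-root, non-zero UID on the host. A sketch of how to check (a throwaway directory stands in for ../../env/elasticsearch here, since docker-compose isn't actually run):

```shell
#!/bin/sh
# Sketch: checking numeric ownership of the data directory.
data_dir=$(mktemp -d)            # stand-in for ../../env/elasticsearch
mkdir -p "$data_dir/nodes/0"     # what a working elasticsearch would create
ls -ln "$data_dir"               # -n shows numeric UID/GID instead of names
stat -c '%u' "$data_dir/nodes"   # should match your own UID, not 0 (root)
```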
(I had to do something like this once with apache for Magento.)