
Root user in Elasticsearch 2.4.0 in Docker container

I am running the ELK stack with Docker for log management, currently with ES 1.7, Logstash 1.5.4 and Kibana 4.1.4. Now I am trying to upgrade Elasticsearch to 2.4.0, using the tar.gz distribution from https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.4.0/elasticsearch-2.4.0.tar.gz with Docker. As ES 2.x does not allow running as the root user, I have used the

-Des.insecure.allow.root=true

option while running the Elasticsearch service, yet my container doesn't start. The logs don't mention any problem.
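
For reference, the flag is passed to the binary roughly like this (a sketch; the full ENTRYPOINT is in the Dockerfile under EDIT 4 below):

${ES_DIR}/bin/elasticsearch -Des.insecure.allow.root=true

The startup output: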

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   874  100   874    0     0    874k      0 --:--:-- --:--:-- --:--:--  853k
//opt//log-management//elasticsearch/bin/elasticsearch: line 134: hostname: command not found

> [email protected] start /opt/log-management/Scheduler
> node scheduler-app.js

> [email protected] start /opt/log-management/ESExportWrapper
> node app.js
Jobs are registered
[2016-09-28 09:04:24,646][INFO ][bootstrap ] max_open_files [1048576]
[2016-09-28 09:04:24,686][WARN ][bootstrap ] running as ROOT user. this is a bad idea!
Native thread-sleep not available.
This will result in much slower performance, but it will still work.
You should re-install spawn-sync or upgrade to the lastest version of node if possible.
Check /opt/log-management/ESExportWrapper/node_modules/sync-request/node_modules/spawn-sync/error.log for more details
[2016-09-28 09:04:24,874][INFO ][node ] [Kismet Deadly] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z]
[2016-09-28 09:04:24,874][INFO ][node ] [Kismet Deadly] initializing ...
Wed, 28 Sep 2016 09:04:24 GMT express deprecated app.configure: Check app.get('env') in an if statement at lib/express/index.js:60:5
Wed, 28 Sep 2016 09:04:24 GMT connect deprecated multipart: use parser (multiparty, busboy, formidable) npm module instead at node_modules/express/node_modules/connect/lib/middleware/bodyParser.js:56:20
Wed, 28 Sep 2016 09:04:24 GMT connect deprecated limit: Restrict request size at location of read at node_modules/express/node_modules/connect/lib/middleware/multipart.js:86:15
[2016-09-28 09:04:25,399][INFO ][plugins ] [Kismet Deadly] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-09-28 09:04:25,423][INFO ][env ] [Kismet Deadly] using [1] data paths, mounts [[/data (/dev/mapper/platform-data)]], net usable_space [1tb], net total_space [1tb], spins? [possibly], types [xfs]
[2016-09-28 09:04:25,423][INFO ][env ] [Kismet Deadly] heap size [7.8gb], compressed ordinary object pointers [true]
[2016-09-28 09:04:25,455][WARN ][threadpool ] [Kismet Deadly] requested thread pool size [60] for [index] is too large; setting to maximum [24] instead
[2016-09-28 09:04:27,575][INFO ][node ] [Kismet Deadly] initialized
[2016-09-28 09:04:27,575][INFO ][node ] [Kismet Deadly] starting ...
[2016-09-28 09:04:27,695][INFO ][transport ] [Kismet Deadly] publish_address {10.240.118.68:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2016-09-28 09:04:27,700][INFO ][discovery ] [Kismet Deadly] ccs-elasticsearch/q2Sv4FUFROGIdIWJrNENVA

Any leads would be appreciated.

EDIT 1: As //opt//log-management//elasticsearch/bin/elasticsearch: line 134: hostname: command not found was an actual error and the Docker image does not have the hostname utility, I tried using the uname -n command to get HOSTNAME in ES. It no longer throws the hostname error, but the problem remains the same: it does not start. Is uname -n a correct alternative to use?
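
Roughly, the substitution amounts to this (a sketch; in my setup the startup script only needs hostname to populate HOSTNAME):

# uname -n prints the kernel node name, the same value hostname would print
export HOSTNAME=$(uname -n)
${ES_DIR}/bin/elasticsearch -Des.insecure.allow.root=true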

One more doubt: ES 1.7, which is currently up and running, does not have the hostname utility either, yet it runs without any problem. Very confusing. Logs after using uname -n:

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1083  100  1083    0     0  1093k      0 --:--:-- --:--:-- --:--:-- 1057k

> [email protected] start /opt/log-management/ESExportWrapper
> node app.js


> [email protected] start /opt/log-management/Scheduler
> node scheduler-app.js

Jobs are registered
[2016-09-30 10:10:37,785][INFO ][bootstrap                ] max_open_files [1048576]
[2016-09-30 10:10:37,822][WARN ][bootstrap                ] running as ROOT user. this is a bad idea!
Native thread-sleep not available.
This will result in much slower performance, but it will still work.
You should re-install spawn-sync or upgrade to the lastest version of node if possible.
Check /opt/log-management/ESExportWrapper/node_modules/sync-request/node_modules/spawn-sync/error.log for more details
[2016-09-30 10:10:37,993][INFO ][node                     ] [Helleyes] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z]
[2016-09-30 10:10:37,993][INFO ][node                     ] [Helleyes] initializing ...
Fri, 30 Sep 2016 10:10:38 GMT express deprecated app.configure: Check app.get('env') in an if statement at lib/express/index.js:60:5
Fri, 30 Sep 2016 10:10:38 GMT connect deprecated multipart: use parser (multiparty, busboy, formidable) npm module instead at node_modules/express/node_modules/connect/lib/middleware/bodyParser.js:56:20
Fri, 30 Sep 2016 10:10:38 GMT connect deprecated limit: Restrict request size at location of read at node_modules/express/node_modules/connect/lib/middleware/multipart.js:86:15
[2016-09-30 10:10:38,435][INFO ][plugins                  ] [Helleyes] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-09-30 10:10:38,455][INFO ][env                      ] [Helleyes] using [1] data paths, mounts [[/data (/dev/mapper/platform-data)]], net usable_space [1tb], net total_space [1tb], spins? [possibly], types [xfs]
[2016-09-30 10:10:38,456][INFO ][env                      ] [Helleyes] heap size [7.8gb], compressed ordinary object pointers [true]
[2016-09-30 10:10:38,483][WARN ][threadpool               ] [Helleyes] requested thread pool size [60] for [index] is too large; setting to maximum [24] instead
[2016-09-30 10:10:40,151][INFO ][node                     ] [Helleyes] initialized
[2016-09-30 10:10:40,152][INFO ][node                     ] [Helleyes] starting ...
[2016-09-30 10:10:40,278][INFO ][transport                ] [Helleyes] publish_address {10.240.118.68:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2016-09-30 10:10:40,283][INFO ][discovery                ] [Helleyes] ccs-elasticsearch/wvVGkhxnTqaa_wS5GGjZBQ
[2016-09-30 10:10:40,360][WARN ][transport.netty          ] [Helleyes] exception caught on transport layer [[id: 0x329b2977, /172.17.0.15:53388 => /10.240.118.69:9300]], closing connection
java.lang.NullPointerException
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-09-30 10:10:40,360][WARN ][transport.netty          ] [Helleyes] exception caught on transport layer [[id: 0xdf31e5e6, /172.17.0.15:46846 => /10.240.118.70:9300]], closing connection
java.lang.NullPointerException
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-09-30 10:10:41,798][WARN ][transport.netty          ] [Helleyes] exception caught on transport layer [[id: 0xcff0b2b6, /172.17.0.15:46958 => /10.240.118.70:9300]], closing connection
java.lang.NullPointerException
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-09-30 10:10:41,800][WARN ][transport.netty          ] [Helleyes] exception caught on transport layer [[id: 0xb47caaf6, /172.17.0.15:53501 => /10.240.118.69:9300]], closing connection
java.lang.NullPointerException
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-09-30 10:10:43,302][WARN ][transport.netty          ] [Helleyes] exception caught on transport layer [[id: 0x6247aa3f, /172.17.0.15:47057 => /10.240.118.70:9300]], closing connection
java.lang.NullPointerException
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-09-30 10:10:43,303][WARN ][transport.netty          ] [Helleyes] exception caught on transport layer [[id: 0x1d266aa0, /172.17.0.15:53598 => /10.240.118.69:9300]], closing connection
java.lang.NullPointerException
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:179)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:174)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:122)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[2016-09-30 10:10:44,807][INFO ][cluster.service          ] [Helleyes] new_master {Helleyes}{wvVGkhxnTqaa_wS5GGjZBQ}{10.240.118.68}{10.240.118.68:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-09-30 10:10:44,852][INFO ][http                     ] [Helleyes] publish_address {10.240.118.68:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2016-09-30 10:10:44,852][INFO ][node                     ] [Helleyes] started
[2016-09-30 10:10:44,984][INFO ][gateway                  ] [Helleyes] recovered [32] indices into cluster_state

Error after the failed deployment:

failed: [10.240.118.68] (item={u'url': u'http://10.240.118.68:9200'}) => {"content": "", "failed": true, "item": {"url": "http://10.240.118.68:9200"}, "msg": "Status code was not [200]: Request failed: <urlopen error [Errno 111] Connection refused>", "redirected": false, "status": -1, "url": "http://10.240.118.68:9200"}

EDIT 2: Even with the hostname utility installed and working fine, the containers don't start. The logs are the same as in EDIT 1.

EDIT 3: The container does start but is not reachable at http://nodeip:9200. Out of 3 nodes, only 1 has 2.4; the other 2 still have 1.7, and the 2.4 node is not part of the cluster. Inside the container running 2.4, curl to localhost:9200 returns the usual running-Elasticsearch response, but the node is unreachable from outside.
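
A sketch of the checks that show the symptom (using this node's publish address from the logs as an example):

# inside the container running 2.4 — responds normally
curl http://localhost:9200

# from the host or another node — refused, matching the connection-refused error above
curl http://10.240.118.68:9200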

EDIT 4: I tried running a basic installation of ES 2.4 on the cluster, on the same setup where ES 1.7 works fine. I ran the ES migration plugin to check whether the cluster is okay to run ES 2.4, and it gave me green. Basic installation details follow.

Dockerfile

#Pulling SLES12 thin base image
FROM private-registry-1

#Author
MAINTAINER XYZ

# Pre-requisite - Adding repositories
RUN zypper ar private-registry-2

RUN zypper --no-gpg-checks -n refresh

#Install required packages and dependencies
RUN zypper -n in  net-tools-1.60-764.185 wget-1.14-7.1 python-2.7.9-14.1 python-base-2.7.9-14.1 tar-1.27.1-7.1 

#Downloading elasticsearch executable
ENV ES_VERSION=2.4.0
ENV ES_DIR="//opt//log-management//elasticsearch"
ENV ES_CONFIG_PATH="${ES_DIR}//config"
ENV ES_REST_PORT=9200
ENV ES_INTERNAL_COM_PORT=9300

WORKDIR /opt/log-management
RUN wget private-registry-3/elasticsearch/elasticsearch/${ES_VERSION}.tar/elasticsearch-${ES_VERSION}.tar.gz --no-check-certificate
RUN tar -xzvf ${ES_DIR}-${ES_VERSION}.tar.gz \
&& rm ${ES_DIR}-${ES_VERSION}.tar.gz \
&& mv ${ES_DIR}-${ES_VERSION} ${ES_DIR} 

#Exposing elasticsearch server container port to the HOST
EXPOSE ${ES_REST_PORT} ${ES_INTERNAL_COM_PORT}

#Removing binary files which are not needed
RUN zypper -n rm wget

# Removing zypper repos
RUN zypper rr caspiancs_common

#Running elasticsearch executable
WORKDIR ${ES_DIR}
ENTRYPOINT ${ES_DIR}/bin/elasticsearch -Des.insecure.allow.root=true

Build with

docker build -t es-test .

1) When run with docker run -d --name elasticsearch --net=host -p 9200:9200 -p 9300:9300 es-test, as suggested in one of the comments, curl localhost:9200 inside the container or on the node running the container gives the correct response. I still can't reach the other nodes of the cluster on port 9200.

2) When run with docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 es-test, curl localhost:9200 works fine inside the container, but on the node it gives me the error

curl: (56) Recv failure: Connection reset by peer

and I still can't reach the other nodes of the cluster on port 9200.

EDIT 5: Using this answer on this question, I got all three containers running ES 2.4, but ES is unable to form a cluster across them. The network configuration is network.host: 0.0.0.0 and http.port: 9200, plus

#configure elasticsearch.yml for clustering
echo 'discovery.zen.ping.unicast.hosts: [ELASTICSEARCH_IPS] ' >> ${ES_CONFIG_PATH}/elasticsearch.yml
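
Putting these settings together, the clustering section appended to elasticsearch.yml amounts to roughly this (a sketch; ELASTICSEARCH_IPS stands for the node list, here illustrated with the three host addresses from the logs):

cat >> ${ES_CONFIG_PATH}/elasticsearch.yml <<'EOF'
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.240.118.68", "10.240.118.69", "10.240.118.70"]
EOF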

The logs obtained with docker logs elasticsearch follow:

[2016-10-06 12:31:28,887][WARN ][bootstrap                ] running as ROOT user. this is a bad idea!
[2016-10-06 12:31:29,080][INFO ][node                     ] [Screech] version[2.4.0], pid[1], build[ce9f0c7/2016-08-29T09:14:17Z]
[2016-10-06 12:31:29,081][INFO ][node                     ] [Screech] initializing ...
[2016-10-06 12:31:29,652][INFO ][plugins                  ] [Screech] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2016-10-06 12:31:29,684][INFO ][env                      ] [Screech] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [8.7gb], net total_space [9.7gb], spins? [unknown], types [rootfs]
[2016-10-06 12:31:29,684][INFO ][env                      ] [Screech] heap size [989.8mb], compressed ordinary object pointers [true]
[2016-10-06 12:31:29,720][WARN ][threadpool               ] [Screech] requested thread pool size [60] for [index] is too large; setting to maximum [5] instead
[2016-10-06 12:31:31,387][INFO ][node                     ] [Screech] initialized
[2016-10-06 12:31:31,387][INFO ][node                     ] [Screech] starting ...
[2016-10-06 12:31:31,456][INFO ][transport                ] [Screech] publish_address {172.17.0.16:9300}, bound_addresses {[::]:9300}
[2016-10-06 12:31:31,465][INFO ][discovery                ] [Screech] ccs-elasticsearch/YeO41MBIR3uqzZzISwalmw
[2016-10-06 12:31:34,500][WARN ][discovery.zen            ] [Screech] failed to connect to master [{Bobster}{Gh-6yBggRIypr7OuW1tXhA}{172.17.0.15}{172.17.0.15:9300}], retrying...
ConnectTransportException[[Bobster][172.17.0.15:9300] connect_timeout[30s]]; nested: ConnectException[Connection refused: /172.17.0.15:9300];
    at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:1002)
    at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:937)
    at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:911)
    at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:260)
    at org.elasticsearch.discovery.zen.ZenDiscovery.joinElectedMaster(ZenDiscovery.java:444)
    at org.elasticsearch.discovery.zen.ZenDiscovery.innerJoinCluster(ZenDiscovery.java:396)
    at org.elasticsearch.discovery.zen.ZenDiscovery.access$4400(ZenDiscovery.java:96)
    at org.elasticsearch.discovery.zen.ZenDiscovery$JoinThreadControl$1.run(ZenDiscovery.java:1296)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: /172.17.0.15:9300
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)

Whenever I set network.host to the IP address of the host running the container, I end up in the old situation, i.e. only one container running ES 2.4 and the other two running 1.7.

I just saw that docker-proxy is listening on 9300, or at least I think it is:

elasticsearch-server/src/main/docker # netstat -nlp | grep 9300
tcp        0      0 :::9300                 :::*                    LISTEN      6656/docker-proxy   
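
To verify that the proxied transport port is actually reachable from a peer, something like this could be run from one of the other nodes (a sketch; assumes nc is available there):

nc -vz 10.240.118.68 9300    # transport port; should report success if docker-proxy forwards it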

Any leads on this?

asked Sep 29 '16 by vvs14




2 Answers

I was able to form a cluster with the following settings:

network.bind_host=0.0.0.0
network.publish_host=CONTAINER_HOST_ADDRESS (i.e. the address of the node where the container is running)
transport.publish_host=CONTAINER_HOST_ADDRESS
transport.publish_port=9300

transport.publish_port is important when you are running ES behind a proxy or load balancer such as nginx or HAProxy.
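
For illustration, one way to wire these into the tar.gz setup is to pass them as system properties at startup, which ES 2.x accepts via -Des.<setting> (a sketch; HOST_IP is a placeholder for each node's own address):

HOST_IP=10.240.118.68    # placeholder: this node's address
${ES_DIR}/bin/elasticsearch -Des.insecure.allow.root=true \
    -Des.network.bind_host=0.0.0.0 \
    -Des.network.publish_host=${HOST_IP} \
    -Des.transport.publish_host=${HOST_IP} \
    -Des.transport.publish_port=9300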

answered Oct 22 '22 by vvs14


As per the documentation for Elasticsearch 2.x, by default network.host binds to localhost.

You would need to explicitly set network.host: 0.0.0.0, as specified in this answer:

Example:

ENTRYPOINT ${ES_DIR}/bin/elasticsearch -Des.insecure.allow.root=true -Des.network.host=0.0.0.0 
answered Oct 22 '22 by keety