I'm trying to improve the performance of an Elasticsearch 6.2.4 installation by setting bootstrap.memory_lock: true. I have made the following changes:
1) File /etc/default/elasticsearch
ES_JAVA_OPTS="-Xms4g -Xmx4g"
MAX_LOCKED_MEMORY=unlimited
2) File /etc/security/limits.conf
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
3) File /usr/lib/systemd/system/elasticsearch.service
changed as below and ran systemctl daemon-reload (an alternative using a systemd drop-in file is sketched after this list)
LimitMEMLOCK=infinity
4) File /etc/elasticsearch/elasticsearch.yml
bootstrap.memory_lock: true
5) File /etc/elasticsearch/jvm.options
-Xms4g
-Xmx4g
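For reference, editing /usr/lib/systemd/system/elasticsearch.service directly (step 3) can be overwritten when the package is upgraded. A sketch of the drop-in alternative, assuming a standard systemd layout where systemctl edit creates the override file:

sudo systemctl edit elasticsearch.service

Put the following in the override (it ends up in /etc/systemd/system/elasticsearch.service.d/override.conf):

[Service]
LimitMEMLOCK=infinity

then run sudo systemctl daemon-reload and restart the service.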
The output of ulimit -a in my shell is:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 30689
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 30689
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
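Note that the output above reflects my interactive shell (max locked memory is still 64 KB there), not necessarily the limits systemd applies to the elasticsearch unit. A rough way to check what the running service actually gets (the PID 2823 is taken from the log below and will differ on every start):

sudo systemctl show elasticsearch.service --property=LimitMEMLOCK
sudo grep "Max locked memory" /proc/2823/limits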
After making these changes, Elasticsearch fails to start, with the following log:
[2018-07-17T12:58:17,514][WARN ][o.e.b.JNANatives ] Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
[2018-07-17T12:58:17,517][WARN ][o.e.b.JNANatives ] This can result in part of the JVM being swapped out.
[2018-07-17T12:58:17,517][WARN ][o.e.b.JNANatives ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2018-07-17T12:58:17,517][WARN ][o.e.b.JNANatives ] These can be adjusted by modifying /etc/security/limits.conf, for example:
# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
[2018-07-17T12:58:17,518][WARN ][o.e.b.JNANatives ] If you are logged in interactively, you will have to re-login for the new limits to take effect.
[2018-07-17T12:58:17,684][INFO ][o.e.n.Node ] [] initializing ...
[2018-07-17T12:58:17,757][INFO ][o.e.e.NodeEnvironment ] [8fsU41g] using [1] data paths, mounts [[/ (/dev/nvme0n1p1)]], net usable_space [5.4gb], net total_space [7.6gb], types [ext4]
[2018-07-17T12:58:17,758][INFO ][o.e.e.NodeEnvironment ] [8fsU41g] heap size [3.9gb], compressed ordinary object pointers [true]
[2018-07-17T12:58:17,808][INFO ][o.e.n.Node ] node name [8fsU41g] derived from node ID [8fsU41ghScq506TqNnjegQ]; set [node.name] to override
[2018-07-17T12:58:17,809][INFO ][o.e.n.Node ] version[6.2.4], pid[2823], build[ccec39f/2018-04-12T20:37:28.497551Z], OS[Linux/4.4.0-1062-aws/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_171/25.171-b11]
[2018-07-17T12:58:17,809][INFO ][o.e.n.Node ] JVM arguments [-Xms4g, -Xmx4g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.POxZWZQp, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:/var/log/elasticsearch/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Xms4g, -Xmx4g, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch]
[2018-07-17T12:58:18,564][INFO ][o.e.p.PluginsService ] [8fsU41g] loaded module [aggs-matrix-stats]
[2018-07-17T12:58:18,564][INFO ][o.e.p.PluginsService ] [8fsU41g] loaded module [analysis-common]
[2018-07-17T12:58:18,564][INFO ][o.e.p.PluginsService ] [8fsU41g] loaded module [ingest-common]
[2018-07-17T12:58:18,564][INFO ][o.e.p.PluginsService ] [8fsU41g] loaded module [lang-expression]
[2018-07-17T12:58:18,564][INFO ][o.e.p.PluginsService ] [8fsU41g] loaded module [lang-mustache]
[2018-07-17T12:58:18,564][INFO ][o.e.p.PluginsService ] [8fsU41g] loaded module [lang-painless]
[2018-07-17T12:58:18,564][INFO ][o.e.p.PluginsService ] [8fsU41g] loaded module [mapper-extras]
[2018-07-17T12:58:18,565][INFO ][o.e.p.PluginsService ] [8fsU41g] loaded module [parent-join]
[2018-07-17T12:58:18,565][INFO ][o.e.p.PluginsService ] [8fsU41g] loaded module [percolator]
[2018-07-17T12:58:18,565][INFO ][o.e.p.PluginsService ] [8fsU41g] loaded module [rank-eval]
[2018-07-17T12:58:18,565][INFO ][o.e.p.PluginsService ] [8fsU41g] loaded module [reindex]
[2018-07-17T12:58:18,565][INFO ][o.e.p.PluginsService ] [8fsU41g] loaded module [repository-url]
[2018-07-17T12:58:18,565][INFO ][o.e.p.PluginsService ] [8fsU41g] loaded module [transport-netty4]
[2018-07-17T12:58:18,565][INFO ][o.e.p.PluginsService ] [8fsU41g] loaded module [tribe]
[2018-07-17T12:58:18,565][INFO ][o.e.p.PluginsService ] [8fsU41g] no plugins loaded
[2018-07-17T12:58:21,149][INFO ][o.e.d.DiscoveryModule ] [8fsU41g] using discovery type [zen]
[2018-07-17T12:58:21,633][INFO ][o.e.n.Node ] initialized
[2018-07-17T12:58:21,633][INFO ][o.e.n.Node ] [8fsU41g] starting ...
[2018-07-17T12:58:21,767][INFO ][o.e.t.TransportService ] [8fsU41g] publish_address {172.31.20.225:9300}, bound_addresses {[::]:9300}
[2018-07-17T12:58:21,790][INFO ][o.e.b.BootstrapChecks ] [8fsU41g] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2018-07-17T12:58:21,792][ERROR][o.e.b.Bootstrap ] [8fsU41g] node validation exception
[1] bootstrap checks failed
[1]: memory locking requested for elasticsearch process but memory is not locked
[2018-07-17T12:58:21,794][INFO ][o.e.n.Node ] [8fsU41g] stopping ...
[2018-07-17T12:58:21,820][INFO ][o.e.n.Node ] [8fsU41g] stopped
[2018-07-17T12:58:21,820][INFO ][o.e.n.Node ] [8fsU41g] closing ...
[2018-07-17T12:58:21,832][INFO ][o.e.n.Node ] [8fsU41g] closed
Are there any other changes I should make to get this working?
Run:
sudo vim /usr/lib/systemd/system/elasticsearch.service
Set:
[Service]
LimitMEMLOCK=infinity
Adding LimitMEMLOCK=infinity under the [Service] section solved the problem for me.
If you have already added the required configuration, reload systemd and restart the service:
sudo /bin/systemctl daemon-reload
sudo systemctl restart elasticsearch.service
sudo systemctl status elasticsearch.service
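Once the node starts, you can verify that memory locking took effect through the nodes info API (assuming the HTTP interface is on the default localhost:9200); mlockall should report true for the node:

curl -s 'http://localhost:9200/_nodes?filter_path=**.mlockall&pretty'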