
Ubuntu - elasticsearch - Error: Cannot allocate memory

I'm trying to install Elasticsearch on my local Ubuntu machine, following the guide at:

https://www.elastic.co/guide/en/elasticsearch/reference/current/_installation.html

When I try to run './elasticsearch', I get the following error:

Java HotSpot(TM) 64-Bit Server VM warning: INFO:
os::commit_memory(0x00007f0e50cc0000, 64075595776, 0) failed;
error='Cannot allocate memory' (errno=12)
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (mmap) failed to map 64075595776 bytes for committing reserved memory

Here are the memory stats:

             total       used       free     shared    buffers     cached
Mem:       8113208    4104900    4008308      44244     318076    1926964
-/+ buffers/cache:    1859860    6253348
Swap:      7812092          0    7812092

Error message from the logs:

There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 64075595776 bytes for committing reserved memory.
# Possible reasons:
#   The system is out of physical RAM or swap space
#   In 32 bit mode, the process size limit was hit
# Possible solutions:
#   Reduce memory load on the system
#   Increase physical memory or swap space
#   Check if swap backing store is full
#   Use 64 bit Java on a 64 bit OS
#   Decrease Java heap size (-Xmx/-Xms)
#   Decrease number of Java threads
#   Decrease Java thread stack sizes (-Xss)
#   Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
#  Out of Memory Error (os_linux.cpp:2627), pid=13021, tid=139764129740544
#
# JRE version:  (8.0_66-b17) (build )
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.66-b17 mixed mode linux-amd64 )
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again

I've already tried an earlier version, and installing from the repositories using apt; nothing worked.

Anyone have any idea what might be the problem?

Asked Jan 12 '16 by Nemilenko

4 Answers

Looks like you're trying to start Elasticsearch with the default options, which attempt to allocate a 2 GB heap. If you don't have that much free memory... kaboom (silently).

Have a look in /etc/elasticsearch/jvm.options and modify the lines:

-Xms2g
-Xmx2g

to something that will fit in your available memory. But be aware that Elasticsearch is a great big memory hog and wants it all. You may not get a useful system below the 2 GB limit.
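As a quick sketch of the steps above (the 512 MB target and the `jvm.options` path are assumptions for illustration, not values from the question), you can check what's free and lower both settings in one go:

```shell
# Show available memory in MB
free -m

# Inspect the current heap settings (path assumed: /etc/elasticsearch/jvm.options)
grep -E '^-Xm[sx]' /etc/elasticsearch/jvm.options

# Lower both values to 512 MB, keeping a .bak backup of the file
sudo sed -i.bak -E 's/^-Xms[0-9]+[gm]/-Xms512m/; s/^-Xmx[0-9]+[gm]/-Xmx512m/' \
    /etc/elasticsearch/jvm.options
```

Keeping `-Xms` and `-Xmx` equal avoids heap resizing at runtime, which is why both lines are changed together.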

Answered Nov 13 '22 by Nello


First of all, Elasticsearch uses a hybrid mmapfs / niofs directory. The operating system limits the number of memory map areas a process may have, and the default is usually 65536, which can result in out-of-memory exceptions. On Linux, you can raise this kernel default by running the following command as root:

sysctl -w vm.max_map_count=262144

Or permanently, by updating the vm.max_map_count setting in /etc/sysctl.conf. You can also change it with the following command:

echo 262144 > /proc/sys/vm/max_map_count
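To verify the change and make it survive reboots, something like the following should work (assuming a stock Ubuntu /etc/sysctl.conf that doesn't already set this key):

```shell
# Check the current limit
sysctl vm.max_map_count

# Raise it for the running kernel (requires root)
sudo sysctl -w vm.max_map_count=262144

# Persist across reboots, then reload sysctl settings
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```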

For more info, please check Linux kernel documentation:

max_map_count: This file contains the maximum number of memory map areas a process may have. Memory map areas are used as a side-effect of calling malloc, directly by mmap and mprotect, and also when loading shared libraries. While most applications need less than a thousand maps, certain programs, particularly malloc debuggers, may consume lots of them, e.g., up to one or two maps per allocation. The default value is 65536.

The second thing to take into account is the JVM minimum and maximum heap size. You can change these values on your machine in /etc/elasticsearch/jvm.options. To choose suitable values, you can find good rules of thumb on Elastic's "Set JVM heap size" page.
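As a rule-of-thumb sketch of that Elastic guidance (set Xms and Xmx equal, to roughly half of physical RAM, and stay below ~32 GB so the JVM keeps compressed object pointers; the script below is an illustration, not from the answer):

```shell
# Read total RAM in kB from /proc/meminfo
total_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')

# Half of RAM, in MB
half_mb=$(( total_kb / 2 / 1024 ))

# Cap below 32 GB (use 31 GB to be safe)
cap_mb=$(( 31 * 1024 ))
heap_mb=$(( half_mb < cap_mb ? half_mb : cap_mb ))

echo "Suggested: -Xms${heap_mb}m -Xmx${heap_mb}m"
```

On the asker's 8 GB machine this would suggest roughly a 4 GB heap, well under the 64 GB mmap attempt in the error above.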

Answered Nov 13 '22 by Soroush Nejad


This answer worked for me.

I changed the initial and maximum size of the total heap space by editing the values below in

/etc/elasticsearch/jvm.options

From:

-Xms1g
-Xmx1g

To:

-Xms256m
-Xmx256m

Answered Nov 13 '22 by MudithaE


Edit jvm.options:

sudo nano /etc/elasticsearch/jvm.options

Change the JVM heap size from

-Xms2g
-Xmx2g

to

-Xms200m
-Xmx200m

Reason: Elasticsearch tries to allocate a 2 GB heap by default, which is bound to fail on a memory-constrained machine, so we reduced it to 200 MB.
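If you just want to try a smaller heap without editing the file, Elasticsearch's launcher also honors the ES_JAVA_OPTS environment variable as a one-off override (the 200 MB value here mirrors this answer; adjust to taste):

```shell
# Per-run heap override; does not modify /etc/elasticsearch/jvm.options
ES_JAVA_OPTS="-Xms200m -Xmx200m" ./bin/elasticsearch
```

This is handy for confirming that the heap size is the culprit before making the change permanent.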

Answered Nov 13 '22 by Vikrant Sharma