
java.lang.OutOfMemoryError: unable to create new native thread for big data set

Tags: hadoop, hive

I have a Hive query that runs fine for a small dataset, but when I run it against 250 million records I get the errors below in the logs:

 FATAL org.apache.hadoop.mapred.Child: Error running child : java.lang.OutOfMemoryError:   unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:640)
    at org.apache.hadoop.mapred.Task$TaskReporter.startCommunicationThread(Task.java:725)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:362)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)



 2013-03-18 14:12:58,907 WARN org.apache.hadoop.mapred.Child: Error running child
 java.io.IOException: Cannot run program "ln": java.io.IOException: error=11, Resource temporarily unavailable
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
    at java.lang.Runtime.exec(Runtime.java:593)
    at java.lang.Runtime.exec(Runtime.java:431)
    at java.lang.Runtime.exec(Runtime.java:369)
    at org.apache.hadoop.fs.FileUtil.symLink(FileUtil.java:567)
    at org.apache.hadoop.mapred.TaskRunner.symlink(TaskRunner.java:787)
    at org.apache.hadoop.mapred.TaskRunner.setupWorkDir(TaskRunner.java:752)
    at org.apache.hadoop.mapred.Child.main(Child.java:225)
 Caused by: java.io.IOException: java.io.IOException: error=11, Resource temporarily unavailable
    at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
    at java.lang.ProcessImpl.start(ProcessImpl.java:65)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
    ... 7 more
2013-03-18 14:12:58,911 INFO org.apache.hadoop.mapred.Task: Runnning cleanup for the task
2013-03-18 14:12:58,911 INFO org.apache.hadoop.mapred.Child: Error cleaning up
  java.lang.NullPointerException
    at org.apache.hadoop.mapred.Task.taskCleanup(Task.java:1048)
    at org.apache.hadoop.mapred.Child.main(Child.java:281)

I need help with this.

asked Mar 19 '13 by hjamali52

2 Answers

I've experienced this with MapReduce in general. In my experience it's not actually an out-of-memory error - the system is running out of file descriptors needed to start threads, which is why the message says "unable to create new native thread".
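To confirm that it is a per-user limit rather than a real heap problem, you can compare the configured limits with what the task user is actually consuming. A rough sketch, run on a worker node while the job is running (the user name mapred is only an example - use whichever account runs your task trackers):

    ulimit -a                  # shows the "open files" and "max user processes" limits for this shell
    ps -u mapred -L | wc -l    # rough count of threads owned by that user
    lsof -u mapred | wc -l     # rough count of open file descriptors for that user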

The fix for us (on Linux) was to increase the ulimit, which was set to 1024, to 2048 via ulimit -n 2048. You need permission to do this: either sudo/root access, or a hard limit of 2048 or higher so you can raise the soft limit as your own user. You can put the command in your .profile or .bashrc settings file.
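As a minimal sketch (assuming a bash login shell), raising the limit for the current session and persisting it for new shells looks like:

    ulimit -n 2048                        # raise the soft limit for the current shell
    echo 'ulimit -n 2048' >> ~/.bashrc    # apply it to new shells for this user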

You can check your current settings with ulimit -a. See this reference for more details: https://stackoverflow.com/a/34645/871012

I've also seen many others talk about changing the /etc/security/limits.conf file, but I haven't had to do that yet. Here is a link talking about it: https://stackoverflow.com/a/8285278/871012
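For reference, a sketch of the kind of entries people add there; the user name hadoop and the values are only illustrative, and editing the file requires root:

    sudo sh -c 'echo "hadoop soft nofile 2048" >> /etc/security/limits.conf'
    sudo sh -c 'echo "hadoop hard nofile 4096" >> /etc/security/limits.conf'

The max-user-processes limit (nproc) can be raised the same way if thread creation rather than file descriptors turns out to be the bottleneck.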

answered by quux00

If your job is failing because of OutOfMemoryError on the nodes, you can tweak the maximum number of map and reduce tasks per node and the JVM opts for each. mapred.child.java.opts (the default is -Xmx200m) usually has to be increased based on your data nodes' specific hardware.
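A hedged sketch of both knobs; the values and the script name my_query.hql are only illustrative, and the right numbers depend on the RAM and core count of your data nodes:

    # Per query: give each map/reduce child JVM a larger heap when launching Hive
    hive -hiveconf mapred.child.java.opts=-Xmx1024m -f my_query.hql

    # Cluster-wide: set the same property, plus the per-node task caps
    # (mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum),
    # in mapred-site.xml on each task tracker and restart the task trackers.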

answered by Gargi