I am trying to run a MapReduce job with Hadoop, YARN and Accumulo.
I am getting the following output and cannot track down the issue. It looks to be a YARN problem, but I am not sure what it is looking for. I have an nmPrivate folder at $HADOOP_PREFIX/grid/hadoop/hdfs/yarn/logs. Is this the folder it says it cannot find?
14/03/31 08:48:46 INFO mapreduce.Job: Job job_1395942264921_0023 failed with state FAILED due to: Application application_1395942264921_0023 failed 2 times due to AM Container for appattempt_1395942264921_0023_000002 exited with exitCode: -1000 due to: Could not find any valid local directory for nmPrivate/container_1395942264921_0023_02_000001.tokens
Failing this attempt. Failing the application.
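For context: the NodeManager normally creates its nmPrivate directory under each path listed in yarn.nodemanager.local-dirs, not under the log directories, so that property is the place to check. A sketch of the relevant yarn-site.xml entry (the paths shown are illustrative, not taken from the question):

```xml
<!-- $HADOOP_PREFIX/etc/hadoop/yarn-site.xml -->
<configuration>
  <!-- nmPrivate/, usercache/, and filecache/ are created under each of these
       directories; they must exist and be writable by the YARN user -->
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/grid/hadoop/yarn/local</value>
  </property>
</configuration>
```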
YARN is the parallel-processing framework for distributed computing clusters that process huge amounts of data across multiple compute nodes. Hadoop YARN allows a compute job to be segmented into hundreds or thousands of tasks. In addition to resource management, YARN also offers job scheduling.
In a Hadoop ecosystem, many jobs execute within a short span of time. I was trying to run a Hive job for data validation on the Hive server in production, and while executing it from the Hive command line I got this type of error.
This is actually due to permission issues on some of the YARN local directories. I started using LinuxContainerExecutor (in non-secure mode, with nonsecure-mode.local-user set to kailash) and made the corresponding changes.
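A minimal sketch of the yarn-site.xml changes this answer refers to, assuming the non-secure local user is kailash as stated above (verify the local directories are also owned appropriately):

```xml
<!-- yarn-site.xml: switch to LinuxContainerExecutor in non-secure mode -->
<configuration>
  <property>
    <name>yarn.nodemanager.container-executor.class</name>
    <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
  </property>
  <!-- all containers run as this local user when security is off -->
  <property>
    <name>yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user</name>
    <value>kailash</value>
  </property>
</configuration>
```

After changing the executor, the directories under yarn.nodemanager.local-dirs generally need to be writable by that user, which is where the permission problems mentioned above come in.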
When I tested spark-submit on YARN in cluster mode:
spark-submit --master yarn --deploy-mode cluster --class org.apache.spark.examples.SparkPi /usr/local/install/spark-2.2.0-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.2.0.jar 100
I got the same error:
Application application_1532249549503_0007 failed 2 times due to AM Container for appattempt_1532249549503_0007_000002 exited with exitCode: -1000 Failing this attempt. Diagnostics: java.io.IOException: Resource file:/usr/local/install/spark-2.2.0-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.2.0.jar changed on src filesystem (expected 1531576498000, was 1531576511000)
One suggestion for resolving this kind of error is to revise your core-site.xml or other Hadoop configuration files.
Finally, I fixed the error by setting the property fs.defaultFS in $HADOOP_HOME/etc/hadoop/core-site.xml.
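A sketch of what that core-site.xml entry looks like; the hostname and port here are illustrative placeholders, not values from the answer above — substitute your actual NameNode address:

```xml
<!-- $HADOOP_HOME/etc/hadoop/core-site.xml -->
<configuration>
  <!-- default filesystem URI; hdfs://namenode-host:8020 is a placeholder -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode-host:8020</value>
  </property>
</configuration>
```

With a consistent fs.defaultFS, the application jar is staged through the expected filesystem, which is presumably why the "changed on src filesystem" timestamp mismatch no longer occurred for this answerer.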