I'm using Hadoop 2.2.0 in a cluster setup, and I repeatedly get the following error. The exception is produced on the name node olympus in the file /opt/dev/hadoop/2.2.0/logs/hadoop-deploy-secondarynamenode-olympus.log, e.g.:
2014-02-12 16:19:59,013 INFO org.mortbay.log: Started SelectChannelConnector@olympus:50090
2014-02-12 16:19:59,013 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Web server init done
2014-02-12 16:19:59,013 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Secondary Web-server up at: olympus:50090
2014-02-12 16:19:59,013 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Checkpoint Period :3600 secs (60 min)
2014-02-12 16:19:59,013 INFO org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Log Size Trigger :1000000 txns
2014-02-12 16:20:59,161 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint
java.io.IOException: Inconsistent checkpoint fields.
LV = -47 namespaceID = 291272852 cTime = 0 ; clusterId = CID-e3e4ac32-7384-4a1f-9dce-882a6e2f4bd4 ; blockpoolId = BP-166254569-192.168.92.21-1392217748925.
Expecting respectively: -47; 431978717; 0; CID-85b65e19-4030-445b-af8e-5933e75a6e5a; BP-1963497814-192.168.92.21-1392217083597.
at org.apache.hadoop.hdfs.server.namenode.CheckpointSignature.validateStorageInfo(CheckpointSignature.java:133)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:519)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:380)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$2.run(SecondaryNameNode.java:346)
at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:456)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:342)
at java.lang.Thread.run(Thread.java:744)
2014-02-12 16:21:59,183 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint
java.io.IOException: Inconsistent checkpoint fields.
LV = -47 namespaceID = 291272852 cTime = 0 ; clusterId = CID-e3e4ac32-7384-4a1f-9dce-882a6e2f4bd4 ; blockpoolId = BP-166254569-192.168.92.21-1392217748925.
Expecting respectively: -47; 431978717; 0; CID-85b65e19-4030-445b-af8e-5933e75a6e5a; BP-1963497814-192.168.92.21-1392217083597.
at org.apache.hadoop.hdfs.server.namenode.CheckpointSignature.validateStorageInfo(CheckpointSignature.java:133)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:519)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:380)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$2.run(SecondaryNameNode.java:346)
at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:456)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:342)
at java.lang.Thread.run(Thread.java:744)
Can anyone advise what's wrong here?
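The "Inconsistent checkpoint fields" message means the checkpoint metadata the SecondaryNameNode keeps on disk (namespaceID, clusterId, blockpoolId) no longer matches what the NameNode reports; this typically happens after the NameNode has been reformatted. One way to confirm is to compare the VERSION files on both sides. A minimal sketch, assuming the default directory layout under hadoop.tmp.dir (adjust the paths to your installation):

# NameNode's current metadata
cat "$(hdfs getconf -confKey hadoop.tmp.dir)"/dfs/name/current/VERSION
# SecondaryNameNode's stored checkpoint metadata
cat "$(hdfs getconf -confKey hadoop.tmp.dir)"/dfs/namesecondary/current/VERSION

If the namespaceID or clusterID values differ between the two files, the checkpoint directory is holding metadata from an earlier format of the filesystem.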
I had the same error, and it went away when I deleted the [hadoop temporary directory]/dfs/namesecondary directory. For me, [hadoop temporary directory] is the value of hadoop.tmp.dir in core-site.xml.
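In case it's useful, here is a minimal sketch of that cleanup, with the caveat that it assumes HADOOP_HOME points at the 2.2.0 install and that the SecondaryNameNode runs on this host (stop it first so the directory isn't recreated mid-delete):

# Stop the SecondaryNameNode before touching its checkpoint directory
$HADOOP_HOME/sbin/hadoop-daemon.sh stop secondarynamenode
# Remove the stale checkpoint data under [hadoop temporary directory]/dfs/namesecondary
rm -rf "$(hdfs getconf -confKey hadoop.tmp.dir)"/dfs/namesecondary
# On the next checkpoint, the SecondaryNameNode pulls a fresh fsimage from the NameNode
$HADOOP_HOME/sbin/hadoop-daemon.sh start secondarynamenode

After the restart, the doCheckpoint errors should stop, since the secondary now downloads a checkpoint that matches the NameNode's current namespaceID and clusterID.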