This post explains the fix for the error below, which you may see when starting a Datanode.
2013-12-14 23:39:09,354 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /tmp/hadoop-hadoop-user/dfs/data: namenode namespaceID = 2130233605; datanode namespaceID = 692534382
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:414)
This is a well-documented issue; check this JIRA for the background.
Reason
When the Namenode is formatted, a new namespaceID is generated and assigned to it. Each Datanode also persistently stores a namespaceID, and it has to match the one stored on the Namenode. During Datanode startup the two namespaceIDs are compared, and if they do not match, the above error is thrown. You will see this error when the Namenode is formatted without reformatting the Datanodes, so ideally Datanodes should be reformatted whenever the Namenode is reformatted.
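To confirm that this is the problem, you can compare the value recorded on each side. A minimal check, assuming the default /tmp locations from the error above (adjust the paths to match your dfs.name.dir and dfs.data.dir settings):

# Print the namespaceID recorded by the Namenode and by a Datanode;
# the Datanode fails to start when these two values differ.
grep namespaceID /tmp/hadoop-hadoop-user/dfs/name/current/VERSION
grep namespaceID /tmp/hadoop-hadoop-user/dfs/data/current/VERSION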
Quick Fix
The detailed fix is described in the JIRA linked above. However, if you have only a few Datanodes in your cluster, a quick fix is to make the namespaceIDs match manually, as the steps below show (a scripted sketch follows the list).
1. From the error above, we can find the location of the VERSION file. Alternatively, check the configuration files (dfs.name.dir and dfs.data.dir) to find the locations.

Namenode - /tmp/hadoop-hadoop-user/dfs/name/current/VERSION
Datanode - /tmp/hadoop-hadoop-user/dfs/data/current/VERSION

2. Open the VERSION file from the Namenode location; it should look like the below.

#Sun Feb 02 18:08:13 UTC 2014
namespaceID=370861006
cTime=0
storageType=NAME_NODE
layoutVersion=-41

3. Copy the namespaceID.

4. Open the VERSION file from the Datanode location (on the Datanode) and replace its namespaceID with the namespaceID recorded in Step 3.

5. Restart the Hadoop cluster.
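The steps above can also be scripted. The sketch below is only an outline: it assumes the default /tmp storage locations from the error, a Hadoop 1.x installation with stop-all.sh/start-all.sh under $HADOOP_HOME/bin, and a single-node (pseudo-distributed) setup where both directories live on the same machine. On a real cluster, apply the sed step to the VERSION file on every Datanode.

#!/bin/bash
# Quick-fix sketch: copy the Namenode's namespaceID into the Datanode's
# VERSION file. Paths and start/stop scripts are assumptions; adjust
# them to your cluster before running.

NN_VERSION=/tmp/hadoop-hadoop-user/dfs/name/current/VERSION
DN_VERSION=/tmp/hadoop-hadoop-user/dfs/data/current/VERSION

# Stop the cluster before editing the storage directories.
$HADOOP_HOME/bin/stop-all.sh

# Steps 2 and 3: read the namespaceID the Namenode was formatted with.
NSID=$(grep '^namespaceID=' "$NN_VERSION" | cut -d= -f2)

# Step 4: overwrite the Datanode's namespaceID with the Namenode's value.
sed -i "s/^namespaceID=.*/namespaceID=$NSID/" "$DN_VERSION"

# Step 5: restart the cluster; the Datanode should now start cleanly.
$HADOOP_HOME/bin/start-all.sh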