 

Failed to start the namenode: java.net.BindException: Address already in use

Tags:

java

hadoop

I am trying to start the namenode, but it keeps failing with: Failed to start namenode. java.net.BindException: Address already in use.

netstat -a | grep 9000

returns

tcp        0      0 *:9000                  *:*                     LISTEN
tcp6       0      0 [::]:9000               [::]:*                  LISTEN

Is this normal or do I need to kill one of the processes?
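I know netstat alone doesn't say which process owns the socket; I assume something like the following (run with sudo so PIDs of processes owned by other users are visible) would show the owner:

sudo netstat -tlnp | grep 9000   # -p adds the PID/program name column
sudo lsof -i :9000               # lists any process with TCP port 9000 open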

The NameNode was up and running after installation but suddenly stopped working after I ran the WordCount job. I've tried restarting the VM and formatting the namenode multiple times, but that didn't help.
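For reference, the format-and-restart sequence I ran is roughly the one below (assuming a standard Hadoop 2.x setup with the sbin scripts on the PATH; note that formatting wipes the HDFS metadata):

stop-dfs.sh             # stop NameNode, DataNode and SecondaryNameNode
hdfs namenode -format   # re-initialize dfs.namenode.name.dir (destructive)
start-dfs.sh            # bring the HDFS daemons back up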

The hdfs-site.xml is given below:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///usr/local/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///usr/local/hdfs/datanode</value>
  </property>
</configuration>

The namenode logs are given below:

2015-07-10 00:27:02,513 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2015-07-10 00:27:02,538 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2015-07-10 00:27:07,549 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-07-10 00:27:09,284 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-07-10 00:27:09,285 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2015-07-10 00:27:09,339 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://localhost:9000
2015-07-10 00:27:09,340 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use localhost:9000 to access this namenode/service.
2015-07-10 00:27:12,475 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-07-10 00:27:16,632 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
2015-07-10 00:27:17,491 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-07-10 00:27:17,702 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2015-07-10 00:27:17,876 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2015-07-10 00:27:17,941 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-07-10 00:27:17,977 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2015-07-10 00:27:17,977 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-07-10 00:27:17,977 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2015-07-10 00:27:18,441 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2015-07-10 00:27:18,525 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2015-07-10 00:27:18,747 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2015-07-10 00:27:18,760 INFO org.mortbay.log: jetty-6.1.26
2015-07-10 00:27:20,832 INFO org.mortbay.log: Started [email protected]:50070
2015-07-10 00:27:23,404 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2015-07-10 00:27:23,416 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2015-07-10 00:27:24,034 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2015-07-10 00:27:24,036 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2015-07-10 00:27:24,773 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2015-07-10 00:27:24,776 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2015-07-10 00:27:24,852 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2015-07-10 00:27:24,854 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2015 Jul 10 00:27:24
2015-07-10 00:27:24,867 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2015-07-10 00:27:24,883 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2015-07-10 00:27:24,900 INFO org.apache.hadoop.util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
2015-07-10 00:27:24,901 INFO org.apache.hadoop.util.GSet: capacity = 2^22 = 4194304 entries
2015-07-10 00:27:25,563 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2015-07-10 00:27:25,564 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 1
2015-07-10 00:27:25,564 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2015-07-10 00:27:25,564 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2015-07-10 00:27:25,564 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2015-07-10 00:27:25,564 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
2015-07-10 00:27:25,564 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2015-07-10 00:27:25,564 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2015-07-10 00:27:25,564 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2015-07-10 00:27:25,638 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = joe (auth:SIMPLE)
2015-07-10 00:27:25,639 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2015-07-10 00:27:25,639 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2015-07-10 00:27:25,639 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2015-07-10 00:27:25,658 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2015-07-10 00:27:26,354 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2015-07-10 00:27:26,354 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2015-07-10 00:27:26,355 INFO org.apache.hadoop.util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
2015-07-10 00:27:26,355 INFO org.apache.hadoop.util.GSet: capacity = 2^21 = 2097152 entries
2015-07-10 00:27:26,993 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2015-07-10 00:27:26,994 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2015-07-10 00:27:26,994 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 16384
2015-07-10 00:27:26,994 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2015-07-10 00:27:27,064 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2015-07-10 00:27:27,069 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2015-07-10 00:27:27,070 INFO org.apache.hadoop.util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
2015-07-10 00:27:27,070 INFO org.apache.hadoop.util.GSet: capacity = 2^19 = 524288 entries
2015-07-10 00:27:27,083 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2015-07-10 00:27:27,085 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2015-07-10 00:27:27,085 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2015-07-10 00:27:27,105 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2015-07-10 00:27:27,105 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2015-07-10 00:27:27,105 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2015-07-10 00:27:27,113 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2015-07-10 00:27:27,113 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2015-07-10 00:27:27,197 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2015-07-10 00:27:27,197 INFO org.apache.hadoop.util.GSet: VM type = 32-bit
2015-07-10 00:27:27,197 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
2015-07-10 00:27:27,197 INFO org.apache.hadoop.util.GSet: capacity = 2^16 = 65536 entries
2015-07-10 00:27:27,403 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/hdfs/namenode/in_use.lock acquired by nodename 11822@joe-virtual-machine
2015-07-10 00:27:27,882 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /usr/local/hdfs/namenode/current
2015-07-10 00:27:28,446 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 1 INodes.
2015-07-10 00:27:28,758 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2015-07-10 00:27:28,784 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 0 from /usr/local/hdfs/namenode/current/fsimage_0000000000000000000
2015-07-10 00:27:28,826 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@fd6cd8 expecting start txid #1
2015-07-10 00:27:28,840 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file /usr/local/hdfs/namenode/current/edits_0000000000000000001-0000000000000000002
2015-07-10 00:27:28,912 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream '/usr/local/hdfs/namenode/current/edits_0000000000000000001-0000000000000000002' to transaction ID 1
2015-07-10 00:27:29,079 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file /usr/local/hdfs/namenode/current/edits_0000000000000000001-0000000000000000002 of size 42 edits # 2 loaded in 0 seconds
2015-07-10 00:27:29,164 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2015-07-10 00:27:29,174 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 3
2015-07-10 00:27:29,854 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2015-07-10 00:27:29,855 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 2611 msecs
2015-07-10 00:27:33,403 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to localhost:9000
2015-07-10 00:27:33,490 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2015-07-10 00:27:33,625 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
2015-07-10 00:27:33,628 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 3
2015-07-10 00:27:33,639 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 2 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 57
2015-07-10 00:27:33,642 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /usr/local/hdfs/namenode/current/edits_inprogress_0000000000000000003 -> /usr/local/hdfs/namenode/current/edits_0000000000000000003-0000000000000000004
2015-07-10 00:27:33,781 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
2015-07-10 00:27:33,788 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
2015-07-10 00:27:33,885 INFO org.mortbay.log: Stopped [email protected]:50070
2015-07-10 00:27:33,905 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2015-07-10 00:27:33,907 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2015-07-10 00:27:33,907 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2015-07-10 00:27:33,970 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.net.BindException: Problem binding to [localhost:9000] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
at org.apache.hadoop.ipc.Server.bind(Server.java:425)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:574)
at org.apache.hadoop.ipc.Server.<init>(Server.java:2215)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:938)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:534)
at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:783)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:343)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:672)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:645)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:810)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:794)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1487)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1553)
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.apache.hadoop.ipc.Server.bind(Server.java:408)
... 13 more
2015-07-10 00:27:34,004 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-07-10 00:27:34,007 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at joe-virtual-machine/192.168.197.146
************************************************************/

Thanks for any help!

-Sam


1 Answer

The issue was resolved by changing the port in core-site.xml from 9000 to 9001 as shown below.

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9001</value>
</property>
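As an aside, fs.default.name is the deprecated name for this property in Hadoop 2.x; the equivalent, preferred setting would be:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9001</value>
</property>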

I formatted the namenode and restarted Hadoop. The following services showed up as expected.

joe@joe-virtual-machine:~$ jps
3387  ResourceManager
3935  Jps
2850  NameNode
3163  SecondaryNameNode
2981  DataNode
3517  NodeManager

joe@joe-virtual-machine:~$ netstat -a -p | grep 9001
tcp        0      0 localhost:9001          *:*                     LISTEN     
tcp        0      0 localhost:54460         localhost:9001          ESTABLISHED
tcp        0      0 localhost:9001          localhost:54460         ESTABLISHED
tcp        0      0 localhost:54598         localhost:9001          TIME_WAIT  
joe@joe-virtual-machine:~$ netstat -a -p | grep 9000
tcp        0      0 *:9000                  *:*                     LISTEN     
tcp6       0      0 [::]:9000               [::]:*                  LISTEN     

However, I would still like to understand why we couldn't use port 9000 even though netstat did not identify any other process running on that port.
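Since netstat run without root privileges often cannot show the owning PID, something like fuser (a sketch, assuming the psmisc package is installed) should reveal which process is actually holding the socket:

sudo fuser -v 9000/tcp   # prints the user, PID and command bound to TCP port 9000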
