 

HttpFS error: "Operation category READ is not supported in state standby"

I am working with Apache Hadoop 2.7.1 and have a cluster that consists of 3 nodes:

nn1
nn2
dn1

nn1 is the dfs.default.name, so it is the master NameNode.

I have installed HttpFS and started it (after restarting all the services, of course). When nn1 is active and nn2 is standby, I can send this request

http://nn1:14000/webhdfs/v1/aloosh/oula.txt?op=open&user.name=root

from my browser, and an open/save dialog for the file appears. But when I kill the NameNode running on nn1 and start it again, high availability takes over: nn1 becomes standby and nn2 becomes active.

HttpFS should still work here even though nn1 is standby, but sending the same request now

http://nn1:14000/webhdfs/v1/aloosh/oula.txt?op=open&user.name=root

gives me the error

{"RemoteException":{"message":"Operation category READ is not supported in state standby","exception":"RemoteException","javaClassName":"org.apache.hadoop.ipc.RemoteException"}}

Shouldn't HttpFS overcome nn1's standby status and fetch the file? Is that caused by a wrong configuration, or is there some other reason?

My core-site.xml is

<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>

<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
asked Apr 11 '17 by oula alshiekh




1 Answer

It looks like HttpFS is not High Availability aware on its own. This could be due to missing configuration required for the client to connect to the current active NameNode.

Ensure the fs.defaultFS property in core-site.xml is configured with the correct nameservice ID.

If you have the following in hdfs-site.xml

<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>

then in core-site.xml, it should be

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>

Also configure the name of the Java class that the DFS client will use to determine which NameNode is currently active and serving client requests.

Add this property to hdfs-site.xml

<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
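
For the ConfiguredFailoverProxyProvider to resolve the active NameNode, the client configuration also needs the list of NameNode IDs for the nameservice and their RPC addresses. A minimal sketch, assuming the nameservice mycluster, NameNode IDs matching the hosts nn1 and nn2 from the question, and the default RPC port 8020 (adjust to your cluster):

<!-- logical NameNode IDs for the nameservice -->
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>

<!-- RPC address of each NameNode; hostnames and port 8020 are assumptions here -->
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>nn1:8020</value>
</property>

<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>nn2:8020</value>
</property>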

Restart the NameNodes and HttpFS after adding these properties on all nodes.
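
Once the services are back up, you can check the failover behavior by re-sending the original request while nn1 is standby; since HttpFS goes through the standard DFS client, it should now be routed to whichever NameNode is currently active:

http://nn1:14000/webhdfs/v1/aloosh/oula.txt?op=open&user.name=root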

answered Jan 01 '23 by franklinsijo