
Hdfs put VS webhdfs

I'm loading a 28 GB file into Hadoop HDFS using WebHDFS, and it takes ~25 minutes to load.

I tried loading the same file using hdfs put, and it took ~6 minutes. Why is there so much difference in performance?

What is recommended to use? If somebody could explain or direct me to a good link, it would be really helpful.

Below is the command I'm using:

curl -i --negotiate -u: -X PUT "http://$hostname:$port/webhdfs/v1/$destination_file_location/$source_filename.temp?op=CREATE&overwrite=true"

This will redirect to a DataNode address, which I use in the next step to write the data.
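For reference, a sketch of that second step, assuming $datanode_url holds the Location header returned by the CREATE call above (the variable name is a placeholder):

# Step 2: stream the local file to the DataNode named in the redirect
curl -i --negotiate -u: -X PUT -T "$source_filename" "$datanode_url"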

asked Jul 23 '15 by chhaya vishwakarma

People also ask

What is WebHDFS in Hadoop?

WebHDFS provides web services access to data stored in HDFS. At the same time, it retains the security the native Hadoop protocol offers and uses parallelism for better throughput. To enable WebHDFS (REST API) on the NameNode and DataNodes, you must set the value of dfs.webhdfs.enabled to true.
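As a minimal sketch, the corresponding hdfs-site.xml entry (this is the standard property name; it defaults to true on recent Hadoop releases):

<!-- hdfs-site.xml: enable the WebHDFS REST API on NameNode and DataNodes -->
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>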

What is WebHDFS REST API?

WebHDFS is a REST API that supports HTTP operations like GET, POST, PUT, and DELETE. It allows client applications to access HDFS data and execute HDFS operations via HTTP or HTTPS.
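For example, a sketch of a read using the same API ($file_path is a placeholder; -L tells curl to follow the NameNode's redirect to a DataNode):

# Read a file over WebHDFS; op=OPEN is a plain HTTP GET
curl -i -L --negotiate -u: "http://$hostname:$port/webhdfs/v1/$file_path?op=OPEN"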

What is HttpFS?

HttpFS is a server that provides a REST HTTP gateway supporting all HDFS filesystem operations (read and write). It is interoperable with the WebHDFS REST HTTP API.
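A hypothetical call against an HttpFS gateway ($httpfs_host, $file_path, and $username are placeholders; 14000 is the default HttpFS port): note that it targets the single gateway host rather than the NameNode, while the paths and operations mirror WebHDFS.

# List a directory through the HttpFS gateway; no redirect to DataNodes,
# the gateway proxies all data itself
curl -i "http://$httpfs_host:14000/webhdfs/v1/$file_path?op=LISTSTATUS&user.name=$username"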


1 Answer

Hadoop provides several ways of accessing HDFS.

All of the following support almost all features of the filesystem:

1. FileSystem (FS) shell commands: Provide easy access to Hadoop file system operations, as well as to other file systems that Hadoop supports, such as the local FS, HFTP FS, and S3 FS.
This needs the Hadoop client to be installed, and the client writes blocks directly to one DataNode. Not all versions of Hadoop support all options for copying between filesystems. (A minimal upload example follows this list.)

2. WebHDFS: Defines a public HTTP REST API, which permits clients to access Hadoop from multiple languages without installing Hadoop; the advantage is that it is language-agnostic (curl, PHP, etc.).
WebHDFS needs access to all nodes of the cluster, and when data is read it is transmitted directly from the source node, but there is HTTP overhead relative to (1) the FS shell. It works agnostically, with no problems across different Hadoop clusters and versions.

3. HttpFS: Read and write data to HDFS in a cluster behind a firewall. A single node acts as a gateway node through which all the data is transferred; performance-wise I believe this can be even slower, but it is preferred when you need to pull data from a public source into a secured cluster.
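As referenced in (1), a minimal FS shell upload, assuming the Hadoop client is installed and configured for the cluster (paths are placeholders):

# The client writes blocks directly to DataNodes over Hadoop's native
# protocol, avoiding the per-request HTTP overhead of (2) and (3).
hdfs dfs -put /local/path/largefile.dat /hdfs/destination/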

So choose accordingly! Going down the list, each option is an alternative when the one above it is not available to you.

answered Sep 27 '22 by rbyndoor