
Can Apache YARN be used without HDFS?

I want to use Apache YARN as a cluster and resource manager for running a framework where resources would be shared across different task of the same framework. I want to use my own distributed off-heap file system.

  1. Is it possible to use any other distributed file system with YARN other than HDFS?

  2. If yes, what HDFS APIs need to be implemented?

  3. What Hadoop components are required to run YARN?
asked Mar 02 '17 by Amar Gajbhiye


People also ask

Can we run Apache spark without Hadoop?

You can run Spark without Hadoop in standalone mode; Spark and Hadoop are better together, but Hadoop is not essential to run Spark. The Spark documentation itself says there is no need for Hadoop when Spark runs in standalone mode. In that case you only need a resource manager such as YARN or Mesos.

Can hive work without HDFS?

Hive can store data in external tables, so it is not mandatory to use HDFS; it also supports file formats such as ORC, Avro, SequenceFile and text files.

What is the difference between HDFS and YARN?

YARN is a generic job-scheduling framework and HDFS is a storage framework. In a nutshell, YARN has a master (the ResourceManager) and workers (the NodeManagers); the ResourceManager creates containers on the workers to execute MapReduce jobs, Spark jobs, etc.

Can I use HDFS without Hadoop?

No, you cannot download HDFS alone, because Hadoop 2.x has four core components. HDFS is the core component of the Hadoop ecosystem, used to store huge amounts of data; MapReduce is used for processing large distributed datasets in parallel.


1 Answer

There are several different questions here:

Can you use YARN to deploy apps using something like S3 to propagate the binaries?

Yes: that is how LinkedIn has deployed Samza in the past, using http:// downloads. Samza does not need a cluster filesystem, so there is no HDFS running in the cluster, just local file:// filesystems, one per host.

Applications which need a cluster filesystem wouldn't work in such a cluster.
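
As a rough sketch of that deployment style (the bucket, the archive path and the use of the Hadoop 2.8+ URL.fromPath() helper are my assumptions, not anything Samza-specific), a YARN client can point a container's local resources at any filesystem URL the cluster's Hadoop configuration can resolve, such as s3a://, with no HDFS involved:

    import java.util.Collections;
    import java.util.Map;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.yarn.api.records.LocalResource;
    import org.apache.hadoop.yarn.api.records.LocalResourceType;
    import org.apache.hadoop.yarn.api.records.LocalResourceVisibility;
    import org.apache.hadoop.yarn.api.records.URL;

    public class NonHdfsLocalization {
        // Build the localResources map for a ContainerLaunchContext from an
        // application package that lives in an object store, not HDFS.
        public static Map<String, LocalResource> binaries(Configuration conf)
                throws Exception {
            Path archive = new Path("s3a://my-bucket/releases/my-app.tar.gz"); // hypothetical
            FileStatus status = archive.getFileSystem(conf).getFileStatus(archive);

            // NodeManagers fetch and unpack the archive onto each host before
            // launching the container; size and timestamp guard against a
            // changed artifact.
            LocalResource res = LocalResource.newInstance(
                URL.fromPath(archive),
                LocalResourceType.ARCHIVE,
                LocalResourceVisibility.APPLICATION,
                status.getLen(),
                status.getModificationTime());
            return Collections.singletonMap("app", res);
        }
    }

The returned map is what you'd hand to a ContainerLaunchContext when submitting the application.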

Can you bring up a YARN cluster with an alternative filesystem?

Yes.

For what a "filesystem" is, look at the Filesystem Specification. You need a consistent view across the filesystem: newly created files show up in list(), deleted ones aren't found, and updates are immediately visible. And rename() of files and directories must be an atomic operation, ideally O(1); it's used for atomic commits of work, checkpoints, and so on. Oh, and for HBase, append() is needed.
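
To make the rename() requirement concrete, here is a minimal sketch of the commit-by-rename pattern those semantics exist to support (the myfs:// scheme and the paths are hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class RenameCommit {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Path attempt = new Path("myfs://cluster/job/_temporary/part-0000");
            Path committed = new Path("myfs://cluster/job/part-0000");
            FileSystem fs = attempt.getFileSystem(conf);

            // Tasks write their output to a temporary location first.
            try (FSDataOutputStream out = fs.create(attempt, true)) {
                out.writeUTF("task output");
            }

            // The commit. On a compliant filesystem this rename is atomic:
            // readers see either no file or the whole file, never a partial
            // one, and two competing task attempts can't both succeed.
            if (!fs.rename(attempt, committed)) {
                throw new IllegalStateException("commit failed: " + committed);
            }
        }
    }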

MapR does this, as does Red Hat with GlusterFS; IBM and EMC do the same for their filesystems. Do bear in mind that pretty much everything is tested on HDFS, so you'd better hope the other cluster FS has done the testing (or that someone, such as Hortonworks or Cloudera, has done it for them).

Can you bring up a YARN cluster using an object store as the underlying FS?

It depends on whether the FS offers a consistent filesystem view, rather than some eventually consistent world view. HBase is the real test here.

  1. Microsoft Azure Storage is consistent, has leases for obtaining exclusive access to parts of the FS, and renames really fast. In Azure it completely replaces HDFS (see the configuration sketch after this list).

  2. Google Cloud Storage announced on Mar 1 2017 that GCS offers consistency. Maybe it can be used as a replacement now; no experience there.

  3. Amazon EMR does offer S3 as a replacement, using (a) DynamoDB for the consistent metadata and (b) doing horrible things to get HBase to work.

  4. The ASF's own S3 client, S3A, can't be used as a replacement. We in the team working on it have been focusing on read and write performance as a source and final destination of data: in S3Guard, adding the DynamoDB layer, and in the S3Guard committer, being able to use it as a high-performance destination of work (resilient to failures while avoiding rename()).
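
As a sketch of what "completely replaces HDFS" means in configuration terms (the account and container names are hypothetical, and in a real cluster both keys would live in core-site.xml rather than be set in code):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class DefaultFsCheck {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Make the Azure WASB connector the cluster's default filesystem;
            // requires hadoop-azure on the classpath.
            conf.set("fs.defaultFS", "wasb://data@myaccount.blob.core.windows.net");
            conf.set("fs.azure.account.key.myaccount.blob.core.windows.net",
                "<access-key>");
            FileSystem fs = FileSystem.get(conf);
            System.out.println("default filesystem: " + fs.getUri());
        }
    }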

Can the new distributed filesystem you are writing be used as a replacement for HDFS?

Well, you can certainly try!

First get all the filesystem contract tests to work; they measure basic API compliance (a minimal sketch follows below). Then look at all the Apache Bigtop tests, which do system integration. I recommend avoiding HBase and Accumulo initially; focus on MapReduce, Hive, Spark and Flink.
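
Here is what wiring a new filesystem into those contract tests looks like; MyFSContract and the myfs:// scheme are hypothetical stand-ins for whatever your client provides:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.contract.AbstractContractRenameTest;
    import org.apache.hadoop.fs.contract.AbstractFSContract;

    class MyFSContract extends AbstractFSContract {
        MyFSContract(Configuration conf) {
            super(conf);
            // Real contracts also load an XML resource declaring which
            // behaviors (atomic rename, append, ...) the FS claims to support.
        }

        @Override
        public String getScheme() {
            return "myfs";
        }

        @Override
        public FileSystem getTestFileSystem() throws IOException {
            return FileSystem.get(getConf()); // fs.defaultFS points at myfs://
        }

        @Override
        public Path getTestPath() {
            return new Path("/test");
        }
    }

    // One concrete test class per contract; rename is the one that matters
    // most for commit protocols.
    public class TestMyFSContractRename extends AbstractContractRenameTest {
        @Override
        protected AbstractFSContract createContract(Configuration conf) {
            return new MyFSContract(conf);
        }
    }

Run it like any JUnit test once your filesystem client is on the test classpath; repeat for the other contract test base classes.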

Don't be afraid to get on the Hadoop common-dev & bigtop lists and ask questions.

answered Oct 02 '22 by stevel