
Install Spark on an existing Hadoop cluster

I am not a system administrator, but I may need to perform some administrative tasks, and hence need some help.

We have a (remote) Hadoop cluster and people usually run map-reduce jobs on the cluster.

I am planning to install Apache Spark on the cluster so that all the machines in the cluster may be utilized. This should be possible; the Spark documentation at http://spark.apache.org/docs/latest/spark-standalone.html says, "You can run Spark alongside your existing Hadoop cluster by just launching it as a separate service on the same machines..."

If you have done this before, please give me the detailed steps so that the Spark cluster may be created.

asked Jul 08 '16 by PTDS


People also ask

How can Spark be deployed on a Hadoop cluster?

In particular, there are three ways to deploy Spark in a Hadoop cluster: standalone, YARN, and SIMR. Standalone deployment: With the standalone deployment one can statically allocate resources on all or a subset of machines in a Hadoop cluster and run Spark side by side with Hadoop MR.
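For the standalone option, a minimal sketch of running Spark side by side with Hadoop might look like this (the master host name spark-master is an assumption; in Spark 3.x the worker script is start-worker.sh rather than start-slave.sh):

# On the machine chosen as the Spark master (assumed host name: spark-master)
$SPARK_HOME/sbin/start-master.sh

# On every machine that should act as a worker, register it with that master
$SPARK_HOME/sbin/start-slave.sh spark://spark-master:7077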

Can you run Spark on Hadoop?

Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat.

Do you need to install Spark on all nodes of YARN cluster?

No, it is not necessary to install Spark on every node. Since Spark runs on top of YARN, it relies on YARN to execute its tasks across the cluster's nodes, so you only have to install Spark on one node.

Do I need to install Hadoop for Spark?

Yes. Apache Spark can run without Hadoop, either standalone or in the cloud. Spark doesn't need a Hadoop cluster to work, and it can read and process data from other file systems as well.
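To illustrate, the bundled SparkPi example can be run entirely locally, with no Hadoop services involved. This is only a sketch: the examples jar name follows the Spark 1.5.1 layout used in the answer further down and varies with your version.

# local[*] runs Spark in a single JVM on this machine, using all its cores
$SPARK_HOME/bin/spark-submit \
--master "local[*]" \
--class org.apache.spark.examples.SparkPi \
$SPARK_HOME/lib/spark-examples-1.5.1-hadoop2.6.0.jar \
10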

How do I run Spark application in cluster mode?

In cluster mode, the Spark driver runs inside an application master process which is managed by YARN on the cluster, and the client can go away after initiating the application. In client mode, the driver runs in the client process, and the application master is only used for requesting resources from YARN.
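As a rough illustration, the two modes differ only in the flags passed to spark-submit. With Spark 2.0+ the syntax is --master yarn plus --deploy-mode, while older releases (like the 1.5.1 used in the answer below) spelled them yarn-cluster and yarn-client. The application class and jar here are placeholders:

# Cluster mode: the driver runs inside the YARN ApplicationMaster,
# so this shell can disconnect once the job has been accepted
spark-submit --master yarn --deploy-mode cluster \
--class com.example.MyApp my-app.jar

# Client mode: the driver stays in this shell; the ApplicationMaster
# only negotiates containers with YARN
spark-submit --master yarn --deploy-mode client \
--class com.example.MyApp my-app.jar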


1 Answer

If Hadoop is already installed on your cluster and you want to run Spark on YARN, it's very easy:

Step 1: Find the YARN master node (i.e. the node that runs the ResourceManager). The following steps are to be performed on that node only.
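One way to locate it, assuming the standard configuration layout under $HADOOP_HOME/etc/hadoop, is to look up the ResourceManager address in yarn-site.xml:

# The value of yarn.resourcemanager.hostname (or yarn.resourcemanager.address)
# tells you which machine runs the ResourceManager
grep -A1 'yarn.resourcemanager' $HADOOP_HOME/etc/hadoop/yarn-site.xml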

Step 2: Download the Spark tgz package and extract it somewhere.
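For example, with the Spark 1.5.1 / Hadoop 2.6 build used below (the archive URL follows the usual Apache layout but is an assumption; pick the build that matches your Hadoop version):

wget https://archive.apache.org/dist/spark/spark-1.5.1/spark-1.5.1-bin-hadoop2.6.tgz
tar -xzf spark-1.5.1-bin-hadoop2.6.tgz -C /opt
# SPARK_HOME would then be /opt/spark-1.5.1-bin-hadoop2.6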

Step 3: Define these environment variables, in .bashrc for example:

# Spark variables
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_HOME=<extracted_spark_package>
export PATH=$PATH:$SPARK_HOME/bin
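After reloading the shell configuration, a quick sanity check is to ask spark-submit for its version; this only confirms that PATH and SPARK_HOME are picked up correctly:

source ~/.bashrc
spark-submit --version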

Step 4: Run your Spark job, setting the --master option to yarn-client or yarn-cluster:

spark-submit \
--master yarn-client \
--class org.apache.spark.examples.JavaSparkPi \
$SPARK_HOME/lib/spark-examples-1.5.1-hadoop2.6.0.jar \
100

This particular example uses a pre-compiled example job which comes with the Spark installation.
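To confirm that the job really went through YARN, the standard YARN CLI can list applications and fetch their logs (the application ID below is a placeholder for whatever YARN assigns):

# The Spark job should show up in the list of YARN applications
yarn application -list -appStates ALL

# Aggregated logs of a finished application (replace the ID)
yarn logs -applicationId application_1234567890123_0001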

You can read this blog post I wrote for more details on Hadoop and Spark installation on a cluster.

You can read the post which follows to see how to compile and run your own Spark job in Java. If you want to code jobs in Python or Scala, it's convenient to use a notebook like IPython or Zeppelin. Read more about how to use those with your Hadoop-Spark cluster here.

answered Oct 14 '22 by Nicomak