I just downloaded the latest version of Spark, and when I started the Spark shell I got the following error:
java.net.BindException: Failed to bind to: /192.168.1.254:0: Service 'sparkDriver' failed after 16 retries!
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:393)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:389)
...
...
java.lang.NullPointerException
at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:193)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:71)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1028)
at $iwC$$iwC.<init>(<console>:9)
...
...
<console>:10: error: not found: value sqlContext
import sqlContext.implicits._
^
<console>:10: error: not found: value sqlContext
import sqlContext.sql
^
Is there something that I missed in setting up Spark?
Launch Spark Shell (spark-shell): from the command line, go to the Apache Spark installation directory, type bin/spark-shell, and press Enter. This launches the Spark shell and gives you a Scala prompt to interact with Spark in Scala.
Launch PySpark Shell (pyspark): from the command line, go to the Spark installation directory, type bin/pyspark, and press Enter. This launches the PySpark shell and gives you a prompt to interact with Spark in Python.
Initializing Spark: the first thing a Spark program must do is create a SparkContext object, which tells Spark how to access a cluster. To create a SparkContext you first need to build a SparkConf object that contains information about your application. Only one SparkContext may be active per JVM.
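For example, here is a minimal Scala sketch of building a SparkConf and a SparkContext for a standalone program (the application name and master URL are placeholders; inside spark-shell the context is already created for you as sc):

import org.apache.spark.{SparkConf, SparkContext}

// Describe the application; "local[*]" runs Spark locally using all available cores.
val conf = new SparkConf()
  .setAppName("MyApp")     // placeholder application name
  .setMaster("local[*]")   // placeholder master URL

// Only one SparkContext may be active per JVM.
val sc = new SparkContext(conf)

// ... run jobs against sc ...

sc.stop()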
Spark shell commands are the command-line interface used to drive Spark processing. They are useful for ETL, analytics, and machine-learning work on high-volume datasets in very little time.
Try setting the Spark environment variable SPARK_LOCAL_IP to a local IP address.
In my case, I was running Spark on an Amazon EC2 Linux instance. spark-shell stopped working, with an error message similar to yours. I was able to fix it by adding a setting like the following to the Spark config file conf/spark-env.sh:
export SPARK_LOCAL_IP=172.30.43.105
Could also set it in ~/.profile or ~/.bashrc.
Also check the host settings in /etc/hosts.
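If you are unsure which address the driver will try to bind to, a quick diagnostic sketch (plain JDK calls, not Spark API) is to check what the local hostname resolves to from the Scala REPL; if it prints an address that is not actually assigned to the machine, fix /etc/hosts or set SPARK_LOCAL_IP:

import java.net.InetAddress

// Roughly, the driver binds to the address the local hostname resolves to,
// unless SPARK_LOCAL_IP overrides it.
val host = InetAddress.getLocalHost
println(s"${host.getHostName} -> ${host.getHostAddress}")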
See SPARK-8162.
It looks like it only affects 1.4.1 and 1.5.0 - you're probably best off running the latest stable release (1.4.0 at the time of writing).
I was experiencing the same issue. First, go to .bashrc and add
export SPARK_LOCAL_IP=172.30.43.105
then go to
cd $HADOOP_HOME/bin
then run the following command
hdfs dfsadmin -safemode leave
This just turns the NameNode's safe mode off.
Then delete the metastore_db folder from the Spark home folder or its bin directory. It is generally created in whichever folder you start a Spark session from.
Then I ran spark-shell using this:
spark-shell --master "spark://localhost:7077"
and voila, I did not get the sqlContext.implicits._ error.
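Once the shell starts cleanly, you can quickly confirm that sqlContext and its implicits work (Spark 1.x shell; the sample data and column names below are arbitrary):

// sc and sqlContext are created automatically by spark-shell
import sqlContext.implicits._

val df = Seq((1, "a"), (2, "b")).toDF("id", "value")  // toy data just to exercise the implicits
df.show()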