I am new to Spark. I downloaded Spark 1.3.1 prebuilt for Hadoop 2.6, extracted it, navigated into the folder, and typed the command below:

./bin/spark-shell

I get an error saying spark-shell: command not found. I did the same on Windows using Git Bash, and there I get:

spark-submit: line 26: tput: command not found

Is there something else I need to do before trying to run Spark?
On Windows, in a regular cmd prompt, use spark-shell.cmd.

On Linux, in a terminal, cd to your Spark root (yours should be named spark-1.3.1-bin-hadoop2.6 if you kept the original name) and then execute:

./bin/spark-shell
Have you recently changed your .bash_profile? Any problems with other commands? Try typing, e.g., tar in your shell. All good or not?
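If ordinary commands are also missing, the PATH itself is probably broken rather than anything Spark-specific. A quick sanity check (standard POSIX utilities only, nothing from Spark assumed):

```shell
# If these fail, your PATH is broken, most likely by a bad
# edit to ~/.bash_profile or ~/.bashrc.
command -v tar
command -v tput || echo "tput not found -- check your PATH"

# Inspect the PATH the shell is actually using.
echo "$PATH"
```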
EDIT (after the first comment below):

Here's how to start the REPL on Linux, with the logging level set to errors only. Spark is just a symlink to the Spark version I want to use; ignore that and treat it as your Spark home directory:
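(The original screenshot is gone; the following is a sketch of the usual steps, assuming your Spark home is ~/spark-1.3.1-bin-hadoop2.6. Quieting the REPL is done by copying the bundled log4j template and raising the root level from INFO to ERROR.)

```shell
cd ~/spark-1.3.1-bin-hadoop2.6

# Copy the template and raise the root logging level to ERROR.
cp conf/log4j.properties.template conf/log4j.properties
sed -i 's/log4j.rootCategory=INFO, console/log4j.rootCategory=ERROR, console/' conf/log4j.properties

# Start the REPL. Note the leading ./ -- the current directory
# is not on PATH, which is why a bare "spark-shell" is not found.
./bin/spark-shell
```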
And here's Windows:
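(Again, the screenshot is gone; this is a sketch, assuming Spark was extracted to C:\spark-1.3.1-bin-hadoop2.6. Run it from a regular cmd prompt, not Git Bash:)

```
cd C:\spark-1.3.1-bin-hadoop2.6
bin\spark-shell.cmd
```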
It's so straightforward that you almost cannot do anything wrong :)