Increase memory available to Spark shell

I'm attempting to install Apache Spark on a Raspberry Pi 1 Model B+.

Once I start the Spark shell and run the command:

val l = sc.parallelize(List()).collect

I receive this exception:

scala> val l = sc.parallelize(List()).collect
15/03/22 19:52:44 INFO SparkContext: Starting job: collect at <console>:21
15/03/22 19:52:44 INFO DAGScheduler: Got job 0 (collect at <console>:21) with 1 output partitions (allowLocal=false)
15/03/22 19:52:44 INFO DAGScheduler: Final stage: Stage 0(collect at <console>:21)
15/03/22 19:52:44 INFO DAGScheduler: Parents of final stage: List()
15/03/22 19:52:44 INFO DAGScheduler: Missing parents: List()
15/03/22 19:52:44 INFO DAGScheduler: Submitting Stage 0 (ParallelCollectionRDD[0] at parallelize at <console>:21), which has no missing parents
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGILL (0x4) at pc=0x9137c074, pid=3596, tid=2415826032
#
# JRE version: Java(TM) SE Runtime Environment (8.0-b132) (build 1.8.0-b132)
# Java VM: Java HotSpot(TM) Client VM (25.0-b70 mixed mode linux-arm )
# Problematic frame:
# C  [snappy-unknown-b62d2fa0-8fdd-4b4b-8c2c-2f24ddaeee74-libsnappyjava.so+0x1074]  _init+0x1a7
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /home/pi/spark-1.3.0-bin-hadoop2.4/bin/hs_err_pid3596.log
./spark-shell: line 55:  3596 Segmentation fault      "$FWDIR"/bin/spark-submit --class org.apache.spark.repl.Main "${SUBMISSION_OPTS[@]}" spark-shell "${APPLICATION_OPTS[@]}"

When starting the shell I try to allow spilling to disk:

./spark-shell --conf StorageLevel=MEMORY_AND_DISK

But I still receive the same exception.

When the Spark shell starts, there is 267.3 MB of memory available:

15/03/22 17:09:49 INFO MemoryStore: MemoryStore started with capacity 267.3 MB

Should this be enough memory to run Spark commands in the shell?

Is ./spark-shell --conf StorageLevel=MEMORY_AND_DISK the correct command to start the Spark shell so that data that does not fit in memory is spilled to disk?
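
(For reference, a storage level such as MEMORY_AND_DISK is normally set on an RDD itself with persist, not passed as a --conf key when launching the shell. A minimal sketch of that usage, assuming the standard RDD API:)

import org.apache.spark.storage.StorageLevel

val nums = sc.parallelize(1 to 1000)        // sc is the SparkContext provided by the shell
nums.persist(StorageLevel.MEMORY_AND_DISK)  // cached partitions spill to disk when memory runs out
nums.count()                                // action that materializes and caches the RDD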

Update:

I've tried:

./spark-shell --conf spark.driver.memory=256m

val l = sc.parallelize(List()).collect

But I get the same result.

asked Mar 22 '15 by blue-sky


1 Answer

Try the --driver-memory option to set the memory for the driver process. Example:

./spark-shell --driver-memory 2g

This gives the driver 2 GB of memory.
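
If you don't want to pass the option on every start, the same setting can also be placed in conf/spark-defaults.conf, which spark-shell reads on launch. A minimal sketch, with a value small enough for a Raspberry Pi 1 Model B+ (512 MB of RAM) rather than the 2 GB above:

# conf/spark-defaults.conf
spark.driver.memory    256m

Either way, the driver memory has to be fixed before the driver JVM starts, which is why it is set on the command line or in the properties file rather than in code typed into the shell.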

answered by Ramón J Romero y Vigil