
Error when using multiple Python files with spark-submit

I have a Spark app that is composed of multiple Python files.

When I launch Spark using:

../hadoop/spark-install/bin/spark-submit main.py --py-files /home/poiuytrez/naive.py,/home/poiuytrez/processing.py,/home/poiuytrez/settings.py  --master spark://spark-m:7077

I am getting an error:

15/03/13 15:54:24 INFO TaskSetManager: Lost task 6.3 in stage 413.0 (TID 5817) on executor spark-w-3.c.databerries.internal: org.apache.spark.api.python.PythonException (Traceback (most recent call last):
  File "/home/hadoop/spark-install/python/pyspark/worker.py", line 90, in main
    command = pickleSer._read_with_length(infile)
  File "/home/hadoop/spark-install/python/pyspark/serializers.py", line 151, in _read_with_length
    return self.loads(obj)
  File "/home/hadoop/spark-install/python/pyspark/serializers.py", line 396, in loads
    return cPickle.loads(obj)
ImportError: No module named naive

This is odd because I do not explicitly serialize anything, and naive.py is available at the same path on every machine.

Any insight into what could be going on? The issue does not happen on my laptop.

PS: I am using Spark 1.2.0.

Asked Mar 13 '15 by poiuytrez

2 Answers

You are probably importing the module naive at the top of your script (or class) and then using something from that module inside an RDD transformation. It probably looks something like this in your code:

import naive                      # module-level import

def my_fxn(record):
    naive.some_obj_or_fxn()       # uses the module imported at the top of the script
    # ...etc...

# ...etc...
myRdd.map(my_fxn)

When your functions are written this way, PySpark pickles them on the driver together with their references to the modules imported at the top of your script, and the executors fail to unpickle them because they cannot import those modules. Instead, import the module inside the function that uses it, like this:

def my_fxn(record):
    import naive                  # imported on the executor, where the function actually runs
    naive.some_obj_or_fxn()
    # ...etc...
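
For completeness, here is a minimal driver sketch applying that pattern. The module naive and the function some_obj_or_fxn are the placeholders from the snippets above, the app name and sample data are made up, and it assumes naive.py is importable on each worker (for example via --py-files or a shared PYTHONPATH):

from pyspark import SparkContext

sc = SparkContext(appName="naive-example")    # hypothetical app name

def my_fxn(record):
    import naive                              # resolved on the executor at run time
    return naive.some_obj_or_fxn()

myRdd = sc.parallelize(range(100))            # placeholder data
print(myRdd.map(my_fxn).take(5))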
Answered by j_houg

First, you don't need to copy naive.py to any slave. I fixed this issue in two ways:

Method 1

Put main.py at the end of the command line, after all of the spark-submit options. Everything that follows the application file is passed to the application as its own arguments, so in your original command spark-submit never saw --py-files.

../hadoop/spark-install/bin/spark-submit --master spark://spark-m:7077  --py-files /home/poiuytrez/naive.py,/home/poiuytrez/processing.py,/home/poiuytrez/settings.py main.py  

Method 2

Use sc.addPyFile('py_file_name') in main.py:

sc.addPyFile('/home/poiuytrez/naive.py')
sc.addPyFile('/home/poiuytrez/processing.py')
sc.addPyFile('/home/poiuytrez/settings.py')
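
A minimal sketch of how that might look inside main.py (the app name and the final job are assumptions for illustration, and the driver-side import assumes main.py sits next to naive.py): register the files right after creating the SparkContext, before any job that needs them runs, so the executors can import the modules inside transformations.

from pyspark import SparkContext

sc = SparkContext(appName="naive-example")    # hypothetical app name

# Ship the helper modules to every executor before running any job.
sc.addPyFile('/home/poiuytrez/naive.py')
sc.addPyFile('/home/poiuytrez/processing.py')
sc.addPyFile('/home/poiuytrez/settings.py')

import naive    # the driver imports it from the local path; executors get the shipped copy

rdd = sc.parallelize(range(100))              # placeholder data
print(rdd.map(lambda x: naive.some_obj_or_fxn()).take(5))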
Answered by ybdesire