
How to dump a file to a Hadoop HDFS directory using Python pickle?

I'm on a VM, in a directory that contains my Python (2.7) class, and I'm trying to pickle an instance of that class to a directory on HDFS.

I'm trying to do something along the lines of:

import pickle

my_obj = MyClass() # the class instance that I want to pickle

with open('hdfs://domain.example.com/path/to/directory/', 'wb') as hdfs_loc:  # fails: the built-in open() can't resolve hdfs:// URLs
    pickle.dump(my_obj, hdfs_loc)  # pickle.dump also needs the file opened in binary write mode ('wb')

From the research I've done, I think something like snakebite might help, but does anyone have more concrete suggestions?
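
For what it's worth, snakebite appears to be read-only for file contents (it can list, stat, and delete, but not write data), so a WebHDFS-based client such as the hdfs package might be a better fit. A minimal sketch, assuming WebHDFS is enabled on the cluster; the NameNode address, user, and target filename below are placeholders:

import pickle

from hdfs import InsecureClient  # pip install hdfs

# Placeholder NameNode address and user; WebHDFS must be enabled on the cluster.
client = InsecureClient('http://namenode.example.com:50070', user='hdfs_user')

my_obj = MyClass()  # the class instance to pickle

# client.write() yields a file-like writer that pickle can stream into.
with client.write('/path/to/directory/my_obj.pkl') as writer:
    pickle.dump(my_obj, writer)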

asked Oct 18 '22 by J. Appleseed

1 Answer

If you use PySpark, then you can use the saveAsPickleFile method:

temp_rdd = sc.parallelize([my_obj])  # parallelize() expects a collection, so wrap the object in a list
temp_rdd.coalesce(1).saveAsPickleFile("/test/tmp/data/destination.pickle")  # coalesce(1) writes a single part file
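
To read the object back, a minimal sketch, assuming the same SparkContext and the destination path used above:

# pickleFile() loads the saved RDD; first() recovers the single wrapped object.
restored_obj = sc.pickleFile("/test/tmp/data/destination.pickle").first()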
answered Oct 21 '22 by Rene B.