How do I make Hadoop find imported Python modules when using Python UDFs in Pig?

I am using Pig (0.9.1) with UDFs written in Python. The Python scripts import modules from the standard Python library. I have been able to run the Pig scripts that call the Python UDFs successfully in local mode, but when I run on the cluster it appears that the Hadoop job generated by Pig is unable to find the imported modules. What needs to be done?

For example:

  • Does python (or jython) need to be installed on each task tracker node?
  • Do the python (or jython) modules need to be installed on each task tracker node?
  • Do the task tracker nodes need to know how to find the modules?
  • If so, how do you specify the path (via an environment variable - how is that done for the task tracker)?
asked Oct 20 '11 by Ben Lever




1 Answer

Does python (or jython) need to be installed on each task tracker node?

Yes, since the UDF code is executed on the task tracker nodes.

Do the python (or jython) modules need to be installed on each task tracker node?

If you are using third-party modules (like geoip), they need to be installed on the task tracker nodes as well.

Do the task tracker nodes need to know how to find the modules? If so, how do you specify the path (via an environment variable - how is that done for the task tracker)?

As the book "Programming Pig" puts it:

register is also used to locate resources for Python UDFs that you use in your Pig Latin scripts. In this case you do not register a jar, but rather a Python script that contains your UDF. The Python script must be in your current directory.
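
For example, a minimal Pig Latin sketch (the file name my_udfs.py and the function reverse_name are hypothetical; the UDF file is assumed to sit in the directory you launch the script from):

    -- register a Python UDF file located next to this Pig script
    register 'my_udfs.py' using jython as myfuncs;

    -- call the registered function on each record
    users  = load 'users.txt' as (name:chararray);
    result = foreach users generate myfuncs.reverse_name(name);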

This part is also important:

A caveat, Pig does not trace dependencies inside your Python scripts and send the needed Python modules to your Hadoop cluster. You are required to make sure the modules you need reside on the task nodes in your cluster and that the PYTHONPATH environment variable is set on those nodes such that your UDFs will be able to find them for import. This issue has been fixed after 0.9, but as of this writing not yet released.
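
One possible way to do that (a sketch, assuming a Hadoop 0.20/1.x-era cluster and an example module directory /usr/local/lib/pig-udf-libs) is to export PYTHONPATH in hadoop-env.sh on every task tracker node and restart the task trackers so that newly launched tasks pick it up:

    # on every task tracker node; the module path is an example, adjust to your layout
    echo 'export PYTHONPATH=$PYTHONPATH:/usr/local/lib/pig-udf-libs' >> $HADOOP_HOME/conf/hadoop-env.sh

    # restart the task tracker so newly launched tasks see the variable
    $HADOOP_HOME/bin/hadoop-daemon.sh stop tasktracker
    $HADOOP_HOME/bin/hadoop-daemon.sh start tasktracker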

And if you are using Jython:

Pig does not know where on your system the Jython interpreter is, so you must include jython.jar in your classpath when invoking Pig. This can be done by setting the PIG_CLASSPATH environment variable.
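
For example (the Jython location below is an assumption; point it at wherever jython.jar lives on the machine you launch Pig from):

    # make the Jython interpreter visible to Pig when it compiles the Python UDFs
    export PIG_CLASSPATH=/usr/local/jython/jython.jar

    pig my_script.pig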

To summarize: if you are using streaming, you can use the ship clause in Pig, which sends your executable files to the cluster. If you are using a UDF, then as long as it can be compiled (see the note about Jython) and has no third-party dependencies that you have not already put on the PYTHONPATH or installed on the cluster, the UDF is shipped to the cluster when the script is executed. (As a tip, it makes your life much easier to keep your simple UDF dependencies in the same folder as the Pig script when registering.)
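
A minimal streaming sketch using ship (the script name process.py and its local path are hypothetical; note that python still has to be installed on the task nodes for the command to run):

    -- ship the local script to every task and pipe records through it
    define my_cmd `python process.py` ship('/local/path/to/process.py');

    raw = load 'input.txt' as (line:chararray);
    out = stream raw through my_cmd;
    store out into 'output';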

Hope this clears things up.

answered Oct 14 '22 by frail