How can I iteratively process all files under one directory using mrjob?

I am using mrjob to process a batch of files and gather some statistics. I know I can run a MapReduce job on a single file, like this:

python count.py < some_input_file > output

But how can I feed a directory of files to the script? The directory structure is folder/subfolders/files. Any suggestions?

asked Dec 07 '12 by Chunliang Lyu

1 Answer

Well, I finally found that I can specify a directory as the input path, and Hadoop will process all files in that directory.
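
For example, with the Hadoop runner (mrjob's -r hadoop flag, so the job can read from HDFS), you can pass the directory itself as the input path; the host and directory below are just the placeholders from my setup, adjust them for your cluster:

python count.py -r hadoop hdfs://master-host/directory > result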

Furthermore, in my case the input files are nested in sub-directories. By default, Hadoop will not traverse directories recursively and will raise an error. A common trick is to use a wildcard glob, like:

python count.py hdfs://master-host/directory/*/*.txt > result
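
For completeness, count.py here can be any MRJob script; the sketch below is just a minimal example (a line-count job, which is an illustrative assumption, not necessarily what your job does):

from mrjob.job import MRJob

class MRLineCount(MRJob):
    # Emit one ("lines", 1) pair per input line.
    def mapper(self, _, line):
        yield "lines", 1

    # Sum the counts for each key.
    def reducer(self, key, values):
        yield key, sum(values)

if __name__ == "__main__":
    MRLineCount.run()

Any script written this way accepts files, directories, or globs as positional input arguments, so the same command pattern applies.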
answered Oct 28 '22 by Chunliang Lyu