
Spark iterate HDFS directory

I have a directory of directories on HDFS, and I want to iterate over the directories. Is there any easy way to do this with Spark using the SparkContext object?

asked Nov 19 '14 by Jon

2 Answers

You can use org.apache.hadoop.fs.FileSystem. Specifically, FileSystem.listFiles([path], true)

And with Spark...

FileSystem.get(sc.hadoopConfiguration).listFiles(..., true) 

Edit

It's worth noting that it is good practice to get the FileSystem associated with the Path's scheme:

path.getFileSystem(sc.hadoopConfiguration).listFiles(path, true) 
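To put the pieces together, here is a minimal sketch (the base path below is just a placeholder). listFiles returns a Hadoop RemoteIterator of LocatedFileStatus rather than a Scala collection, so you step through it with hasNext/next:

    import org.apache.hadoop.fs.Path

    val path = new Path("hdfs:///some/base/dir")  // placeholder base directory
    val fs = path.getFileSystem(sc.hadoopConfiguration)

    // Passing true makes listFiles recurse into subdirectories
    val files = fs.listFiles(path, true)
    while (files.hasNext) {
      val status = files.next()
      println(s"${status.getPath} ${status.getLen}")
    }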
answered Oct 19 '22 by Mike Park


Here's a PySpark version, if someone is interested:

    # Access the Hadoop classes through the Py4J gateway on the SparkContext
    hadoop = sc._jvm.org.apache.hadoop
    fs = hadoop.fs.FileSystem
    conf = hadoop.conf.Configuration()

    path = hadoop.fs.Path('/hivewarehouse/disc_mrt.db/unified_fact/')

    # listStatus returns the immediate children of the directory
    for f in fs.get(conf).listStatus(path):
        print(f.getPath(), f.getLen())

In this particular case I get a list of all files that make up the disc_mrt.unified_fact Hive table.

Other methods of the FileStatus object, like getLen() to get the file size, are described here:

Class FileStatus

answered Oct 19 '22 by Tagar