How to avoid "Not a file" exceptions when reading from HDFS with spark

I copy a tree of files from S3 to HDFS with S3DistCP in an initial EMR step. hdfs dfs -ls -R hdfs:///data_dir shows the expected files, which look something like:

/data_dir/year=2015/
/data_dir/year=2015/month=01/
/data_dir/year=2015/month=01/day=01/
/data_dir/year=2015/month=01/day=01/data01.12345678
/data_dir/year=2015/month=01/day=01/data02.12345678
/data_dir/year=2015/month=01/day=01/data03.12345678

The 'directories' are listed as zero-byte files.

I then run a Spark step that needs to read these files. The loading code is:

sqlctx.read.json('hdfs:///data_dir', schema=schema)

The job fails with a Java exception:

java.io.IOException: Not a file: hdfs://10.159.123.38:9000/data_dir/year=2015

I had (perhaps naively) assumed that Spark would recursively descend the directory tree and load the data files. If I point it at S3, it loads the data successfully.

Am I misunderstanding HDFS? Can I tell Spark to ignore zero-byte files? Can I use S3DistCp to flatten the tree?

asked Oct 03 '15 by Rob Cowie


2 Answers

In the Hadoop configuration for the current Spark context, enable "recursive" reads for the Hadoop InputFormat before creating the SQL context:

val hadoopConf = sparkCtx.hadoopConfiguration
hadoopConf.set("mapreduce.input.fileinputformat.input.dir.recursive", "true")
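
Since the question uses PySpark, here is a minimal sketch of the same setting from Python. It assumes a SparkContext named sc and the schema object from the question; sc._jsc is the usual way PySpark exposes the underlying Java Hadoop configuration.

# Sketch: set the recursive-read flag before building the SQL context.
# `sc` and `schema` are assumed to exist as in the question.
from pyspark.sql import SQLContext

sc._jsc.hadoopConfiguration().set(
    "mapreduce.input.fileinputformat.input.dir.recursive", "true")

sqlctx = SQLContext(sc)
df = sqlctx.read.json('hdfs:///data_dir', schema=schema)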

This resolves the "Not a file" error. Next, to read multiple files, see:

Hadoop job taking input files from multiple directories

or union the list of files into a single DataFrame (a sketch follows below):

Read multiple files from a directory using Spark
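
For the union approach, a rough PySpark sketch follows. The day-level paths are only illustrative, mirroring the layout in the question; sqlctx and schema are assumed as above.

from functools import reduce

# Illustrative list of leaf directories; in practice you would build it by
# listing HDFS (hdfs dfs -ls -R) or by generating the year/month/day values.
paths = [
    'hdfs:///data_dir/year=2015/month=01/day=01',
    'hdfs:///data_dir/year=2015/month=01/day=02',
]

# Read each directory separately and union the results into one DataFrame.
# (On Spark 1.x the method is unionAll; on 2.x and later it is union.)
dfs = [sqlctx.read.json(p, schema=schema) for p in paths]
df = reduce(lambda a, b: a.unionAll(b), dfs)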

Elena Viter, answered Oct 12 '22


Problem solved with:

spark-submit ...
    --conf spark.hadoop.mapreduce.input.fileinputformat.input.dir.recursive=true \
    --conf spark.hive.mapred.supports.subdirectories=true \
    ...
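
For completeness, the same two properties can also be set programmatically when the context is created rather than on the spark-submit command line. A sketch, assuming PySpark; the spark.hadoop.* prefix is copied into the Hadoop configuration at startup.

from pyspark import SparkConf, SparkContext

# Same settings as the spark-submit flags above, applied via SparkConf.
conf = (SparkConf()
        .set("spark.hadoop.mapreduce.input.fileinputformat.input.dir.recursive", "true")
        .set("spark.hive.mapred.supports.subdirectories", "true"))
sc = SparkContext(conf=conf)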
Indent, answered Oct 12 '22