I am trying to access gz files on S3 whose names start with _ in Apache Spark. Unfortunately Spark deems these files invisible and returns Input path does not exist: s3n:.../_1013.gz. If I remove the underscore it finds the file just fine.
I tried adding a custom PathFilter to the hadoopConfig:
package CustomReader

import org.apache.hadoop.fs.{Path, PathFilter}

// Accept every path, including "hidden" ones starting with _ or .
class GFilterZip extends PathFilter {
  override def accept(path: Path): Boolean = true
}

// in spark settings
sc.hadoopConfiguration.setClass("mapreduce.input.pathFilter.class",
  classOf[CustomReader.GFilterZip], classOf[org.apache.hadoop.fs.PathFilter])
but I still have the same problem. Any ideas?
System: Apache Spark 1.6.0 with Hadoop 2.3
Files starting with _ or . are hidden files, and the hiddenFileFilter is always applied: it is added inside org.apache.hadoop.mapred.FileInputFormat.listStatus, where it is combined with any user-supplied filter so that a path must pass both. A custom filter that returns true therefore cannot un-hide these files.
Check this answer: which files ignored as input by mapper?
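To see why setting a custom filter can't help, here is a simplified, self-contained Scala model of how listStatus combines filters. The trait and object names below are stand-ins that mirror Hadoop's hiddenFileFilter and its private MultiPathFilter, not the real API: the combined filter accepts a path only if every member filter does (a logical AND), so the built-in hidden-file filter always vetoes names starting with _ or .

```scala
// Simplified stand-ins for Hadoop's PathFilter machinery -- not the real API.
trait SimplePathFilter { def accept(name: String): Boolean }

// Mirrors FileInputFormat.hiddenFileFilter: rejects names starting with "_" or "."
object HiddenFileFilter extends SimplePathFilter {
  def accept(name: String): Boolean = !name.startsWith("_") && !name.startsWith(".")
}

// Mirrors the private MultiPathFilter: a path must pass EVERY filter (logical AND)
class MultiFilter(filters: Seq[SimplePathFilter]) extends SimplePathFilter {
  def accept(name: String): Boolean = filters.forall(_.accept(name))
}

// The accept-all custom filter from the question
object AcceptAll extends SimplePathFilter {
  def accept(name: String): Boolean = true
}

val combined = new MultiFilter(Seq(HiddenFileFilter, AcceptAll))
println(combined.accept("_1013.gz")) // false: hiddenFileFilter still vetoes it
println(combined.accept("1013.gz"))  // true
```

Because the AND happens inside listStatus, the practical workaround is to rename the files (drop the leading underscore) or read them through a custom InputFormat that overrides listStatus, rather than trying to override the path filter.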