I have a simple file filter that selects files from a particular date. In Hadoop I would set the PathFilter class on the InputFormat using setInputPathFilter. How can I do this in Spark?
import java.io.IOException;
import java.text.ParseException;
import java.text.SimpleDateFormat;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;

public class FilesFilter extends Configured implements PathFilter {

    @Override
    public boolean accept(Path path) {
        try {
            FileSystem fs = path.getFileSystem(getConf());

            // Always accept directories so their contents are still scanned.
            if (fs.isDirectory(path))
                return true;

            // Target date, truncated to whole days since the epoch.
            SimpleDateFormat sdf = new SimpleDateFormat("MM.dd.yyyy");
            long dt = sdf.parse("01.30.2015").getTime() / (1000L * 3600 * 24);

            // Accept the file only if it was last modified on that day.
            FileStatus file = fs.getFileStatus(path);
            long time = file.getModificationTime() / (1000L * 3600 * 24);
            return time == dt;
        } catch (IOException | ParseException e) {
            e.printStackTrace();
            return false;
        }
    }
}
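For context, this is roughly how I attach the filter in plain Hadoop today (a sketch in Scala, assuming the new org.apache.hadoop.mapreduce API):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.mapreduce.Job
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat

// Sketch: register the PathFilter on the job's input format.
val job = Job.getInstance(new Configuration())
FileInputFormat.setInputPathFilter(job, classOf[FilesFilter])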
Use this:
sc.hadoopConfiguration.setClass("mapreduce.input.pathFilter.class", classOf[TmpFileFilter], classOf[PathFilter])
Here is my code for TmpFileFilter.scala, which skips .tmp files:
import org.apache.hadoop.fs.{Path, PathFilter}

// Rejects any path whose name ends in ".tmp"; everything else passes.
class TmpFileFilter extends PathFilter {
  override def accept(path: Path): Boolean = !path.getName.endsWith(".tmp")
}
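Once the property is set, the filter takes effect on a normal read. A sketch (the input path is hypothetical; this assumes Hadoop 2.x, where both the old and new FileInputFormat read the mapreduce.input.pathFilter.class key):

import org.apache.hadoop.fs.PathFilter

// Register the filter, then read as usual; *.tmp files are skipped
// when the input paths are listed. The path below is hypothetical.
sc.hadoopConfiguration.setClass("mapreduce.input.pathFilter.class",
  classOf[TmpFileFilter], classOf[PathFilter])
val lines = sc.textFile("hdfs:///data/input")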
You can define your own PathFilter.
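For the date-based selection in the question, a Scala equivalent could look roughly like this (a sketch mirroring the Java FilesFilter above; it assumes Hadoop instantiates the filter with the job configuration, so getConf is populated):

import java.text.SimpleDateFormat
import org.apache.hadoop.conf.Configured
import org.apache.hadoop.fs.{Path, PathFilter}

// Sketch: accepts directories, plus files last modified on 01.30.2015.
class DateFileFilter extends Configured with PathFilter {
  private val targetDay =
    new SimpleDateFormat("MM.dd.yyyy").parse("01.30.2015").getTime / (1000L * 3600 * 24)

  override def accept(path: Path): Boolean = {
    val fs = path.getFileSystem(getConf) // getConf is set when Hadoop creates the filter
    if (fs.isDirectory(path)) true
    else fs.getFileStatus(path).getModificationTime / (1000L * 3600 * 24) == targetDay
  }
}

Register it the same way, via sc.hadoopConfiguration.setClass.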