To access a file added with SparkContext.addFile in Spark jobs, use SparkFiles.get(fileName) to find its download location. A directory can be given if the recursive option is set to true. Currently, directories are only supported for Hadoop-supported filesystems.
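For context, SparkFiles.get only resolves files that were previously distributed with SparkContext.addFile. A minimal sketch, assuming a hypothetical driver-local file /tmp/lookup.txt:

from pyspark import SparkContext, SparkFiles

sc = SparkContext("local[*]", "addfile-example")
sc.addFile("/tmp/lookup.txt")  # distribute a driver-local file to all nodes

def read_lookup(_):
    # SparkFiles.get returns the per-node download location of the file
    with open(SparkFiles.get("lookup.txt")) as f:
        return len(f.read())

print(sc.parallelize([0], 1).map(read_lookup).collect())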
If you use the Databricks Connect client library, you can read local files into memory on a remote Databricks Spark cluster; see the Databricks Connect documentation for details. The alternative is to use the Databricks CLI (or REST API) to push local data to a location on DBFS, where it can be read into Spark from within a Databricks notebook.
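A sketch of the second approach; the DBFS path and file name are hypothetical, and spark is the session a Databricks notebook provides:

# On your machine, push the file with the Databricks CLI:
#   databricks fs cp ./data.csv dbfs:/tmp/data.csv
# Then, inside a Databricks notebook:
df = spark.read.csv("dbfs:/tmp/data.csv", header=True)
df.show()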
Try explicitly specifying the scheme: sc.textFile("file:///path/to/file"). The error occurs when a Hadoop environment is configured.
SparkContext.textFile internally calls org.apache.hadoop.mapred.FileInputFormat.getSplits, which in turn uses org.apache.hadoop.fs.FileSystem.getDefaultUri if the scheme is absent. This method reads the "fs.defaultFS" parameter of the Hadoop conf. If you set the HADOOP_CONF_DIR environment variable, the parameter is usually set to "hdfs://..."; otherwise "file://".
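To see which default your driver actually picked up, you can inspect the Hadoop configuration from PySpark. Note that sc._jsc is an internal handle, so treat this as a debugging sketch only:

# Prints e.g. "hdfs://namenode:8020" when HADOOP_CONF_DIR points at a
# cluster config, or "file:///" when no Hadoop config is picked up.
print(sc._jsc.hadoopConfiguration().get("fs.defaultFS"))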
gonbe's answer is excellent. But I still want to mention that file:/// refers to the filesystem root (~/../../), not $SPARK_HOME. Hope this saves some time for newbies like me.
While Spark supports loading files from the local filesystem, it requires that the files are available at the same path on all nodes in your cluster.
Some network filesystems, like NFS, AFS, and MapR’s NFS layer, are exposed to the user as a regular filesystem.
If your data is already in one of these systems, then you can use it as an input by just specifying a file:// path; Spark will handle it as long as the filesystem is mounted at the same path on each node:
rdd = sc.textFile("file:///path/to/file")
If your file isn’t already on all nodes in the cluster, you can load it locally on the driver without going through Spark and then call parallelize to distribute the contents to the workers.
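A minimal sketch of that approach; the local path is hypothetical and only needs to exist on the driver:

# Read the file on the driver with plain Python, then distribute the lines.
with open("/path/to/file") as f:
    lines = f.read().splitlines()

rdd = sc.parallelize(lines)
print(rdd.count())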
Take care to put file:// in front and to use "/" or "\" according to your OS.
Attention:
Make sure that you run Spark in local mode when you load data from a local path (sc.textFile("file:///path/to/file")), or you will get an error like Caused by: java.io.FileNotFoundException: File file:/data/sparkjob/config2.properties does not exist, because executors running on different workers will not find the file at their local path.
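A minimal sketch of forcing local mode so that file:// paths resolve on the driver machine; the app name and path are hypothetical:

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .master("local[*]")   # everything runs on this one machine
         .appName("local-file-read")
         .getOrCreate())

rdd = spark.sparkContext.textFile("file:///path/to/file")
print(rdd.take(5))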