I'm having a problem loading a local CSV file when running under SBT. I've written a Spark program in Scala (in Eclipse) that reads the following file:
val searches = sc.textFile("hdfs:///data/searches")
This works fine against HDFS, but for debugging I'd like to load this file from a local directory, which I have set up inside the project directory.
So I tried the following:
val searches = sc.textFile("file:///data/searches")
val searches = sc.textFile("./data/searches")
val searches = sc.textFile("/data/searches")
None of these lets me read the file locally; each of them raises this error under SBT:
Exception in thread "main" java.io.IOException: Incomplete HDFS URI, no host: hdfs:/data/pages
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:143)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2397)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:256)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:304)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:179)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
at org.apache.spark.rdd.FlatMappedRDD.getPartitions(FlatMappedRDD.scala:30)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1135)
at org.apache.spark.rdd.RDD.count(RDD.scala:904)
at com.user.Result$.get(SparkData.scala:200)
at com.user.StreamingApp$.main(SprayHerokuExample.scala:35)
at com.user.StreamingApp.main(SprayHerokuExample.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
In the stack trace, com.user.Result$.get(SparkData.scala:200) is the line where sc.textFile is called. Spark seems to resolve paths against the Hadoop environment by default. Is there anything I can do to read this file locally?
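For example, I would have expected something like the following to work (a sketch of what I have in mind; the local master, the fs.defaultFS override, and the path are just my guesses):

import org.apache.spark.{SparkConf, SparkContext}

// Run against a local master instead of the cluster
val conf = new SparkConf().setAppName("LocalDebug").setMaster("local[4]")
val sc = new SparkContext(conf)

// Point the default filesystem at the local one, so paths without a
// scheme are no longer resolved against HDFS
sc.hadoopConfiguration.set("fs.defaultFS", "file:///")

val searches = sc.textFile("file:///full/path/to/data/searches")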
Edit: For running locally, I've configured a custom submit task in build.sbt:
submit <<= inputTask { (argTask: TaskKey[Seq[String]]) => {
  (argTask, mainClass in Compile, assemblyOutputPath in assembly, sparkHome) map {
    (args, main, jar, sparkHome) => {
      args match {
        // Expect exactly one argument: the output path to pass to the app
        case List(output) => {
          val sparkCmd = sparkHome + "/bin/spark-submit"
          // Shell out to spark-submit against a local master, passing the
          // assembled jar and the program arguments
          Process(
            sparkCmd :: "--class" :: main.get :: "--master" :: "local[4]" ::
              jar.getPath :: "local[4]" :: output :: Nil) !
        }
        case _ => Process("echo" :: "Usage" :: Nil) !
      }
    }
  }
}}
The submit command is what I use to run the code.
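For example, from the sbt shell I run submit /tmp/results (the path is just an illustration); the single argument is bound to output in the pattern match above and forwarded to the application.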
Solution Found: It turns out that file:///path/ is indeed the correct scheme, but in my case only the full path worked, i.e. home/projects/data/searches. Just data/searches did not, despite the program running under the home/projects directory.
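For reference, reconstructing from the description above, the call that ended up working looked like:

val searches = sc.textFile("file:///home/projects/data/searches")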
Use:
val searches = sc.textFile("hdfs://host:port_no/data/searches")
By default, host is master and port_no is 9000.
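With those defaults filled in, the call looks like:

val searches = sc.textFile("hdfs://master:9000/data/searches")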
This should work:
sc.textFile("file:///data/searches")
From your error it looks like Spark is loading a Hadoop configuration; this can occur when you have a Hadoop conf file present or a Hadoop environment variable set (like HADOOP_CONF_DIR).
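A quick way to check which filesystem Spark resolves schemeless paths against (a sketch; fs.defaultFS is the Hadoop 2 key, older setups use fs.default.name):

// Prints the default filesystem URI, e.g. hdfs://master:9000 when a
// cluster configuration is being picked up
println(sc.hadoopConfiguration.get("fs.defaultFS"))
println(sc.hadoopConfiguration.get("fs.default.name"))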