I did come across a mini tutorial for data preprocessing using Spark here: http://ampcamp.berkeley.edu/big-data-mini-course/featurization.html
However, it only discusses text file parsing. Is there a way to parse XML files in Spark?
Though there is nothing wrong with this approach, Spark can also use a library provided by Databricks that parses XML files in a distributed way.
It looks like somebody made an XML data source for Apache Spark:
https://github.com/databricks/spark-xml
It supports reading XML files by specifying the tag that marks a row, and it can infer the schema types, e.g.:
import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)

// Read books.xml, treating each <book> element as one row
val df = sqlContext.read
  .format("com.databricks.spark.xml")
  .option("rowTag", "book")
  .load("books.xml")
You can also use it with spark-shell as below:
$ bin/spark-shell --packages com.databricks:spark-xml_2.11:0.3.0
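The package can also write a DataFrame back out as XML. A minimal sketch, where rootTag and rowTag name the wrapping elements and newbooks.xml is just an illustrative output path:

// Write the DataFrame back out as <books><book>...</book></books>
df.write
  .format("com.databricks.spark.xml")
  .option("rootTag", "books")
  .option("rowTag", "book")
  .save("newbooks.xml")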
I have not used it myself, but the approach would be the same as for Hadoop. For example, you can use StreamXmlRecordReader to process the XML. The reason you need a record reader is that you want to control the record boundaries for each element processed; otherwise the default would process one line at a time, because it uses LineRecordReader. It would be helpful to get yourself more familiar with the RecordReader concept in Hadoop.
And of course you will have to use SparkContext's hadoopRDD or hadoopFile methods with the option to pass an InputFormat class. In case Java is your preferred language, similar alternatives exist.
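A minimal sketch of that idea, assuming a books.xml whose records are delimited by <book>...</book> with a <title> child (illustrative names), and that the hadoop-streaming jar (which contains StreamXmlRecordReader) is on the classpath:

import org.apache.hadoop.io.Text
import org.apache.hadoop.mapred.{FileInputFormat, JobConf}
import org.apache.hadoop.streaming.StreamInputFormat

// Tell the streaming input format which record reader to use and
// where each XML record begins and ends
val jobConf = new JobConf()
jobConf.set("stream.recordreader.class",
  "org.apache.hadoop.streaming.StreamXmlRecordReader")
jobConf.set("stream.recordreader.begin", "<book>")
jobConf.set("stream.recordreader.end", "</book>")
FileInputFormat.addInputPaths(jobConf, "books.xml")

// Each record's XML fragment arrives as the key; the value is empty
val records = sc.hadoopRDD(jobConf,
  classOf[StreamInputFormat], classOf[Text], classOf[Text])

// Parse each fragment with Scala's built-in XML support
val titles = records.map { case (xml, _) =>
  (scala.xml.XML.loadString(xml.toString) \ "title").text
}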