This follows on from Spark - load CSV file as DataFrame?
Is it possible to specify options using SQL to set the delimiter, null character, and quote?
val df = spark.sql("SELECT * FROM csv.`csv/file/path/in/hdfs`")
I know it can be done using spark.read.format("csv").option("delimiter", "|")
, but ideally I wouldn't have to.
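For reference, the DataFrameReader route that sets all three options (delimiter, null character, and quote) looks roughly like this; the path and option values are placeholders, and the header option is an extra assumption shown only for completeness:

```scala
// DataFrameReader equivalent of the desired SQL: sets delimiter,
// null marker, and quote character. Path and values are placeholders.
val df = spark.read
  .format("csv")
  .option("delimiter", "|")   // field separator
  .option("nullValue", "\\N") // string to interpret as SQL NULL
  .option("quote", "\"")      // quote character (this is also the default)
  .option("header", "true")   // assumption: first line holds column names
  .load("csv/file/path/in/hdfs")
```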
Updated Information
It seems that I have to pass the path using back-ticks.
When I attempt to pass OPTIONS, I get a parse error:
== SQL ==
SELECT * FROM
csv.`csv/file/path/in/hdfs` OPTIONS (delimiter , "|" )
-----------------------------------^^^
Error in query:
mismatched input '(' expecting {<EOF>, ',', 'WHERE', 'GROUP', 'ORDER',
'HAVING', 'LIMIT', 'JOIN', 'CROSS', 'INNER', 'LEFT', 'RIGHT', 'FULL',
'NATURAL', 'LATERAL', 'WINDOW', 'UNION', 'EXCEPT', 'MINUS',
'INTERSECT', 'SORT', 'CLUSTER', 'DISTRIBUTE', 'ANTI'}
Although not a one-line solution, the following might work for you:
spark.sql("CREATE TABLE some_table USING com.databricks.spark.csv OPTIONS (path \"csv/file/path/in/hdfs\", delimiter \"|\")")
val df = spark.sql("SELECT * FROM some_table");
Of course, you can skip the second step of loading into a DataFrame if you want to perform SQL operations directly on some_table.
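A variant of the same idea, assuming Spark 2.x where csv is a built-in source: creating a temporary view instead of a table avoids leaving a persistent entry in the metastore, and OPTIONS accepts the same keys as the DataFrameReader. Path and option values here are placeholders:

```scala
// Session-scoped view over the CSV file; it disappears when the
// Spark session ends. Keys match the built-in csv source options.
spark.sql("""
  CREATE TEMPORARY VIEW some_table
  USING csv
  OPTIONS (path "csv/file/path/in/hdfs",
           delimiter "|",
           nullValue "NULL")
""")
val df = spark.sql("SELECT * FROM some_table")
```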