I use Spark 2.2.0
I am reading a csv file as follows:
val dataFrame = spark.read.option("inferSchema", "true")
.option("header", true)
.option("dateFormat", "yyyyMMdd")
.csv(pathToCSVFile)
There is one date column in this file, and every record has the value 20171001 in that column.
The issue is that Spark infers the type of this column as integer rather than date. When I remove the inferSchema option, the column's type is string.
There are no null values, nor any wrongly formatted lines in this file.
What is the reason/solution for this issue?
If my understanding is correct, the code implies the following order of type inference (with earlier types tried first):
NullType
IntegerType
LongType
DecimalType
DoubleType
TimestampType
BooleanType
StringType
With that, I think the issue is that 20171001 matches IntegerType before TimestampType is even considered (and TimestampType uses the timestampFormat option, not dateFormat).
One solution would be to define the schema explicitly and pass it via the schema method of DataFrameReader, or to let Spark SQL infer the schema and use the cast operator afterwards.
I'd choose the former if the number of fields is not high.
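The explicit-schema route could be sketched as follows (the column names id and event_date are placeholders, not taken from the original file):

```scala
import org.apache.spark.sql.types._

// Declaring the schema up front disables inference entirely;
// with an explicit DateType column, the dateFormat option is applied.
// "id" and "event_date" are assumed column names.
val schema = StructType(Seq(
  StructField("id", StringType, nullable = true),
  StructField("event_date", DateType, nullable = true)
))

val dataFrame = spark.read
  .option("header", true)
  .option("dateFormat", "yyyyMMdd")
  .schema(schema)
  .csv(pathToCSVFile)
```

With the schema supplied, Spark no longer needs to guess, so the ambiguity between IntegerType and a date simply never arises.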
In this case you simply cannot depend on schema inference, because the format is ambiguous.
Since the input can be parsed both as IntegerType (or any higher-precision numeric type) and as TimestampType, and the former has higher precedence (internally Spark tries IntegerType -> LongType -> DecimalType -> DoubleType -> TimestampType), the inference mechanism will never reach the TimestampType case.
To be specific, with schema inference enabled Spark calls tryParseInteger, which successfully parses the input and stops there. Subsequent calls match the same case and also finish at the same tryParseInteger call.
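If inference does run and the column therefore comes back as an integer, the cast route mentioned above could look like this (a sketch assuming the column is named event_date; the two-argument to_date(Column, String) is available from Spark 2.2.0):

```scala
import org.apache.spark.sql.functions.{col, to_date}

// Inference typed event_date as IntegerType: cast it to string
// first, then parse it with the original yyyyMMdd pattern.
val withDate = dataFrame.withColumn(
  "event_date",
  to_date(col("event_date").cast("string"), "yyyyMMdd")
)
```

This keeps inference for all the other columns and only repairs the ambiguous one, at the cost of an extra pass over that column.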