I have an error in my code, which dumps some data into a Redshift database.
After some investigation I found an easy way to reproduce it in the Spark shell.
This works fine:
scala> Seq("France", "Germany").toDF.agg(avg(lit(null))).write.csv("1.csv")
scala>
But if I replace avg with max, I get the error "CSV data source does not support null data type.":
scala> Seq("France", "Germany").toDF.agg(max(lit(null))).write.csv("2.csv")
java.lang.UnsupportedOperationException: CSV data source does not support null data type.
What's wrong with max?
The error is correct, because avg returns the DOUBLE data type:
Seq("France", "Germany").toDF.agg(avg(lit(null)).alias("col1")).printSchema

whereas max preserves the input type, which here is the untyped null (NullType):
Seq("France", "Germany").toDF.agg(max(lit(null)).alias("col1")).printSchema
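The difference can also be checked programmatically rather than by reading printSchema output (a sketch for the spark-shell, where the SQL functions are already in scope; the exact rendering of the type name varies by Spark version):

```scala
// avg coerces its null input to double, so the result column is typed
val avgType = Seq("France", "Germany").toDF.agg(avg(lit(null))).schema.head.dataType

// max keeps the input type as-is, leaving the column as NullType
val maxType = Seq("France", "Germany").toDF.agg(max(lit(null))).schema.head.dataType
```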

So writing a DataFrame that contains the max column throws the error. If you want to save the DataFrame, explicitly cast the column to another type:
import org.apache.spark.sql.types.DoubleType
Seq("France", "Germany").toDF.agg(max(lit(null)).cast(DoubleType).alias("col1")).write.csv("path")
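If the DataFrame may contain several aggregated columns and you don't know in advance which ones end up untyped, the same idea can be generalized (a sketch assuming the spark-shell with its default imports; the names df, fixed, and the choice of "double" as the target type are illustrative):

```scala
import org.apache.spark.sql.types.NullType

val df = Seq("France", "Germany").toDF.agg(max(lit(null)).alias("col1"))

// Replace every untyped (NullType) column with a typed null so that
// the CSV writer can handle it; "double" here is an arbitrary choice.
val fixed = df.schema.fields.foldLeft(df) { (acc, f) =>
  if (f.dataType == NullType) acc.withColumn(f.name, col(f.name).cast("double"))
  else acc
}

fixed.write.csv("path")
```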