Let me break this problem down into a smaller chunk. I have a DataFrame in PySpark, where I have a column arrival_date in date format -
from pyspark.sql.functions import col, lit, to_date, when
values = [('22.05.2016',),('13.07.2010',),('15.09.2012',),(None,)]
df = sqlContext.createDataFrame(values,['arrival_date'])
# The following line converts the String column into Date format
df = df.withColumn('arrival_date', to_date(col('arrival_date'), 'dd.MM.yyyy'))
df.show()
+------------+
|arrival_date|
+------------+
| 2016-05-22|
| 2010-07-13|
| 2012-09-15|
| null|
+------------+
df.printSchema()
root
|-- arrival_date: date (nullable = true)
After applying a lot of transformations to the DataFrame, I finally wish to fill in the missing dates, marked as null, with 01-01-1900.
One method to do this is to convert the column arrival_date to String, replace the missing values with df.fillna('1900-01-01', subset=['arrival_date']), and finally reconvert the column with to_date. This is very inelegant.
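For completeness, that round trip looks roughly like this (a minimal sketch; it assumes the column was parsed as above, so casting it to String yields yyyy-MM-dd values):
# Cast Date -> String, fill the nulls as strings, then parse back to Date
df = df.withColumn('arrival_date', col('arrival_date').cast('string'))
df = df.fillna('1900-01-01', subset=['arrival_date'])
df = df.withColumn('arrival_date', to_date(col('arrival_date'), 'yyyy-MM-dd'))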
As expected, the following code line doesn't work and I get an error -
df = df.fillna(to_date(lit('1900-01-01'), 'yyyy-MM-dd'), subset=['arrival_date'])
The documentation says the value must be of one of the following types: Int, Long, Float, Double, String, Boolean.
Another way is by using withColumn() and when() -
df = df.withColumn('arrival_date', when(col('arrival_date').isNull(), to_date(lit('01.01.1900'), 'dd.MM.yyyy')).otherwise(col('arrival_date')))
Is there a way where I could directly assign a date of my choice to a date-formatted column by using some function? Does anyone have a better suggestion?
The second way should be the way to do it, but you don't have to use to_date to convert between string and date - just use datetime.date(1900, 1, 1).
import datetime as dt
from pyspark.sql.functions import col, when

# when() turns the plain Python date into a date literal, so no to_date is needed
df = df.withColumn('arrival_date', when(col('arrival_date').isNull(), dt.date(1900, 1, 1)).otherwise(col('arrival_date')))
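On the sample frame from the question, the filled result should then look like this:
df.show()
+------------+
|arrival_date|
+------------+
|  2016-05-22|
|  2010-07-13|
|  2012-09-15|
|  1900-01-01|
+------------+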