I am using PySpark. I have a column ('dt') in a dataframe ('canon_evt') that is a timestamp. I am trying to remove the seconds from a DateTime value. It is originally read in from parquet as a String. I then try to convert it to Timestamp via
canon_evt = canon_evt.withColumn('dt', to_date(canon_evt.dt))
canon_evt = canon_evt.withColumn('dt', canon_evt.dt.astype('Timestamp'))
Then I would like to remove the seconds. I tried 'trunc' and 'date_format', and even tried concatenating pieces together as below. I think it requires some sort of map and lambda combination, but I'm not certain whether Timestamp is an appropriate format, and whether it's possible to get rid of the seconds.
canon_evt = canon_evt.withColumn('dyt',year('dt') + '-' + month('dt') +
'-' + dayofmonth('dt') + ' ' + hour('dt') + ':' + minute('dt'))
[Row(dt=datetime.datetime(2015, 9, 16, 0, 0),dyt=None)]
You can use pyspark.sql.functions.second() to get the seconds from your timestamp column. Once you have the seconds part you can take that number, divide it by 30, round it, and multiply it by 30 to get the "new" second.
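A minimal sketch of that idea, assuming dt is already a timestamp column (rebuilding the value through unix_timestamp, and the name dt_rounded, are my own choices here, not part of the original suggestion):

from pyspark.sql.functions import col, round, second, unix_timestamp

# Sketch: replace the original seconds with the value rounded to a multiple of 30.
# Assumes 'dt' is a TimestampType column; 'dt_rounded' is an illustrative name.
dt_rounded = (
    unix_timestamp(col("dt"))             # timestamp -> seconds since epoch
    - second(col("dt"))                   # drop the original seconds
    + round(second(col("dt")) / 30) * 30  # add them back rounded to the nearest 30
).cast("timestamp")

canon_evt = canon_evt.withColumn("dt_rounded", dt_rounded)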
Spark SQL DataFrame functions also provide trunc() to truncate a Date, but only at Year and Month units; it returns a Date in Spark DateType format ("yyyy-MM-dd"). Note that Day is not supported by trunc(), which returns null when that unit is used.
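A quick illustration of that limitation, assuming dt is a date or timestamp column:

from pyspark.sql.functions import col, trunc

# trunc() only understands year/month-style units, so it cannot strip seconds.
canon_evt.select(
    trunc(col("dt"), "month").alias("month_start"),  # e.g. 2015-09-01
    trunc(col("dt"), "day").alias("day_trunc")       # null -- 'day' is not supported
).show()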
The to_date() function in Apache PySpark is commonly used to convert a Timestamp to a Date. It does so by dropping the Timestamp column's entire time part. to_date() accepts a Timestamp, or a String in Spark's default format of "yyyy-MM-dd HH:mm:ss".
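This is also why the to_date() step in the question ends up with midnight-only values; a small check (column and alias names assumed):

from pyspark.sql.functions import col, to_date

# to_date() keeps only the calendar date, so hour/minute/second are all lost,
# not just the seconds.
canon_evt.select(col("dt"), to_date(col("dt")).alias("dt_date")).show(5, False)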
Spark >= 2.3
You can use date_trunc:

from pyspark.sql.functions import col, date_trunc

df.withColumn("dt_truncated", date_trunc("minute", col("dt"))).show()
## +-------------------+-------------------+
## | dt| dt_truncated|
## +-------------------+-------------------+
## |1970-01-01 00:00:00|1970-01-01 00:00:00|
## |2015-09-16 05:39:46|2015-09-16 05:39:00|
## |2015-09-16 05:40:46|2015-09-16 05:40:00|
## |2016-03-05 02:00:10|2016-03-05 02:00:00|
## +-------------------+-------------------+
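date_trunc is also exposed as a Spark SQL function since 2.3, so the same truncation can be written as an expression if that fits the pipeline better (column names assumed):

df.selectExpr("dt", "date_trunc('minute', dt) AS dt_truncated").show()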
Spark < 2.3
Converting to Unix timestamps and basic arithmetic should do the trick:
from pyspark.sql import Row
from pyspark.sql.functions import col, unix_timestamp, round
df = sc.parallelize([
Row(dt='1970-01-01 00:00:00'),
Row(dt='2015-09-16 05:39:46'),
Row(dt='2015-09-16 05:40:46'),
Row(dt='2016-03-05 02:00:10'),
]).toDF()
## unix_timestamp converts string to Unix timestamp (bigint / long)
## in seconds. Divide by 60, round, multiply by 60 and cast
## should work just fine.
##
dt_truncated = ((round(unix_timestamp(col("dt")) / 60) * 60)
.cast("timestamp"))
df.withColumn("dt_truncated", dt_truncated).show(10, False)
## +-------------------+---------------------+
## |dt |dt_truncated |
## +-------------------+---------------------+
## |1970-01-01 00:00:00|1970-01-01 00:00:00.0|
## |2015-09-16 05:39:46|2015-09-16 05:40:00.0|
## |2015-09-16 05:40:46|2015-09-16 05:41:00.0|
## |2016-03-05 02:00:10|2016-03-05 02:00:00.0|
## +-------------------+---------------------+
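Note that round gives the nearest minute, so 05:39:46 becomes 05:40:00 above. If you strictly want to remove the seconds, i.e. always round down, floor can be swapped in; a sketch under the same setup:

from pyspark.sql.functions import col, floor, unix_timestamp

# floor instead of round: 05:39:46 -> 05:39:00 rather than 05:40:00.
dt_floored = (floor(unix_timestamp(col("dt")) / 60) * 60).cast("timestamp")
df.withColumn("dt_truncated", dt_floored).show(10, False)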
This question was asked a few years ago, but if anyone else comes across it, as of Spark v2.3 this has been added as a feature. Now it is as simple as the following (assuming canon_evt is a dataframe with a timestamp column dt that we want to remove the seconds from):
from pyspark.sql.functions import date_trunc
canon_evt = canon_evt.withColumn('dt', date_trunc('minute', canon_evt.dt))
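Since the column in the question starts out as a String read from parquet, it may be cleaner to convert it with to_timestamp and truncate in the same chain; a sketch, where the 'yyyy-MM-dd HH:mm:ss' format string is an assumption about the source data:

from pyspark.sql.functions import date_trunc, to_timestamp

# Convert the string column to a proper timestamp, then drop the seconds.
canon_evt = (canon_evt
             .withColumn('dt', to_timestamp('dt', 'yyyy-MM-dd HH:mm:ss'))
             .withColumn('dt', date_trunc('minute', 'dt')))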