Pyspark from_unixtime (unix_timestamp) does not convert to timestamp

Tags:

date

pyspark

I am using PySpark with Python 2.7. I have a date column as a string (with milliseconds) and would like to convert it to a timestamp.

This is what I have tried so far

df = df.withColumn('end_time', from_unixtime(unix_timestamp(df.end_time, '%Y-%M-%d %H:%m:%S.%f')) )

But printSchema() still shows end_time: string (nullable = true),

whereas I expected timestamp as the type of the column.

asked Jan 24 '19 by qqplot


People also ask

How do I convert a column to timestamp in PySpark?

Use the to_timestamp() function to convert a String to a Timestamp (TimestampType) in PySpark. By default it parses strings in the format yyyy-MM-dd HH:mm:ss.

How do I convert Unix epoch to timestamp in PySpark?

In PySpark SQL, unix_timestamp() is used to get the current time and to convert a time string in the format yyyy-MM-dd HH:mm:ss to a Unix timestamp (in seconds), and from_unixtime() is used to convert a number of seconds since the Unix epoch (1970-01-01 00:00:00 UTC) to a string representation of the timestamp.

How do I change the timestamp on PySpark?

The to_date() function in Apache PySpark is commonly used to convert a Timestamp to a date, which is achieved by truncating the time part of the Timestamp column. By default, to_date() expects its input in the format yyyy-MM-dd HH:mm:ss.SSS.

How do you remove T and Z from timestamp in PySpark?

We can use either to_timestamp or from_unixtime(unix_timestamp()) for this case. Try a pattern like "yyyy-MM-dd'T'hh:mm'Z'", enclosing the literal T and Z in single quotes.


2 Answers

Try using from_utc_timestamp:

from pyspark.sql.functions import from_utc_timestamp

df = df.withColumn('end_time', from_utc_timestamp(df.end_time, 'PST')) 

You need to specify a time zone for the function; in this case I chose PST.

If this does not work, please share a few sample rows of df.end_time.

answered Sep 27 '22 by Tanjin


Create a sample DataFrame with the timestamp formatted as a string:

import pyspark.sql.functions as F
df = spark.createDataFrame([('22-Jul-2018 04:21:18.792 UTC', ),('23-Jul-2018 04:21:25.888 UTC',)], ['TIME'])
df.show(2,False)
df.printSchema()

Output:

+----------------------------+
|TIME                        |
+----------------------------+
|22-Jul-2018 04:21:18.792 UTC|
|23-Jul-2018 04:21:25.888 UTC|
+----------------------------+
root
|-- TIME: string (nullable = true)

Convert the string time format (including milliseconds) to a Unix timestamp (double). Since unix_timestamp() drops milliseconds, we add them back with a simple trick: extract the milliseconds from the string with substring (start_position=-7, length_of_substring=3), cast that substring to float, and add it to the unix_timestamp value separately.

df1 = df.withColumn(
    "unix_timestamp",
    F.unix_timestamp(df.TIME, 'dd-MMM-yyyy HH:mm:ss.SSS z')
    + F.substring(df.TIME, -7, 3).cast('float') / 1000
)

Then convert the unix_timestamp (double) to the timestamp datatype in Spark:

df2 = df1.withColumn("TimestampType",F.to_timestamp(df1["unix_timestamp"]))
df2.show(n=2,truncate=False)

This gives the following output:

+----------------------------+----------------+-----------------------+
|TIME                        |unix_timestamp  |TimestampType          |
+----------------------------+----------------+-----------------------+
|22-Jul-2018 04:21:18.792 UTC|1.532233278792E9|2018-07-22 04:21:18.792|
|23-Jul-2018 04:21:25.888 UTC|1.532319685888E9|2018-07-23 04:21:25.888|
+----------------------------+----------------+-----------------------+

Checking the Schema:

df2.printSchema()


root
 |-- TIME: string (nullable = true)
 |-- unix_timestamp: double (nullable = true)
 |-- TimestampType: timestamp (nullable = true)
answered Sep 27 '22 by Sangram Gaikwad