Pandas to spark data frame converts datetime datatype to bigint

I have a pandas data frame in a PySpark application. I want to create/load this data frame into a Hive table.

pd_df = pandas data frame

id                    int64
TEST_TIME             datetime64[ns]
status_time           object
GROUP                 object
test_type             object
dtype: object

    id TEST_TIME            status_time                 GROUP       test_type

0   1 2017-03-12 02:19:51                                       Driver started
1   2 2017-03-12 02:19:53  2017-03-11 18:13:43.577   ALARM      AL_PT2334_L
2   3 2017-03-12 02:19:53  2017-03-11 18:13:43.577   ALARM      AL_Turb_CNet_Ch_A_Fault
3   4 2017-03-12 02:19:53  2017-03-11 18:13:43.577   ALARM      AL_Encl_Fire_Sys_Trouble
4   5 2017-03-12 02:19:54  2017-03-11 18:13:44.611  STATUS      ST_Engine_Turning_Mode

I then converted the pandas data frame to a Spark data frame as shown below.

spark_df = sqlContext.createDataFrame(pd_df)


+---+-------------------+--------------------+------+--------------------+
| id|          TEST_TIME|         status_time| GROUP|           test_type|
+---+-------------------+--------------------+------+--------------------+
|  1|1489285191000000000|                    |      |      Driver started|
|  2|1489285193000000000|2017-03-11 18:13:...| ALARM|         AL_PT2334_L|
|  3|1489285193000000000|2017-03-11 18:13:...| ALARM|AL_Turb_CNet_Ch_A...|
|  4|1489285193000000000|2017-03-11 18:13:...| ALARM|AL_Encl_Fire_Sys_...|
|  5|1489285194000000000|2017-03-11 18:13:...|STATUS|ST_Engine_Turning...|
+---+-------------------+--------------------+------+--------------------+

DataFrame[id: bigint, TEST_TIME: bigint, status_time: string, GROUP: string, test_type: string]

I want the TEST_TIME column to be a timestamp column, but I am getting bigint instead.

I want the timestamp in spark_df to look exactly as it does in pd_df.
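The bigint values appear to be nanoseconds since the Unix epoch, which is how pandas stores datetime64[ns] internally. This can be sanity-checked with plain Python (the value below is the TEST_TIME for id 1 in spark_df):

```python
import datetime

ns = 1489285191000000000  # TEST_TIME value for id 1 in spark_df
ts = datetime.datetime.fromtimestamp(ns // 10**9, tz=datetime.timezone.utc)
print(ts)  # 2017-03-12 02:19:51+00:00
```

So Spark is not mangling the values; it is taking the raw nanosecond integers without recognizing them as timestamps.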

I tried the following while converting the pandas data frame to a Spark data frame:

spark_df = sqlContext.createDataFrame(pd_df).withColumn("TEST_TIME", (F.unix_timestamp("TEST_TIME") + 28800).cast('timestamp'))

I got the error below:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/cloudera/parcels/CDH-5.8.0-1.cdh5.8.0.p0.42/lib/spark/python/pyspark/sql/dataframe.py", line 1314, in withColumn
    return DataFrame(self._jdf.withColumn(colName, col._jc), self.sql_ctx)
  File "/opt/cloudera/parcels/CDH-5.8.0-1.cdh5.8.0.p0.42/lib/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
  File "/opt/cloudera/parcels/CDH-5.8.0-1.cdh5.8.0.p0.42/lib/spark/python/pyspark/sql/utils.py", line 51, in deco
    raise AnalysisException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.AnalysisException: u"cannot resolve 'unixtimestamp(TEST_TIME,yyyy-MM-dd HH:mm:ss)' due to data type mismatch: argument 1 requires (string or date or timestamp) type, however, 'TEST_TIME' is of bigint type.;"

How can I achieve what I want?

User12345 asked Dec 24 '17 06:12

1 Answer

Convert your pandas data frame column of type datetime64[ns] to Python datetime objects, like this:

pd_df['TEST_TIME'] = pandas.Series(pd_df['TEST_TIME'].dt.to_pydatetime(), dtype=object)

Then create the Spark data frame as you were doing. Spark's type inference maps Python datetime objects to TimestampType, so TEST_TIME comes through as a timestamp instead of bigint.
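A minimal, self-contained sketch of this fix (the sample data here is invented for illustration; only the pandas side is shown, with the Spark call indicated in a comment since it requires a running Spark context):

```python
import datetime
import pandas as pd

# Hypothetical sample mirroring the question's schema
pd_df = pd.DataFrame({
    "id": [1, 2],
    "TEST_TIME": pd.to_datetime(["2017-03-12 02:19:51",
                                 "2017-03-12 02:19:53"]),
})
print(pd_df.dtypes["TEST_TIME"])  # datetime64[ns]

# Convert the datetime64[ns] column to plain Python datetime objects,
# held in an object-dtype Series
pd_df["TEST_TIME"] = pd.Series(pd_df["TEST_TIME"].dt.to_pydatetime(),
                               dtype=object)

# Now Spark infers TimestampType for this column:
#   spark_df = sqlContext.createDataFrame(pd_df)
#   # DataFrame[id: bigint, TEST_TIME: timestamp]
```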

Lokesh Yadav answered Nov 23 '22 13:11