 

How to properly insert pandas NaT datetime values to my postgresql table

I am trying to bulk insert a dataframe into my Postgres DB. Some columns in my dataframe are date types with NaT as the null value, which is not supported by PostgreSQL. I've tried to replace NaT (using pandas) with other NULL-type identifiers, but that did not work during my inserts.

I used df = df.where(pd.notnull(df), 'None') to replace all the NaTs. Example of the errors that keep coming up due to datatype issues:

Error: invalid input syntax for type date: "None"
LINE 1: ...0,1.68757,'2022-11-30T00:29:59.679000'::timestamp,'None','20...

My driver and insert statement to the PostgreSQL DB:

import psycopg2
import psycopg2.extras as extras


def execute_values(conn, df, table):
    """
    Use psycopg2.extras.execute_values() to insert the dataframe.
    """
    # Create a list of tuples from the dataframe values
    tuples = [tuple(x) for x in df.to_numpy()]
    # Comma-separated dataframe columns
    cols = ','.join(list(df.columns))
    # SQL query to execute
    query = "INSERT INTO %s(%s) VALUES %%s" % (table, cols)
    cursor = conn.cursor()
    try:
        extras.execute_values(cursor, query, tuples)
        conn.commit()
    except (Exception, psycopg2.DatabaseError) as error:
        print("Error: %s" % error)
        conn.rollback()
        cursor.close()
        return 1
    print("execute_values() done")
    cursor.close()

Info about my dataframe: in this case the culprits are the datetime columns only.
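For reference, the column dtypes and null counts can be checked with, e.g.:

print(df.dtypes)        # shows which columns are datetime64[ns]
print(df.isna().sum())  # counts the missing (NaT/NaN) values per column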


How is this commonly solved?

asked Oct 19 '20 by andres

2 Answers

You're re-inventing the wheel. Just use pandas' to_sql method and it will

  • match up the column names, and
  • take care of the NaT values.

Use method="multi" to give you the same effect as psycopg2's execute_values.

from pprint import pprint

import pandas as pd
import sqlalchemy as sa

table_name = "so64435497"
engine = sa.create_engine("postgresql://scott:tiger@localhost/test")
with engine.begin() as conn:
    # set up test environment
    conn.exec_driver_sql(f"DROP TABLE IF EXISTS {table_name}")
    conn.exec_driver_sql(
        f"CREATE TABLE {table_name} ("
        "id integer PRIMARY KEY GENERATED ALWAYS AS IDENTITY, "
        "txt varchar(50), "
        "txt2 varchar(50), "
        "dt timestamp)"
    )
    df = pd.read_csv(r"C:\Users\Gord\Desktop\so64435497.csv")
    df["dt"] = pd.to_datetime(df["dt"])
    print(df)
    """console output:
                       dt  txt2  txt
    0 2020-01-01 00:00:00  foo2  foo
    1                 NaT  bar2  bar
    2 2020-01-02 03:04:05  baz2  baz
    """

    # run test
    df.to_sql(
        table_name, conn, index=False, if_exists="append", method="multi"
    )
    pprint(
        conn.exec_driver_sql(
            f"SELECT id, txt, txt2, dt FROM {table_name}"
        ).all()
    )
    """console output:
    [(1, 'foo', 'foo2', datetime.datetime(2020, 1, 1, 0, 0)),
     (2, 'bar', 'bar2', None),
     (3, 'baz', 'baz2', datetime.datetime(2020, 1, 2, 3, 4, 5))]
    """
answered Oct 07 '22 by Gord Thompson

Regarding your original update statement:
df = df.where(pd.notnull(df), 'None')

What's happening here is that you are replacing the values with the STRING 'None' and not the special Python object None. Then, in the following insert statement, it tries to insert the string 'None' into a timestamp field and throws an error.

What's funny is that the version of this you'd expect to work:
df = df.where(pd.notnull(df), None)
does not actually seem to work as expected for NaT values for reasons I don't fully understand. (See example below)

But what DOES seem to work is this statement (assuming you have numpy imported as np):
df = df.replace({np.NaN: None})
So if you do THAT, then the NaN and NaT values all convert to Python None and then psycopg2 (or probably any other db connector) will correctly treat those values as SQL Nulls on inserts.

Here's some example code to illustrate:

import datetime as dt
import pandas as pd
import numpy as np
data = [
    ['one', 1.0, pd.NaT],
    ['two', np.NaN, dt.datetime(2019, 2, 2)],
    [None, 3.0, dt.datetime(2019, 3, 3)]
    ]
df = pd.DataFrame(data, columns=["Name", "Value", "Event_date"])

That gives us our basic dataframe:

>>> df
   Name  Value Event_date
0   one    1.0        NaT
1   two    NaN 2019-02-02
2  None    3.0 2019-03-03

As mentioned above, this update leaves the NaT in it for some reason:

>>> df.where(pd.notnull(df), None)
   Name Value Event_date
0   one   1.0        NaT
1   two  None 2019-02-02
2  None   3.0 2019-03-03

But this version gets both the NaNs and NaTs and leaves the expected Nones:

>>> df.replace({np.NaN: None})
   Name Value           Event_date
0   one   1.0                 None
1   two  None  2019-02-02 00:00:00
2  None   3.0  2019-03-03 00:00:00
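Tying this back to the execute_values() helper from the question, a minimal sketch of the same fix (the dataframe, connection string, and table name below are placeholders, not from the original post):

import datetime as dt

import numpy as np
import pandas as pd
import psycopg2
import psycopg2.extras as extras

# Hypothetical data, connection string, and table name -- adjust to your setup
df = pd.DataFrame(
    {"name": ["one", "two"], "event_date": [pd.NaT, dt.datetime(2019, 2, 2)]}
)

# Convert NaN/NaT to Python None so psycopg2 sends SQL NULL
df = df.replace({np.nan: None})

conn = psycopg2.connect("dbname=test user=scott password=tiger host=localhost")
tuples = [tuple(x) for x in df.to_numpy()]
cols = ",".join(df.columns)
query = "INSERT INTO my_table(%s) VALUES %%s" % cols

with conn, conn.cursor() as cursor:
    # execute_values expands the single %s placeholder into a multi-row VALUES list
    extras.execute_values(cursor, query, tuples)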

The accepted answer is probably the "better" way if you can use sqlalchemy for what you want to do, but if you gotta do it the hard way, this worked for me.

H/T to the discussion in this pandas issue for much of the details of this answer.

answered Oct 07 '22 by sql_knievel