Writing a pandas DataFrame with timedeltas to parquet

I can't seem to write a pandas DataFrame containing timedeltas to a parquet file through pyarrow.

The pyarrow documentation specifies that it can handle numpy timedelta64 with ms precision. However, when I build a DataFrame from numpy's timedelta64[ms], the dtype of that column is timedelta64[ns].

Pyarrow then throws an error because of this.

Is this a bug in pandas or pyarrow? Is there an easy fix for this?

The following code:

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'timedelta': np.arange(start=0, stop=1000, step=10,
                           dtype='timedelta64[ms]')
})

print(df.timedelta.dtypes)

df.to_parquet('test.parquet', engine='pyarrow', compression='gzip')

prints timedelta64[ns] and raises the following error:

---------------------------------------------------------------------------
ArrowNotImplementedError                  Traceback (most recent call last)
<ipython-input-41-7df28b306c1e> in <module>()
      3                                    step=10,
      4                                    dtype='timedelta64[ms]')
----> 5             }).to_parquet('test.parquet', engine='pyarrow', compression='gzip')

~/miniconda3/envs/myenv/lib/python3.6/site-packages/pandas/core/frame.py in to_parquet(self, fname, engine, compression, **kwargs)
   1940         from pandas.io.parquet import to_parquet
   1941         to_parquet(self, fname, engine,
-> 1942                    compression=compression, **kwargs)
   1943 
   1944     @Substitution(header='Write out the column names. If a list of strings '

~/miniconda3/envs/myenv/lib/python3.6/site-packages/pandas/io/parquet.py in to_parquet(df, path, engine, compression, **kwargs)
    255     """
    256     impl = get_engine(engine)
--> 257     return impl.write(df, path, compression=compression, **kwargs)
    258 
    259 

~/miniconda3/envs/myenv/lib/python3.6/site-packages/pandas/io/parquet.py in write(self, df, path, compression, coerce_timestamps, **kwargs)
    116 
    117         else:
--> 118             table = self.api.Table.from_pandas(df)
    119             self.api.parquet.write_table(
    120                 table, path, compression=compression,

table.pxi in pyarrow.lib.Table.from_pandas()

~/miniconda3/envs/myenv/lib/python3.6/site-packages/pyarrow/pandas_compat.py in dataframe_to_arrays(df, schema, preserve_index, nthreads)
    369         arrays = [convert_column(c, t)
    370                   for c, t in zip(columns_to_convert,
--> 371                                   convert_types)]
    372     else:
    373         from concurrent import futures

~/miniconda3/envs/myenv/lib/python3.6/site-packages/pyarrow/pandas_compat.py in <listcomp>(.0)
    368     if nthreads == 1:
    369         arrays = [convert_column(c, t)
--> 370                   for c, t in zip(columns_to_convert,
    371                                   convert_types)]
    372     else:

~/miniconda3/envs/myenv/lib/python3.6/site-packages/pyarrow/pandas_compat.py in convert_column(col, ty)
    364 
    365     def convert_column(col, ty):
--> 366         return pa.array(col, from_pandas=True, type=ty)
    367 
    368     if nthreads == 1:

array.pxi in pyarrow.lib.array()

array.pxi in pyarrow.lib._ndarray_to_array()

error.pxi in pyarrow.lib.check_status()

ArrowNotImplementedError: Unsupported numpy type 22
Swier asked Jul 13 '18 19:07


1 Answer

fastparquet supports the timedelta type.

First, install fastparquet, e.g.:

pip install fastparquet

Then you can use this:

df.to_parquet('test.parquet.gzip', engine='fastparquet', compression='gzip')
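
To check the round trip, you can read the file back; a minimal sketch reusing the file written above:

import pandas as pd

# fastparquet restores the column as timedelta64[ns],
# pandas' native timedelta resolution.
df2 = pd.read_parquet('test.parquet.gzip', engine='fastparquet')
print(df2.timedelta.dtypes)

If you would rather stay on pyarrow, one workaround (a sketch, not part of the original answer; the file name test_int.parquet is made up) is to store the values as plain int64 milliseconds and convert back after reading:

# Workaround sketch: serialize the timedeltas as int64 milliseconds,
# which any parquet engine can store, and restore them on read.
df['timedelta'] = df['timedelta'] // pd.Timedelta(milliseconds=1)
df.to_parquet('test_int.parquet', engine='pyarrow', compression='gzip')

df2 = pd.read_parquet('test_int.parquet', engine='pyarrow')
df2['timedelta'] = pd.to_timedelta(df2['timedelta'], unit='ms')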
Arjaan Buijk answered Sep 20 '22 14:09