 

Problem reading partitioned Parquet files created by Snowflake with pandas or Arrow

ArrowInvalid: Unable to merge: Field X has incompatible types: string vs dictionary<values=string, indices=int32, ordered=0>

ArrowInvalid: Unable to merge: Field X has incompatible types: decimal vs int32

I am trying to write the result of a Snowflake query to disk and then query that data using Arrow and DuckDB. I created a partitioned Parquet dataset with the query below, following this:

COPY INTO 's3://path/to/folder/'
FROM (
    SELECT transaction.TRANSACTION_ID, OUTPUT_SCORE, MODEL_NAME, ACCOUNT_ID, to_char(TRANSACTION_DATE,'YYYY-MM') as SCORE_MTH
    FROM transaction
    )
partition by('SCORE_MTH=' || score_mth || '/ACCOUNT_ID=' || ACCOUNT_ID)
file_format = (type=parquet)
header=true

When I try to read the parquet files I get the following error:

df = pd.read_parquet('path/to/parquet/') # same result using pq.ParquetDataset or pq.read_table as they all use the same function under the hood

ArrowInvalid: Unable to merge: Field SCORE_MTH has incompatible types: string vs dictionary<values=string, indices=int32, ordered=0>

Moreover, after some Google searching I found this page. Following the instructions there, df = pd.read_parquet('path/to/parquet/', use_legacy_dataset=True) raises:

ValueError: Schema in partition[SCORE_MTH=0, ACCOUNT_ID=0] /path/to/parquet was different. 
TRANSACTION_ID: string not null
OUTPUT_SCORE: double
MODEL_NAME: string
ACCOUNT_ID: int32
SCORE_MTH: string

vs

TRANSACTION_ID: string not null
OUTPUT_SCORE: double
MODEL_NAME: string

Also, depending on the column's data type, you may get errors such as:

ArrowInvalid: Unable to merge: Field X has incompatible types: IntegerType vs DoubleType

or

ArrowInvalid: Unable to merge: Field X has incompatible types: decimal vs int32

This is a known issue. It seems to happen because the partition columns end up both inside the data files (due to header=true) and in the directory names, so pyarrow sees two, possibly conflicting, types for the same field: the type stored in the file and the type it infers from the path.

Any idea how I can read this parquet file?

Ehsan Fathi asked Oct 20 '25 01:10

2 Answers

I was just dealing with the same issue, and it worked for me once I provided a pyarrow schema to the function:

import pandas as pd
import pyarrow as pa

schema = pa.schema([('SCORE_MTH', pa.string()), ('ACCOUNT_ID', pa.int32())])
pd.read_parquet('s3://path/to/folder/', schema=schema)  # also works with filters
MasterApprentice answered Oct 22 '25 16:10


My workaround was to use fastparquet instead of pyarrow to read it. Just pip install fastparquet and then:

df = pd.read_parquet('path/to/parquet/', engine="fastparquet")
Daniel R Carletti answered Oct 22 '25 14:10