I'm trying to write a dataframe as a CSV file on S3 using the s3fs library and pandas. Despite what the documentation says, the gzip compression parameter doesn't seem to work with s3fs.
def DfTos3Csv(df, file):
    with fs.open(file, 'wb') as f:
        df.to_csv(f, compression='gzip', index=False)
This code saves the dataframe as a new object in S3, but as plain CSV, not gzip-compressed. On the other hand, the read functionality works fine with this compression parameter.
def s3CsvToDf(file):
    with fs.open(file) as f:
        df = pd.read_csv(f, compression='gzip')
    return df
Suggestions or alternatives for the write issue? Thank you in advance!
The compression parameter of to_csv() does not work when you write to an already-open stream: it is only applied when pandas opens the file from a path itself, so the handle you pass in receives plain, uncompressed text. You have to do the zipping and the uploading separately.
import gzip
import boto3
from io import BytesIO, TextIOWrapper

# Gzip the CSV into an in-memory buffer first.
buffer = BytesIO()
with gzip.GzipFile(mode='w', fileobj=buffer) as zipped_file:
    df.to_csv(TextIOWrapper(zipped_file, 'utf8'), index=False)

# Then upload the compressed bytes to S3 with boto3.
s3_resource = boto3.resource('s3')
s3_object = s3_resource.Object('bucket_name', 'key')
s3_object.put(Body=buffer.getvalue())
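If you would rather stay with s3fs, as in the question, the same idea applies: compress the CSV yourself while streaming it through the s3fs file handle. Here is a minimal sketch of that approach, assuming fs is an s3fs.S3FileSystem with working credentials; the df_to_s3_csv_gz helper name and the bucket/key are hypothetical.

import gzip

import pandas as pd
import s3fs

fs = s3fs.S3FileSystem()  # assumes AWS credentials are already configured

def df_to_s3_csv_gz(df: pd.DataFrame, path: str) -> None:
    # Open the S3 object for binary writing via s3fs, then gzip the CSV
    # bytes ourselves before they are sent to S3.
    with fs.open(path, 'wb') as raw:
        with gzip.GzipFile(fileobj=raw, mode='wb') as gz:
            gz.write(df.to_csv(index=False).encode('utf8'))

# Usage (hypothetical bucket/key):
# df_to_s3_csv_gz(df, 'your-bucket/your_key.csv.gz')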
pandas (v1.2.4) can write a CSV to S3 directly, with the compression functionality working properly; legacy pandas versions may have problems with compression. For example:

your_pandas_dataframe.to_csv('s3://your_bucket_name/your_s3_key.csv.gz', compression='gzip', index=False)
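For completeness, reading the file back can rely on pandas inferring gzip from the .csv.gz suffix, and s3fs options can be passed through storage_options (available since pandas 1.2). A small sketch, assuming the same bucket and key as above and that s3fs/fsspec are installed:

import pandas as pd

# Compression is inferred from the .gz extension, so it can be omitted.
df = pd.read_csv('s3://your_bucket_name/your_s3_key.csv.gz')

# s3fs options (credentials, endpoint, ...) can be passed explicitly:
df = pd.read_csv(
    's3://your_bucket_name/your_s3_key.csv.gz',
    storage_options={'anon': False},
)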