Redshift COPY command for Parquet format with Snappy compression

I have datasets in HDFS in Parquet format with Snappy as the compression codec. As far as my research goes, Redshift currently accepts only plain text, JSON, and Avro formats, with gzip and lzo compression codecs.

As a workaround, I am converting the Parquet files to plain text and changing the Snappy codec to gzip using a Pig script.

Is there currently a way to load data directly from Parquet files into Redshift?

asked Mar 10 '16 by cloudninja

1 Answer

No, there is currently no way to load Parquet format data directly into Redshift.

EDIT: As of April 19, 2017 you can use Redshift Spectrum to query Parquet data on S3 directly, so you can now "load" from Parquet with INSERT INTO x SELECT * FROM parquet_data. See http://docs.aws.amazon.com/redshift/latest/dg/c-using-spectrum.html
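
For example, here is a minimal sketch of the Spectrum route. The schema name spectrum, the Glue database spectrumdb, the column list, the IAM role ARN, and the S3 path are all hypothetical placeholders; only the INSERT INTO x SELECT pattern comes from the answer above:

    -- Register an external schema backed by the AWS Glue data catalog
    CREATE EXTERNAL SCHEMA spectrum
    FROM DATA CATALOG
    DATABASE 'spectrumdb'
    IAM_ROLE 'arn:aws:iam::123456789012:role/mySpectrumRole'
    CREATE EXTERNAL DATABASE IF NOT EXISTS;

    -- Define an external table over the Parquet files on S3
    -- (columns here are placeholders; match them to your actual schema)
    CREATE EXTERNAL TABLE spectrum.parquet_data (
        id   BIGINT,
        name VARCHAR(64)
    )
    STORED AS PARQUET
    LOCATION 's3://my-bucket/parquet/';

    -- "Load" into a local Redshift table by selecting from the external table
    INSERT INTO x SELECT * FROM spectrum.parquet_data;

Spectrum reads the Snappy-compressed Parquet files in place, so no format or codec conversion step is needed.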

EDIT 2: As of May 17, 2018 (for clusters on version 1.0.2294 or later) you can load Parquet and ORC files into Redshift directly with COPY. See https://docs.aws.amazon.com/redshift/latest/dg/copy-usage_notes-copy-from-columnar.html
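
For the direct COPY route, a minimal sketch, again with a hypothetical target table x, S3 path, and IAM role:

    -- Load Parquet files straight into a Redshift table;
    -- Snappy compression inside the files is handled transparently
    COPY x
    FROM 's3://my-bucket/parquet/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/myRedshiftRole'
    FORMAT AS PARQUET;

Note that COPY maps Parquet columns to the target table by position, so the table's column order and types must match the file layout.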

answered Sep 22 '22 by Joe Harris