I want to migrate a table from Amazon Redshift to MySQL, but using "unload" generates multiple data files which are hard to import into MySQL directly.
Is there any approach to unload the table to a single CSV file so that I can import it into MySQL directly?
Unload the VENUE table to a JSON file. The following example unloads the VENUE table and writes the data in JSON format to s3://mybucket/unload/:

unload ('select * from venue')
to 's3://mybucket/unload/'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
JSON;
UNLOAD automatically creates encrypted files using Amazon S3 server-side encryption (SSE), including the manifest file if MANIFEST is used. The COPY command automatically reads server-side encrypted files during the load operation.
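For loading those files back into Redshift, here is a minimal COPY sketch (assuming the bucket, role, and VENUE table from the example above already exist; COPY decrypts SSE-S3 files automatically):

-- Loads every file under the prefix; SSE-S3 decryption happens automatically.
-- json 'auto' matches the JSON-format unload shown above.
copy venue
from 's3://mybucket/unload/'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
json 'auto';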
Yes, because that's the naming convention UNLOAD uses to avoid duplicate file names, which I believe cannot be avoided from our end.
To unload to a single file, use PARALLEL OFF:
unload ('select * from venue')
to 's3://mybucket/tickit/unload/venue_'
credentials 'aws_access_key_id=<access-key-id>;aws_secret_access_key=<secret-access-key>'
parallel off;
I also recommend using GZIP to make that file even smaller for download:
unload ('select * from venue')
to 's3://mybucket/tickit/unload/venue_'
credentials 'aws_access_key_id=<access-key-id>;aws_secret_access_key=<secret-access-key>'
parallel off
gzip;
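On the MySQL side, a hedged sketch of the import (file and table names are illustrative): MySQL's LOAD DATA cannot read gzip directly, so download and decompress the file first (e.g. with gunzip). With no DELIMITER option, UNLOAD writes pipe-delimited text:

-- Assumes the unloaded file was downloaded and decompressed to venue_000.
-- Requires local_infile to be enabled on the MySQL client and server.
LOAD DATA LOCAL INFILE 'venue_000'
INTO TABLE venue
FIELDS TERMINATED BY '|'
LINES TERMINATED BY '\n';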
This is an old question at this point, but I feel like all the existing answers are slightly misleading. If your question is, "Can I absolutely 100% guarantee that Redshift will ALWAYS unload to a SINGLE file in S3?", the answer is simply NO.
That being said, for most cases you can generally limit your query in such a way that you'll end up with a single file. Per the documentation (https://docs.aws.amazon.com/redshift/latest/dg/r_UNLOAD.html), the main factor in limiting the number of files you generate is the actual raw size in bytes of your export (NOT the number of rows). The limit on the size of an output file generated by the Redshift UNLOAD command is 6.2GB.
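As a rough, hedged sanity check before unloading (the table name is illustrative), Redshift's svv_table_info system view can suggest whether a single file under 6.2GB is even plausible:

-- "size" is the on-disk footprint in 1MB blocks, and Redshift stores data
-- compressed, so this is only a loose proxy for the raw export size.
select "table", size as size_mb, tbl_rows
from svv_table_info
where "table" = 'venue';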
So if you want to try to guarantee that you get a single output file from UNLOAD, here's what you should try:
1. Set PARALLEL OFF. PARALLEL is "ON" by default and will generally write to multiple files unless you have a tiny cluster (the number of output files with "PARALLEL ON" set is proportional to the number of slices in your cluster). PARALLEL OFF will write files serially to S3 instead of in parallel and will only spill over to using multiple files if you exceed the size limit.
2. Restrict your query with a WHERE clause or use a LIMIT clause to keep the number of records down. Unfortunately, neither of these techniques is perfect since rows can be of variable size. It's also not clear to me whether the GZIP option affects the output file size spillover limit (i.e., whether 6.2GB is the pre-GZIP or the post-GZIP size limit).

For me, the UNLOAD command that ended up generating a single CSV file in most cases was:
UNLOAD ('SELECT <fields> FROM <table> WHERE <restrict_query>')
TO 's3://<bucket_name>/<filename_prefix>'
CREDENTIALS 'aws_access_key_id=<access_key>;aws_secret_access_key=<secret_key>'
DELIMITER AS ','
ADDQUOTES
NULL AS ''
PARALLEL OFF;
The other nice side effect of PARALLEL OFF is that it will respect your ORDER BY clause if you have one and generate the files in an order that keeps all the records ordered, even across multiple output files.
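To complete the migration, here is a hedged sketch of the matching MySQL import for a file produced by the UNLOAD above (file and table names are placeholders; the FIELDS clause mirrors DELIMITER AS ',' plus ADDQUOTES, and PARALLEL OFF typically names the first file <filename_prefix>000):

-- ENCLOSED BY '"' strips the quotes that ADDQUOTES wrapped around each field.
LOAD DATA LOCAL INFILE '<filename_prefix>000'
INTO TABLE <table>
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n';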
Addendum: There seems to be some folkloric knowledge around using LIMIT 2147483647 to force the leader node to do all the processing and generate a single output file, but this doesn't seem to be actually documented anywhere in the Redshift documentation and, as such, relying on it seems like a bad idea since it could change at any time.