I have a database table on Redshift that I want to UNLOAD to S3 each month through AWS Data Pipeline. I have this code that works, but ideally I'd like to add the current month to the filename too:
UNLOAD ('
select *
from reportingsandbox.tmp_test
')
TO 's3://reporting-team-bucket/importfiles/test_123.csv' CREDENTIALS 'aws_access_key_id=123456678;aws_secret_access_key=abcdefg'
ALLOWOVERWRITE
delimiter ','
PARALLEL OFF;
I've tried the following to add in the month, but it hasn't worked. Does anyone know if it is possible?
Thanks
's3://reporting-team-bucket/importfiles/test_123{month(myDateTime)}.csv'
As of cluster version 1.0.3945, Redshift supports unloading data to S3 with a header row in each file, e.g.:

UNLOAD ('select column1, column2 from mytable')
TO 's3://bucket/prefix/'
IAM_ROLE '<role arn>'
HEADER;

Note: you can't use the HEADER option in conjunction with FIXEDWIDTH.
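Applied to the table from the question, that would look something like the sketch below (the IAM role ARN is a placeholder you'd swap for your own; HEADER combines fine with DELIMITER, just not with FIXEDWIDTH):

UNLOAD ('select * from reportingsandbox.tmp_test')
TO 's3://reporting-team-bucket/importfiles/test_123_'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
HEADER
DELIMITER ','
PARALLEL OFF;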
Unload VENUE to a CSV file: the following example unloads the VENUE table and writes the data in CSV format to s3://mybucket/unload/.

unload ('select * from venue')
to 's3://mybucket/unload/'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
CSV;
I worked it out in AWS Data Pipeline!
's3://reporting-team-bucket/importfiles/test_123-#{format(@scheduledStartTime,'YYYY-MM-dd-HH')}.csv'
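Data Pipeline evaluates the #{...} expression before the SQL ever reaches Redshift, so the cluster just sees a literal path. As a sketch, assuming a run whose @scheduledStartTime is 2016-03-01 00:00, the statement from the question would resolve to:

UNLOAD ('
select *
from reportingsandbox.tmp_test
')
TO 's3://reporting-team-bucket/importfiles/test_123-2016-03-01-00.csv'
CREDENTIALS 'aws_access_key_id=123456678;aws_secret_access_key=abcdefg'
ALLOWOVERWRITE
delimiter ','
PARALLEL OFF;

If you only want the month in the filename, #{format(@scheduledStartTime,'YYYY-MM')} should work the same way.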
Thanks