I'm getting an error when trying to load a table in Redshift from a CSV file in S3. The error is:
error: S3ServiceException:All access to this object has been disabled,Status 403,Error AllAccessDisabled,Rid FBC64D9377CF9763,ExtRid o1vSFuV8SMtYDjkgKCYZ6VhoHlpzLoBVyXaio6hdSPZ5JRlug+c9XNTchMPzNziD,CanRetry 1
code: 8001
context: Listing bucket=amazonaws.com prefix=els-usage/simple.txt
query: 1122
location: s3_utility.cpp:540
process: padbmaster [pid=6649]
The copy statement used is:
copy public.simple from 's3://amazonaws.com/mypath/simple.txt' CREDENTIALS 'aws_access_key_id=xxxxxxx;aws_secret_access_key=xxxxxx' delimiter ',';
As this is my first attempt at using Redshift and S3, I've kept the simple.txt file (and its destination table) to a single-field record. I've run the COPY in both Aginity Workbench and SQL Workbench with the same results.
I've clicked the link in the S3 file's Properties tab and it downloads the simple.txt file, so the input file appears to be accessible. Just to be sure, I've also given it public access.
Unfortunately, I don't see any additional information in the Redshift Loads tab that would help debug this.
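When the Loads tab shows nothing, querying the system tables directly can sometimes surface more detail. A minimal check against the standard stl_load_errors table (note that an S3-level 403 like this one may not land there, since it records per-row load failures rather than service exceptions):

select query, filename, line_number, err_reason
from stl_load_errors
order by starttime desc
limit 10;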
Can anyone see anything I'm doing incorrectly?
Removing amazonaws.com from the URL fixed the problem. The s3:// scheme already implies the S3 endpoint, so the path must begin with the bucket name; with the hostname included, Redshift treated amazonaws.com as the bucket, which is exactly what the error context shows (bucket=amazonaws.com). The resulting COPY statement is now:
copy public.simple from 's3://mypath/simple.txt' CREDENTIALS 'aws_access_key_id=xxxxxxx;aws_secret_access_key=xxxxxx' delimiter ',';
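As a side note, the same load can be sketched with an IAM role instead of inline keys, which avoids embedding credentials in the statement. The role ARN below is a placeholder, not from the original post:

copy public.simple from 's3://mypath/simple.txt' iam_role 'arn:aws:iam::123456789012:role/MyRedshiftRole' delimiter ',';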