I have uploaded a MySQL CSV file / MySQL zip file of all tables to an Amazon S3 bucket. Now I want to link Amazon Athena to the file in the S3 bucket. But when I write the schema for different tables, the select query for each table shows the same result. I have searched a lot but am not able to understand the exact/correct way to do this.
I want to create/update different table schemas in Athena from one CSV / SQL zip file in an S3 bucket.
To see a new table column in the Athena Query Editor after you run ALTER TABLE ADD COLUMNS, manually refresh the table list in the editor, and then expand the table again.
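For example, a minimal ALTER TABLE statement might look like this (the table and column names are hypothetical):

```sql
-- Add a new string column to an existing Athena table
ALTER TABLE test1 ADD COLUMNS (t3 string);
```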
You can now run SQL queries on your file-based data sources from S3 in Athena. This comes in very handy when you have to analyse huge data sets stored as multiple files in S3. Query performance depends on how your data is distributed across files and on which file format you use.
Use a CREATE TABLE statement to create an Athena table based on the data. Reference the OpenCSVSerDe class after ROW FORMAT SERDE and specify the character separator, quote character, and escape character in WITH SERDEPROPERTIES, as in the following example.
Amazon Athena will look in a defined directory for data. All data files within that directory will be treated as containing data for a given table.
You use a CREATE TABLE command to both define the schema and point Athena at the directory, e.g.:
CREATE EXTERNAL TABLE test1 (
    f1 string,
    s2 string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES ("separatorChar" = ",", "quoteChar" = "\"", "escapeChar" = "\\")
LOCATION 's3://my-bucket/data-directory/';
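Once the table is defined, you query it like any other table. A quick sketch, using the column names from the example above:

```sql
-- Athena scans every file under the table's LOCATION directory
SELECT f1, s2
FROM test1
LIMIT 10;
```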
You will need to run a CREATE EXTERNAL TABLE command for each table, and the data for each table should be in a separate directory. The CSV files can be gzipped (which makes them faster and cheaper to query). Note that Athena does not read ZIP archives, so unzip any zipped files and, if desired, re-compress them with gzip before uploading.
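To illustrate the one-directory-per-table layout, here is a sketch for a hypothetical second table stored in its own directory (all bucket, directory, and column names are assumptions):

```sql
-- s3://my-bucket/data-directory/   -> files for table test1
-- s3://my-bucket/orders-directory/ -> files for table orders
CREATE EXTERNAL TABLE orders (
    order_id string,
    amount   string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES ("separatorChar" = ",", "quoteChar" = "\"", "escapeChar" = "\\")
LOCATION 's3://my-bucket/orders-directory/';
```

Because each LOCATION points to a different directory, the two tables return different data; pointing both tables at the same directory is what causes every table to show the same results.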
As an alternative to writing these table definitions yourself, you can create a crawler in AWS Glue. Point the crawler at the data directory and supply a name, and the crawler will examine the data files and create a table definition that matches the files.
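A crawler can also be created from the AWS CLI. A minimal sketch, assuming an existing IAM role and Glue database (the crawler, role, database, and bucket names below are all hypothetical):

```
# Create a crawler that scans the S3 directory and infers the table schema
aws glue create-crawler \
    --name my-csv-crawler \
    --role AWSGlueServiceRole-demo \
    --database-name my_database \
    --targets '{"S3Targets": [{"Path": "s3://my-bucket/data-directory/"}]}'

# Run it; when it finishes, the inferred table appears in the Glue Data Catalog
# and is immediately queryable from Athena
aws glue start-crawler --name my-csv-crawler
```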