
Is there any way to find table creation date in redshift?

I am having trouble finding the table creation date in Amazon Redshift. I know svv_table_info gives all the info about a table except the creation date. Can anyone help, please?

Kamlesh Gallani asked Apr 25 '16 17:04


People also ask

How do I find the date a table was created?

Using the table's Properties: in the Object Explorer in SQL Server Management Studio, go to the database and expand it. Under the Tables folder, select the table name, right-click, and select Properties from the menu. You will see the created date of the table in the General section under Description.

How do I get the current date in Redshift SQL?

Query to get only the date: SELECT TRUNC(SYSDATE); This query returns only the current date, in the format YYYY-MM-DD.
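As a quick sketch, both of the following forms are commonly used; CURRENT_DATE is the ANSI-style equivalent of truncating SYSDATE:

```sql
-- TRUNC strips the time portion from SYSDATE's timestamp value
SELECT TRUNC(SYSDATE);

-- CURRENT_DATE returns the current date directly
SELECT CURRENT_DATE;
```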

How do I get the Redshift table schema?

To get the column data and schema of a particular table: select * from information_schema.columns where table_name = '<<table_name>>';


3 Answers

There is a proper way to get the table creation date and time in Redshift that is not based on the query log:

SELECT
    TRIM(nspname) AS schema_name,
    TRIM(relname) AS table_name,
    relcreationtime AS creation_time
FROM pg_class_info
LEFT JOIN pg_namespace ON pg_class_info.relnamespace = pg_namespace.oid
WHERE reltype != 0
  AND TRIM(nspname) = 'my_schema';

For some reason this does not work for very old tables. The oldest creation date I could find on one of my clusters was from November 2018; perhaps creation dates were not recorded in pg_class_info before then.
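For a single table, the same lookup can be narrowed down; here my_schema and my_table are placeholder names for illustration:

```sql
-- Creation time of one specific table (schema and table names are placeholders)
SELECT ci.relcreationtime
FROM pg_class_info ci
JOIN pg_namespace ns ON ci.relnamespace = ns.oid
WHERE TRIM(ns.nspname) = 'my_schema'
  AND TRIM(ci.relname) = 'my_table';
```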

A21z answered Oct 17 '22 14:10


It looks like there is no built-in way to get the creation timestamp of tables in Redshift. One workaround is the STL_DDLTEXT system table, which records a history of DDL statements, including CREATE TABLE.

Here is an example (test_table is a table name):

dev=> select starttime, endtime, trim(text) as ddl from stl_ddltext where text ilike '%create%table%test_table%' order by endtime desc limit 1;
         starttime          |          endtime           |                                                               ddl
----------------------------+----------------------------+----------------------------------------------------------------------------------------------------------------------------------
 2016-04-25 05:38:11.666338 | 2016-04-25 05:38:11.674947 | CREATE TABLE "test_table" (id int primary key, value varchar(24));
(1 row)

In the above case, starttime or endtime is the timestamp of the test_table creation.

NOTE:

  • Redshift keeps STL_DDLTEXT for only a few days, so you cannot rely on this approach permanently.
  • This approach misses tables that were created in other ways, for example by renaming an existing table.
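To see how far back the DDL log on your cluster currently reaches, a simple check is:

```sql
-- Oldest DDL entry still retained; tables created before this
-- cannot be dated with the STL_DDLTEXT approach
SELECT MIN(starttime) FROM stl_ddltext;
```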
Masashi M answered Oct 17 '22 14:10



Another way to get the creation time of a table in Redshift is to search svl_qlog for the start and stop time of the CREATE TABLE statement that created it. There are other system tables you can look at for similar data, but the problem with this approach is that the logs are only kept for a few days (roughly 3 to 5). Although everyone would like this metadata stored with the table itself, Amazon recommends exporting the log data you want to retain to S3. In my opinion you can then load those S3 files back into a permanent table (called, say, aws_table_history) so this history is kept forever.

select * from svl_qlog where substring ilike 'create table%' order by starttime desc limit 100;

select * from stl_query a, stl_querytext b where a.query = b.query and b.text ilike 'create table%' order by a.starttime desc limit 100; 

Or get just the Table name and date like this:

select split_part(split_part(b.text,'table ', 2), ' ', 1) as tablename, 
starttime as createdate 
from stl_query a, stl_querytext b 
where a.query = b.query and b.text ilike 'create table%' order by a.starttime desc;
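To illustrate the nested split_part parsing above with a literal value (the sample DDL text here is made up):

```sql
-- Inner split_part: everything after 'table ' -> 'my_schema.events (id int)'
-- Outer split_part: first space-delimited token -> 'my_schema.events'
SELECT split_part(split_part('create table my_schema.events (id int)', 'table ', 2), ' ', 1);
-- returns 'my_schema.events'
```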

Export the CREATE TABLE history you want to an S3 bucket, using your AWS keys. The select statement below outputs the name of each table created and the datetime it was created.

Create a temp table with the data you want to export to S3.

create table temp_history as 
(select split_part(split_part(b.text,'table ', 2), ' ', 1) as tablename, starttime as createdate 
from stl_query a, stl_querytext b 
where a.query = b.query 
and b.text ilike 'create table%' order by a.starttime desc);

Then upload this table to S3.

unload ('select * from temp_history') 
to 's3://tablehistory' credentials 'aws_access_key_id=myaccesskey;aws_secret_access_key=mysecretkey' 
DELIMITER '|' NULL AS '' ESCAPE ALLOWOVERWRITE;

Create a new table in AWS Redshift.

CREATE TABLE aws_table_history
(
tablename VARCHAR(150),
createdate DATETIME
);

Then import it back in to your custom table.

copy aws_table_history from 's3://tablehistory' credentials 'aws_access_key_id=MYKEY;aws_secret_access_key=MYID'
emptyasnull
blanksasnull
removequotes
escape
dateformat 'YYYY-MM-DD'
timeformat 'YYYY-MM-DD HH:MI:SS'
delimiter '|'
maxerror 20;

I tested all of this and it works for us; I hope it helps some people. Lastly, a simpler method is to use Talend Big Data Open Studio: create a new job, add a tRedshiftRow component, and paste the following SQL into it. Then build the job and schedule the resulting .bat (Windows) or .sh (Unix) to run in any environment you want.

INSERT INTO temp_history 
(select split_part(split_part(b.text,'table ', 2), ' ', 1) as tablename, starttime as createdate 
from stl_query a, stl_querytext b 
where a.query = b.query 
and b.text ilike 'create table%' order by a.starttime desc);
COMMIT;
insert into historytable
select distinct s.* 
from temp_history s;
COMMIT;
--remove duplicates
DELETE FROM historytable USING historytable a2 
WHERE historytable.tablename = a2.tablename AND
historytable.createdate < a2.createdate;
COMMIT;
---clear everything from prestage
TRUNCATE temp_history;
COMMIT;
Mark Lane answered Oct 17 '22 14:10