Is there a way to show everything that was changed in a PostgreSQL database during a transaction?

I often have to execute complex SQL scripts in a single transaction on a large PostgreSQL database, and I would like to verify everything that was changed during the transaction.

Verifying every single entry in each table "by hand" would take ages.

Dumping the database to plain SQL before and after the script and running diff on the dumps isn't really an option, since each dump would be about 50 GB of data.

Is there a way to show all the data that was added, deleted or modified during a single transaction?

Sebastian Stumpf asked Aug 12 '20 10:08


People also ask

How do I see query history in PostgreSQL?

In your psql command line, try \s. This will list the history of queries you have executed in the current session.
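For example, in an interactive psql session (these are psql meta-commands, not SQL):

```
\s                -- print the query history of the current session
\s history.txt    -- write the history to a file instead of the screen
```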

How do you find when a table was last updated in Postgres?

You can do it by checking the last modification time of the table's underlying file. In PostgreSQL, every table corresponds to one or more OS files. You can find the file name with: select relfilenode from pg_class where relname = 'test'; — the relfilenode is the file name of table "test".
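A small sketch of this approach, assuming a table named test (reading file metadata from SQL requires superuser or the pg_read_server_files role):

```sql
-- path of the table's data file, relative to the data directory
SELECT pg_relation_filepath('test');

-- modification time of that file, straight from SQL
SELECT (pg_stat_file(pg_relation_filepath('test'))).modification;
```

Note that the file's mtime is only a rough indicator: checkpoints, autovacuum, and hint-bit updates can touch the file without any logical data change.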

What is the transaction log in Postgres?

In PostgreSQL, the write-ahead log (WAL) is also known as the transaction log. A log is a record of all events or changes, and WAL data is a description of the changes made to the actual data. So it is metadata, or 'data about data'.

Which command is used to tell PostgreSQL to make all changes made to the database permanent?

To complete the transaction and have PostgreSQL make your changes permanent, execute the COMMIT command.
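A minimal sketch, using a hypothetical accounts table:

```sql
BEGIN;                                                     -- start a transaction
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;                                                    -- make both changes permanent
-- ROLLBACK; instead of COMMIT would discard both changes
```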


2 Answers

What you are looking for is essentially change data capture, one of the most searched-for topics when it comes to tracking database changes. You could call it a kind of version control for data.

But as far as I know, sadly there is no built-in feature for this in PostgreSQL or MySQL. You can work around it by adding triggers for the operations you use most.

You can create some audit schemas and tables to capture the rows that are updated, created, or deleted.

This way you can achieve what you want. I know this process is fully manual, but it is really effective.
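As a rough sketch of such a trigger-based audit (the accounts table and the audit schema are made-up names; EXECUTE FUNCTION requires PostgreSQL 11+):

```sql
CREATE SCHEMA IF NOT EXISTS audit;

CREATE TABLE audit.accounts_log (
    changed_at  timestamptz NOT NULL DEFAULT now(),
    operation   text        NOT NULL,   -- 'INSERT', 'UPDATE' or 'DELETE'
    old_row     jsonb,                  -- NULL for INSERT
    new_row     jsonb                   -- NULL for DELETE
);

CREATE OR REPLACE FUNCTION audit.log_accounts() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO audit.accounts_log (operation, new_row)
        VALUES (TG_OP, to_jsonb(NEW));
    ELSIF TG_OP = 'UPDATE' THEN
        INSERT INTO audit.accounts_log (operation, old_row, new_row)
        VALUES (TG_OP, to_jsonb(OLD), to_jsonb(NEW));
    ELSE  -- DELETE
        INSERT INTO audit.accounts_log (operation, old_row)
        VALUES (TG_OP, to_jsonb(OLD));
    END IF;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER accounts_audit
AFTER INSERT OR UPDATE OR DELETE ON accounts
FOR EACH ROW EXECUTE FUNCTION audit.log_accounts();
```

After the transaction, the rows in audit.accounts_log show exactly what was inserted, updated, or deleted; you would repeat this setup for each table you want to track.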

Mayur answered Sep 29 '22 12:09


If you only need to analyze the script's behaviour sporadically, the easiest approach is to set the server configuration parameter log_min_duration_statement to 0 and restore its previous value after the analysis. All of the script's activity will then be written to the instance log. This approach is not suitable if your storage cannot accommodate that amount of log data, or for systems where you don't want sensitive client data written to a plain-text log file.
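A minimal sketch of toggling the parameter (requires superuser, and ALTER SYSTEM must run outside a transaction block; RESET assumes the parameter had not been customized via ALTER SYSTEM before):

```sql
-- log every statement, regardless of duration
ALTER SYSTEM SET log_min_duration_statement = 0;
SELECT pg_reload_conf();

-- ... run the script, inspect the server log, then restore:
ALTER SYSTEM RESET log_min_duration_statement;
SELECT pg_reload_conf();
```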

Jonathan Jacobson answered Sep 29 '22 11:09