I often have to execute complex SQL scripts in a single transaction on a large PostgreSQL database, and I would like to verify everything that was changed during the transaction.
Verifying each single entry on each table "by hand" would take ages.
Dumping the database before and after the script to plain SQL and running diff on the dumps isn't really an option, since each dump would be about 50 GB of data.
Is there a way to show all the data that was added, deleted or modified during a single transaction?
In psql, try \s. This will list the history of queries you have executed in the current session.
You can check the last modification time of the table's underlying file. In PostgreSQL, every table corresponds to one or more OS files. You can find the file name like this: select relfilenode from pg_class where relname = 'test'; the relfilenode is the base name of the file backing table "test".
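If you'd rather not assemble the path by hand, here is a minimal sketch using the standard admin function pg_relation_filepath (it combines tablespace, database and relfilenode for you); the table name test and the OIDs shown are just placeholders:

```sql
-- Locate the file backing table "test", relative to the data directory:
SELECT pg_relation_filepath('test');
-- e.g. returns base/16384/16385; from the shell, check its mtime with:
--   stat "$PGDATA/base/16384/16385"
```

Note that this only tells you that a table's file was touched, not what changed inside it.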
The write-ahead log (WAL) is also known in PostgreSQL as the transaction log. A log is a record of all events or changes, and WAL data is just a description of the changes made to the actual data, so it is 'data about data', or metadata. To complete the transaction and have PostgreSQL make your changes permanent, execute the COMMIT command.
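Building on that, one way to see which WAL records a transaction produced is to note the WAL position before and after it and inspect that range with the pg_waldump utility (PostgreSQL 10+). A sketch; the LSN values are placeholders, and be aware the output is low-level (block references rather than readable rows):

```sql
SELECT pg_current_wal_lsn();   -- note the position, e.g. 0/16B3740
BEGIN;
-- ... run your script ...
COMMIT;
SELECT pg_current_wal_lsn();   -- note the position again, e.g. 0/16C5298

-- Then, from the shell, dump the records in between:
--   pg_waldump -p "$PGDATA/pg_wal" -s 0/16B3740 -e 0/16C5298
```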
Dude, what you are looking for is one of the most searched-for things on the internet when it comes to capturing database changes. You could call it a kind of version control.
But as far as I know, sadly there is no built-in approach for this in PostgreSQL or MySQL. You can work around that by adding triggers for the operations you use most.
You can create some backup schemas and tables to capture the rows that are updated, created, or deleted (see the sketch below).
This way you can achieve what you want. I know the process is fully manual, but it is really effective.
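A minimal sketch of that trigger-based approach, with one generic log table; all names here (audit, audit.changes, log_change) are made up for illustration:

```sql
CREATE SCHEMA IF NOT EXISTS audit;

CREATE TABLE IF NOT EXISTS audit.changes (
    id         bigserial   PRIMARY KEY,
    table_name text        NOT NULL,
    operation  text        NOT NULL,            -- 'INSERT', 'UPDATE' or 'DELETE'
    changed_at timestamptz NOT NULL DEFAULT now(),
    old_row    jsonb,                           -- NULL for INSERT
    new_row    jsonb                            -- NULL for DELETE
);

CREATE OR REPLACE FUNCTION audit.log_change() RETURNS trigger AS $$
BEGIN
    INSERT INTO audit.changes (table_name, operation, old_row, new_row)
    VALUES (TG_TABLE_NAME, TG_OP,
            CASE WHEN TG_OP IN ('UPDATE', 'DELETE') THEN to_jsonb(OLD) END,
            CASE WHEN TG_OP IN ('INSERT', 'UPDATE') THEN to_jsonb(NEW) END);
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

-- Attach the trigger to every table you want to watch, e.g. a table "test"
-- (on PostgreSQL < 11, write EXECUTE PROCEDURE instead of EXECUTE FUNCTION):
CREATE TRIGGER test_audit
AFTER INSERT OR UPDATE OR DELETE ON test
FOR EACH ROW EXECUTE FUNCTION audit.log_change();
```

After the transaction, selecting from audit.changes shows every row the script touched on the audited tables.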
If you need to analyze the script's behaviour only sporadically, then the easiest approach is to change the server configuration parameter log_min_duration_statement to 0, and set it back to whatever value it had once the analysis is done. All of the script's activity will then be written to the instance log.
This approach is not suitable if your storage cannot accommodate that amount of log data, or on systems where you don't want sensitive client data written to a plain-text log file.
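A sketch of toggling the parameter without editing postgresql.conf, assuming superuser access; note that ALTER SYSTEM RESET falls back to whatever the configuration files specify, which may not be exactly the previous runtime value:

```sql
ALTER SYSTEM SET log_min_duration_statement = 0;  -- log every statement
SELECT pg_reload_conf();                          -- apply without a restart

-- ... run the script, then inspect the server log ...

ALTER SYSTEM RESET log_min_duration_statement;    -- drop the override
SELECT pg_reload_conf();
```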