Oracle: How to find the timestamp of the last update (any table) within a schema?

There is an Oracle database schema (very small in data volume, but still about 10-15 tables). It contains a sort of configuration (routing tables).

There is an application that has to poll this schema from time to time. Notifications are not to be used.

If no data in the schema were updated, the application should use its current in-memory version.

If any table had any update, the application should reload all the tables into memory.

What would be the most effective way to check the whole schema for update since a given key point (time or transaction id)?

I imagine Oracle keeps a transaction ID per schema. Then there should be a way to query such an ID and keep it to compare against at the next poll.

I've found this question, where such a pseudo-column exists at the row level:

How to find out when an Oracle table was updated the last time

I would think something similar exists on a schema level.

Can someone please point me in the right direction?

asked Jul 21 '11 by Vladimir Dyuzhev

People also ask

How do you find the last modified date of a table in Oracle?

If you want to find when a table was last modified (by insert, update, or delete), use the dictionary view DBA_TAB_MODIFICATIONS.
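
For example (MY_SCHEMA is a placeholder; the view is populated from in-memory monitoring data, so flush it first to get current figures):

-- flush pending table-monitoring info so the view is up to date
exec dbms_stats.flush_database_monitoring_info

select table_name, inserts, updates, deletes, timestamp
from dba_tab_modifications
where table_owner = 'MY_SCHEMA';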

How do I find the last updated record in PL SQL?

Rows in an Oracle table have no inherent order unless you have a column to ORDER BY. If you want the last updated row, enable auditing for the table and query the TIMESTAMP column of the DBA_AUDIT_TRAIL view.
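
A hedged illustration using traditional auditing (assumes the AUDIT_TRAIL initialization parameter is set to DB; MY_SCHEMA and ROUTING_RULES are placeholder names):

audit insert, update, delete on my_schema.routing_rules by access;

select max(timestamp)
from dba_audit_trail
where owner = 'MY_SCHEMA'
  and obj_name = 'ROUTING_RULES';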

How do you check last DML on a table?

All DML is recorded in the redo log files, and DBMS_LOGMNR can be used to inspect the contents of any redo log file. You can also check the DBA_TAB_MODIFICATIONS view.
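
A rough DBMS_LOGMNR sketch (the redo log path is a placeholder, and the session needs LogMiner privileges):

begin
   dbms_logmnr.add_logfile(
      logfilename => '/u01/oradata/redo01.log',
      options     => dbms_logmnr.new);
   dbms_logmnr.start_logmnr(
      options => dbms_logmnr.dict_from_online_catalog);
end;
/

select timestamp, operation, seg_owner, seg_name
from v$logmnr_contents
where seg_owner = 'MY_SCHEMA';

exec dbms_logmnr.end_logmnr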


1 Answer

I'm not aware of any such functionality in Oracle. See below.

The best solution I can come up with is to create a trigger on each of your tables that updates a one-row table or application context with the current date/time. Such triggers can be statement-level (as opposed to row-level), so they wouldn't carry as much overhead as row-level triggers.
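
A minimal sketch of that idea, assuming a one-row table LAST_UPDATE and one of the schema's tables named ROUTING_RULES (both names are placeholders):

create table last_update (update_date date not null);
insert into last_update values (sysdate);
commit;

create or replace trigger routing_rules_touch
after insert or update or delete on routing_rules
begin
   -- no FOR EACH ROW clause: fires once per statement, not once per row
   update last_update set update_date = sysdate;
end;
/

With the locking caveat discussed below, the trigger body would call the log_last_update procedure instead of updating the table directly.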

Incidentally, Oracle can't keep a transaction ID per schema, as one transaction can affect multiple schemas. It might be possible to use the V$ views to track a transaction back to the objects it affected, but it wouldn't be easy and it would almost certainly perform worse than the trigger scheme.

It turns out that if you have 10g, you can use Oracle's flashback functionality to get this information. However, you'd need to enable flashback (which carries some overhead of its own) and the query is ridiculously slow (presumably because it's not really intended for this use):

select max(commit_timestamp)
from flashback_transaction_query
where table_owner = 'YOUR_SCHEMA'
  and operation in ('INSERT', 'UPDATE', 'DELETE', 'MERGE');

In order to avoid locking issues in the "last updated" table, you'd probably want to put that update into a procedure that uses an autonomous transaction, such as:

create or replace procedure log_last_update as
   pragma autonomous_transaction;
begin
   -- GREATEST ensures the timestamp never moves backwards if
   -- concurrent autonomous transactions commit out of order
   update last_update set update_date = greatest(sysdate, update_date);
   commit;  -- an autonomous transaction must commit or roll back before returning
end log_last_update;

This will cause your application to serialize to some degree: each statement that needs to call this procedure will need to wait until the previous one finishes. The "last updated" table may also get out of sync, because the update on it will persist even if the update that activated the trigger is rolled back. Finally, if you have a particularly long transaction, the application could pick up the new date/time before the transaction is completed, defeating the purpose. The more I think about this, the more it seems like a bad idea.


The better solution to avoid these issues is just to insert a row from the triggers. This would not lock the table, so there wouldn't be any serialization, and the inserts wouldn't need to be made in an autonomous transaction, so they could be rolled back along with the actual data (and wouldn't be visible to your application until the data is visible as well). The application would query the max, which should be very fast if the table is indexed (in fact, this table would be an ideal candidate for an index-organized table). The only downside is that you'd want a job that runs periodically to clean out old values, so it doesn't grow too large.
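
A rough sketch of this variant, again with placeholder names (shown as a plain indexed table for brevity; the index-organized version mentioned above would additionally need a primary key):

create table update_log (update_date date not null);
create index update_log_ix on update_log (update_date);

create or replace trigger routing_rules_log
after insert or update or delete on routing_rules
begin
   -- a plain insert: no contention on existing rows, and it rolls back
   -- together with the triggering DML
   insert into update_log values (sysdate);
end;
/

-- the application polls with:
select max(update_date) from update_log;

-- periodic cleanup job, e.g. keep a week of history:
delete from update_log where update_date < sysdate - 7;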

answered Sep 21 '22 by Allan