
Transaction Performance Impact of Materialized View Logs

I have been researching the use of materialized views for data aggregation and reporting at a company that is largely centered around transactions (using an Oracle database). The current reporting system depends on a series of views that obscure much of the application's complex data logic, and these views place a heavy burden on the system when they are queried.

We are interested in using "fast refresh" for incremental updates, so that some of the complex query logic is computed ahead of time rather than at reporting time; however, there is concern within the organization that the materialized view logs (which are required for fast refresh) will hurt our current transaction performance. Transaction performance is essential to our business, so there is great reluctance toward any change.

Here is an example of the type of materialized view log we would need to implement:

create materialized view log on transaction
  with rowid, sequence (transaction_id, account_id, order_id, currency_id,
                        price, transaction_date, payment_processor_id)
  including new values;

We would create the materialized view with the "on demand" refresh clause rather than "on commit", as we understand that refreshing on commit would itself have a performance impact.
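For reference, a fast-refreshable materialized view built on that log might look something like the sketch below. The view name and the particular aggregates are illustrative assumptions rather than our actual reporting logic; note that Oracle requires count(*) (and a count(expr) alongside each sum(expr)) for an aggregate view to remain fast refreshable:

create materialized view transaction_summary_mv
  refresh fast on demand
  as
  select account_id,
         currency_id,
         count(*)     as txn_count,
         count(price) as price_count, -- required alongside sum(price) for fast refresh
         sum(price)   as total_price
    from transaction
   group by account_id, currency_id;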

Will implementing this type of logging affect database transaction performance? I imagine it must have some effect, since each transaction performs an additional write (to the log) before it commits, but I cannot find this addressed anywhere in the Oracle documentation. Any literature or advice on the subject would be greatly appreciated.

Thanks for your help!

asked Feb 15 '23 by Daniel Heacock

1 Answer

Yes, there will be an impact. The materialized view log has to be maintained synchronously, so each transaction must insert a row into the log for every row it modifies in the base table. How great the impact is depends heavily on the system. If your system is I/O-bound and you have optimized it to the point where physically writing changes to the base table accounts for a significant fraction of wait time, the impact will be much greater than if your system is CPU-bound and most of the wait time is spent reading data or performing computations.
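If you want to quantify the overhead on your own system, a rough benchmark is to time a representative batch of DML before and after creating the log. A minimal sketch, assuming a SQL*Plus session; the generated values are placeholders for your real workload, and the keys would need adjusting between runs to avoid collisions:

set timing on

-- baseline: run before creating the materialized view log,
-- then create the log and run again to compare elapsed times
insert into transaction (transaction_id, account_id, order_id, currency_id,
                         price, transaction_date, payment_processor_id)
select rownum, mod(rownum, 1000), rownum, 1,
       round(dbms_random.value(1, 500), 2), sysdate, 1
  from dual
connect by level <= 100000;

commit;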

If you are really concerned about the performance of the OLTP system, it would make sense to offload reporting to a different database on a different server. You can replicate the data to the reporting server using Streams (or GoldenGate if you can afford the additional licensing) which will have less of an impact on the source than materialized views because the redo information can be read asynchronously (and can be read on the reporting server rather than putting that workload on the production server). You could then define materialized views on the reporting server where they won't have any impact on the OLTP server. Or you could create a logical standby database as your reporting server and create the materialized views there. Either way, moving the reporting workload off the production server and reading the redo data asynchronously will protect the performance of the production server.
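For completeness, wherever the materialized views end up living, the on-demand fast refresh itself is just a call to DBMS_MVIEW.REFRESH, typically run from a scheduler job. A minimal sketch, assuming the example view name from the question:

begin
  -- 'F' requests a fast (incremental) refresh using the materialized view log
  dbms_mview.refresh(list => 'TRANSACTION_SUMMARY_MV', method => 'F');
end;
/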

answered Feb 18 '23 by Justin Cave