I have a view that uses 11 outer joins and two inner joins to build the data, which results in over 8 million rows. When I do a COUNT(*) on the view it takes about 5 minutes to run. I'm at a loss as to how to improve its performance. Does anyone have any suggestions on where to begin? There appear to be indexes on all of the joining columns (though some are composite, and I'm not sure if that makes a difference...).
Any help appreciated.
This is a hard one: with a complex view you also have potential interactions with the queries run against it, so guaranteeing reasonable performance is difficult. Outer joins in views (especially complex ones) are also prone to causing trouble for the query optimiser.
One option would be to materialise the view (called 'indexed views' on SQL Server). However, you may need to monitor update performance to check that it does not impose too much overhead. Also, outer joins may block materialisation or real-time refresh (SQL Server indexed views do not allow outer joins at all); if you need them you may have to re-implement the view as a denormalised table and maintain the data with triggers.
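As a rough sketch of the indexed-view route on SQL Server, the inner-join portion could be materialised like this. The table and column names (dbo.Orders, dbo.Customers, OrderID and so on) are made up for illustration, and the usual indexed-view restrictions (SCHEMABINDING, two-part names, required SET options) apply:

-- Materialise the inner-join part of the view as an indexed view.
-- Indexed views do not allow OUTER JOIN, so the outer-joined tables
-- would have to stay in an ordinary view or a trigger-maintained table.
CREATE VIEW dbo.vCustomerOrders
WITH SCHEMABINDING
AS
SELECT o.OrderID, o.OrderDate, c.CustomerID, c.CustomerName
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c
    ON c.CustomerID = o.CustomerID;
GO

-- The unique clustered index is what actually stores the view's rows.
CREATE UNIQUE CLUSTERED INDEX IX_vCustomerOrders
    ON dbo.vCustomerOrders (OrderID);
GO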
Another possibility would be to examine whether the view could be split into two or three simpler views, materialising some but not all of them; materialising only the expensive part of the view may be enough to get acceptable performance.
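As a sketch of that split (again with made-up names), the materialised core from the previous example could be wrapped in a thin, non-indexed view that adds the optional, outer-joined data back on top:

-- Thin wrapper: the heavy inner joins come from the indexed view,
-- only the optional data is outer-joined at query time.
CREATE VIEW dbo.vCustomerOrdersFull
AS
SELECT co.OrderID, co.OrderDate, co.CustomerID, co.CustomerName,
       s.TrackingNumber
FROM dbo.vCustomerOrders AS co WITH (NOEXPAND)  -- read the indexed view's index directly
LEFT JOIN dbo.Shipments AS s
    ON s.OrderID = co.OrderID;
GO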
Your basic premise is wrong: a view that returns 8 million rows is not a good idea, because realistically you can't do anything useful with that much data at once. Five minutes sounds about right for a COUNT(*) over 8 million rows produced by all those joins.
What you have to do is step back, think about your business problem, and write a smaller query/view.
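For example, if queries are usually for one customer (or one date range, or whatever the business actually asks for), a parameterised inline table-valued function keeps the joins but lets the optimiser filter before producing millions of rows. The names below are hypothetical, just to show the shape of it:

-- Push the business filter into the query instead of materialising 8 million rows.
CREATE FUNCTION dbo.fnCustomerOrders (@CustomerID int)
RETURNS TABLE
AS
RETURN
    SELECT o.OrderID, o.OrderDate, c.CustomerName
    FROM dbo.Orders AS o
    INNER JOIN dbo.Customers AS c
        ON c.CustomerID = o.CustomerID
    WHERE o.CustomerID = @CustomerID;
GO

-- Callers then get only the rows they actually need:
SELECT * FROM dbo.fnCustomerOrders(42);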