
Using a Cache Table in SQL Server, am I crazy?

I have an interesting dilemma. I have a very expensive query that involves several full table scans and expensive joins, as well as calls to a scalar UDF that calculates some geospatial data.

The end result is a resultset containing the data that is presented to the user. However, I can't return everything I want to show the user in one call, because I subdivide the original resultset into pages and return only a specified page, and I also need to take the entire original dataset and apply GROUP BYs, joins, etc. to calculate related aggregate data.

Long story short, in order to bind all of the data I need to the UI, this expensive query needs to be called about 5-6 times.

So, I started thinking about how I could calculate this expensive query once, and then each subsequent call could somehow pull against a cached result set.

I hit upon the idea of abstracting the query into a stored procedure that would take in a CacheID (Guid) as a nullable parameter.

This sproc would insert the resultset into a cache table using the cacheID to uniquely identify this specific resultset.

This allows sprocs that need to work on this resultset to pass in a cacheID from a previous query and it is a simple SELECT statement to retrieve the data (with a single WHERE clause on the cacheID).

Then, using a periodic SQL job, flush out the cache table.
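A minimal T-SQL sketch of the idea described above (all table, column, and procedure names here are hypothetical illustrations, not from an actual implementation):

```sql
-- Cache table keyed by a per-resultset GUID; RowNum supports paging.
CREATE TABLE dbo.QueryCache (
    CacheID   uniqueidentifier NOT NULL,
    RowNum    int              NOT NULL,
    CreatedAt datetime         NOT NULL DEFAULT GETUTCDATE(),
    -- ...columns mirroring the expensive query's resultset...
    CONSTRAINT PK_QueryCache PRIMARY KEY CLUSTERED (CacheID, RowNum)
);
GO

CREATE PROCEDURE dbo.GetExpensiveResult
    @CacheID uniqueidentifier = NULL
AS
BEGIN
    IF @CacheID IS NULL
    BEGIN
        -- First call: run the expensive query once and materialize it.
        SET @CacheID = NEWID();
        INSERT INTO dbo.QueryCache (CacheID, RowNum /*, ... */)
        SELECT @CacheID,
               ROW_NUMBER() OVER (ORDER BY (SELECT 1)) /*, ... */
        FROM   dbo.SomeTable;  -- the full scans / joins / UDF calls go here
    END

    -- Every subsequent call is now a cheap keyed lookup.
    SELECT *
    FROM   dbo.QueryCache
    WHERE  CacheID = @CacheID;
END
GO

-- Run periodically (e.g. from a SQL Agent job) to flush stale entries:
DELETE FROM dbo.QueryCache
WHERE  CreatedAt < DATEADD(minute, -30, GETUTCDATE());
```

Returning the generated @CacheID to the caller (for example via an OUTPUT parameter) lets the aggregate and paging sprocs reuse the same cached rows.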

This works great, and really speeds things up under zero-load testing. However, I am concerned that this technique may cause issues under load, with massive amounts of reads and writes against the cache table.

So, long story short: am I crazy, or is this a good idea?

Obviously I need to be worried about lock contention, and index fragmentation, but anything else to be concerned about?

asked Jul 07 '09 by FlySwat

2 Answers

I have done that before, especially when I did not have the luxury to edit the application. I think it's a valid approach sometimes, but in general a cache/distributed cache in the application is preferred, because it better reduces the load on the DB and scales better.

The tricky thing with the naive "just do it in the application" solution is that many times you have multiple applications interacting with the DB, which can put you in a bind if you have no application messaging bus (or something like memcached), because it can be expensive to have one cache per application.

Obviously, for your problem the ideal solution is to be able to do the paging in a cheaper manner, and not need to churn through ALL the data just to get page N. But sometimes that's not possible. Keep in mind that streaming data out of the DB can be cheaper than streaming it out of the DB and back into the same DB. You could introduce a new service that is responsible for executing these long queries, and then have your main application talk to the DB via that service.

answered Nov 08 '22 by Sam Saffron


Your tempdb could balloon like crazy under load, so I would watch that. It might be easier to put the expensive joins in a view and index the view than to try to cache the table for every user.
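A sketch of what that suggestion could look like (the table and view names are made up for illustration). One caveat relevant to the question: indexed views must be schema-bound, and a scalar UDF can only appear in one if the UDF is itself deterministic and schema-bound, so the geospatial calculation may need to be handled outside the view:

```sql
-- Materialize the expensive join as an indexed view; SQL Server then
-- maintains it incrementally as the base tables change.
CREATE VIEW dbo.vExpensiveJoin
WITH SCHEMABINDING
AS
    SELECT o.OrderID, o.CustomerID, c.Region
    FROM   dbo.Orders    AS o
    JOIN   dbo.Customers AS c ON c.CustomerID = o.CustomerID;
GO

-- The unique clustered index is what actually persists the view's rows.
CREATE UNIQUE CLUSTERED INDEX IX_vExpensiveJoin
    ON dbo.vExpensiveJoin (OrderID);
```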

answered Nov 08 '22 by Eric