I have a big table in SQL Server 2012 with about 34 million records. A batch program inserts/updates the table every 5 minutes, and a web application reads from it at any time.
Every time the batch program writes to the table, the reads slow down dramatically. I think it has something to do with the isolation level, but I'm not that familiar with SQL Server, so I don't know how to fix it.
The best scenario for me would be for the inserts not to block the reads. Is it possible to specify that in the query? Or do I have to change the read queries to pick up "dirty" rows?
The application doesn't critically depend on fresh data, so if I can force the DB to allow dirty reads, that would be fine. But I don't know how to achieve this, or whether there's a better solution.
You certainly can't stop inserts/updates from taking locks; things would go very wrong very quickly. What you can do is run your read operations at a lower isolation level, such as READ UNCOMMITTED, either by setting it for the session or by specifying it as a hint in the SELECT. Bear in mind that these are hints: if honouring them would risk data integrity, as opposed to simply returning dirty data (which has plenty of pitfalls of its own), the DBMS will quite rightly ignore them. There are other things you can do, but if you are serious about this, get a SQL Server DBA in; this sort of tuning is well past gifted-amateur level, never mind novice.
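For reference, here is a minimal sketch of both options; the table and column names (dbo.Orders, Status, etc.) are made up for illustration:

```sql
-- Option 1: run the reading session at READ UNCOMMITTED (dirty reads allowed)
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT OrderId, Status, UpdatedAt
FROM dbo.Orders                  -- hypothetical table, for illustration only
WHERE Status = 'PENDING';

-- Option 2: per-query, with a table hint on the read
SELECT OrderId, Status, UpdatedAt
FROM dbo.Orders WITH (NOLOCK)    -- behaves like READ UNCOMMITTED for this table only
WHERE Status = 'PENDING';
```

Either way the reads can return rows from in-flight transactions that later roll back, so only do this if the web application genuinely tolerates stale or dirty data.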
There is an easy answer and a hard answer:
It is easy to solve if you can turn on and use snapshot isolation, which fixes both the blocking and the consistency problems for readers.
If you can't do that, the problem may be hard to solve, and there isn't enough information here to answer.
So try snapshot isolation for your readers (see the sketch below), and be sure to read up on it before using it.
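A minimal setup sketch, assuming your database is called MyDatabase and again using a hypothetical dbo.Orders table:

```sql
-- One-time setup: allow snapshot isolation in the database
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Optional: make READ COMMITTED use row versioning by default, so existing
-- read queries stop blocking without any code change. Switching this on needs
-- exclusive access to the database (or WITH ROLLBACK IMMEDIATE).
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;

-- In the reader's session, explicitly use snapshot isolation
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT OrderId, Status, UpdatedAt
FROM dbo.Orders;                 -- hypothetical table, for illustration only
COMMIT TRANSACTION;
```

Readers then see a consistent view of the data as of the start of their statement/transaction instead of waiting on the writer's locks. Note that row versioning stores old row versions in tempdb, so keep an eye on tempdb usage once this is enabled.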