I just read an interesting post by Omar on his blog, Linq to SQL solve Transaction deadlock and Query timeout problem using uncommitted reads, and at the end Javed Hasan started arguing with him about his solution to the NOLOCK situation on a high-volume site.
The problem being solved is this: in SQL terms, we need to use SELECT statements with NOLOCK, or SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; otherwise, at high volume, rows in the database get locked and cause errors. The technology Omar used is Linq2Sql, so the question is: how do we achieve this in our C# data access code so that the above does not happen?
Basically, in the post Omar arrives at his solution by working and testing on a real-world site with tools like SqlProfiler, whereas Javed Hasan arrives at his from MSDN documentation, Scott Hanselman's blog posts, and so on.
Omar suggests using the following:
using (var db = new DropthingsDataContext2())
{
    db.Connection.Open();
    db.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;");

    var user = db.aspnet_Users.First();
    var pages = user.Pages.ToList();
}
whereas Javed Hasan suggests:
using (var scope = new TransactionScope(TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted }))
{
    // your db operation

    scope.Complete();
}
I'm very interested to know what you guys out there do about this particular issue on a high-volume site like StackOverflow, or what Jeff and his team did in this regard?
Edit: After reading the first answer, I want to point out a few things about Omar's post.
First of all, please avoid uncommitted reads; they can cause lots of issues. A much better approach is simply to set the database to snapshot isolation. This is what Jeff did.
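For reference, here is a minimal sketch of flipping a database to row versioning. The connection string and the database name Dropthings are my assumptions, and the switch needs exclusive access to the database:

using System.Data.SqlClient;

// Run once, as an administrator; not something the application does at runtime.
using (var conn = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
{
    conn.Open();

    // READ_COMMITTED_SNAPSHOT makes ordinary READ COMMITTED reads use row
    // versions instead of shared locks, so readers stop blocking writers.
    // WITH ROLLBACK IMMEDIATE kicks out other sessions so the switch can happen.
    using (var cmd = new SqlCommand(
        "ALTER DATABASE Dropthings SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE",
        conn))
    {
        cmd.ExecuteNonQuery();
    }
}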
Jeff basically said: "bla bla bla, be real, bla bla bla, database theoreticians, bla bla bla, READ UNCOMMITTED can be useful for REAL production apps that don't need data consistency." Jeff is not a DBA; fortunately, there are many DBAs out here on SO.
The problem with Omar's approach is that it can leak connections whose isolation level is still "read uncommitted" back into your connection pool, which could wreak havoc in your website. Meaning, random statements may later execute under read uncommitted without anyone intending it.
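To make the failure mode concrete, here is a sketch of how the leak plays out, reusing the names from Omar's example (the assumption here is a SQL Server version that does not reset the isolation level when a pooled connection is reused, which older versions did not):

using (var db = new DropthingsDataContext2())
{
    db.Connection.Open();
    db.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;");
    var user = db.aspnet_Users.First();
}   // the physical connection returns to the pool still at READ UNCOMMITTED

using (var db2 = new DropthingsDataContext2())
{
    // This context may be handed the same pooled connection, so an
    // "ordinary" query silently becomes a dirty read.
    var user2 = db2.aspnet_Users.First();
}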
Javed's approach would be much better, because on dispose of the TransactionScope the framework has the chance to clean things up on the connection.
EDIT: If you are having performance issues with Javed's approach, you could look at rolling your own transaction manager. Some things you probably want to do: set the isolation level yourself when you take a connection, and make sure it is reset before the connection goes back to the pool; see the sketch below.
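A hedged sketch of what that might look like, assuming its only job is to set the level on the way in and reliably reset it on the way out (the type and member names are hypothetical):

using System;
using System.Data;
using System.Data.Common;

public sealed class ReadUncommittedScope : IDisposable
{
    private readonly DbConnection _connection;

    public ReadUncommittedScope(DbConnection connection)
    {
        _connection = connection;
        if (_connection.State != ConnectionState.Open)
            _connection.Open();
        Execute("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;");
    }

    public void Dispose()
    {
        // Restore the default so pooled reuse of this connection is safe.
        Execute("SET TRANSACTION ISOLATION LEVEL READ COMMITTED;");
    }

    private void Execute(string sql)
    {
        using (var cmd = _connection.CreateCommand())
        {
            cmd.CommandText = sql;
            cmd.ExecuteNonQuery();
        }
    }
}

Usage would pair it with the context, e.g. using (var db = new DropthingsDataContext2()) using (new ReadUncommittedScope(db.Connection)) { ... }.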
I'm a developer on a tools team in the SQL Server group at Microsoft. Many applications are not super-sensitive to transaction consistency, especially if you're writing an app which does reporting, or something else where occasionally inconsistent data is not the end of the world. Of course, if you're writing a financial application or something else with a very low tolerance for data inconsistency, you probably want to explore other solutions.
If you do choose to use uncommitted reads, I have blogged a handy solution using extension methods in C#.
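I won't reproduce the blogged code here, but the extension-method idea might look roughly like this (a sketch only; the method name and shape are my assumptions, not the blogged code):

using System;
using System.Data;
using System.Data.Linq;

public static class DataContextExtensions
{
    // Hypothetical sketch: run a query under READ UNCOMMITTED and put the
    // isolation level back afterwards, even if the query throws.
    public static TResult ReadUncommitted<TContext, TResult>(
        this TContext db, Func<TContext, TResult> query)
        where TContext : DataContext
    {
        if (db.Connection.State != ConnectionState.Open)
            db.Connection.Open();
        db.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;");
        try
        {
            return query(db);
        }
        finally
        {
            db.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ COMMITTED;");
        }
    }
}

Usage would then be a one-liner, e.g. var pages = db.ReadUncommitted(d => d.aspnet_Users.First().Pages.ToList());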
{My (poor) reputation prevents me from posting comments, so I put this as an answer}
If you use an IsolationLevel via System.Transactions and create a new LINQ context within the transaction block, the new connection enlists in the ambient transaction and the transaction can get promoted to the Distributed Transaction Coordinator (MSDTC). That just happened to me and was quite unexpected.
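For illustration, this is roughly the shape that triggers it, using the names from the question (exact promotion behavior depends on the SQL Server version):

using System.Linq;
using System.Transactions;

using (var scope = new TransactionScope(TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted }))
{
    using (var db1 = new DropthingsDataContext2())
    {
        var user = db1.aspnet_Users.First(); // first connection enlists in the scope
    }

    using (var db2 = new DropthingsDataContext2())
    {
        // A second physical connection inside the same scope can promote the
        // transaction to MSDTC; if the DTC service is not running, this throws.
        var users = db2.aspnet_Users.ToList();
    }

    scope.Complete();
}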