 

Avoiding Deadlocks within SQL transaction

Tags:

c#

sql

sql-server

I have the following code (simplified) to access my database. I know about using statements and parameters, but I shortened it to focus on my problem.

string ConnectionString = "ConnectionStringHere";

//1. insert transaction
SqlConnection myConnection = new SqlConnection(ConnectionString);
myConnection.Open(); // required before BeginTransaction
SqlTransaction myTrans = myConnection.BeginTransaction();
string sUpdate = "INSERT INTO User (FirstName, LastName) VALUES ('jon','doe')";
SqlCommand myCommand = new SqlCommand(sUpdate, myConnection, myTrans);
myCommand.ExecuteNonQuery();

//2. select from the same table over a second connection
SqlConnection myConnection2 = new SqlConnection(ConnectionString);
string sSelect = "SELECT FirstName, LastName FROM User WHERE ID = 123";
DataTable dtResult = new DataTable();
SqlDataAdapter myDataAdapter2 = new SqlDataAdapter(sSelect, myConnection2);
myDataAdapter2.Fill(dtResult); //TimeoutException here

//submit changes from my transaction here
myTrans.Commit();

In the second part I get a TimeoutException because I can't access the User table until I commit my transaction with myTrans.Commit(), which only happens afterwards - a deadlock.

My question here is - what's the best practice to avoid deadlocks here? I could avoid the exception by making the SELECT part of the transaction or by setting the IsolationLevel

SqlTransaction myTrans = myConnection.BeginTransaction(IsolationLevel.ReadUncommitted);

But I'm not sure about the usage of those. I don't have to worry about invalid data because I always select by ID.
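The first option mentioned above - making the SELECT part of the transaction - could be sketched like this (a sketch only, with parameters added back in and the connection string and table taken from the code above):

```csharp
using System.Data;
using System.Data.SqlClient;

string connectionString = "ConnectionStringHere";

using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlTransaction trans = conn.BeginTransaction())
    {
        // INSERT inside the transaction
        using (SqlCommand insert = new SqlCommand(
            "INSERT INTO [User] (FirstName, LastName) VALUES (@first, @last)",
            conn, trans))
        {
            insert.Parameters.AddWithValue("@first", "jon");
            insert.Parameters.AddWithValue("@last", "doe");
            insert.ExecuteNonQuery();
        }

        // SELECT on the SAME connection and transaction: it can see the
        // uncommitted row and is not blocked by the INSERT's locks.
        DataTable result = new DataTable();
        using (SqlCommand select = new SqlCommand(
            "SELECT FirstName, LastName FROM [User] WHERE ID = @id",
            conn, trans))
        {
            select.Parameters.AddWithValue("@id", 123);
            using (SqlDataAdapter adapter = new SqlDataAdapter(select))
            {
                adapter.Fill(result);
            }
        }

        trans.Commit();
    }
}
```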

asked Mar 11 '23 by fubo

2 Answers

I don't see any reason to make the SELECT query part of the transaction just to solve the deadlock or timeout issue. Setting the ReadUncommitted isolation level on the first SQL connection myConnection, as you considered, is not the right approach either. I see two possible solutions:

  1. First solution: Setting the isolation level IsolationLevel.ReadUncommitted on the transaction myTrans you have started will not help. If you are comfortable with dirty reads, then you should instead apply this isolation level to the second SQL connection myConnection2, the one firing the select query on the User table. The simplest way to get that effect for this single query is the WITH (NOLOCK) table-level hint, which reads the table as if under READ UNCOMMITTED. So your query will look like:

    string sSelect = "SELECT FirstName, LastName FROM User WITH (NOLOCK) WHERE ID = 123";
    

    You can get more details here. Also, read about the consequences of dirty read here which is a side effect of using this particular isolation level.

  2. Second solution: The default isolation level of SQL Server is Read Committed, so a query fired through a new SQL connection such as myConnection2 works at the ReadCommitted isolation level. The default behavior under ReadCommitted is a blocking read: if a table has uncommitted changes (which may yet be committed or rolled back by an active transaction), your select statement on the User table will get blocked. It waits for the transaction to finish so that it can read either the newly updated data or, in case of a rollback, the original data. If it didn't block, it would end up doing a dirty read, a well-known concurrency issue with databases.
    If you do not want your SELECT statements to be blocked and are happy to read the last committed value of a row, there is a database-level setting in SQL Server named READ_COMMITTED_SNAPSHOT. Here is how you can enable it with a SQL script:

    ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON
    

    Quoting Pinal Dave from his article here:

If you are having problem with blocking between readers (SELECT) and writers (INSERT/UPDATE/DELETE), then you can enable this property without changing anything from the application. Which means application would still run under read committed isolation and will still read only committed data.

Note: This is a database-level setting and will affect all transactions on your database that use the READ COMMITTED isolation level.
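If you consider the second solution, it can help to check the current state of the setting before flipping it. A sketch in T-SQL, with MyDatabase taken from the script above; note that the ALTER needs exclusive access to the database:

```sql
-- 1 means READ_COMMITTED_SNAPSHOT is already on for this database
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'MyDatabase';

-- WITH ROLLBACK IMMEDIATE disconnects other sessions instead of
-- waiting for them, so the ALTER can take the exclusive lock it needs
ALTER DATABASE MyDatabase
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;
```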

In my opinion you should go with the first solution. There are also a few key points you should keep in mind to avoid deadlocks in SQL Server queries. Quoting Pinal Dave from here:

  • Minimize the size of transaction and transaction times.
  • Always access server objects in the same order each time in application.
  • Avoid cursors, while loops, or process which requires user input while it is running.
  • Reduce lock time in application.
  • Use query hints to prevent locking if possible (NoLock, RowLock)
  • Select deadlock victim by using SET DEADLOCK_PRIORITY.
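The first solution can be sketched in C# like this (connection string, variable names, and table reused from the question; a sketch without error handling, not production code):

```csharp
using System.Data;
using System.Data.SqlClient;

string connectionString = "ConnectionStringHere";

using (SqlConnection myConnection2 = new SqlConnection(connectionString))
{
    myConnection2.Open();

    // The WITH (NOLOCK) hint makes this one table read as if under
    // READ UNCOMMITTED, so the query is not blocked by the open
    // transaction still holding locks from the INSERT.
    string sSelect =
        "SELECT FirstName, LastName FROM [User] WITH (NOLOCK) WHERE ID = 123";

    DataTable dtResult = new DataTable();
    using (SqlDataAdapter adapter = new SqlDataAdapter(sSelect, myConnection2))
    {
        adapter.Fill(dtResult); // returns without waiting, possibly with dirty data
    }

    // Alternative: set the isolation level for the whole session instead
    // of hinting each table:
    // new SqlCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED",
    //                myConnection2).ExecuteNonQuery();
}
```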
answered Mar 20 '23 by RBT

It depends on what you need from that select.

If you need to make sure the insert has been made prior to the select, you should have both commands executed within the same transaction, or commit the insertion before you run the SELECT query.

If not, then this being SQL Server, you could either do as you said and set the isolation level to READ UNCOMMITTED, or you can use the NOLOCK table hint, i.e.:

string sSelect = "SELECT FirstName, LastName FROM User WITH(NOLOCK) WHERE ID = 123";

But before you decide to do that, please read about its pros and cons in this other question:

When should you use "with (nolock)"

answered Mar 20 '23 by Geeky Guy