I have implemented the following code to handle INSERT/UPDATE retry logic with exponential backoff when writing to an Azure SQL Database.
static SqlConnection TryOpen(this SqlConnection connection)
{
    int attempts = 0;
    while (attempts < 5)
    {
        try
        {
            // Back off before every attempt after the first: 3, 9, 27, then 81 seconds.
            if (attempts > 0)
                System.Threading.Thread.Sleep(((int)Math.Pow(3, attempts)) * 1000);
            connection.Open();
            return connection;
        }
        catch
        {
            // Swallow the failure and fall through to the next attempt.
        }
        attempts++;
    }
    throw new Exception("Unable to obtain a connection to SQL Server or SQL Azure.");
}
However, should I also consider applying retry logic to my database reads, or would setting the SqlCommand.CommandTimeout property suffice? Most of my reads are issued using the following code:
Dim myDataAdapter As New SqlDataAdapter(mySqlCommand)
Dim ds As New DataSet
myDataAdapter.Fill(ds, "dtName")
It's hard to know what sort of transient errors will occur in a production environment with Azure, so I am trying to do as much mitigation as possible now.
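For illustration, this is roughly what I imagine applying the same back-off loop to a read would look like, written in C# to match TryOpen (FillWithRetry is just a hypothetical helper I sketched for this question, not code I have in production):

using System;
using System.Data;
using System.Data.SqlClient;

static class SqlRetryHelpers
{
    // Hypothetical sketch: wraps SqlDataAdapter.Fill in the same exponential
    // back-off loop that TryOpen uses for connections.
    public static DataSet FillWithRetry(SqlCommand command, string tableName)
    {
        int attempts = 0;
        while (attempts < 5)
        {
            try
            {
                // Same back-off as TryOpen: 3, 9, 27, then 81 seconds.
                if (attempts > 0)
                    System.Threading.Thread.Sleep(((int)Math.Pow(3, attempts)) * 1000);

                var ds = new DataSet();
                using (var adapter = new SqlDataAdapter(command))
                {
                    adapter.Fill(ds, tableName);
                }
                return ds;
            }
            catch (SqlException)
            {
                // Assume the failure is transient and fall through to retry.
            }
            attempts++;
        }
        throw new Exception("Unable to complete the read against SQL Azure.");
    }
}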
One way you can test your retry logic is to disconnect your client computer from the network while the program is running. The error raised is a SqlException with Number = 11001.
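As an aside, if you roll your own retry loop you can key off SqlException.Number to decide whether an error is worth retrying. A minimal detection helper might look like the sketch below; 11001 is the error from the network-disconnect test above, while the other numbers are an assumption based on commonly cited Azure SQL transient error codes, not an authoritative list.

using System.Data.SqlClient;
using System.Linq;

static class TransientErrorDetector
{
    // 11001 is the host-not-found error seen when the client loses its network
    // connection; the remaining numbers are commonly cited transient codes for
    // Azure SQL Database (an assumption, not an exhaustive list).
    private static readonly int[] TransientErrorNumbers =
        { 11001, 10053, 10054, 10060, 40197, 40501, 40613, 4060, 233, 64, 20 };

    public static bool IsTransient(SqlException ex)
    {
        return ex.Errors.Cast<SqlError>()
                 .Any(error => TransientErrorNumbers.Contains(error.Number));
    }
}

A retry loop can then catch SqlException, call IsTransient, and only retry when it returns true.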
If the application persistently fails to connect to Azure SQL Database, it usually indicates a firewall configuration issue: either the Azure SQL Database firewall or a client-side firewall is blocking connections to Azure SQL Database.
The SqlAzureExecutionStrategy (Entity Framework 6's built-in retry strategy) retries immediately the first time a transient failure occurs, then waits longer between each subsequent retry until either the maximum retry count is exceeded or the maximum delay is reached.
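If you go that route, enabling the strategy is typically a one-off code-based configuration class in Entity Framework 6 (a minimal sketch; the retry count and delay below are arbitrary example values):

using System;
using System.Data.Entity;
using System.Data.Entity.SqlServer;

// EF6 discovers this code-based configuration automatically when it lives
// in the same assembly as your DbContext.
public class AzureSqlConfiguration : DbConfiguration
{
    public AzureSqlConfiguration()
    {
        // Retry up to 4 times, with at most 30 seconds between retries
        // (both values are arbitrary examples, not recommendations).
        SetExecutionStrategy("System.Data.SqlClient",
            () => new SqlAzureExecutionStrategy(4, TimeSpan.FromSeconds(30)));
    }
}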
I think retries are going to be part of your Windows Azure SQL Database operations in general.
Rather than implementing a custom solution, have you looked at the Transient Fault Handling Application Block published by Microsoft patterns & practices, which has support specifically for SQL Database?
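In case it helps, using the block generally comes down to building a retry policy with the SQL Database error detection strategy and pushing your ADO.NET work through it. The sketch below assumes the Enterprise Library 6 ("Topaz") packages; exact namespaces differ between versions of the block, and the back-off values are arbitrary examples.

using System;
using System.Data;
using System.Data.SqlClient;
using Microsoft.Practices.EnterpriseLibrary.TransientFaultHandling;

static class ReliableReads
{
    public static DataSet FillWithRetry(string connectionString, SqlCommand command, string tableName)
    {
        // Exponential back-off: up to 5 attempts, 1 to 30 seconds apart.
        var strategy = new ExponentialBackoff(5,
            TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(2));

        // The detection strategy decides which SqlExceptions count as transient.
        var retryPolicy = new RetryPolicy<SqlDatabaseTransientErrorDetectionStrategy>(strategy);

        DataSet result = null;
        retryPolicy.ExecuteAction(() =>
        {
            var ds = new DataSet();
            using (var connection = new SqlConnection(connectionString))
            {
                command.Connection = connection;
                connection.Open();
                using (var adapter = new SqlDataAdapter(command))
                {
                    adapter.Fill(ds, tableName);
                }
            }
            result = ds;
        });
        return result;
    }
}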
Connection failures in SQL Azure are common. Your application keeps a connection pool, but while your side thinks those connections are still alive, Azure may have terminated them at its end without your application ever knowing.
They do this for valid reasons, such as when a particular instance has become overloaded and connections are being moved to another one. With in-house SQL Servers you generally never see this problem because the servers are always available and dedicated to your use.
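A mitigation some people layer on top of retries is to flush the connection pool when an open attempt fails, so the next attempt gets a fresh physical connection instead of a stale pooled one (a hedged sketch; whether this is necessary depends on your ADO.NET version and workload):

using System.Data.SqlClient;

static class PooledConnectionHelper
{
    public static SqlConnection OpenFresh(string connectionString)
    {
        var connection = new SqlConnection(connectionString);
        try
        {
            connection.Open();
        }
        catch (SqlException)
        {
            // If the failure came from a stale pooled connection that Azure
            // already terminated, clearing the pool forces the next attempt
            // to establish a brand-new physical connection.
            SqlConnection.ClearPool(connection);
            throw; // let the caller's retry logic decide whether to try again
        }
        return connection;
    }
}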
As an example, I get about 5 connection failures with SQL Azure on about 100,000 database queries in a day.
It's going to happen with SQL Azure. If you are using ADO.NET, then David's suggestion of the Transient Fault Handling Application Block is the way to go.
If you are going to use Entity Framework, there is good news and bad news: Transient Fault Handling with SQL Azure using Entity Framework