I am doing some fairly intensive database work, and end up inserting many, many records into the database. In an effort to minimize context bloat, I do these inserts 100 at a time, dispose of the context, and recreate the context.
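For illustration, the batching pattern described above looks roughly like this. Note this is a sketch, not the actual code: `MyDbContext`, `Records`, and the integer "records" are stand-ins so the shape compiles on its own; in the real project they would be the EF6 `DbContext`, a `DbSet`, and the entities being inserted.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

var records = Enumerable.Range(1, 250).ToList(); // stand-in for the real entities
const int batchSize = 100;

for (int i = 0; i < records.Count; i += batchSize)
{
    // Dispose and recreate the context on every batch so the change
    // tracker never holds more than one batch of tracked entities.
    using (var context = new MyDbContext())
    {
        foreach (var record in records.Skip(i).Take(batchSize))
        {
            context.Records.Add(record);
        }
        context.SaveChanges();
    }
}

// Stand-in for the EF6 DbContext; the real one opens a SQL connection,
// tracks changes, and flushes the inserts in SaveChanges().
class MyDbContext : IDisposable
{
    public List<int> Records { get; } = new List<int>();
    public void SaveChanges() { /* EF6 would issue the INSERTs here */ }
    public void Dispose() { /* EF6 would release the connection here */ }
}
```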
I am getting some odd errors that I don't understand. These errors only occur on our production server; everything works fine on the development server. If anyone could shed some light on what the problem might be, I would sure appreciate it.
System.Data.Entity.Core.EntityCommandExecutionException: An error occurred while executing the command definition. See the inner exception for details.
System.Data.SqlClient.SqlException: A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The semaphore timeout period has expired.)
System.ComponentModel.Win32Exception: The semaphore timeout period has expired
System.Data.SqlClient.SqlException (0x80131904): A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The specified network name is no longer available.)
System.ComponentModel.Win32Exception (0x80004005): The specified network name is no longer available
System.Data.Entity.Infrastructure.CommitFailedException: An error was reported while committing a database transaction but it could not be determined whether the transaction succeeded or failed on the database server. See the inner exception and http://go.microsoft.com/fwlink/?LinkId=313468 for more information. > System.Data.SqlClient.SqlException: A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The specified network name is no longer available.)
System.ComponentModel.Win32Exception: The specified network name is no longer available
System.Data.Entity.Core.EntityException: An exception has been raised that is likely due to a transient failure. If you are connecting to a SQL Azure database consider using SqlAzureExecutionStrategy.
System.Data.Entity.Core.EntityCommandExecutionException: An error occurred while executing the command definition. See the inner exception for details.
System.Data.SqlClient.SqlException: A transport-level error has occurred when sending the request to the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)
System.ComponentModel.Win32Exception: An existing connection was forcibly closed by the remote host
These errors occur at completely random points in the process (I have a counter that tells me where). Looking these errors up, it appears they are network errors. I don't have access to my database server logs, so I can't look there. My webserver logs don't shed any light on what is happening. Any help would be fantastic.
EDIT:
I am not running Azure.
I am also getting plenty of primary key violation errors too:
System.Data.Entity.Infrastructure.DbUpdateException: An error occurred while updating the entries. See the inner exception for details.
System.Data.Entity.Core.UpdateException: An error occurred while updating the entries. See the inner exception for details.
System.Data.SqlClient.SqlException: Violation of PRIMARY KEY constraint 'PK_dbo.MissileDataReferences'. Cannot insert duplicate key in object 'dbo.MissileDataReferences'. The duplicate key value is (4277, 2, 448388).
Annoying, aren't they? Yes, you'll get these on any high-volume site.
This seems to be why Microsoft introduced the retry policy for SQL Azure in the first place. You'd hope that even they could guarantee a connection between their web site and their database in the same data centre on the same network, but they can't.
You can switch this on for Azure (you don't say whether you are using Azure, but I suspect not). See https://docs.microsoft.com/en-us/azure/sql-database/sql-database-connectivity-issues for the connection string changes required.
https://msdn.microsoft.com/en-us/library/dn456835(v=vs.113).aspx also covers EF6 on Azure.
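In EF6 the retry behaviour is enabled in code rather than via the connection string. A minimal sketch (the class name is arbitrary; EF6 discovers any `DbConfiguration` subclass located in the same assembly as your `DbContext`):

```csharp
using System;
using System.Data.Entity;
using System.Data.Entity.SqlServer;

// Code-based EF6 configuration enabling the built-in retrying execution
// strategy. EF6 picks this up automatically if it lives in the same
// assembly as the DbContext.
public class RetryConfiguration : DbConfiguration
{
    public RetryConfiguration()
    {
        // Retry transient failures up to 4 times, with at most 30 seconds
        // of delay between attempts.
        SetExecutionStrategy(
            "System.Data.SqlClient",
            () => new SqlAzureExecutionStrategy(4, TimeSpan.FromSeconds(30)));
    }
}
```

Despite the name, `SqlAzureExecutionStrategy` can be registered against an on-premises SQL Server as well, though the list of error numbers it treats as transient is tuned for Azure.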
Perhaps it is not the answer you want, but I think you should look at using the Polly library, as it lets you define retry logic explicitly for each command.
Why wouldn't you want to do this for every call, and save yourself coding up retries per command? Well, I've had quite a few problems in the past where retried insert logic ran again a moment later because the response never got back to the client, even though the original insert had succeeded, resulting in a primary key violation.
So only do it for safe 'read-only' calls. If you must retry writes, put handlers in place to detect duplicate inserts, double-check with the server, and ask the user what they want to do.
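A minimal sketch of the per-command approach with Polly, applied only to a read-only call as suggested. `LoadMissileReferences` is a hypothetical stand-in for whatever read-only EF query you are protecting:

```csharp
using System;
using System.Data.SqlClient;
using Polly;

// Exponential back-off retry for transient failures, applied only to a
// safe, read-only call.
var retryPolicy = Policy
    .Handle<SqlException>()
    .Or<TimeoutException>()
    .WaitAndRetry(
        retryCount: 3,
        sleepDurationProvider: attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

var results = retryPolicy.Execute(() => LoadMissileReferences());

// Stand-in for the real read-only EF query this policy would wrap.
int[] LoadMissileReferences() => new[] { 1, 2, 3 };
```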