
Insufficient winsock resources

We have a C# web service and client, both created in Visual Studio 2008 (New Project -> ASP.NET Web Service Application). The service is hosted on Windows Server 2012 R2, IIS 8.5.

When the client sends data to our service, we forward it to a 3rd-party service, save the result to a database, and return it to the client.

The problem is that, on rare occasions when our service is under heavy load (many requests per second), it starts throwing 'Insufficient winsock resources available to complete socket connection initiation'.

We found that our web service opens many TCP connections to the 3rd-party service and leaves them in the TIME_WAIT state. When the number of such connections reaches a high number (around 17,000), the entire server loses the ability to make any new connections. Everything from Remote Desktop to the internet browser stops working. This lasts for a few minutes, and then, once Windows begins releasing those connections, everything resumes normally.
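For reference, this is how we watch the build-up on the server (a diagnostic sketch run in cmd; the 3rd-party address shown is a placeholder):

```cmd
:: Count all connections currently stuck in TIME_WAIT
netstat -an | find /c "TIME_WAIT"

:: List only the TIME_WAIT connections to the 3rd-party service
:: (203.0.113.10 is a placeholder for the real remote address)
netstat -an | findstr "TIME_WAIT" | findstr "203.0.113.10"
```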

For communication with the 3rd-party service, our service uses a single SoapClient instance for its entire lifetime. It is created on initialization and is never closed or destroyed; new instances are never created.

BLIND.BLINDSoapClient client = new BLIND.BLINDSoapClient(base.binding, base.address);

When sending data to the 3rd-party service we simply call its web method and leave it at that, without closing, disposing, or doing any other clean-up:

BLIND.Answer answer = client.Search(...);
// ...save the result to the database
return answer;

Is there anything we can do to avoid this build-up of TIME_WAIT connections?

Is there a better way to manage SoapClient instances? Should we open a new client for every request and close it manually?
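On the per-request idea, the usual WCF pattern (a sketch only, assuming the same generated BLIND.BLINDSoapClient proxy and the existing binding/address fields) is to create a client per call, Close() it on success, and fall back to Abort() on failure, because a faulted channel cannot be closed cleanly:

```csharp
// Hypothetical per-request helper illustrating the standard WCF close/abort pattern.
// 'binding' and 'address' are assumed to be the same fields the service already holds.
public BLIND.Answer SearchOnce(string query)
{
    BLIND.BLINDSoapClient client = new BLIND.BLINDSoapClient(binding, address);
    try
    {
        BLIND.Answer answer = client.Search(query);
        client.Close();   // graceful close of the channel
        return answer;
    }
    catch
    {
        client.Abort();   // a faulted channel must be aborted, not closed
        throw;
    }
}
```

Note that with BasicHttpBinding the underlying HTTP connections are pooled by ServicePointManager, so the proxy's lifetime is not the only factor; a TIME_WAIT build-up usually means the TCP connections themselves are being torn down per request (for example, keep-alive being disabled somewhere along the path) rather than reused.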

If it is relevant, here is how our binding is set up:

      binding = new BasicHttpBinding();
      binding.Name = "SLTDSoap";
      binding.CloseTimeout = TimeSpan.FromSeconds(Timeout);
      binding.OpenTimeout = TimeSpan.FromSeconds(Timeout);
      binding.ReceiveTimeout = TimeSpan.FromSeconds(Timeout);
      binding.SendTimeout = TimeSpan.FromSeconds(Timeout);
      binding.AllowCookies = false;
      binding.BypassProxyOnLocal = false;
      binding.HostNameComparisonMode = HostNameComparisonMode.StrongWildcard;
      binding.MaxBufferSize = 65536;
      binding.MaxBufferPoolSize = 524288;
      binding.MessageEncoding = WSMessageEncoding.Text;
      binding.TextEncoding = System.Text.Encoding.UTF8;
      binding.TransferMode = TransferMode.Buffered;
      binding.UseDefaultWebProxy = true;

      binding.ReaderQuotas.MaxDepth = 32;
      binding.ReaderQuotas.MaxStringContentLength = 8192;
      binding.ReaderQuotas.MaxArrayLength = 16384;
      binding.ReaderQuotas.MaxBytesPerRead = 4096;
      binding.ReaderQuotas.MaxNameTableCharCount = 16384;

      binding.Security.Mode = (_url.StartsWith("https:")) ? BasicHttpSecurityMode.Transport : BasicHttpSecurityMode.None;
      binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.None;
      binding.Security.Transport.ProxyCredentialType = HttpProxyCredentialType.None;
      binding.Security.Transport.Realm = "";
      binding.Security.Message.ClientCredentialType = BasicHttpMessageCredentialType.UserName;
      binding.Security.Message.AlgorithmSuite = System.ServiceModel.Security.SecurityAlgorithmSuite.Default;

      System.Net.ServicePointManager.DefaultConnectionLimit = 500;

Thank you!

asked Oct 01 '22 by dbrckovi
1 Answer

I think we might have solved the 'insufficient winsock resources' problem.

We set the following registry values:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\MaxUserPort = 60000
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\TcpTimedWaitDelay = 30
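For anyone applying the same fix, the values can be set from an elevated command prompt (a sketch; a reboot is typically required before the TCP/IP parameters take effect):

```cmd
:: Run elevated; reboot afterwards so the Tcpip parameters are picked up
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v MaxUserPort /t REG_DWORD /d 60000 /f
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpTimedWaitDelay /t REG_DWORD /d 30 /f
```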

Our maximum expected load on the production environment at peak times is 150 requests per second. This means we create at most 4500 connections in the 30 seconds before Windows starts releasing them. That is well below 60000, which should ensure the problem doesn't happen again.
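The head-room arithmetic above can be sanity-checked in a few lines (the names are just labels for the figures quoted in the answer):

```python
# Worst-case number of ports tied up in TIME_WAIT under the new settings
requests_per_second = 150    # peak production load
tcp_timed_wait_delay = 30    # seconds a closed socket lingers in TIME_WAIT
max_user_port = 60000        # upper bound of the ephemeral port range

in_flight = requests_per_second * tcp_timed_wait_delay
print(in_flight)                   # ports in TIME_WAIT at steady state
print(in_flight < max_user_port)   # comfortably below the port limit
```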

We left the system running at 150 requests per second with these settings for 3 days, and the problem did not recur.

answered Oct 03 '22 by dbrckovi