We have an async socket server written in C#, running on Windows Web Server 2008.
It works flawlessly until it stops accepting new connections for no apparent reason.
We have about 200 concurrent connections on average, and we keep counts of both connections created and connections dropped. These counts can reach as high as 10,000 or as low as 1,000 before the server just stops. It can run for up to around eight hours before stopping, or for as little as half an hour; at the moment it lasts about an hour before another application we run brings it back up automatically when it can't connect (not exactly ideal).
It doesn't appear that we're running out of sockets, as we're closing them properly. We're also logging all errors, and nothing is logged immediately before it stops.
We can't figure this out. Does anyone have any ideas about what might be going on?
I can paste code, but it's generally just the same old async BeginAccept/Send code you see everywhere.
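For reference, the "same old" pattern usually looks something like the sketch below (names and structure are illustrative, not your actual code). One classic way such a server silently stops accepting is when the accept callback throws, or returns early, before posting the next `BeginAccept`, so no further accepts are ever queued:

```csharp
// Hypothetical sketch of a typical async accept loop, assuming a single
// listening socket. The key point: BeginAccept must be re-posted on EVERY
// path through the callback, including error paths.
using System;
using System.Net;
using System.Net.Sockets;

class AcceptLoopSketch
{
    static Socket listener;

    static void Main()
    {
        listener = new Socket(AddressFamily.InterNetwork,
                              SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Loopback, 0));
        listener.Listen(100); // backlog
        listener.BeginAccept(OnAccept, null);
        Console.WriteLine("listening on " + listener.LocalEndPoint);
    }

    static void OnAccept(IAsyncResult ar)
    {
        try
        {
            Socket client = listener.EndAccept(ar); // can throw
            // ... hand the client socket off to the receive/send pipeline ...
        }
        catch (SocketException)
        {
            // log it; do NOT let the exception skip the re-arm below
        }
        finally
        {
            // Re-post the accept unconditionally. If this call only happens
            // on the success path, one failed accept stops the server cold.
            listener.BeginAccept(OnAccept, null);
        }
    }
}
```

If your re-arm call sits inside the `try` block rather than a `finally`, a single transient `SocketException` would produce exactly the symptom you describe: no error at the moment it stops, just no further accepts.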
Who initiates the active close, the client or the server? If it's the server, then you may be accumulating sockets in the TIME_WAIT state on the server, and this may prevent you from accepting new connections. This is more likely if client connections can be short-lived and you go through periods when lots of short-lived client connections occur.
Oh, and if you ARE accumulating sockets in TIME_WAIT, please don't just assume that shortening the machine-wide TIME_WAIT period is the best or only solution.
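A quick way to check for this from code (equivalent to eyeballing `netstat -an` for TIME_WAIT rows) is the `IPGlobalProperties` API in `System.Net.NetworkInformation`. Class and variable names below are illustrative; run this, or netstat, at the moment the server stops accepting and see whether the count is in the thousands:

```csharp
// Diagnostic sketch: count local TCP connections currently in TIME_WAIT.
using System;
using System.Linq;
using System.Net.NetworkInformation;

class TimeWaitCheck
{
    static void Main()
    {
        // Snapshot of all TCP connections on this machine.
        var conns = IPGlobalProperties.GetIPGlobalProperties()
                                      .GetActiveTcpConnections();
        int timeWait = conns.Count(c => c.State == TcpState.TimeWait);
        Console.WriteLine("TIME_WAIT sockets: " + timeWait);
    }
}
```

If the count climbs steadily while the server runs, the fix is usually to change which side closes first (let the client initiate the close where the protocol allows it), not to shrink the TIME_WAIT interval machine-wide.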