I have a problem with System.Net.Sockets.TcpClient.
A simple test app just opens a connection, sends some data, and closes. There's a simple server on the other end, whose performance is just fine.
The code looked something like this:
var client = new TcpClient("localhost", 1234); // connects inside the constructor
using (var stream = client.GetStream())
using (var writer = new StreamWriter(stream))
{
    writer.Write("foo");
    writer.Flush();
}
client.Close();
It works just fine, but I noticed that the unit test was taking > 1000ms to run. When I ran it in a loop 10 times, it took > 10,000ms.
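Timing it looked roughly like this (a sketch with a Stopwatch wrapped around the same loop; the port and payload are the ones from the example above):
using System;
using System.Diagnostics;
using System.IO;
using System.Net.Sockets;

var sw = Stopwatch.StartNew();
for (int i = 0; i < 10; i++)
{
    var client = new TcpClient("localhost", 1234); // each pass pays the full connect cost
    using (var stream = client.GetStream())
    using (var writer = new StreamWriter(stream))
    {
        writer.Write("foo");
        writer.Flush();
    }
    client.Close();
}
Console.WriteLine("10 passes: " + sw.ElapsedMilliseconds + "ms");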
After hours of debugging with timings on both the client and the server, I found where it was slow.
The fix was to change the code from this:
var client = new TcpClient("localhost", 1234);
to this:
var client = new TcpClient();
client.Connect("localhost", 1234);
That made all the difference. One pass takes about 10ms now, and 10 passes is a little less than 100ms.
WHY???
From http://msdn.microsoft.com/en-us/library/115ytk56(v=vs.110).aspx:
"If IPv6 is enabled and the TcpClient(String, Int32) method is called to connect to a host that resolves to both IPv6 and IPv4 addresses, the connection to the IPv6 address will be attempted first before the IPv4 address. This may have the effect of delaying the time to establish the connection if the host is not listening on the IPv6 address."
I'm not sure why the parameterless constructor followed by Connect doesn't do the same thing (I would have expected to need the constructor that takes an AddressFamily and to specify IPv4 explicitly before connecting), but apparently it doesn't.
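For what it's worth, forcing IPv4 explicitly also sidesteps the IPv6 attempt; a minimal sketch, assuming the server really is only listening on an IPv4 address:
using System.Net.Sockets;

var client = new TcpClient(AddressFamily.InterNetwork); // restrict the socket to IPv4
client.Connect("localhost", 1234);                      // only IPv4 addresses are tried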