I've got a .NET 2.0 client app I've inherited that uses sockets. The server runs on an iSeries. Some computers running the client app experience lag. On a computer experiencing "lag" I've determined that the Socket.Poll method is taking longer.
Here is how (I think) I know:
// Time how long Socket.Poll takes, using DateTime.Now.
MyApp.WriteLogEntry("CS: START check for readable socket");
DateTime start = DateTime.Now;
// Wait up to 500 microseconds for the socket to become readable.
bool readable = ControllerSocket.Poll(500, SelectMode.SelectRead);
DateTime end = DateTime.Now;
MyApp.WriteLogEntry("CS: END check for readable socket");
TimeSpan elapsed = end.Subtract(start);
MyApp.WriteLogEntry("Elapsed TotalMilliseconds = " + elapsed.TotalMilliseconds.ToString());
Log from a computer with no lag:
10.04.22.994427|CS: START check for readable socket
10.04.22.997427|CS: END check for readable socket
10.04.22.997427|Elapsed TotalMilliseconds = 1.0001
Log from a computer with lag:
10.03.30.729816|CS: START check for readable socket
10.03.30.745432|CS: END check for readable socket
10.03.30.745432|Elapsed TotalMilliseconds = 15.6152
Both computers are Windows 7 64-bit. One is a fresh install from the disc (no lag); the other is a corporate image (lag). Both are on gigabit Ethernet.
I've disabled the firewalls on both, and both are running Symantec Endpoint Protection 12, configured identically. I've removed SEP altogether and get the same result.
Why the delay? Registry setting? Ninja Gremlins?
EDIT: switched to the Stopwatch class for timing.
// Same measurement, this time with the high-resolution Stopwatch.
Stopwatch stopwatch = new Stopwatch();
MyApp.WriteLogEntry("CS: START check for readable socket");
stopwatch.Start();
readable = ControllerSocket.Poll(500, SelectMode.SelectRead);
stopwatch.Stop();
MyApp.WriteLogEntry("Elapsed TotalMilliseconds = " + stopwatch.Elapsed.ToString());
MyApp.WriteLogEntry("CS: END check for readable socket");
11.27.30.012079|CS: START check for readable socket
11.27.30.013079|Elapsed TotalMilliseconds = 00:00:00.0000696
11.27.30.013079|CS: END check for readable socket
11.28.30.518912|CS: START check for readable socket
11.28.30.534512|Elapsed TotalMilliseconds = 00:00:00.0148936
11.28.30.534512|CS: END check for readable socket
GOOD READ: http://randomascii.wordpress.com/2013/07/08/windows-timer-resolution-megawatts-wasted/
It is actually the "fast" machine that is misbehaving. Timers in Windows have a resolution that's determined by the clock interrupt rate. A properly configured machine ticks 64 times per second, which makes the accuracy of a timer 15.625 msec. The normal state of the processor between ticks is powered off, stopped on a HLT instruction, during which it of course can't observe time passing.
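That 15.625 msec step is easy to see directly. Here is a minimal sketch (not from the original post) that samples DateTime.Now in a tight loop and prints how far apart consecutive distinct readings are; on a machine ticking at the default 64 Hz the steps cluster around 15.6 ms, while on a machine where some program has raised the timer rate they are closer to 1 ms.

using System;

class ClockStepDemo
{
    static void Main()
    {
        // Record the gaps between consecutive distinct DateTime.Now values.
        // The gap size is the effective resolution of the system clock.
        DateTime last = DateTime.Now;
        for (int observed = 0; observed < 10; )
        {
            DateTime now = DateTime.Now;
            if (now != last)
            {
                Console.WriteLine("Clock stepped by {0:F4} ms",
                    (now - last).TotalMilliseconds);
                last = now;
                observed++;
            }
        }
    }
}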
You can usually find the program that causes the machine to misbehave by running powercfg.exe /energy from an elevated command prompt. This usually pinpoints a media-related program; audio drivers or plugins are often to blame. Google's Chrome is notorious for doing this, even on battery-powered devices, the worst possible thing you could do to battery life.
The resolution suggested by Socket.Poll() is of course greatly overstated; it comes from the underlying select() socket function, which dates back to Unix in the 1980s when sockets were invented. Power consumption was definitely not a concern back then.
This ought not to be a problem; after all, there was nothing to do, so it should not matter how long it took. In general you should not use this method anyway, but instead rely on asynchronous I/O with Socket.BeginSend()/BeginReceive(), which is very efficient. If you are looking for a quick fix, you can do the evil thing as well and reprogram the clock interrupt rate: P/Invoke the timeBeginPeriod() function and ask for 1 msec, then P/Invoke timeEndPeriod() when you don't need it anymore.
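As a rough sketch of that quick fix (the wrapper class and method names here are mine, not part of the answer), the P/Invoke declarations for the winmm.dll functions look like this; raise the rate only while the low-latency polling actually matters and restore it afterwards:

using System.Runtime.InteropServices;

static class TimerPeriod
{
    // timeBeginPeriod/timeEndPeriod raise and restore the system-wide
    // clock interrupt rate; calls must be paired with the same value.
    [DllImport("winmm.dll")]
    private static extern uint timeBeginPeriod(uint uMilliseconds);

    [DllImport("winmm.dll")]
    private static extern uint timeEndPeriod(uint uMilliseconds);

    public static void Raise()   { timeBeginPeriod(1); }  // ask for 1 msec ticks
    public static void Restore() { timeEndPeriod(1); }
}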
You are observing the Windows timer resolution. By default it is about 15 ms; some programs turn it up to 1 ms. It is not always kept at that finer setting, in order to drain the battery less.
Use the Stopwatch class.
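The EDIT in the question already does exactly that; for completeness, here is a small sketch (not from the answer) showing how to confirm that Stopwatch is backed by the high-resolution performance counter rather than the 15.625 ms system clock:

using System;
using System.Diagnostics;

class StopwatchInfo
{
    static void Main()
    {
        // IsHighResolution is true when a high-resolution performance
        // counter is available; Frequency is its tick rate per second.
        Console.WriteLine("IsHighResolution = " + Stopwatch.IsHighResolution);
        Console.WriteLine("Frequency = " + Stopwatch.Frequency + " ticks/sec");
    }
}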