I have a Java application that uses a few MulticastSocket instances to listen to several UDP multicast feeds. Each socket is handled by a dedicated thread.
The thread reads each datagram, parses its content, and logs (via log4j) the packet's sequence ID (a long) and the timestamp at which the datagram was received.
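For reference, each receiver thread is essentially the following (a simplified sketch; the class name, group/port values and the parsing logic are placeholders, not my real code):

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;
import java.nio.ByteBuffer;

import org.apache.log4j.Logger;

// Simplified sketch of one feed handler; group/port and the parser are placeholders.
public class FeedListener implements Runnable {

    private static final Logger log = Logger.getLogger(FeedListener.class);

    private final String group;
    private final int port;

    public FeedListener(String group, int port) {
        this.group = group;
        this.port = port;
    }

    public void run() {
        MulticastSocket socket = null;
        try {
            socket = new MulticastSocket(port);
            socket.joinGroup(InetAddress.getByName(group));
            byte[] buf = new byte[1500]; // one MTU-sized datagram
            while (!Thread.currentThread().isInterrupted()) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);                       // blocks until a datagram arrives
                long receivedAt = System.currentTimeMillis(); // timestamp taken right after receive()
                long sequenceId = parseSequenceId(packet);    // placeholder for the real parsing
                log.info(sequenceId + " " + receivedAt);
            }
        } catch (Exception e) {
            log.error("feed " + group + ":" + port + " failed", e);
        } finally {
            if (socket != null) {
                socket.close();
            }
        }
    }

    // Placeholder: assumes the sequence id is the first 8 bytes of the payload.
    private long parseSequenceId(DatagramPacket packet) {
        return ByteBuffer.wrap(packet.getData(), 0, packet.getLength()).getLong();
    }
}
```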
When I run two instances of the same application on a Windows Server 2008 R2 machine with 2 * 6 cores and compare the two logs, I notice that the reception timing of the packets frequently differs between the two applications.
Most packets are received by the two apps at the same time (to the millisecond), but quite often there is a difference of about 1-7 ms between the reception times of the same packet in the two apps.
I tried allocating more buffers on the NIC and also enlarged the socket receive buffer. In addition, I tried to minimize GC runs; running with -verbose:gc shows that GC pauses and the problematic timing differences do not occur at the same time, which lets me assume the problem is not GC related.
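For completeness, the receive-buffer change is just the standard setReceiveBufferSize call; the sketch below uses an illustrative 4 MB value (not my actual setting) and reads the size back, since the OS may cap the request:

```java
import java.io.IOException;
import java.net.MulticastSocket;

// Illustrative only: request a larger socket receive buffer and check what the OS granted,
// since the requested size is only a hint and may be capped.
public class BufferTuning {
    public static MulticastSocket openWithLargerBuffer(int port) throws IOException {
        MulticastSocket socket = new MulticastSocket(port);
        socket.setReceiveBufferSize(4 * 1024 * 1024);
        System.out.println("granted receive buffer: " + socket.getReceiveBufferSize() + " bytes");
        return socket;
    }
}
```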
No packet drops were observed, and a bandwidth problem is unlikely.
Ideas / Opinions are welcome. Thanks.
By default, the Windows timer interrupt frequency is 100 Hz (one tick per 10 ms). This means the OS cannot guarantee that Java threads will be woken up with any finer precision.
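A quick way to see what granularity the JVM actually observes (just a diagnostic sketch, not a fix) is to sample System.currentTimeMillis() in a tight loop and print the size of each step:

```java
// Diagnostic sketch: print the effective granularity of System.currentTimeMillis().
// With a 100 Hz timer interrupt you would expect steps of roughly 10 ms; if some
// process has raised the timer resolution, the steps shrink accordingly.
public class ClockGranularity {
    public static void main(String[] args) {
        long previous = System.currentTimeMillis();
        int observed = 0;
        while (observed < 20) {           // busy-wait until 20 clock ticks have been seen
            long now = System.currentTimeMillis();
            if (now != previous) {
                System.out.println("step = " + (now - previous) + " ms");
                previous = now;
                observed++;
            }
        }
    }
}
```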
Here's an excerpt from a well-known David Holmes article about timing in Java - it could be your case:
for Windows users, particularly on dual-core or multi-processor systems (and it seems most commonly on x64 AMD systems) if you see erratic timing behaviour either in Java, or other applications (games, multi-media presentations) on your system, then try adding the /usepmtimer switch in your boot.ini file.
PS: I'm by no means an expert in Windows performance optimization. Also, starting with Windows 2008, HPET is supported, but how it relates to the timer interrupt frequency is a mystery to me.