DateTime.Now has an approximate resolution of 10 milliseconds on all NT operating systems; the actual precision is hardware dependent.
DateTime.Today is static readonly, so supposedly it should never change once (statically) instantiated.
First, let's take a look at precision: the DateTime type is basically just a 64-bit integer that counts "ticks". One tick is 100 nanoseconds (or 0.0001 milliseconds) long (MSDN). So DateTime's precision can be as fine as 0.0001 milliseconds.
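For instance, the tick granularity is easy to see with a few standard BCL members (a small sketch, nothing beyond DateTime and TimeSpan):

using System;

class TickDemo
{
    static void Main()
    {
        // 10,000 ticks of 100 ns each make up one millisecond.
        Console.WriteLine(TimeSpan.TicksPerMillisecond);   // 10000
        Console.WriteLine(TimeSpan.TicksPerSecond);        // 10000000

        // DateTime exposes its underlying 64-bit tick count directly.
        DateTime now = DateTime.UtcNow;
        Console.WriteLine(now.Ticks);

        // Adding a single tick gives a DateTime 100 ns later - the smallest
        // representable step.
        Console.WriteLine((now.AddTicks(1) - now).TotalMilliseconds);   // 0.0001
    }
}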
The Compare() method in C# is used to compare two DateTime instances. It returns an integer value: less than 0 if date1 is earlier than date2, 0 if date1 is the same as date2, and greater than 0 if date1 is later than date2.
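A minimal usage sketch (the dates are arbitrary, just for illustration):

using System;

class CompareDemo
{
    static void Main()
    {
        DateTime date1 = new DateTime(2021, 1, 1, 12, 0, 0);
        DateTime date2 = new DateTime(2021, 1, 1, 12, 0, 1);

        int result = DateTime.Compare(date1, date2);

        if (result < 0)
            Console.WriteLine("date1 is earlier than date2");
        else if (result == 0)
            Console.WriteLine("date1 is the same as date2");
        else
            Console.WriteLine("date1 is later than date2");
    }
}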
Curiously, your code works perfectly fine on my quad core under Win7, generating values exactly 2 ms apart almost every time.
So I've done a more thorough test. Here's my example output for Thread.Sleep(1). The code prints the number of ms between consecutive calls to DateTime.UtcNow in a loop:
Each row contains 100 characters, and thus represents 100ms of time on a "clean run". So this screen covers roughly 2 seconds. The longest preemption was 4ms; moreover, there was a period lasting around 1 second when every iteration took exactly 1 ms. That's almost real-time OS quality!¹ :)
So I tried again, with Thread.Sleep(2) this time:
Again, almost perfect results. This time each row is 200ms long, and there's a run almost 3 seconds long where the gap was never anything other than exactly 2ms.
Naturally, the next thing to see is the actual resolution of DateTime.UtcNow on my machine. Here's a run with no sleeping at all; a "." is printed if UtcNow didn't change at all:
Finally, while investigating a strange case of timestamps being 15ms apart on the same machine that produced the above results, I've run into the following curious occurrences:
There is a function in the Windows API called timeBeginPeriod, which applications can use to temporarily increase the timer frequency, so this is presumably what happened here. Detailed documentation of the timer resolution is available via the Hardware Dev Center Archive, specifically Timer-Resolution.docx (a Word file).
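For example, a rough P/Invoke sketch of raising the timer resolution with timeBeginPeriod might look like this (error handling omitted; note the request affects the whole system until it's undone):

using System;
using System.Runtime.InteropServices;
using System.Threading;

class TimerPeriodDemo
{
    // timeBeginPeriod/timeEndPeriod live in winmm.dll. They request a minimum
    // system-wide timer resolution in milliseconds and must be used in pairs.
    [DllImport("winmm.dll")]
    static extern uint timeBeginPeriod(uint uPeriod);

    [DllImport("winmm.dll")]
    static extern uint timeEndPeriod(uint uPeriod);

    static void Main()
    {
        timeBeginPeriod(1);          // ask for 1 ms timer resolution
        try
        {
            Thread.Sleep(1);         // should now sleep close to 1 ms
        }
        finally
        {
            timeEndPeriod(1);        // always undo the request
        }
    }
}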
Conclusions:

- DateTime.UtcNow can have a much higher resolution than 15ms
- Thread.Sleep(1) can sleep for exactly 1ms
- UtcNow grows by exactly 1ms at a time (give or take a rounding error - Reflector shows that there's a division in UtcNow)

Here's the code:
static void Main(string[] args)
{
    Console.BufferWidth = Console.WindowWidth = 100;
    Console.WindowHeight = 20;
    long lastticks = 0;
    while (true)
    {
        // One tick is 100 ns, so 10,000 ticks correspond to 1 ms.
        long diff = DateTime.UtcNow.Ticks - lastticks;
        if (diff == 0)
            Console.Write(".");   // UtcNow hasn't changed since the last iteration
        else
            switch (diff)
            {
                // Gaps of 1, 2 or 3 ms (give or take a tick) get colour-coded digits.
                case 10000: case 10001: case 10002: Console.ForegroundColor = ConsoleColor.Red; Console.Write("1"); break;
                case 20000: case 20001: case 20002: Console.ForegroundColor = ConsoleColor.Green; Console.Write("2"); break;
                case 30000: case 30001: case 30002: Console.ForegroundColor = ConsoleColor.Yellow; Console.Write("3"); break;
                // Anything else is printed as the gap in milliseconds.
                default: Console.Write("[{0:0.###}]", diff / 10000.0); break;
            }
        Console.ForegroundColor = ConsoleColor.Gray;
        lastticks += diff;
    }
}
It turns out there exists an undocumented function which can alter the timer resolution. I haven't investigated the details, but I thought I'd post a link here: NtSetTimerResolution.
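For reference only, a heavily hedged P/Invoke sketch based on the signature commonly quoted for this undocumented ntdll export (treat it as an assumption, since there is no official contract; the resolution is expressed in 100-ns units):

using System;
using System.Runtime.InteropServices;

class NtTimerResolutionDemo
{
    // Undocumented ntdll export; this signature is the one commonly cited in
    // third-party sources, not an official contract. DesiredResolution is in
    // 100-ns units (10000 = 1 ms); CurrentResolution receives the value in effect.
    [DllImport("ntdll.dll")]
    static extern int NtSetTimerResolution(
        uint DesiredResolution,
        [MarshalAs(UnmanagedType.U1)] bool SetResolution,
        out uint CurrentResolution);

    static void Main()
    {
        uint current;
        NtSetTimerResolution(10000, true, out current);   // request 1 ms
        Console.WriteLine("Timer resolution now: {0} (100-ns units)", current);
    }
}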
¹ Of course I made extra certain that the OS was as idle as possible, and there are four fairly powerful CPU cores at its disposal. If I load all four cores to 100% the picture changes completely, with long preemptions everywhere.
The problem with DateTime when dealing with milliseconds isn't due to the DateTime class at all, but rather has to do with CPU ticks and thread time slices. Essentially, when an operation is paused by the scheduler to allow other threads to execute, it must wait a minimum of one time slice before resuming, which is around 15ms on modern Windows OSes. Therefore, any attempt to pause for less than that 15ms will lead to unexpected results.
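You can observe this with a quick sketch like the one below: requesting a 1 ms sleep while the default ~15 ms timer resolution is in effect typically yields much longer measured delays (exact numbers vary by machine and by the current timer resolution):

using System;
using System.Diagnostics;
using System.Threading;

class SleepGranularityDemo
{
    static void Main()
    {
        // With the default ~15 ms timer resolution, a 1 ms sleep request is
        // usually rounded up to the next timer tick, so the measured delay
        // tends to be far longer than 1 ms.
        for (int i = 0; i < 5; i++)
        {
            Stopwatch sw = Stopwatch.StartNew();
            Thread.Sleep(1);
            sw.Stop();
            Console.WriteLine("Requested 1 ms, got {0:0.###} ms", sw.Elapsed.TotalMilliseconds);
        }
    }
}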
If you take a snapshot of the current time before you do anything, you can just add the stopwatch's elapsed time to the time you stored, no?
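Something along these lines, for example (a sketch only; over long runs the derived time can drift away from the system clock, so you may want to re-anchor the snapshot periodically):

using System;
using System.Diagnostics;

class SnapshotClock
{
    // Take one wall-clock snapshot up front, then derive later timestamps by
    // adding the Stopwatch's elapsed time to it: the snapshot anchors the
    // absolute time, the Stopwatch supplies the fine-grained increments.
    static readonly DateTime Start = DateTime.UtcNow;
    static readonly Stopwatch Timer = Stopwatch.StartNew();

    public static DateTime UtcNow
    {
        get { return Start + Timer.Elapsed; }
    }

    static void Main()
    {
        Console.WriteLine("{0:o}", UtcNow);
        Console.WriteLine("{0:o}", UtcNow);   // differs by far less than 15 ms
    }
}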
You should ask yourself if you really need accurate time, or just close enough time plus an increasing integer.
You can do good things by getting now() just after a wait event such as a mutex, select, poll, WaitFor*, etc., and then adding a serial number to that, perhaps in the nanosecond range or wherever there is room.
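A possible sketch of that idea (all names here are made up for illustration):

using System;
using System.Threading;

class SerialTimestamps
{
    static DateTime baseTime = DateTime.UtcNow;
    static long serial;

    // Call this right after a wait event (mutex, WaitHandle, poll, ...) completes,
    // so the base time is as fresh as the scheduler allows.
    public static void ReanchorAfterWait()
    {
        baseTime = DateTime.UtcNow;
        Interlocked.Exchange(ref serial, 0);
    }

    // Hand out strictly increasing timestamps by spending the otherwise-unused
    // 100-ns ticks below the clock's real resolution on a serial number.
    public static DateTime Next()
    {
        long n = Interlocked.Increment(ref serial);
        return baseTime.AddTicks(n);
    }

    static void Main()
    {
        ReanchorAfterWait();
        Console.WriteLine("{0:o}", Next());
        Console.WriteLine("{0:o}", Next());   // always later than the previous call
    }
}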
You can also use the rdtsc machine instruction (some libraries provide an API wrapper for this; I'm not sure about doing this in C# or Java) to get cheap time from the CPU and combine that with time from now(). The problem with rdtsc is that on systems with speed scaling you can never be quite sure what it's going to do. It also wraps around fairly quickly.