
ColdFusion Execution Time Accuracy

Tags:

coldfusion

I found a very old thread from 2004 reporting that the execution times listed in ColdFusion debugging output are only accurate to about 16ms. That is, when you turn debugging output on and look at execution times, you're seeing an estimate rounded to the nearest multiple of 16ms. I can still see this today with ACF10: when refreshing a page, most reported times bounce between multiples of 15-16ms.
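For reference, here is a rough way to observe that granularity yourself (a sketch using getTickCount(), which I assume reads the same underlying clock as the debug output): busy-wait until the millisecond clock advances, then print the size of the jump.

<cfscript>
// Rough sketch: spin until the millisecond clock ticks over, then
// print the size of the jump. On systems showing the behavior above,
// this often lands around 15-16ms.
start = getTickCount();
while (getTickCount() == start) {
    // busy-wait until the clock advances
}
writeOutput("tick size: " & (getTickCount() - start) & " ms");
</cfscript>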

Here are the questions:

  1. Starting at the bottom, when ColdFusion reports 0ms or 16ms, does that always mean somewhere between 0 and 16, but not over 16ms?

  2. When ColdFusion reports 32ms, does this mean somewhere between 17 and 32?

  3. ColdFusion lists everything separately by default rather than as an execution tree where callers include many functions. When determining the execution cost higher up the tree, is it summing the "inaccurate" times of the children, or is it a realistic measure of the actual time all the child processes took to execute?

  4. Can we use cftimers or getTickCount() to actually get accurate times, or are these also estimates?

  5. Sometimes you'll see that 3 functions took 4ms each for a total of 12ms, or even a single call taking 7ms. Why does it sometimes seem "accurate"?

I will now provide some guesses, but I'd like some community support!

  1. Yes

  2. Yes

  3. ColdFusion tracks and reports the total time the parent process took (accurate to the same ~16ms) rather than summing the times of the child processes.

  4. cftimers and getTickCount() are more accurate.

  5. I have no idea?

Asked by J.T. on Feb 03 '14


1 Answer

In Java, you either have System.currentTimeMillis() or System.nanoTime().

I assume getTickCount() merely returns System.currentTimeMillis(); it is also what ColdFusion uses to report execution times in the debugging output. You can find numerous StackOverflow questions complaining about the inaccuracy of System.currentTimeMillis(), because it reports the operating system's clock. On Windows the accuracy can vary quite a bit, up to 50ms by some accounts, and it doesn't account for leap ticks or the like. However, it is fast. Query times seem to come from the JDBC driver, the SQL engine, or some other source, as they are usually accurate.
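As a minimal sketch of what that implies (assuming getTickCount() does wrap System.currentTimeMillis()), timing a block yourself is subject to the same granularity:

<cfscript>
// Minimal sketch: time a block with getTickCount(). Short blocks
// are subject to the same ~16ms clock granularity, so they may
// report 0ms even though work was done.
start = getTickCount();
sleep(5); // stand-in for the work being measured
writeOutput("elapsed: " & (getTickCount() - start) & " ms");
</cfscript>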

As an alternative, if you really want increased accuracy, you can use this:

currentTime = CreateObject("java", "java.lang.System").nanoTime()

That is less performant than currentTimeMillis(), but it is precise down to the nanosecond. You can divide by 1000 to get to microseconds. You'll want to wrap the expression in precisionEvaluate() if you are trying to convert to milliseconds by dividing by 1000000.
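Putting that together, here is a sketch of a complete measurement (the variable names and the sleep() stand-in are mine):

<cfscript>
// Sketch: time a block with nanoTime() and convert to milliseconds.
// precisionEvaluate() keeps the large-number division exact instead
// of losing digits to floating point.
sys = createObject("java", "java.lang.System");
start = sys.nanoTime();
sleep(5); // stand-in for the work being measured
elapsedNs = sys.nanoTime() - start;
writeOutput("elapsed: " & precisionEvaluate(elapsedNs / 1000000) & " ms");
</cfscript>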

Please note that nanoTime() is not accurate to the nanosecond; it is merely precise to the nanosecond. Its accuracy is simply an improvement over that of currentTimeMillis().

Answered by J.T. on Nov 07 '22