Our tool generates performance logs in diagnostic mode; however, we track performance as code execution time (Stopwatch + milliseconds).
Obviously this isn't reliable at all: the testing system's CPU can be used by some random process, the results will be totally different if the tool is configured to run 10 threads rather than 2, and so on.
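For context, what we have today is essentially plain wall-clock timing with Stopwatch, along these lines (a minimal sketch; DoWorkUnderTest and Log are placeholders, not our actual code):

    // Current approach: wall-clock time around the code under test.
    var sw = System.Diagnostics.Stopwatch.StartNew();
    DoWorkUnderTest();   // placeholder for the measured piece of code
    sw.Stop();
    Log($"Elapsed: {sw.ElapsedMilliseconds} ms");   // placeholder logging call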
My question is:
What's the correct way to find out the CPU time for a piece of code (not for the whole process)?
What I mean by CPU Time:
Basically, how many cycles the CPU spent. I assume this will always be the same for the same piece of code on the same computer and not affected by other processes. There might be some fundamental stuff I'm missing here; if so, please enlighten me in the comments or answers.
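(For reference: Windows does expose a per-thread cycle counter via QueryThreadCycleTime, which counts only the cycles charged to the calling thread rather than to other processes. A rough P/Invoke sketch of using it around a piece of code is below; note that even this count is not exactly repeatable between runs, because caches, interrupts and CPU frequency scaling still vary.)

    using System;
    using System.Runtime.InteropServices;

    static class ThreadCycles
    {
        [DllImport("kernel32.dll")]
        private static extern IntPtr GetCurrentThread();

        [DllImport("kernel32.dll", SetLastError = true)]
        private static extern bool QueryThreadCycleTime(IntPtr threadHandle, out ulong cycleTime);

        // Returns the CPU cycles charged to the calling thread while 'action' runs.
        public static ulong Measure(Action action)
        {
            QueryThreadCycleTime(GetCurrentThread(), out ulong before);
            action();
            QueryThreadCycleTime(GetCurrentThread(), out ulong after);
            return after - before;
        }
    }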
P.S. Using a profiler is not possible in our setup
Another update,
Why I'm not going to use a profiler
Because we need to test the code in different environments with different data where we don't have a profiler, an IDE, or anything like that. Hence the code itself should handle it. An extreme option might be using a profiler's DLL, but I don't think this task requires such a complex solution (assuming there is no free and easy-to-implement profiling library out there).
I assume this will always be the same for the same piece of code on the same computer and not affected by other processes
That's just not the way computers work. Code very much is affected by other processes running on the machine. A typical Windows machine has about 1000 active threads; you can see the number in the Performance tab of Taskmgr.exe. The vast majority of them are asleep, waiting for some kind of event signaled by Windows. Nevertheless, if the machine is running code, including yours, that is ready to go and take CPU time, then Windows will give them all a slice of the pie.
Which makes measuring the amount of time taken by your code a pretty arbitrary measurement. The only thing you can estimate is the minimum amount of time taken, which you do by running the test dozens of times; odds are decent that you'll get a sample that wasn't affected by other processes. That will, however, never happen in Real Life, so you'd be wise to take the median value as a realistic perf measurement.
The only truly useful measurement is measuring incremental improvements to your algorithm. Change the code and see how the median time changes because of that.
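A minimal sketch of that repeated-run, take-the-median approach (the helper below is illustrative, not from the original code):

    using System;
    using System.Diagnostics;

    static class MedianTiming
    {
        // Runs 'action' many times and returns the median elapsed milliseconds;
        // the median is far less sensitive to interference from other processes
        // than any single Stopwatch sample.
        public static double MedianMilliseconds(Action action, int runs = 30)
        {
            var samples = new double[runs];
            for (int i = 0; i < runs; i++)
            {
                var sw = Stopwatch.StartNew();
                action();
                sw.Stop();
                samples[i] = sw.Elapsed.TotalMilliseconds;
            }
            Array.Sort(samples);
            return runs % 2 == 1
                ? samples[runs / 2]
                : (samples[runs / 2 - 1] + samples[runs / 2]) / 2.0;
        }
    }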
Basically, how many cycles the CPU spent. I assume this will always be the same for the same piece of code on the same computer and not affected by other processes. There might be some fundamental stuff I'm missing here; if so, please enlighten me in the comments or answers.
CPU time used by a function is a really squishy concept.
If the purpose is not just measurement, but to find code worth optimizing, I think a more useful concept is Percent Of Time On Stack. An easy way to collect that information is to read the function call stack at random wall-clock times (during the interval you care about); the fraction of samples in which a function appears estimates the fraction of time it is responsible for.
A profiler that works on this principle is Zoom.
On the other hand, if the goal is simply to measure, so the user can see if changes have helped or hurt performance, then the CPU environment needs to be controlled, and simple overall time measurement is what I'd recommend.