What I mean is...
get the time, run the code, get the time, compare the time and get the seconds out:
am I doing this right?
DateTime timestamp = DateTime.Now;
//...do the code...
DateTime endstamp = DateTime.Now;
string results = ((endstamp.Ticks - timestamp.Ticks) / 10000000).ToString();
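For what it's worth, the subtraction above can be written with a `TimeSpan`, which avoids the hand-rolled division by ticks-per-second. This is just a cleaner sketch of the same `DateTime` approach, not the recommended technique:

```csharp
using System;

class Program
{
    static void Main()
    {
        DateTime timestamp = DateTime.Now;
        // ...do the code...
        DateTime endstamp = DateTime.Now;

        // Subtracting two DateTimes yields a TimeSpan; TotalSeconds
        // replaces the manual division by 10,000,000 (100 ns ticks per second).
        TimeSpan elapsed = endstamp - timestamp;
        string results = elapsed.TotalSeconds.ToString();
        Console.WriteLine(results);
    }
}
```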
You should use Stopwatch for this, for example:
var sw = Stopwatch.StartNew();
//...do the code...
sw.Stop();
var result = sw.ElapsedTicks; //ticks it took
//or less accurate/for bigger tasks, sw.ElapsedMilliseconds
Edited to include @Brian's improvement from comments.
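If you time blocks like this in several places, the `Stopwatch` pattern composes nicely into a small helper. `Time` below is a hypothetical name for illustration, not a framework method:

```csharp
using System;
using System.Diagnostics;

static class Timing
{
    // Hypothetical helper: runs the given action and returns how long it took.
    public static TimeSpan Time(Action action)
    {
        var sw = Stopwatch.StartNew();
        action();
        sw.Stop();
        return sw.Elapsed;
    }
}

class Program
{
    static void Main()
    {
        TimeSpan elapsed = Timing.Time(() =>
        {
            // ...do the code...
            for (int i = 0; i < 1000; i++) { }
        });
        Console.WriteLine(elapsed.TotalMilliseconds);
    }
}
```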
As many people have noted, the high-precision Stopwatch class is designed for answering the question "how long did this take?" whereas the DateTime class is designed for answering the question "when does Doctor Who start?" Use the right tool for the job.
However, there is more to the problem of correctly measuring elapsed time than simply getting the timer right. You've also got to make sure that you're measuring what you really want to measure. For example, consider:
// start the timer
M();
// stop the timer
// start another timer
M();
// stop the timer
Is there going to be a significant difference between the timings of the two calls? Possibly yes. Remember, the first time a method is called the jitter has to compile it from IL into machine code. That takes time. The first call to a method can be in some cases many times longer than every subsequent call put together.
So which measurement is "right"? The first measurement? The second? An average of them? It depends on what you are trying to optimize for. If you are optimizing for fast startup then you care very very much about the jit time. If you are optimizing for number of identical pages served per second on a warmed-up server then you don't care at all about jit time and should be designing your tests to not measure it. Make sure you are measuring the thing you are actually optimizing for.
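One common way to keep the one-time JIT cost out of a steady-state measurement is to call the method once, untimed, before starting the stopwatch. A sketch, where `M` stands in for whatever code you are measuring:

```csharp
using System;
using System.Diagnostics;

class Program
{
    static void M()
    {
        // ...the code under test...
    }

    static void Main()
    {
        // Warm-up call: pays the JIT compilation cost up front,
        // so it is excluded from the timed loop below.
        M();

        var sw = Stopwatch.StartNew();
        const int iterations = 1000;
        for (int i = 0; i < iterations; i++)
        {
            M();
        }
        sw.Stop();

        // Average per-call time in the warmed-up, steady state.
        Console.WriteLine(sw.Elapsed.TotalMilliseconds / iterations);
    }
}
```

If instead you are optimizing for startup, time the very first call by itself and skip the warm-up.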
No. Use the System.Diagnostics.Stopwatch class instead. DateTime.Now doesn't have the level of precision that you desire (although the DateTime struct is plenty precise, in and of itself).
Stopwatch watch = new Stopwatch();
watch.Start();
// do stuff
watch.Stop();
long ticks = watch.ElapsedTicks;
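One caveat worth knowing: `Stopwatch.ElapsedTicks` counts ticks of the high-resolution timer whose rate is `Stopwatch.Frequency`, which is not necessarily the 100-nanosecond tick used by `TimeSpan` and `DateTime`. To get wall-clock units, divide by the frequency or just read the `Elapsed` property:

```csharp
using System;
using System.Diagnostics;

class Program
{
    static void Main()
    {
        Stopwatch watch = new Stopwatch();
        watch.Start();
        // do stuff
        watch.Stop();

        // ElapsedTicks is in units of Stopwatch.Frequency (ticks per second),
        // which can differ from TimeSpan's fixed 100 ns tick.
        double seconds = (double)watch.ElapsedTicks / Stopwatch.Frequency;

        // Elapsed performs the same conversion for you.
        double secondsToo = watch.Elapsed.TotalSeconds;

        Console.WriteLine(seconds);
        Console.WriteLine(secondsToo);
    }
}
```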