What is the best method to measure the execution time of an Android code snippet?
I have a section of code before and after which I want to place timestamps to find out its execution time (e.g. one in the `onCreate()` method and another in the `onDestroy()` method of an activity).
I have tried `Time.toMillis(false)`, but it only returns whole seconds (with a constant `000` at the end). I also tried two Java functions: `System.currentTimeMillis()` and `System.nanoTime()`. The first one returns milliseconds since the epoch; the second doesn't.
What would be the best way to measure execution time and get good precision?
1) Create a loop around what needs to be measured that executes 10, 100, or 1000 times or more. Measure the execution time to the nearest 10 ms, then divide that time by the number of times the loop executed. If the loop executed 1000 times using a 10 ms clock, you obtain a resolution of 10 µs for the loop body.
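A minimal sketch of that idea in Kotlin (the workload and iteration count are placeholders, not from the answer):

```kotlin
fun main() {
    val iterations = 1_000

    val start = System.nanoTime()
    repeat(iterations) {
        // placeholder workload; replace with the snippet you want to time
        var sum = 0L
        for (i in 1..10_000) sum += i
    }
    val totalNanos = System.nanoTime() - start

    // dividing the total by the iteration count yields a per-iteration
    // time with finer effective resolution than a single measurement
    println("Average: ${totalNanos / iterations} ns per iteration")
}
```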
The `kotlin.time.measureTime(block: () -> Unit)` function accepts a block of code as a lambda expression and calculates the elapsed time while executing it. As opposed to `measureTimeMillis()` and `measureNanoTime()`, this function returns a `kotlin.time.Duration` rather than a raw `Long`.
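For example, a minimal sketch (assuming Kotlin 1.9+, where `kotlin.time.measureTime` is stable; the workload is a placeholder):

```kotlin
import kotlin.time.measureTime

fun main() {
    // measureTime runs the lambda and returns a kotlin.time.Duration
    val duration = measureTime {
        // placeholder workload; replace with the code you want to time
        var sum = 0L
        for (i in 1..1_000_000) sum += i
    }
    println("Took ${duration.inWholeMicroseconds} µs")
}
```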
What about `TimingLogger`?
From the `TimingLogger` documentation:

```java
TimingLogger timings = new TimingLogger(YOUR_TAG, "methodA");
// ... do some work A ...
timings.addSplit("work A");
// ... do some work B ...
timings.addSplit("work B");
// ... do some work C ...
timings.addSplit("work C");
timings.dumpToLog();
```
and the dump will look like:
```
D/TAG (3459): methodA: begin
D/TAG (3459): methodA: 9 ms, work A
D/TAG (3459): methodA: 1 ms, work B
D/TAG (3459): methodA: 6 ms, work C
D/TAG (3459): methodA: end, 16 ms
```
Do not forget to enable your tag by running:

```
adb shell setprop log.tag.YOUR_TAG VERBOSE
```
> What would be the best way to measure execution time

`System.nanoTime()` is probably a good choice. Jake Wharton uses it in Hugo, for example.
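The usual pattern is to capture `System.nanoTime()` before and after the work and take the difference. A minimal sketch (the workload is a placeholder):

```kotlin
import java.util.concurrent.TimeUnit

fun main() {
    val start = System.nanoTime()

    // placeholder workload; replace with the code you want to time
    var sum = 0L
    for (i in 1..1_000_000) sum += i

    val elapsedNanos = System.nanoTime() - start
    // System.nanoTime() is monotonic, so the difference stays valid
    // even if the wall clock changes while the code runs
    println("Took ${TimeUnit.NANOSECONDS.toMillis(elapsedNanos)} ms")
}
```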
> and get good precision

This is not strictly possible, as anything can happen on the device while your method is executing. Those external factors will affect your time measurements by stealing away CPU time, tying up I/O channels, and so on. You need to average your tests across several runs to smooth out those external factors, and accuracy/precision will suffer as a result.
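One hedged way to do that averaging: run the workload several times after a short warm-up and report the median, which is less distorted than the mean by a single interrupted run (the run counts and workload here are arbitrary illustrations, not from the answer):

```kotlin
fun main() {
    val warmupRuns = 5      // arbitrary: let JIT compilation and caches settle
    val measuredRuns = 21   // arbitrary: odd count makes the median a real sample

    fun workload() {
        // placeholder; replace with the code you want to time
        var sum = 0L
        for (i in 1..1_000_000) sum += i
    }

    repeat(warmupRuns) { workload() }

    val timesNanos = (1..measuredRuns).map {
        val start = System.nanoTime()
        workload()
        System.nanoTime() - start
    }.sorted()

    // the median is less affected by a run that was interrupted by GC,
    // other apps, etc. than the mean would be
    println("Median: ${timesNanos[timesNanos.size / 2] / 1000} µs")
}
```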
And, as Marcin Orlowski notes, to actually figure out why you are consuming certain amounts of time, use Traceview.