When we talk about performance, benchmarks, and execution time, and we say that implementation A is N percent faster or slower than implementation B, what exactly do we mean?
For example, implementation A took 70 milliseconds and implementation B took 80 milliseconds.
80/70 * 100 - 100 ≈ 14.29
100 - 70/80 * 100 = 12.5
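For concreteness, here is a minimal Python sketch of the two calculations (the variable names are my own):

```python
a_ms = 70.0  # time taken by implementation A
b_ms = 80.0  # time taken by implementation B

# Interpretation 1: how much more time B takes, relative to A's time.
print((b_ms / a_ms - 1) * 100)  # ~14.29

# Interpretation 2: how much less time A takes, relative to B's time.
print((1 - a_ms / b_ms) * 100)  # 12.5
```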
This has always puzzled me. Is there a standard or common approach here?
It's a simple mathematical question: you want to calculate a plain percentage (how many of A fit into B). For example:
I have 10 bananas and you have 5. So I have 200% of your bananas, but you have only 50% of mine.
A is 70/80 of B, so A is 12.5% faster than B.
B is 80/70 of A, so B is ~14% slower than A.
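A tiny Python sketch of the same ratio idea, using the banana numbers from above:

```python
mine, yours = 10, 5

print(f"I have {mine / yours * 100:.0f}% of your bananas")  # 200%
print(f"You have {yours / mine * 100:.0f}% of mine")        # 50%

# Applied to the timings: A's time as a fraction of B's, and vice versa.
a, b = 70, 80
print(f"A is {(1 - a / b) * 100:.1f}% faster than B")  # 12.5% faster
print(f"B is {(b / a - 1) * 100:.1f}% slower than A")  # 14.3% slower
```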
With a little bit of explanation...
Faster Value: 70ms
Slower Value: 80ms
70ms/80ms = 0.875
p = 0.875 * 100
p = 87.5%
70ms is 87.5% of 80ms.
100% represents 80ms.
d = 100% - 87.5%
d = 12.5%
70ms is 12.5% faster than 80ms.
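The whole derivation collapses into two one-liners; a sketch in Python, with hypothetical helper names:

```python
def percent_faster(fast: float, slow: float) -> float:
    """How much faster `fast` is than `slow`, as a percentage of the
    slower value (the convention derived above). Hypothetical helper."""
    return (1 - fast / slow) * 100

def percent_slower(slow: float, fast: float) -> float:
    """How much slower `slow` is than `fast`, relative to the faster value."""
    return (slow / fast - 1) * 100

print(percent_faster(70, 80))  # 12.5
print(percent_slower(80, 70))  # ~14.29
```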
A quick calculator operation for repetitive clerical work:
Compute 70/80*100-100 and just mentally apply absolute value to the result.
70/80*100-100 = -12.5
|-12.5| = 12.5
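The same shortcut in Python, where abs() stands in for the mental absolute-value step:

```python
a, b = 70, 80
print(abs(a / b * 100 - 100))  # 12.5
```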