What do the results from benchmark js mean?

I am using a version of Benchmark JS for node and I can't find any information about how to read the results.

Firstly, is there a place that details all the data you can extract from Benchmark JS?

Secondly, I am currently getting the following result in my console:

Test x 2,276,094 ops/sec ±0.84% (190 runs sampled)

What do all these bits of information mean?

Test: the name of my test, I know that one

x 2,276,094 ops/sec: I am assuming this is the average number of times the code could theoretically run in a second?

±0.84%: No idea

190 runs sampled: The number of times benchmark ran the code to get the result?

McShaman asked Feb 15 '15
1 Answer

Your question probably isn't a duplicate, but amusingly the answer to it is as GolezTroi points out in a comment.

In case that question is deleted (highly unlikely), here's the full text of John-David Dalton's answer:

I wrote Benchmark.js, which jsPerf uses.

  1. "ops/sec" stands for operations per second. That is how many times a test is projected to execute in a second.

  2. A test is repeatedly executed until it reaches the minimum time needed to get a percentage uncertainty for the measurement of less than or equal to 1%. The number of iterations varies with the resolution of the environment's timer and with how many times a test can execute in the minimum run time. We collect completed test runs for 5 seconds (configurable), or at least 5 runs (also configurable), and then perform statistical analysis on the sample. So a test may be repeated 100,000 times in 50 ms (the minimum run time for most environments), and then repeated 100 times more over the 5 seconds. A larger sample size (in this example, 100) leads to a smaller margin of error.

  3. We base the decision of which test is faster on more than just ops/sec by also accounting for margin of error. For example, a test with a lower ops/sec but higher margin of error may be statistically indistinguishable from a test with higher ops/sec and lower margin of error.

    We used a Welch t-test, similar to what SunSpider uses, but switched to an unpaired two-sample t-test for equal variance (the variance is extremely small) because the Welch t-test had problems comparing lower-ops/sec and higher-ops/sec tests with small variances, which caused the degrees of freedom to be computed as less than 1. We also add a 5.5% allowance on tests with similar ops/sec because real-world testing showed that identical tests can swing ~5% from test to re-test. T-tests are used to check that differences between tests are statistically significant.
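To make the relationship between the three numbers concrete, here is a rough Node sketch of how a line like "Test x 2,276,094 ops/sec ±0.84% (190 runs sampled)" can be derived from raw timing samples. This is illustrative only, not Benchmark.js's actual code: the run timings are made up, and 2.776 is simply the 95%-confidence critical t-value for 4 degrees of freedom (5 runs).

```javascript
// Hypothetical per-run timings in seconds, each covering a fixed
// number of iterations of the test body.
const iterationsPerRun = 100000;
const runTimes = [0.0440, 0.0438, 0.0442, 0.0439, 0.0441]; // 5 runs sampled

// ops/sec for each sampled run
const opsPerSec = runTimes.map(t => iterationsPerRun / t);

// Mean ops/sec — the headline "x N ops/sec" number.
const mean = opsPerSec.reduce((a, b) => a + b, 0) / opsPerSec.length;

// Sample variance and standard error of the mean.
const variance =
  opsPerSec.reduce((a, b) => a + (b - mean) ** 2, 0) / (opsPerSec.length - 1);
const sem = Math.sqrt(variance) / Math.sqrt(opsPerSec.length);

// Relative margin of error at 95% confidence — the "±X%" number.
// 2.776 is the critical t-value for 4 degrees of freedom (assumed here;
// Benchmark.js looks this up for the actual sample size).
const tCritical = 2.776;
const rme = (tCritical * sem / mean) * 100;

console.log(
  `Test x ${Math.round(mean).toLocaleString('en-US')} ops/sec ` +
  `\u00B1${rme.toFixed(2)}% (${opsPerSec.length} runs sampled)`
);
```

A larger sample shrinks the standard error, which is why the answer notes that more runs lead to a smaller margin of error.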

4 revs, 2 users 82% answered Oct 19 '22