 

What is the purpose of JMH @Fork?

IIUC, each fork creates a separate virtual machine because each VM instance might end up making slightly different JIT compilation decisions?

I'm also curious about what the time attribute does in the below annotations:

@Warmup(iterations = 10, time = 500, timeUnit = TimeUnit.MILLISECONDS)
@Measurement(iterations = 10, time = 500, timeUnit = TimeUnit.MILLISECONDS)

TIA, Ole

Ole asked Jan 27 '16

People also ask

What is JMH fork?

Java Microbenchmark Harness or JMH is a tool for creating Java microbenchmarks.

What is JMH?

JMH is short for Java Microbenchmark Harness. JMH is a toolkit that helps you implement Java microbenchmarks correctly. JMH is developed by the same people who implement the Java virtual machine, so these guys know what they are doing.

What is JMH operation?

In JMH, an "operation" is an abstract unit of work. See e.g. this sample result:

Benchmark               Mode  Cnt  Score   Error  Units
MyBenchmark.testMethod  avgt    5  5.068 ± 0.586  ns/op

Here, the performance is 5.068 nanoseconds per operation. Nominally, one operation is one @Benchmark invocation.
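To make that concrete, here is a minimal sketch; the class and method names simply mirror the sample output above, and the method body is an arbitrary stand-in workload:

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;

public class MyBenchmark {

    // Each invocation of this method is one "operation"; average-time
    // mode then reports the mean cost of an invocation in ns/op.
    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    public double testMethod() {
        return Math.log(42.0); // stand-in work; returning it keeps the JIT from eliminating it
    }
}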

What is warmup JMH?

With a JMH benchmark you run one or more forks sequentially, and one or more iterations of your benchmark code within each fork. There are two forms of warmup associated with this: at the fork level, the warmups parameter to @Fork specifies how many warmup forks to run before the measured forks; at the iteration level, @Warmup specifies how many unrecorded iterations run within each fork before measurement starts.
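As an illustrative sketch of how the two levels look in code (the class and method names are made up and the loop is a placeholder workload): the warmups attribute of @Fork adds throwaway forks, @Warmup adds unrecorded iterations inside every fork, and the time attribute asked about above sets how long each iteration runs.

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Warmup;

public class WarmupLevelsBenchmark {

    // Fork-level warmup: one whole fork is run and discarded before
    // the two measured forks.
    @Fork(value = 2, warmups = 1)
    // Iteration-level warmup: 10 unrecorded iterations per fork, each
    // lasting 500 ms (the "time"/"timeUnit" attributes), followed by
    // 10 measured iterations of 500 ms each.
    @Warmup(iterations = 10, time = 500, timeUnit = TimeUnit.MILLISECONDS)
    @Measurement(iterations = 10, time = 500, timeUnit = TimeUnit.MILLISECONDS)
    @Benchmark
    public long sumLoop() {
        long sum = 0;
        for (int i = 0; i < 1_000; i++) {
            sum += i;
        }
        return sum; // returned so the loop isn't optimised away entirely
    }
}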


1 Answer

JMH offers the fork functionality for a few reasons. One is compilation profile separation, as discussed by Rafael above. But this behaviour is not controlled by the @Fork annotation (unless you choose 0 forks, which means no subprocesses are forked to run the benchmarks at all). You can choose to run all the benchmarks as part of your benchmark warmup (thus creating a mixed profile for the JIT to work with) by using the warmup mode control (-wm).
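For illustration, here is a minimal runner sketch along those lines; the included class name is hypothetical, and WarmupMode.BULK is the programmatic counterpart of the -wm flag mentioned above:

import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;
import org.openjdk.jmh.runner.options.WarmupMode;

public class ForkOptionsExample {
    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include("MyBenchmark")       // hypothetical benchmark class to run
                .forks(5)                     // each fork is a fresh JVM; forks(0) runs in-process with no profile separation
                .warmupMode(WarmupMode.BULK)  // warm up on all benchmarks first, creating the mixed profile described above
                .build();
        new Runner(opt).run();
    }
}

The default mode (WarmupMode.INDI) warms each benchmark up individually, which keeps compilation profiles separated per benchmark.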

The reality is that many things can conspire to tilt your results one way or another, and running any benchmark multiple times to establish run-to-run variance is an important practice which JMH supports (and which most hand-rolled frameworks don't help with). Reasons for run-to-run variance include (but I'm sure there are more):

  • CPUs start at a certain C-state and scale up their frequency in the face of load, then overheat and scale it back down. You can control this issue on certain OSs.

  • Memory alignment of your process can lead to paging behaviour differences.

  • Background application activity.
  • CPU allocation by the OS will vary, resulting in different sets of CPUs being used for each run.
  • Page cache contents and swapping.
  • JIT compilation is triggered concurrently and may lead to different results (this tends to happen when larger bits of code are under test). Note that small single-threaded benchmarks will typically not have this issue.
  • GC behaviour can trigger with slightly different timings from run to run, leading to different results.

Running your benchmark with at least a few forks will help shake out these differences and give you an idea of the run-to-run variance in your benchmark. I'd recommend you start with the default of 10 forks and cut it back (or increase it) experimentally depending on your benchmark.

Nitsan Wakart answered Sep 20 '22