
Java Performance History [closed]

I wonder if there is some resource on the web where the different versions of the Sun Java VM are compared by speed.

Something like the PyPy speed timeline would be optimal, because I'm interested in how much progress was actually made over time.

Does anyone know of such a project?

asked Jan 11 '11 by Axel Gneiting


1 Answer

I'm not aware of any such project. However, you could do this yourself:

  • Old versions of Java (back to 1.1) are still available for download from this page.
  • The "Computer Language Benchmarks Game" site has Java implementations of 10 benchmarks that you could use.

But be aware that general benchmarks are notorious for not being predictive of how real applications perform. And Java benchmarking has its own particular problems due to factors such as class loading, JIT compilation, and heap sizing and tuning.
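
If you do try this yourself, the usual approach is a small harness that warms the JVM up before timing (so class loading and JIT compilation happen mostly outside the measured runs) and that pins the heap size so GC behaviour is comparable across versions. The following is only a rough sketch of that idea, not something from the answer above: the JvmSpeedProbe class name, the workload() method, and the install paths in the comments are made up for illustration, and you would substitute one of the Benchmarks Game workloads instead.

    // Rough, hypothetical sketch of a do-it-yourself JVM speed comparison.
    // Compile once (use javac -target if you need older class-file versions) and run the
    // same class under each downloaded JVM, fixing the heap so GC behaviour is comparable:
    //   /opt/jdk1.3.1/bin/java -Xms256m -Xmx256m JvmSpeedProbe
    //   /opt/jdk1.6.0/bin/java -Xms256m -Xmx256m JvmSpeedProbe
    public class JvmSpeedProbe {

        // Stand-in workload: enough arithmetic that the JIT has something real to compile.
        static double workload(int n) {
            double sum = 0.0;
            for (int i = 1; i <= n; i++) {
                sum += Math.sqrt(i);
            }
            return sum;
        }

        public static void main(String[] args) {
            int n = 20000000;

            // Warm-up iterations: let class loading and JIT compilation happen before timing.
            for (int i = 0; i < 5; i++) {
                workload(n);
            }

            // Timed iterations; print each run so run-to-run variation is visible.
            // currentTimeMillis() is used instead of nanoTime() so the class also runs on pre-1.5 JVMs.
            for (int i = 0; i < 5; i++) {
                long start = System.currentTimeMillis();
                double result = workload(n);
                long elapsed = System.currentTimeMillis() - start;
                System.out.println("run " + i + ": " + elapsed + " ms (checksum " + result + ")");
            }
            System.out.println("java.version = " + System.getProperty("java.version"));
        }
    }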


Igouy comments: "Benchmarks are notorious for not necessarily being predictive of how real applications perform. Also real applications are not necessarily predictive of how other real applications perform. Someone did tell me that fasta, k-nucleotide, reverse-complement and regex-dna really were what they wrote at work - for them those tiny programs are "real applications"."

I'm not saying benchmarks are never predictive. The problem arises when someone takes a typical benchmark (or set of benchmarks) and then uses it to predict how a specific application (or worse, "all applications") will perform. With this approach, it is a matter of luck whether the benchmarking makes an accurate prediction. This is essentially what the OP is doing.

A better approach might be to pick a benchmark (or write one) that matches what the application does. But even then, the predictive power depends on correctly matching the key performance-related attributes of the benchmark and the real application. And, most importantly, there is no way to know whether you have done this correctly until after you have implemented your application ... by which time it is too late to use the prediction.

Hence, even if your benchmarking turns out to have made an accurate prediction, you cannot objectively determine a priori whether the prediction is likely to be accurate.

Obviously, if the "benchmark" is essentially the real application, you probably can rely on the predictions. But clearly, that's not what the OP is trying to do. And besides, there are still issues (like problem size scaling) that can confound the predictions if you don't take them into account.

answered Sep 22 '22 by Stephen C