 

Double multiplication vs addition speed

Background

I'm an aerospace engineering and EECS student. I'm at the point where I work with a lot of math and physics, but haven't gotten into algorithms or assembly language yet.

I design and code a comically wide variety of programs, from business proposal software to satellite hardware controllers.

Most of this work involves doing math in some other medium, then writing code to implement it.

I algebraically simplify these equations before putting them into code. But before I expend the time to do so, I would like to know whether I should favor more addition operations, or more multiplication operations. (I already know division is much more costly.)
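For instance, evaluating a cubic polynomial naively costs twice as many multiplications as the algebraically equivalent Horner form, so the choice of simplification directly changes the mix of operations. A hypothetical sketch (this is not the question's actual equation, and the class name is my own):

    // Hypothetical illustration: two algebraically equivalent ways to
    // evaluate the cubic a*x^3 + b*x^2 + c*x + d.
    public class PolyForms {
        // Expanded form: 6 multiplications, 3 additions.
        static double expanded(double a, double b, double c, double d, double x) {
            return a * x * x * x + b * x * x + c * x + d;
        }

        // Horner form: 3 multiplications, 3 additions.
        static double horner(double a, double b, double c, double d, double x) {
            return ((a * x + b) * x + c) * x + d;
        }
    }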


Example

(equation image: an expression for B'_x)

This is an equation I've derived from some other work, and it's pretty typical of what I see.

We can plainly see that there are at least a handful of ways to simplify this equation. Since the simplification is at my discretion, I'd like to pick the option that favors performance as much as practical. I'm not going for bleeding-edge performance at the cost of algorithm design time.


The Question

In general, which double operation is faster: addition or multiplication?

I know the only definitive way to find out which is faster is to write and run benchmarks, but that isn't the point here. This is not a high enough priority in what I do to justify writing test code every time I need to simplify an equation. What I need is a rule of thumb to apply to my algebra.

If the difference is so marginal as to border on negligible or inconclusive, that's an acceptable answer, so long as I know it makes nearly no difference.
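(For readers who do want to measure it: JMH, the OpenJDK microbenchmark harness, is the standard tool for this kind of test. A minimal sketch, with class and field names of my own invention:)

    import java.util.concurrent.TimeUnit;
    import org.openjdk.jmh.annotations.*;

    // Minimal JMH sketch comparing double addition and multiplication.
    // Requires the JMH library and annotation processor on the classpath.
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    @State(Scope.Thread)
    public class DoubleOpBenchmark {

        // Non-constant fields so the JIT cannot fold the results away.
        double a = 1.23456789;
        double b = 9.87654321;

        @Benchmark
        public double add() {
            return a + b;  // returned values are consumed by JMH
        }

        @Benchmark
        public double multiply() {
            return a * b;
        }
    }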


Backing Research

I know that, in C and C++, the optimizer takes care of the algebra, so it's a non-issue there. However, as I understand it, the Java compiler does not do algebraic simplification/optimization. Specifically, this answer indicates this to be the case, and that the programmer should do this sort of optimization.
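One way to verify this claim is to inspect the bytecode javac emits: it mirrors the source arithmetic operator for operator, with no algebraic rearrangement. A small sketch (the class name is my own):

    // Save as ExprDemo.java, then run:
    //   javac ExprDemo.java && javap -c ExprDemo
    // The bytecode for value() contains dmul, dmul, dadd -- one
    // arithmetic instruction per source operator, nothing simplified.
    public class ExprDemo {
        static double value(double a, double b, double x) {
            return a * x + b * x;
        }
    }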

There are scattered answers for this across the internet, but I can't come up with a conclusive answer. A former University of Maryland physics student ran these tests on Java, but the double performance data is absent in the tables, and the graph scales make the results indiscernible. This University of Quebec CS professor's tests only reveal results for integer operations. This SO answer explains that, on a hardware level, multiplication is a more complicated operation, but I'm also aware that engineers design processors with these sorts of things in mind.

Other marginally helpful links:

  • SO: Decimal vs Double Speed
  • IBM article on Java bytecode
  • SO: Theory - Addition operation versus multiplication operation
  • SO: (n - Multiplication) vs (n/2 - multiplication + 2 additions) which is better?
asked Oct 29 '22 by drmuelr

1 Answer

In general, you should write the code which is clearest. The JIT compiler (not javac) recognises simple, common patterns and optimises them, which means that using simple, common patterns is often the best way to optimise code.

If you profile your application and find that some code is not running optimally, you can try optimising it yourself. However:

  • meaningful micro benchmarks are difficult to write.
  • the results can be highly sensitive to the environment. Change the Java update or the CPU model and you can get conflicting results.
  • when you performance test the whole application, you are likely to find the delays are not where you expect them to be, e.g. they are often in IO.

Unless you are confident an optimisation really helps, you should stick with the code which is simplest and easiest to maintain, and you are likely to find it runs fast enough.
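One reason to be wary of doing the algebra purely for speed: floating-point arithmetic is not associative or distributive, so two "equivalent" forms can give results that differ in the last bit, and the JIT must preserve the form you wrote rather than rewriting one into the other. A small sketch (the values are chosen arbitrarily):

    // Algebraically equal, but not bit-identical in floating point,
    // so neither javac nor the JIT may rewrite one into the other.
    public class FpForms {
        public static void main(String[] args) {
            double a = 0.1, b = 0.2, x = 0.3;
            double expanded = a * x + b * x;  // two multiplies, one add
            double factored = (a + b) * x;    // one add, one multiply
            System.out.println(expanded == factored);  // false for these values
        }
    }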

answered Nov 16 '22 by Peter Lawrey