 

Double or float - optimization routines

I am reading through code for optimization routines (Nelder Mead, SQP, ...) written in C++ and Python. I notice that values are often converted from double to float, or that methods are duplicated with double and float overloads. Why is this profitable in optimization code, and is the gain significant? In my own C++ code, should I pay attention to the choice between double and float, and why?

Kind regards.

asked Mar 14 '12 by kiriloff

2 Answers

Often the choice between double and float is made more on memory demands than on speed. Modern processors can operate on double quite fast.

Floats may be faster than doubles when using SIMD instructions (such as SSE), which operate on multiple values at a time. Also, if the arithmetic outpaces the memory pipeline, the smaller memory footprint of float will speed things up overall.
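
As a rough sketch of that point (my own illustration, not part of the original answer): a vectorizing compiler can turn a simple loop like the one below into SIMD code, and with 128-bit SSE registers each instruction processes four floats but only two doubles, while the float arrays also consume half the memory bandwidth.

```cpp
#include <cstddef>

// Minimal sketch: the same kernel in float and double.
// With SSE (128-bit registers) an auto-vectorizing compiler can pack
// 4 floats per instruction but only 2 doubles, and the float arrays
// also occupy half the cache and memory bandwidth.
void axpy_f(float a, const float* x, float* y, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        y[i] += a * x[i];   // 4 lanes per SSE instruction
}

void axpy_d(double a, const double* x, double* y, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        y[i] += a * x[i];   // 2 lanes per SSE instruction
}
```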

answered by Mark Ransom

Other situations where I've had to consider the choice between double and float in terms of optimisation include:

  • Networking. Sending double precision data over a socket obviously takes longer than sending the same number of values in single precision, since it is twice as many bytes (see the sketch after this list).
  • Mobile and embedded processors may only be able to handle high-speed single precision calculations efficiently on a coprocessor.
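
A minimal sketch of the networking point (my own illustration, with a hypothetical helper name): narrowing a buffer of doubles to floats before transmission halves the payload, at the cost of precision.

```cpp
#include <vector>

// Hypothetical helper: narrow a buffer of doubles to floats before
// sending it over the wire, halving the number of bytes transmitted.
std::vector<float> narrow_for_send(const std::vector<double>& samples) {
    std::vector<float> out(samples.begin(), samples.end()); // double -> float
    return out;  // sizeof(float) * n bytes instead of sizeof(double) * n
}
```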

As mentioned in another answer, modern desktop processors can handle double precision processing quite fast. However, you have to ask yourself whether double precision is really required. I work with audio, and the only time I can think of where I would need to process double precision data is when using high order filters, where numerical errors can accumulate. Most of the time this can be avoided by paying more careful attention to the algorithm design. There are, of course, other scientific or engineering applications where double precision data is required in order to correctly represent a huge dynamic range.
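
To make the accumulation point concrete (my own illustration, not from the original answer): repeatedly accumulating small values in single precision drifts noticeably from the exact result, while the same computation in double stays very close to it.

```cpp
#include <cstdio>

int main() {
    // Accumulate 10 million small increments; the exact sum is 1e7 * 0.1 = 1e6.
    float  sum_f = 0.0f;
    double sum_d = 0.0;
    for (int i = 0; i < 10000000; ++i) {
        sum_f += 0.1f;   // rounding error accumulates in single precision
        sum_d += 0.1;    // double keeps roughly 15-16 significant digits
    }
    std::printf("float : %.6f\n", sum_f);   // noticeably off from 1000000
    std::printf("double: %.6f\n", sum_d);   // very close to 1000000
    return 0;
}
```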

Even so, how much effort you spend on the choice of data type really depends on your target platform. If the platform can crunch through doubles with negligible overhead and you have memory to spare, there is no need to concern yourself. Profile small sections of test code to find out.
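
A minimal timing sketch along those lines (an assumed setup, not from the original answer): run the same kernel in both precisions on your target machine, compiled with optimizations, and compare.

```cpp
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

// Time a simple kernel in the given precision; T is float or double.
template <typename T>
double time_kernel(std::size_t n) {
    std::vector<T> x(n, T(1.5)), y(n, T(0.5));
    auto start = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < n; ++i)
        y[i] = y[i] * x[i] + T(2.0);
    auto stop = std::chrono::steady_clock::now();
    volatile T sink = y[n / 2];  // keep the result live so the loop is not optimized away
    (void)sink;
    return std::chrono::duration<double>(stop - start).count();
}

int main() {
    const std::size_t n = 50000000;
    std::printf("float : %.4f s\n", time_kernel<float>(n));
    std::printf("double: %.4f s\n", time_kernel<double>(n));
    return 0;
}
```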

answered by learnvst