I am reading through code for optimization routines (Nelder-Mead, SQP...). The languages are C++ and Python. I notice that conversions from double to float are often performed, or that methods are duplicated with double and float argument variants. Why is this done in optimization routine code, and does it make a significant difference? In my own C++ code, should I pay attention to the choice between double and float, and why?
Kind regards.
Often the choice between double and float is made more on space demands than on speed. Modern processors are capable of operating on double quite fast.
Floats may be faster than doubles when using SIMD instructions (such as SSE), which operate on multiple values at a time: a 128-bit register holds four floats but only two doubles. Also, if the arithmetic is faster than the memory pipeline, the smaller memory footprint of float will speed things up overall.
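To make the SIMD and memory points concrete, here is a minimal C++ sketch (my own illustration, not part of the original answer). It assumes an x86 target with 128-bit SSE registers and a compiler that is allowed to auto-vectorize the reductions (e.g. with relaxed floating-point flags); the buffer size and the function names sum_floats/sum_doubles are arbitrary.

```cpp
// Sketch only: the same element count costs half the memory in float, and a
// 128-bit SSE register holds 4 floats but only 2 doubles, so a vectorized
// loop can process twice as many floats per instruction.
#include <cstddef>
#include <cstdio>
#include <vector>

float sum_floats(const std::vector<float>& v) {
    float s = 0.0f;
    for (float x : v) s += x;   // may be vectorized 4-wide with relaxed FP flags
    return s;
}

double sum_doubles(const std::vector<double>& v) {
    double s = 0.0;
    for (double x : v) s += x;  // only 2-wide per 128-bit register
    return s;
}

int main() {
    const std::size_t n = 1u << 20;
    std::vector<float>  a(n, 1.0f);
    std::vector<double> b(n, 1.0);
    std::printf("float buffer : %zu bytes\n", n * sizeof(float));   // 4 MiB
    std::printf("double buffer: %zu bytes\n", n * sizeof(double));  // 8 MiB
    std::printf("sums: %f %f\n", sum_floats(a), sum_doubles(b));
    return 0;
}
```

The half-sized float buffer also fits better in cache, which is often where the real speedup comes from.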
Other situations where I've had to consider the choice between double and float for optimisation purposes include:
As mentioned in another answer, modern desktop processors can handle double precision processing quite fast. However, you have to ask yourself whether double precision processing is really required. I work with audio, and the only time I can think of where I would need to process double precision data is when using high order filters, where numerical errors can accumulate. Most of the time this can be avoided by paying more careful attention to the algorithm design. There are, of course, other scientific or engineering applications where double precision data is required in order to correctly represent a huge dynamic range.
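As a deliberately naive illustration of how single-precision error can accumulate (my own example, not an audio filter): repeatedly adding a small constant to a float accumulator drifts visibly once the running sum grows, while a double accumulator stays close to the exact value.

```cpp
// Sketch only: naive repeated addition. The exact answer is 1,000,000, but
// each float addition is rounded to the nearest representable value and those
// rounding errors accumulate; the double result stays essentially exact.
#include <cstdio>

int main() {
    const int n = 10000000;
    float  sum_f = 0.0f;
    double sum_d = 0.0;
    for (int i = 0; i < n; ++i) {
        sum_f += 0.1f;   // rounding error accumulates in single precision
        sum_d += 0.1;    // double keeps roughly 15-16 significant digits
    }
    std::printf("float : %f\n", sum_f);
    std::printf("double: %f\n", sum_d);
    return 0;
}
```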
Even so, how much effort you should spend on choosing a data type really depends on your target platform. If the platform can crunch through doubles with negligible overhead and you have memory to spare, then there is no need to concern yourself. Profile small sections of test code to find out.
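A minimal timing sketch along those lines (my own; the dot/time_dot names and the problem size are placeholders): instantiate the same templated kernel for float and for double, time both on your target machine with optimizations enabled, and compare.

```cpp
// Sketch only: time the same kernel for float and double rather than guessing.
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

template <typename Real>
Real dot(const std::vector<Real>& a, const std::vector<Real>& b) {
    Real s = Real(0);
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

template <typename Real>
void time_dot(std::size_t n, const char* label) {
    std::vector<Real> a(n, Real(1)), b(n, Real(2));
    auto t0 = std::chrono::steady_clock::now();
    volatile Real result = dot(a, b);   // volatile keeps the call from being optimized away
    auto t1 = std::chrono::steady_clock::now();
    double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
    std::printf("%-6s n=%zu  %8.3f ms  (result %.1f)\n", label, n, ms, double(result));
}

int main() {
    const std::size_t n = 10000000;   // adjust for your machine
    time_dot<float>(n, "float");
    time_dot<double>(n, "double");
    return 0;
}
```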