 

IEEE floating point vs custom float performance

I'm working on a processor without a floating point unit, so I have to use fixed-point or a custom floating-point type for a user interface.

What does the performance of, say, a multiply look like for these three types:

  1. IEEE float (32-bit)
  2. Custom 32-bit float class with a signed 16-bit mantissa and a signed 16-bit exponent
  3. 32-bit fixed point

I want something that will scale to a processor with a floating point unit as well. Will the custom float be competitive performance-wise with an IEEE float? I've heard the performance of IEEE floats is terrible on processors without FPUs. Is that because of all the masking and shifting (AND/OR-ing) needed since the 24-bit mantissa isn't a native size? And if so, will the custom float class avoid that problem?

Any help would be greatly appreciated!

asked Jan 26 '13 by Ryan Brown



1 Answer

Software-emulated IEEE floats/doubles are slow because of the many edge cases one needs to check for and handle properly:

  • +/-infinity in input
  • Not-A-Number in input
  • +/-0 in input
  • normalized vs denormalized number in input and the implicit '1' in the mantissa
  • unpacking and packing
  • normalization/denormalization
  • under- and overflow checks
  • correct rounding, which can lead to extra (de)normalization and/or underflow/overflow

If you roughly count each item on the list as one primitive micro-operation, you get close to 10. The worst case will involve many more.

So, if you're interested in IEEE-compliant floating-point arithmetic, expect every emulated operation to be something like 30x slower than its integer counterpart (CodesInChaos's comment about 38 clocks per addition/multiplication is in line with this).
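To see where those cycles go, here's a minimal sketch of just the unpack step of a software binary32 multiply. The struct and names are illustrative, not from any real soft-float library; note how much masking and shifting happens before any actual arithmetic:

    #include <cstdint>

    // IEEE 754 binary32 layout: 1 sign bit, 8 exponent bits, 23 mantissa bits.
    struct UnpackedFloat {
        uint32_t sign;      // 0 or 1
        int32_t  exponent;  // unbiased
        uint32_t mantissa;  // 24 bits once the implicit leading '1' is restored
    };

    UnpackedFloat unpack(uint32_t bits) {
        UnpackedFloat f;
        f.sign     = bits >> 31;
        f.exponent = (int32_t)((bits >> 23) & 0xFF) - 127;  // remove the bias
        f.mantissa = bits & 0x7FFFFF;                       // 23 stored bits
        if (((bits >> 23) & 0xFF) != 0)
            f.mantissa |= 0x800000;  // normalized number: restore implicit '1'
        // A real implementation must also branch here on the raw exponent:
        //   0xFF -> infinity or NaN, 0x00 -> zero or a denormal (no implicit '1')
        return f;
    }

The multiply itself, the renormalization, the rounding and the repack each add a comparable amount of work on top of this.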

You could cut some corners by choosing a floating-point format with the following properties (see the multiply sketch after the list):

  • just one zero
  • no Not-A-Number
  • normalized numbers only
  • no implicit '1' in the mantissa
  • exponent and mantissa each occupying an integral number of bytes
  • no or primitive rounding
  • possibly, no infinities
  • possibly, 2's complement mantissa
  • possibly, no exponent bias
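For illustration, here's what a multiply might look like in a hypothetical format along those lines: a 16-bit two's-complement mantissa (no implicit '1'), a 16-bit two's-complement exponent (no bias), a single zero, and no NaNs or infinities. This is a sketch under those assumptions, not a standard format; it ignores rounding and exponent overflow, and it assumes arithmetic shifts on signed integers (true on mainstream compilers):

    #include <cstdint>

    // Hypothetical format: value = mantissa * 2^exponent
    struct SimpleFloat {
        int16_t mantissa;  // two's complement, no implicit '1'
        int16_t exponent;  // two's complement, no bias
    };

    SimpleFloat mul(SimpleFloat a, SimpleFloat b) {
        int32_t m = (int32_t)a.mantissa * (int32_t)b.mantissa;  // 16x16 -> 32 bits
        int32_t e = (int32_t)a.exponent + (int32_t)b.exponent;
        if (m == 0)
            return {0, 0};                        // the single zero
        while (m > INT16_MAX || m < INT16_MIN) {  // shift the product back into 16 bits
            m >>= 1; ++e;
        }
        while (m > -0x4000 && m < 0x4000) {       // renormalize: use the full 16 bits
            m <<= 1; --e;
        }
        return { (int16_t)m, (int16_t)e };        // exponent overflow ignored here
    }

Compared to the IEEE path there is no bias removal, no implicit-bit handling and no special-case branching, so the whole multiply is a handful of integer operations.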

Fixed-point arithmetic may turn out to be much more performant. But the usual problem with it is that you have to know all the ranges of the inputs and intermediate results beforehand, so you can choose the right format and avoid overflows. You'll also likely need a number of different fixed-point formats supported, e.g. 16.16, 32.32, 8.24, 0.32. C++ templates may help reduce code duplication here.
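As a sketch of why fixed point is cheap, here's a multiply parameterized on the number of fraction bits, so formats like 16.16 and 8.24 can share one template (the names are made up for this example):

    #include <cstdint>

    template <int FRAC_BITS>
    struct Fixed {
        int32_t raw;  // value represented = raw / 2^FRAC_BITS

        friend Fixed operator*(Fixed a, Fixed b) {
            // One widening multiply plus one shift -- no unpacking,
            // no normalization, no rounding modes.
            return Fixed{ (int32_t)(((int64_t)a.raw * b.raw) >> FRAC_BITS) };
        }
    };

    using Fix16_16 = Fixed<16>;  // 16 integer bits, 16 fraction bits
    using Fix8_24  = Fixed<24>;  // 8 integer bits, 24 fraction bits

    // Example: 2.5 * 4.0 in 16.16
    //   Fix16_16 a{0x28000};   // 2.5
    //   Fix16_16 b{4 << 16};   // 4.0
    //   Fix16_16 c = a * b;    // c.raw == 0xA0000, i.e. 10.0

On a CPU without a widening 32x32->64 multiply the 64-bit intermediate itself costs a few extra instructions, but it is still far cheaper than float emulation.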

In any event, the best you can do is define your problem, solve it with both floating and fixed point arithmetic, observe which of the two is the best for which CPU and choose the winner.

EDIT: For an example of a simpler floating-point format, take a look at the MIL-STD-1750A's 32-bit floating point format:

 MSB                                         LSB MSB          LSB
------------------------------------------------------------------
| S|                   Mantissa                 |    Exponent    |
------------------------------------------------------------------
  0  1                                        23 24            31

Floating point numbers are represented as a fractional mantissa times 2 raised to the power of the exponent. All floating point numbers are assumed normalized or floating point zero at the beginning of a floating point operation, and the results of all floating point operations are normalized (in a normalized floating point number, the sign bit and the most significant mantissa bit have opposite values) or floating point zero.

A floating point zero is defined as 0000 0000₁₆, that is, a zero mantissa and a zero exponent (00₁₆). An extended floating point zero is defined as 0000 0000 0000₁₆, that is, a zero mantissa and a zero exponent.

Some examples of the machine representation for 32-bit floating point numbers:

    Decimal Number          Hexadecimal Notation
    (Mantissa x Exp)        (Mantissa  Exp)

     0.9999998 x 2^127      7FFFFF 7F
     0.5       x 2^127      400000 7F
     0.625     x 2^4        500000 04
     0.5       x 2^1        400000 01
     0.5       x 2^0        400000 00
     0.5       x 2^-1       400000 FF
     0.5       x 2^-128     400000 80
     0.0       x 2^0        000000 00
    -1.0       x 2^0        800000 00
    -0.5000001 x 2^-128     BFFFFF 80
    -0.7500001 x 2^4        9FFFFF 04
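To make the layout concrete, here's a sketch that decodes the 32-bit 1750A word described above into a native double (assuming two's-complement casts and arithmetic right shifts, which hold on mainstream compilers):

    #include <cstdint>
    #include <cmath>

    // Bits 31..8: 24-bit two's-complement fractional mantissa (sign + 23 bits)
    // Bits  7..0: 8-bit two's-complement exponent
    double decode1750A(uint32_t w) {
        int32_t mantissa = (int32_t)(w & 0xFFFFFF00) >> 8;  // sign-extended
        int8_t  exponent = (int8_t)(w & 0xFF);
        // value = (mantissa / 2^23) * 2^exponent
        return std::ldexp((double)mantissa, exponent - 23);
    }

    // Examples from the table above:
    //   decode1750A(0x7FFFFF7F)  ->  0.9999998 * 2^127
    //   decode1750A(0x40000001)  ->  1.0  (0.5 * 2^1)
    //   decode1750A(0x80000000)  -> -1.0  (-1.0 * 2^0)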
answered Sep 19 '22 by Alexey Frunze