 

Why aren't Floating-Point Decimal numbers hardware accelerated like Floating-Point Binary numbers?

Is it worth implementing in hardware? If so, why? If not, why not?


Sorry, I thought it was clear that I am talking about decimal rational numbers! OK, something like decNumber++ for C++ or decimal for .NET... Hope it is clear now :)

Khaled Alshaya asked Sep 18 '09

People also ask

Why are floating point calculations so inaccurate?

Floating-point decimal values generally do not have an exact binary representation. This is a side effect of how the CPU represents floating point data. For this reason, you may experience some loss of precision, and some floating-point operations may produce unexpected results.
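A minimal C sketch of that effect (the exact digits printed depend on the platform, but on an IEEE 754 binary64 double the comparison fails):

```c
#include <stdio.h>

int main(void)
{
    double a = 0.1, b = 0.2, sum = a + b;

    /* 0.1 and 0.2 have no exact binary representation, so their sum
       is not exactly 0.3 either (typical IEEE 754 binary64 behaviour). */
    printf("0.1 + 0.2  = %.17g\n", sum);   /* 0.30000000000000004 */
    printf("0.3        = %.17g\n", 0.3);   /* 0.29999999999999999 */
    printf("sum == 0.3 ? %s\n", sum == 0.3 ? "true" : "false");
    return 0;
}
```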

What is the main problem with floating point numbers?

Since real numbers cannot be represented exactly in a fixed amount of space, the result of a floating-point operation may not be representable at the required precision.
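One concrete way to see this in C (a sketch; it assumes an IEEE 754 binary64 double, which carries roughly 15-17 significant decimal digits):

```c
#include <stdio.h>

int main(void)
{
    double big = 1e16;       /* already near the precision limit of a double */
    double sum = big + 1.0;  /* the true result needs more significand bits
                                than a 64-bit double provides */

    printf("1e16 + 1 == 1e16 ? %s\n", sum == big ? "true" : "false");
    printf("sum = %.17g\n", sum);
    return 0;
}
```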

Why do computers mess up floating point math?

Because JavaScript uses the IEEE 754 standard for math, it uses 64-bit binary floating-point numbers. This causes precision errors in decimal calculations; in short, the computer works in base 2 while decimal notation is base 10.
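The same base-2 vs base-10 mismatch appears in any language that uses IEEE 754 binary64, not just JavaScript. A small C sketch that shows what is actually stored for the literal 0.1 (the hexadecimal float conversion %a is C99):

```c
#include <stdio.h>

int main(void)
{
    double x = 0.1;

    /* The stored value is the nearest binary64 number to 0.1,
       not 0.1 itself. */
    printf("%%a   : %a\n", x);     /* e.g. 0x1.999999999999ap-4 */
    printf("%%.20f: %.20f\n", x);  /* e.g. 0.10000000000000000555 */
    return 0;
}
```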

Why computers Cannot use normal decimal numbers as data representation?

Because binary lends itself naturally to an on/off system where "on" means current flows and "off" means no current. Using 10 different voltage levels would be very error prone. You could, however, use binary numbers to represent decimal digits (this is called BCD).
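BCD simply stores each decimal digit in its own 4-bit group. A minimal C sketch of packing a value into BCD (illustrative only; real BCD support, such as the classic x86 decimal-adjust instructions or IBM packed-decimal formats, is more involved):

```c
#include <stdio.h>
#include <stdint.h>

/* Pack an unsigned value into BCD: one decimal digit per 4-bit nibble.
   Works for values with up to 8 digits in a uint32_t result. */
static uint32_t to_bcd(unsigned value)
{
    uint32_t bcd = 0;
    int shift = 0;

    while (value > 0) {
        bcd |= (uint32_t)(value % 10) << shift;  /* lowest digit -> next nibble */
        value /= 10;
        shift += 4;
    }
    return bcd;
}

int main(void)
{
    /* 1234 becomes 0x1234: each hex digit holds one decimal digit. */
    printf("1234 in BCD: 0x%04X\n", to_bcd(1234));
    return 0;
}
```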

Why are floating points inaccurate in Python?

Floating-point calculations are inaccurate mainly because they approximate rationals that cannot be represented finitely in base 2, and more generally because they approximate numbers that may not be representable in finitely many digits in any base.

How do computers store floating point numbers?

Numbers with a fractional part are stored as floating-point numbers. A computer system stores every floating-point number as a mantissa (significand) and an exponent.
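In C you can pull those two parts back out with frexp (a sketch; frexp returns a normalized fraction in [0.5, 1) and a binary exponent, which is the mantissa/exponent split in slightly different clothing):

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    double x = 6.75;              /* 6.75 = 0.84375 * 2^3 */
    int exponent;
    double mantissa = frexp(x, &exponent);

    /* frexp splits x into mantissa * 2^exponent with 0.5 <= mantissa < 1. */
    printf("x        = %g\n", x);
    printf("mantissa = %g\n", mantissa);
    printf("exponent = %d\n", exponent);
    printf("check    = %g\n", ldexp(mantissa, exponent));  /* reassemble */
    return 0;
}
```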


1 Answer

The latest revision of the standard, IEEE 754:2008, does indeed define decimal floating point formats for hardware, using the representations shown in the software referenced in the question. The previous version of the standard (IEEE 754:1985) did not provide decimal floating point numbers. Most current hardware implements the 1985 standard and not the 2008 standard, but IBM's iSeries computers using Power6 chips have such support, and so do the z10 mainframes.

The standardization effort for decimal floating point was spearheaded by Mike Cowlishaw of IBM UK, who has a web site full of useful information (including the software in the question). It is likely that in due course, other hardware manufacturers will also introduce decimal floating point units on their chips, but I have not heard a statement of direction for when (or whether) Intel might add one. Intel does have optimized software libraries for it.

The C standards committee is also looking to add support for decimal floating point; that work is documented in TR 24732.
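GCC implements the TR 24732 types (_Decimal32, _Decimal64, _Decimal128) in C on some targets, backed by software emulation unless the CPU has decimal units. A minimal sketch, assuming such a compiler (literals take the DF/DD/DL suffixes, and glibc's printf has no conversion for them, hence the cast to double for printing):

```c
#include <stdio.h>

int main(void)
{
    /* Requires a compiler/target with TR 24732 decimal float support,
       e.g. GCC on x86-64 or IBM Power/z. Compile as C, not C++. */
    _Decimal64 a = 0.1DD;
    _Decimal64 b = 0.2DD;
    _Decimal64 sum = a + b;

    /* In decimal floating point 0.1, 0.2 and 0.3 are all exact, so the
       comparison that fails for binary doubles succeeds here. */
    printf("0.1DD + 0.2DD == 0.3DD ? %s\n", sum == 0.3DD ? "true" : "false");

    /* No standard printf conversion for _Decimal64, so print via double. */
    printf("sum (via double) = %.17g\n", (double)sum);
    return 0;
}
```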

Jonathan Leffler answered Oct 16 '22