 

Why aren’t posit arithmetic representations commonly used?

I recently found this library that seems to provide its own types and operations on real numbers that are 2 to 3 orders of magnitude faster than normal floating point arithmetic.

The library is based on using a different representation for real numbers. One that is described to be both more efficient and mathematically accurate than floating point - posit.

If this representation is so efficient why isn’t it widely used in all sorts of applications and implemented in hardware, or maybe it is? As far as I know most typical hardware uses some kind of IEEE floating point representation for real numbers.

Is it somehow maybe only applicable to some very specific AI research, as they seem to list mostly that as an example?

If this representation is not only hundreds to thousands of times faster than floating point, but also more deterministic and designed for use in concurrent systems, why isn't it implemented in GPUs, which are essentially massively parallel calculators working on real numbers? Wouldn't it bring huge advances in rendering performance and GPU compute capability?

Update: People behind the linked Universal library have released a paper about their design and implementation.

janekb04 asked Aug 29 '20 at 20:08

3 Answers

The most objective and convincing reason I know of is that posits were introduced less than 4 years ago. That's not enough time to make inroads in the marketplace (people need time to develop implementations), much less take it over (which, among other things, requires overcoming incompatibilities with existing software).

Whether or not the industry wants to make such a change is a separate issue that tends towards subjectivity.

JaMiT answered Oct 23 '22 at 03:10

The reason the IEEE standard seems slower is that IEEE 754 gives higher priority to some other concerns. For example:

. . .

The IEEE Standard for Floating-Point Arithmetic (IEEE 754) defines:

arithmetic formats: sets of binary and decimal floating-point data, which consist of finite numbers (including signed zeros and subnormal numbers), infinities, and special "not a number" values (NaNs)

interchange formats: encodings (bit strings) that may be used to exchange floating-point data in an efficient and compact form

rounding rules: properties to be satisfied when rounding numbers during arithmetic and conversions

operations: arithmetic and other operations (such as trigonometric functions) on arithmetic formats

exception handling: indications of exceptional conditions (such as division by zero, overflow, etc.)

The above is copied from Wikipedia: https://en.wikipedia.org/wiki/IEEE_754
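To make those standardized special cases concrete, here is a short sketch (using Python's built-in IEEE 754 doubles) of the values that posits deliberately drop: signed zeros, NaN, subnormals, and infinities.

```python
import math

# Signed zeros: compare equal, but the sign is preserved and observable.
pos_zero, neg_zero = 0.0, -0.0
print(pos_zero == neg_zero)            # True
print(math.copysign(1.0, neg_zero))    # -1.0

# NaN: the "not a number" value compares unequal even to itself.
nan = float("nan")
print(nan == nan)                      # False

# Subnormals: gradual underflow keeps tiny values nonzero.
tiny = 5e-324                          # smallest positive subnormal double
print(tiny > 0.0)                      # True

# Infinity: orders above every finite double.
print(math.inf > 1e308)                # True
```

Supporting all of this is part of what the IEEE 754 circuitry and semantics pay for; posits replace the whole lot with a single NaR value and a single zero.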

. . .

Your linked library, which implements the posit number system, advertises the following strengths:

Economical - No bit patterns are redundant. There is one representation for infinity, denoted ±inf, and one representation for zero. All other bit patterns are valid, distinct, non-zero real numbers. ±inf serves as a replacement for NaN.

Mathematically Elegant - There is only one representation for zero, and the encoding is symmetric around 1.0. Associative and distributive laws are supported through deferred rounding via the quire, enabling reproducible linear algebra algorithms in any concurrency environment.

Tapered Accuracy - Tapered accuracy means that values with small exponents have more digits of accuracy and values with large exponents have fewer. This concept was first introduced by Morris (1971) in his paper "Tapered Floating Point: A New Floating-Point Representation".

Parameterized precision and dynamic range - Posits are defined by a size, nbits, and the number of exponent bits, es. This gives system designers the freedom to pick the precision and dynamic range required for the application. For example, for AI applications we may pick 5- or 6-bit posits without any exponent bits to improve performance. For embedded DSP applications, such as 5G base stations, we may select a 16-bit posit with 1 exponent bit to improve performance per Watt.

Simpler Circuitry - There are only two special cases, Not a Real and Zero. No denormalized numbers, overflow, or underflow.

The above is copied from the GitHub README: https://github.com/stillwater-sc/universal
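The reproducibility point about the quire is worth unpacking. Ordinary float addition is not associative, so a parallel reduction that regroups terms can give different answers depending on thread count. A quire defers rounding by accumulating exactly and rounding once at the end; Python's `math.fsum` behaves analogously (exact accumulation, single rounding) and lets us demonstrate both halves of the claim:

```python
import math

# Float addition is not associative: regrouping changes the intermediate
# roundings, so different reduction orders give different results.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))                     # False

# Quire-like accumulation: sum exactly, round once at the end.
# math.fsum's result is independent of the order of the terms.
print(math.fsum([a, b, c]) == math.fsum([c, b, a]))   # True
```

This is why the README can claim reproducible linear algebra "in any concurrency environment": with a wide exact accumulator, the grouping imposed by the parallel schedule no longer affects the rounded result.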

. . .

So, in my opinion, the posit number system favors performance, while the IEEE Standard for Floating-Point Arithmetic (IEEE 754) favors technical compatibility and interchangeability.

paladin answered Oct 23 '22 at 03:10

I strongly challenge the claim that this library is faster than IEEE floating point:

Modern hardware includes circuitry specifically designed to handle IEEE floating point arithmetic. Depending on your CPU model, it can perform roughly 0.5 to 4 floating point operations per clock cycle. Yes, this circuitry does complex things, but because it's built in hardware and aggressively optimized for many years, it achieves this kind of speed.

Any software library that provides a different floating-point format must perform the arithmetic in software. It cannot just say "please multiply these two numbers using double precision arithmetic" and see the result appear in the corresponding register two clock cycles later; it must contain code that takes the four different parts of the posit format, handles them separately, and fuses together a result. And that code takes time to execute, much more than just two clock cycles.
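To see how much bit-level work "handles them separately" implies, here is a minimal, illustrative pure-Python decoder for an 8-bit posit with es = 0. This is a sketch of the sign/regime/exponent/fraction layout for explanation only, not the Universal library's actual code:

```python
def decode_posit8(bits: int, es: int = 0) -> float:
    """Decode an 8-bit posit bit pattern into a Python float (illustrative only)."""
    n = 8
    if bits == 0x00:                        # all zeros encode 0
        return 0.0
    if bits == 0x80:                        # 1 followed by zeros encodes NaR ("Not a Real")
        return float("nan")
    sign = -1.0 if bits & 0x80 else 1.0
    if sign < 0:                            # negative posits are stored in two's complement
        bits = (-bits) & 0xFF
    # Regime: run of identical bits after the sign, terminated by the opposite bit.
    rest = [(bits >> i) & 1 for i in range(n - 2, -1, -1)]
    run = 1
    while run < len(rest) and rest[run] == rest[0]:
        run += 1
    k = run - 1 if rest[0] == 1 else -run
    body = rest[run + 1:]                   # bits after the regime terminator (may be empty)
    e = int("".join(map(str, body[:es])) or "0", 2)
    frac = sum(b * 2.0 ** -(i + 1) for i, b in enumerate(body[es:]))
    return sign * 2.0 ** (k * 2 ** es + e) * (1.0 + frac)

print(decode_posit8(0x40))   # 1.0
print(decode_posit8(0x50))   # 1.5
print(decode_posit8(0x7F))   # 64.0  (maxpos for an 8-bit posit with es = 0)
print(decode_posit8(0xC0))   # -1.0
```

Every posit operation done in software has to run through steps like these (and re-encode on the way out, since the regime length varies per value), whereas an IEEE multiply is a single hardware instruction.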

The "universal" library may have corner cases where its posit number format shines. But speed is not where it can hope to compete.

cmaster - reinstate monica answered Oct 23 '22 at 01:10