Assuming I am really pressed for memory and want a smaller range (similar to short vs. int): shader languages already support half as a floating-point type with half the precision (not just converting back and forth so the value is between -1 and 1, i.e. returning a float like shortComingIn / maxRangeOfShort). Is there an implementation that already exists for a 2-byte float?
I am also interested in any (historical?) reasons why there is no 2-byte float.
It's called half-precision floating point in IEEE lingo, and implementations exist, just not in the C standard primitives (which C++ uses by extension).
Some environments also let you request floating point by precision rather than by size: float(41), for example, defines a floating point type with at least 41 binary digits of precision in the mantissa, so an 8-byte floating point field is allocated for it, which has 53 bits of precision; a smaller request gets a 4-byte floating point field, which has 23 bits of precision. And yes, float itself is usually 4 bytes, but that size is not guaranteed.
Re: Implementations: Someone has apparently written half for C, which would (of course) work in C++: https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/cellperformance-snippets/half.c
Re: Why is float four bytes: Probably because below that, its precision is very limited. In IEEE-754, a "half" only has 11 bits of significand precision, yielding about 3.31 decimal digits of precision (vs. 24 bits in a single, yielding between 6 and 9 decimal digits, or 53 bits in a double, yielding between 15 and 17 decimal digits).
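To see where those digit counts come from: an N-bit binary significand carries roughly N * log10(2) decimal digits. A minimal check (the bit counts are from IEEE-754; the helper function is just for illustration):

#include <cmath>
#include <cstdio>

// Approximate decimal digits carried by an N-bit binary significand:
// digits ≈ N * log10(2)
static double decimal_digits(int significand_bits) {
    return significand_bits * std::log10(2.0);
}

int main() {
    std::printf("half   (11-bit significand): %.2f digits\n", decimal_digits(11)); // ~3.31
    std::printf("single (24-bit significand): %.2f digits\n", decimal_digits(24)); // ~7.22
    std::printf("double (53-bit significand): %.2f digits\n", decimal_digits(53)); // ~15.95
    return 0;
}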
There are currently two common standard 16-bit float formats: IEEE-754 binary16 and Google's bfloat16. Since they're standardized, anyone who knows the spec can write an implementation, and several already exist. Or, if you don't want to use those, you can also design a different 16-bit float format and implement it yourself.
2-byte floats are generally not used for computation, because even float's precision is not enough for many normal operations, and double should be used by default unless you're limited by bandwidth or cache size. Floating-point literals are also double when written without a suffix in C and C-like languages.
However, less-than-32-bit floats do exist. They're mainly used for storage, for example in graphics, where 96 bits per pixel (32 bits per channel × 3 channels) would be far too wasteful, and they're converted to a normal 32-bit float for calculations (except on some special hardware). Various 10-, 11-, and 14-bit float types exist in OpenGL. Many HDR formats use a 16-bit float for each channel, and Direct3D 9.0, as well as some GPUs like the Radeon R300 and R420, had a 24-bit float format. A 24-bit float is also supported by compilers for some 8-bit microcontrollers like PIC, where 32-bit float support is too costly. 8-bit or narrower float types are less useful, but due to their simplicity they're often taught in computer science curricula. Besides, a small float is also used in ARM's instruction encoding for small floating-point immediates.
The IEEE 754-2008 revision officially added a 16-bit float format, a.k.a. binary16 or half-precision, with a 5-bit exponent and an 11-bit significand (10 bits stored explicitly).
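For illustration, here is a hedged sketch of decoding that layout (1 sign bit, 5 exponent bits, 10 stored mantissa bits, exponent bias 15) into a regular float; it is not a drop-in library, and real implementations usually manipulate the float's bit pattern directly instead of going through ldexp:

#include <cmath>
#include <cstdint>
#include <limits>

// Decode an IEEE-754 binary16 bit pattern into a 32-bit float.
float half_to_float(std::uint16_t h) {
    const int sign     = (h >> 15) & 0x1;
    const int exponent = (h >> 10) & 0x1F;
    const int mantissa =  h        & 0x3FF;
    float value;
    if (exponent == 0) {
        // Zero or subnormal: no implicit leading 1, exponent fixed at -14.
        value = std::ldexp(static_cast<float>(mantissa) / 1024.0f, -14);
    } else if (exponent == 31) {
        // All-ones exponent: infinity (mantissa == 0) or NaN (mantissa != 0).
        value = (mantissa == 0) ? std::numeric_limits<float>::infinity()
                                : std::numeric_limits<float>::quiet_NaN();
    } else {
        // Normal number: implicit leading 1, biased exponent (bias = 15).
        value = std::ldexp(1.0f + static_cast<float>(mantissa) / 1024.0f,
                           exponent - 15);
    }
    return sign ? -value : value;
}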
Some compilers had support for IEEE-754 binary16, but mainly for conversion or vectorized operations and not for computation (because they're not precise enough). For example, ARM's toolchain has __fp16, which comes in two variants, IEEE and alternative, depending on whether you want more range or NaN/inf representations. GCC and Clang also support __fp16 along with the standardized name _Float16. See How to enable __fp16 type on gcc for x86_64.
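As a rough sketch of what that looks like in practice, assuming a GCC or Clang build on a target where _Float16 is actually available (the __FLT16_MAX__ check below is a crude feature probe, not an official detection mechanism):

#include <cstdio>

int main() {
#if defined(__FLT16_MAX__)                      // defined by GCC/Clang when _Float16 exists
    _Float16 h = static_cast<_Float16>(0.1f);   // only ~3 decimal digits survive
    float    f = h;                             // widened back to float for printing
    std::printf("0.1 stored as _Float16 reads back as %.8f\n", f);
#else
    std::printf("_Float16 is not supported by this compiler/target\n");
#endif
    return 0;
}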
Recently, due to the rise of AI, another format called bfloat16 (brain floating-point format), which is simply the top 16 bits of an IEEE-754 binary32 value, has become common (a sketch of that truncation follows the list below).
The motivation behind the reduced mantissa comes from Google's experiments, which showed that it is fine to shrink the mantissa as long as it's still possible to represent tiny values close to zero as part of the summation of small differences during training. A smaller mantissa also brings other advantages, such as reducing multiplier power and physical silicon area, since multiplier size grows roughly with the square of the significand width:
- float32: 24² = 576 (100%)
- float16: 11² = 121 (21%)
- bfloat16: 8² = 64 (11%)
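Because bfloat16 is just the upper half of a binary32, conversion is trivial. A minimal sketch (this truncates, i.e. rounds toward zero; production code typically rounds to nearest-even and special-cases NaN):

#include <cstdint>
#include <cstring>

// float -> bfloat16: keep the top 16 bits of the binary32 representation.
std::uint16_t float_to_bfloat16(float f) {
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);   // type-pun safely via memcpy
    return static_cast<std::uint16_t>(bits >> 16);
}

// bfloat16 -> float: place the 16 bits in the upper half, zero-fill the rest.
float bfloat16_to_float(std::uint16_t b) {
    std::uint32_t bits = static_cast<std::uint32_t>(b) << 16;
    float f;
    std::memcpy(&f, &bits, sizeof f);
    return f;
}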
Many compilers like GCC and ICC have now also gained support for bfloat16.
In cases where bfloat16 is not enough, there's also a newer 19-bit type called TensorFloat.
If you're low on memory, did you consider dropping the float concept? Floats use up a lot of bits just for saving where the decimal point is. You can work around this if you know where the decimal point needs to be. Say you want to store a dollar value; you could just store it in cents:
uint16_t cash = 50000; // 50,000 cents = $500.00
std::cout << "Cash: $" << (cash / 100) << "." << ((cash % 100) < 10 ? "0" : "") << (cash % 100) << std::endl;
That is of course only an option if it's possible for you to predetermine the position of the decimal point. But if you can, always prefer it, because this also speeds up all calculations!
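A small sketch of what that buys you (the prices and variable names here are made up for illustration): all arithmetic stays in exact integer math, and the decimal point only reappears when formatting:

#include <cstdint>
#include <cstdio>

int main() {
    // Prices stored as integer cents: additions and multiplications are exact.
    // (Illustrative only; a real money type would also guard against overflow.)
    std::int64_t price    = 1999;                   // $19.99
    std::int64_t shipping =  499;                   // $4.99
    std::int64_t total    = price * 3 + shipping;   // 3 items + shipping, no rounding error
    std::printf("Total: $%lld.%02lld\n",
                static_cast<long long>(total / 100),
                static_cast<long long>(total % 100));
    return 0;
}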