
On what systems does Python not use IEEE 754 double-precision floats?

Python makes various references to IEEE 754 floating-point operations, but doesn't guarantee that this format will actually be used at runtime. I'm therefore wondering where this isn't the case.

The CPython source code defers to whatever the C compiler uses for a double, which in practice is IEEE 754-2008 binary64 on all the common systems I'm aware of, e.g.:

  • Linux and BSD distros (e.g. FreeBSD, OpenBSD, NetBSD)
    • Intel i386/x86 and x86-64
    • ARM: AArch64
    • Power: PPC64
  • macOS: all supported architectures are IEEE 754 compatible
  • Windows: x86 and x86-64 systems

I'm aware there are other platforms it's known to build on, but I don't know how these work out in practice.
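
For concreteness, this is a rough sketch of the kind of runtime check I have in mind. It leans on float.__getformat__, which CPython documents as an implementation detail meant for its test suite, so it's a CPython-specific probe rather than anything portable:

    import struct
    import sys

    # CPython reports the double format it detected at build time; on IEEE 754
    # systems this is 'IEEE, little-endian' or 'IEEE, big-endian'.
    print(float.__getformat__("double"))

    # sys.float_info mirrors the C DBL_* constants; binary64 gives
    # mant_dig == 53 and max_exp == 1024.
    print(sys.float_info.mant_dig, sys.float_info.max_exp)

    # Raw bit pattern of 1.0: big-endian binary64 gives 3ff0000000000000.
    print(struct.pack(">d", 1.0).hex())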

Asked Dec 01 '21 by Sam Mason




1 Answer

In theory, as you say, CPython is designed to be buildable and usable on any platform without caring about what floating-point format its C double is using.

In practice, two things are true:

  • To the best of my knowledge, CPython has not met a system that's not using IEEE 754 binary64 format for its C double within the last 15 years (though I'd love to hear stories to the contrary; I've been asking about this at conferences and the like for a while). My knowledge is a long way from perfect, but I've been involved with mathematical and floating-point-related aspects of CPython core development for at least 13 of those 15 years, and paying close attention to floating-point related issues in that time. I haven't seen any indications on the bug tracker or elsewhere that anyone has been trying to run CPython on systems using a floating-point format other than IEEE 754 binary64.

  • I strongly suspect that the first time modern CPython does meet such a system, there will be a significant number of test failures, and so the core developers are likely to find out about it fairly quickly. While we've made an effort to make things format-agnostic, it's currently close to impossible to do any testing of CPython on other formats, and it's highly likely that there are some places that implicitly assume IEEE 754 format or semantics, and that will break for something more exotic. We have yet to see any reports of such breakage.

There's one exception to the "no bug reports" claim above: this issue, https://bugs.python.org/issue27444, where Greg Stark reported that there were indeed failures when using VAX floating-point. It's not clear to me whether the original bug report came from a system that emulated VAX floating-point.

I joined the CPython core development team in 2008. Back then, while I was working on floating-point-related issues I tried to keep in mind 5 different floating-point formats: IEEE 754 binary64, IBM's hex floating-point format as used in their zSeries mainframes, the Cray floating-point format used in the SV1 and earlier machines, and the VAX D-float and G-float formats; anything else was too ancient to be worth worrying about. Since then, the VAX formats are no longer worth caring about. Cray machines now use IEEE 754 floating-point. The IBM hex floating-point format is very much still in existence, but in practice the relevant IBM hardware also has support for IEEE 754, and the IBM machines that Python meets all seem to be using IEEE 754 floating-point.

Rather than exotic floating-point formats, the modern challenges seem to be more to do with variations in adherence to the rest of the IEEE 754 standard: systems that don't support NaNs, or treat subnormals differently, or allow use of higher precision for intermediate operations, or where compilers make behaviour-changing optimizations.
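
To make that concrete, here's a minimal sketch of the sort of quick behavioural probes I mean, assuming an otherwise ordinary CPython build (this checks a few symptoms, not full conformance):

    import sys

    # NaN support: an IEEE 754 NaN compares unequal to itself.
    nan = float("nan")
    print(nan != nan)                          # expect True

    # Subnormal support: 2**-1074 is the smallest positive binary64 subnormal;
    # a system that flushes subnormals to zero gives 0.0 here instead.
    print(2.0 ** -1074)                        # expect 5e-324

    # The binary64 normal range and precision show up in sys.float_info.
    print(sys.float_info.min == 2.0 ** -1022)  # expect True
    print(sys.float_info.mant_dig)             # expect 53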

The above is all about CPython-the-implementation, not Python-the-language. But the story for the Python language is largely similar. In theory, it makes no assumptions about the floating-point format. In practice, I don't know of any alternative Python implementations that don't end up using an IEEE 754 binary format (if not semantics) for the float type. IronPython and Jython both target runtimes that are explicit that floating-point will be IEEE 754 binary64. JavaScript-based versions of Python will similarly presumably be using JavaScript's Number type, which is required to be IEEE 754 binary64 by the ECMAScript standard. PyPy runs on more-or-less the same platforms that CPython does, with the same floating-point formats. MicroPython uses single-precision for its float type, but as far as I know that's still IEEE 754 binary32 in practice.
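
If you want to see empirically what a given implementation's float provides, a format-agnostic probe along the following lines is a reasonable sketch; it counts significand bits by repeated halving instead of trusting any platform constants (the function name here is mine, not anything standard):

    def float_precision_bits():
        # Count how long 1.0 + eps stays distinguishable from 1.0.
        # IEEE 754 binary64 gives 53; a binary32 float (e.g. a default
        # MicroPython build) gives 24.
        bits = 0
        eps = 1.0
        while 1.0 + eps != 1.0:
            eps /= 2.0
            bits += 1
        return bits

    print(float_precision_bits())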

Answered Oct 24 '22 by Mark Dickinson