The Learn You a Haskell for Great Good! book by Miran Lipovača says, in the chapter "Making Our Own Types and Typeclasses", that the idea behind Haskell's Int type could be represented like this:
data Int = -2147483648 | -2147483647 | ... | -1 | 0 | 1 | 2 | ... | 2147483647
However, it notes that this definition is for demonstration purposes only, and it doesn't say how Int is actually defined. Is Int defined specially by the compiler, or can it be defined in plain Haskell code?
Int is magic: it is defined by the compiler. As the other answers have said, it is not actually defined as an algebraic data type; it is implementation-defined, much like Double!
There are some rules, though: Int is guaranteed to be at least a 30-bit signed integer, meaning it must be able to express every value in the range [-2^29, 2^29) (upper bound exclusive). In practice, compilers define Int to be the size of a machine word, so 32 or 64 bits depending on the platform. The reason the language report leaves this slack is that Int can be optimized in ways a full machine word cannot: pointer tagging is important for Haskell performance, so implementations are free to reserve some bits for tags.
If you want values of a guaranteed size, Data.Int has Int32 and Data.Word has Word32, which guarantee exact correspondence to 32-bit signed and unsigned machine integers, respectively.
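A quick sketch showing those fixed-size types; note that the Int32 bounds match exactly the range the book used in its illustrative data declaration:

```haskell
import Data.Int  (Int32)
import Data.Word (Word32)

main :: IO ()
main = do
  -- Int32 is exactly a 32-bit signed integer on every platform.
  print (minBound :: Int32)   -- -2147483648
  print (maxBound :: Int32)   -- 2147483647
  -- Word32 is exactly a 32-bit unsigned integer.
  print (minBound :: Word32)  -- 0
  print (maxBound :: Word32)  -- 4294967295
```

Unlike Int, these bounds are the same everywhere, which makes the fixed-size types the right choice for binary formats and network protocols.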