Not long ago I discovered the `Natural` data type in `base`. It seems intended for when you want a non-negative integer type, but it's not clear why I should prefer `Natural` to `Integer`. Both types have arbitrary precision, and both have well-optimized runtime representations (the `Integer` representation and the `Natural` representation). But `Natural` can throw pure exceptions when you subtract natural numbers, which doesn't really add any type safety to your code, while `Integer` is more popular across packages.

So when and why should I use `Natural`?
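To make the subtraction behavior concrete, here is a minimal sketch (my own illustration, not from the question) of the pure `Underflow` exception that GHC's `Natural` throws, caught with `Control.Exception`:

```haskell
import Numeric.Natural (Natural)
import Control.Exception (ArithException (Underflow), evaluate, try)

main :: IO ()
main = do
  -- Addition and multiplication behave just like Integer:
  print (2 + 3 :: Natural)
  -- Subtraction that would go below zero throws a *pure* exception,
  -- which only surfaces when the thunk is forced:
  result <- try (evaluate (2 - 3 :: Natural))
  case result of
    Left Underflow -> putStrLn "caught arithmetic underflow"
    Left e         -> putStrLn ("other ArithException: " ++ show e)
    Right n        -> print n
```

Because the exception is thrown from pure code, it escapes the type system entirely: `(-) :: Natural -> Natural -> Natural` gives no hint that it can fail.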
I do not see why you would want to use `Natural` or `Integer`. Why not use `Rational` instead? It has arbitrary precision, an optimised runtime representation, and works for naturals, integers, and rationals!

My point is that we should choose the type that makes sense semantically. Let's count the houses on the street with naturals, record our next golf game with integers, and divide a fresh blueberry pie with rationals.
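For completeness, a small sketch (my own, not from the original answer) of what the pie-dividing case looks like with `Rational` from `Data.Ratio`:

```haskell
import Data.Ratio ((%))

main :: IO ()
main = do
  let slice = 1 % 8 :: Rational   -- one slice of a pie cut into eighths
  print (3 * slice)               -- prints 3 % 8
  print (slice + 1 % 2)           -- prints 5 % 8
```

Exact rational arithmetic never loses precision, which is exactly the "makes sense semantically" point: the type matches the domain of the problem.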