Considering that an int will be 4 bytes on a 32-bit system and 8 bytes on a 64-bit system, why is float not treated the same? Why is the size of a double != the size of a float on a 64-bit system? Considering that the best native integer type is selected when I declare an int (which results in higher performance), shouldn't the same happen for float (which would also result in a performance increase)?
Related question: Is it a bad idea to declare a type my_float (pardon the name!) that is float on 32-bit systems and double on 64-bit systems?
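For concreteness, the kind of typedef I have in mind would look something like the sketch below (the name my_float and the pointer-width test are just placeholders, not a recommendation):

```c
#include <stdint.h>

/* Hypothetical my_float from the question above: float on 32-bit targets,
   double on 64-bit targets, chosen here by pointer width. */
#if UINTPTR_MAX > 0xFFFFFFFFu
typedef double my_float;   /* 64-bit target */
#else
typedef float  my_float;   /* 32-bit target */
#endif
```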
Your question is based on a false premise. On most modern 64-bit systems, int is still 4 bytes. Why consume twice as much memory and twice as much memory bandwidth when such large integers are so rarely needed? On typical modern 64-bit systems, math on 64-bit integers is no faster than math on 32-bit integers, so there's no benefit to be had.
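You can verify this on your own machine; here is a minimal sketch (the sizes in the comments assume a typical x86-64 LP64 system and are not guaranteed by the C standard):

```c
#include <stdio.h>

int main(void) {
    /* On a typical LP64 64-bit system (e.g. x86-64 Linux), int stays 4 bytes;
       only long and pointers grow to 8 bytes. */
    printf("int:    %zu bytes\n", sizeof(int));    /* usually 4 */
    printf("long:   %zu bytes\n", sizeof(long));   /* 8 on LP64, 4 on Windows (LLP64) */
    printf("float:  %zu bytes\n", sizeof(float));  /* 4 (IEEE 754 single precision) */
    printf("double: %zu bytes\n", sizeof(double)); /* 8 (IEEE 754 double precision) */
    printf("void*:  %zu bytes\n", sizeof(void *)); /* 8 on a 64-bit system */
    return 0;
}
```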