I get a float by dividing two numbers. I know the numbers are evenly divisible, so the result is always a whole number, but it comes out as type float and I need an actual int. I know that int() drops the fractional part (truncating toward zero, which is the same as floor rounding for positive values). My concern is that, since floats are not exact, something like int(12./3) or int(round(12./3)) might end up as 3 instead of 4, because the floating-point representation of 4 could in principle be something like 3.9999999593519561 (it isn't; that's just an example). Can this ever happen, and how can I make sure it doesn't?
(I am asking because while reshaping a numpy array, I got a warning saying that the shape must be integers, not floats.)
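The situation can be sketched as follows (a hypothetical array of my choosing; recent NumPy versions reject float dimensions outright, and note that floor division with `//` avoids the float entirely when divisibility is known):

```python
import numpy as np

a = np.arange(12)
rows = a.size / 3        # true division always yields a float: 4.0
# a.reshape(rows, 3) would fail: NumPy requires integer dimensions
b = a.reshape(int(rows), 3)
print(b.shape)           # (4, 3)

# Floor division sidesteps the issue, since it returns an int directly:
c = a.reshape(a.size // 3, 3)
print(c.shape)           # (4, 3)
```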
Casting a float to an integer truncates the value, so if you have 3.999998 and you cast it to an integer, you get 3.
The way to prevent this is to round the result: int(round(3.99998)) gives 4, since round() always returns a precisely integral value (in Python 3, round() on a float with no second argument already returns an int, so the outer int() is just a safeguard).
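A minimal sketch of the failure mode, using a computation whose result genuinely falls just below an integer in IEEE-754 doubles:

```python
x = (0.1 + 0.7) * 10   # mathematically 8, but stored as 7.999999999999999...
print(int(x))          # truncation gives 7
print(int(round(x)))   # rounding first gives the intended 8
```

In your case the division is exact (12./3 is exactly 4.0), so int() alone happens to work, but int(round(...)) is robust even when the float lands slightly below the true integer.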