I had the following statement in code:

int a = (int)( (float)(b * 1000) / (float)c + .5f );

where b is an int as well, and c is an unsigned int with a constant value (15 in my test runs). This statement was inside a while loop where b gets incremented by 1 on each iteration.
This code was working fine until I decided to switch on the optimisation flags (maximise speed) in Visual Studio. After that, a would randomly run into an overflow (i.e. it had a value of -2147483647). On investigating further, this overflow happened at different values of b; in the test runs I observed, it happened for values of b between 9 and 12.

What solved the problem was making the small change illustrated below:

int a = (int)( (b * 1000.f) / c + .5f );

Any ideas on how this helped? It is working fine, but I can't figure out why.
Edit: Adding some more information based on the comments. While logging, I observed that

(float)(b * 1000) / (float)c + .5f

was being evaluated to a really large value, which, when cast to int, led to the overflow. Individually, however, (float)(b * 1000) and (float)c were being calculated correctly.
b * 1000 is most likely overflowing the int type; the behaviour of doing this is undefined.
Drop all those obfuscating casts, and use

b * 1000.0 / c + 0.5

instead. 1000.0 is a double literal and causes the first term to be evaluated in floating point.
You ought to check the size of the expression before converting back to int, especially if c is small. std::numeric_limits is useful for that.