I have an image with a width of 888px and a height of 592px, i.e. a width:height aspect ratio of 3:2.
The following produces the wrong value of 1 because of integer division/truncation: BitmapDecoder.PixelWidth and BitmapDecoder.PixelHeight are both uint (unsigned integer), and decoder below is a BitmapDecoder object.
double aspectRatio = decoder.PixelWidth / decoder.PixelHeight;
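For illustration, here is a minimal self-contained sketch of the problem (the literal values are stand-ins for the decoder's pixel properties, which are 888 and 592 for this image):

using System;

uint width = 888;    // stands in for decoder.PixelWidth
uint height = 592;   // stands in for decoder.PixelHeight
double aspectRatio = width / height;   // uint division truncates 1.5 to 1, then 1 is converted to 1.0
Console.WriteLine(aspectRatio);        // prints 1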
The following gives the expected correct value of 1.5, but Visual Studio reports 'Cast is redundant'. Why?
double aspectRatio = (double)decoder.PixelWidth / (double)decoder.PixelHeight;
You only need to cast one of the uints to double to force floating-point arithmetic: once one operand is double, C#'s binary numeric promotion implicitly converts the other operand to double as well, which is why Visual Studio flags the second cast as redundant. So either:
double aspectRatio = decoder.PixelWidth / (double)decoder.PixelHeight;
or:
double aspectRatio = (double)decoder.PixelWidth / decoder.PixelHeight;
Personally, I'd go with the latter, but it is a matter of opinion.
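As a quick sanity check, here is a minimal sketch using the question's values (888 and 592 stand in for the decoder's pixel dimensions):

using System;

uint width = 888;
uint height = 592;
Console.WriteLine(width / (double)height);   // 1.5
Console.WriteLine((double)width / height);   // 1.5

Either single cast promotes the other operand to double before the division, so both lines print 1.5.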
Just to complement @ChrisF's answer, you can see this nicely in the IL code, where a single cast to double yields a conversion for both values:
IL_0013: stloc.0 // decoder
IL_0014: ldloc.0 // decoder
IL_0015: callvirt UserQuery+Decoder.get_PixelHeight
IL_001A: conv.r.un // convert uint to float32
IL_001B: conv.r8 // convert to float64 (double)
IL_001C: ldloc.0 // decoder
IL_001D: callvirt UserQuery+Decoder.get_PixelWidth
IL_0022: conv.r.un // convert uint to float32
IL_0023: conv.r8 // convert to float64 (double)
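If you want to reproduce this IL yourself, a minimal stand-in might look like the sketch below. Decoder is a hypothetical stub carrying the property names from the IL above (the original snippet apparently ran in LINQPad, hence the UserQuery+ prefix), and the exact expression and evaluation order in the disassembly may differ slightly:

using System;

class Decoder
{
    // Hypothetical stub; values assumed from the question's image.
    public uint PixelWidth  { get; } = 888;
    public uint PixelHeight { get; } = 592;
}

class Program
{
    static void Main()
    {
        var decoder = new Decoder();
        // One explicit cast; the compiler still emits conv.r.un + conv.r8 for both operands.
        double aspectRatio = (double)decoder.PixelWidth / decoder.PixelHeight;
        Console.WriteLine(aspectRatio);   // 1.5
    }
}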