I am working on someone else's code, and see things like:
if ((somevariable) > decimal.Parse("24,999.99")) ...
and
return int.Parse("0");
I cannot think of any logical reason to do this instead of
if ((somevariable) > 24999.99) ...
or
return 0;
What am I missing?
There is a semantic difference between the original code and your proposed change, but you're right to be skeptical.
The conversion from a string is just plain stupid, sorry. There is no need to do that, ever. The difference is that the original code parses the string as a decimal, while your change would compare against a double. So it should be:
if (somevariable > 24999.99m) ...
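To make the difference concrete, here is a minimal sketch (assuming a culture whose decimal separator is "." and whose thousands separator is ","): the literal produces the same decimal value that the Parse call does, just without any run-time string parsing.

    using System;

    class Demo
    {
        static void Main()
        {
            // Culture-sensitive: re-parses the same constant on every call.
            decimal parsed = decimal.Parse("24,999.99");

            // A decimal literal: fixed at compile time, no parsing at all.
            decimal literal = 24999.99m;

            Console.WriteLine(parsed == literal); // True (in an en-US-like culture)
        }
    }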
For one thing, 24999.99 is a double value rather than a decimal value, so you would want to use 24999.99m. But yes, otherwise it would be a much better idea to use the literal. (I wouldn't bother with the parentheses round the variable, either.)
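A quick sketch of why the m suffix matters: C# has no implicit conversion between decimal and double, so comparing a decimal variable against a plain double literal won't even compile (the variable name below is just illustrative).

    decimal somevariable = 25000m;

    // error CS0019: Operator '>' cannot be applied to operands of
    // type 'decimal' and 'double'
    // if (somevariable > 24999.99) { ... }

    if (somevariable > 24999.99m)   // fine: both operands are decimal
    {
        // ...
    }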
Note that the parsing code will even fail when run in some cultures, where the decimal separator isn't "." and/or the thousands separator isn't ",". I can't think of any good reason for using this.
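For example, here is a minimal sketch using de-DE, one culture where the two separators are swapped ("," is the decimal separator, "." is the group separator), so the parse throws outright:

    using System;
    using System.Globalization;

    class CultureDemo
    {
        static void Main()
        {
            // Under de-DE, "24,999.99" is not a valid number, because a
            // group separator may not appear after the decimal separator.
            try
            {
                decimal.Parse("24,999.99", CultureInfo.GetCultureInfo("de-DE"));
            }
            catch (FormatException)
            {
                Console.WriteLine("Parse failed under de-DE");
            }

            // Pinning the culture makes the parse deterministic...
            decimal pinned = decimal.Parse("24,999.99", CultureInfo.InvariantCulture);
            Console.WriteLine(pinned); // 24999.99

            // ...but the literal needs no culture handling at all.
            decimal literal = 24999.99m;
        }
    }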