For example, in

```csharp
bool eq = (1 / double.Parse("-0.0")) == (1 / -0.0);
```

eq will be false, because 1 / 0.0 evaluates to positive infinity while 1 / -0.0 evaluates to negative infinity.
double.Parse would have to go out of its way to explicitly discard the sign of zero, even though preserving it would almost never cause a problem.
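The lost sign can also be seen directly in the raw bits, without going through division (this snippet is my illustration, not from the original post):

```csharp
using System;
using System.Globalization;

class Demo
{
    static void Main()
    {
        // On runtimes where Parse drops the sign of zero (e.g. .NET
        // Framework), the parsed value's bit pattern differs from -0.0's.
        long parsedBits  = BitConverter.DoubleToInt64Bits(
            double.Parse("-0.0", CultureInfo.InvariantCulture));
        long literalBits = BitConverter.DoubleToInt64Bits(-0.0);
        Console.WriteLine(parsedBits == literalBits); // False when the sign is dropped
    }
}
```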
Since I need the raw representation, I had to write my own parsing function, which special-cases negative zero and uses double.Parse for everything else.
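For reference, a minimal sketch of what such a wrapper might look like (the name and details are mine; the original poster's actual code isn't shown):

```csharp
using System;
using System.Globalization;

static class SignedZeroParse
{
    // Parse with double.Parse, then restore the sign of zero that the
    // runtime may have discarded, based on a leading '-' in the input.
    public static double Parse(string s)
    {
        double value = double.Parse(s, CultureInfo.InvariantCulture);
        if (value == 0.0 && s.TrimStart().StartsWith("-", StringComparison.Ordinal))
            return -0.0; // special case: negative zero
        return value;
    }
}
```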
That's not a big problem, but I'm really wondering why they decided to ignore the sign of zero, since preserving it doesn't seem like it would cause any harm.
I don't know about the why per se, but a potential solution: if you see a - character at the beginning, parse the rest of the string and then negate it.
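A sketch of that suggestion (assuming invariant-culture input; the method name is mine):

```csharp
using System;
using System.Globalization;

static class Parser
{
    // If the string starts with '-', parse the remainder and negate it.
    // Negating +0.0 flips the sign bit, so negative zero is preserved.
    public static double ParseSigned(string s)
    {
        s = s.Trim();
        if (s.StartsWith("-", StringComparison.Ordinal))
            return -double.Parse(s.Substring(1), CultureInfo.InvariantCulture);
        return double.Parse(s, CultureInfo.InvariantCulture);
    }
}
```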