Why, when I save a value of say 40.54 in SQL Server to a column of type Real, does it come back to me as a value more like 40.53999878999 instead of 40.54? I've seen this a few times but have never figured out quite why it happens. Has anyone else experienced this issue, and if so, what causes it?
Most currency amounts (in dollars and cents) cannot be stored exactly as floating point values in memory. So, if we want to store 0.1 dollars (10 cents), float/double cannot store it exactly.
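A quick way to see this in T-SQL (a minimal sketch; IIF needs SQL Server 2012 or later):

    DECLARE @dime FLOAT = 0.1;
    -- Three dimes do not compare equal to 0.3, because neither side
    -- is stored exactly as a binary fraction:
    SELECT IIF(@dime + @dime + @dime = 0.3, 'equal', 'not equal');  -- 'not equal'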
Money and Decimal are exact numeric datatypes, while Float is an approximate numeric datatype. Results of mathematical operations on floating point numbers can seem unpredictable, especially when rounding is involved.
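A sketch of the difference, using the value from the question (the exact digits shown may vary by display tool):

    DECLARE @approx REAL = 40.54, @exact DECIMAL(10, 2) = 40.54;
    -- Widening the REAL to FLOAT exposes the stored binary approximation;
    -- the DECIMAL round-trips unchanged.
    SELECT CAST(@approx AS FLOAT) AS approx_value,  -- roughly 40.540000915527344
           @exact AS exact_value;                   -- exactly 40.54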
The best datatype to use for currency in C# is decimal. The decimal type is a 128-bit data type suitable for financial and monetary calculations. It can represent values ranging from 1.0 * 10^-28 to approximately 7.9 * 10^28, with 28-29 significant digits.
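On the SQL Server side, the counterparts are DECIMAL/NUMERIC and MONEY. A sketch of a currency column, using a hypothetical #Invoices table and assuming four decimal places are enough:

    CREATE TABLE #Invoices (
        amount_money   MONEY,          -- fixed at 4 decimal places, 8 bytes
        amount_decimal DECIMAL(19, 4)  -- same scale, explicit precision
    );
    INSERT INTO #Invoices VALUES (40.54, 40.54);
    SELECT amount_money, amount_decimal FROM #Invoices;  -- both exactly 40.54
    DROP TABLE #Invoices;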
Real is a single-precision floating point number, while Float is a double-precision floating point number. Floating point types can represent much larger and much smaller numbers than the exact decimal types can, but only approximately.
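Contrasting the two in a sketch (in SQL Server, REAL is FLOAT(24), and plain FLOAT defaults to FLOAT(53)):

    DECLARE @single REAL  = 40.54;  -- 24-bit significand, ~7 decimal digits
    DECLARE @double FLOAT = 40.54;  -- 53-bit significand, ~15-16 decimal digits
    -- Both are approximations of 40.54, but the double-precision one
    -- is close enough that it usually displays as 40.54:
    SELECT CAST(@single AS FLOAT) AS real_stored, @double AS float_stored;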
Have a look at What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Floating point numbers in computers don't represent decimal fractions exactly. Instead, they represent binary fractions. Most fractional numbers don't have an exact representation as a binary fraction, so there is some rounding going on. When such a rounded binary fraction is translated back to a decimal fraction, you get the effect you describe.
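For instance, 0.5 (1/2) and 0.25 (1/4) are exact binary fractions, while 0.54 (27/50) is not, because its denominator has a prime factor other than 2. A sketch (the trailing digits may display slightly differently):

    DECLARE @half REAL = 0.5, @quarter REAL = 0.25, @frac REAL = 0.54;
    SELECT CAST(@half AS FLOAT)    AS half_val,     -- exactly 0.5
           CAST(@quarter AS FLOAT) AS quarter_val,  -- exactly 0.25
           CAST(@frac AS FLOAT)    AS frac_val;     -- about 0.5400000214576721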
For storing money values, SQL databases normally provide a DECIMAL type that stores exact decimal digits. This format is slightly less efficient for computers to deal with, but it is quite useful when you want to avoid decimal rounding errors.
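The payoff shows up when amounts accumulate. A sketch of adding 10 cents a thousand times in both types:

    DECLARE @dec DECIMAL(10, 2) = 0, @flt FLOAT = 0, @i INT = 0;
    WHILE @i < 1000
    BEGIN
        SET @dec = @dec + 0.10;  -- exact decimal addition
        SET @flt = @flt + 0.10;  -- accumulates binary rounding error
        SET @i += 1;
    END;
    SELECT @dec AS decimal_total,           -- exactly 100.00
           @flt AS float_total,             -- close to, but not exactly, 100
           @flt - 100 AS accumulated_error;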