We are storing financial data in a SQL Server database using the decimal data type and we need 6-8 digits of precision in the decimal. When we get this value back through our data access layer into our C# server, it is coming back as the decimal data type.
Due to some design constraints that are beyond my control, this needs to be converted. Converting to a string isn't a problem. Converting to a double is a problem: as the MS documentation says, "[converting from decimal to double] can produce round-off errors because a double-precision floating-point number has fewer significant digits than a decimal."
Once the value is a double (or string), we can round it to 2 decimal places after any calculations are done, so what is the "right" way to do the decimal conversion to ensure we don't lose any precision before the rounding?
The conversion won't produce errors within the first 8 digits. double has 15-16 digits of precision - less than the 28-29 of decimal, but enough for your purposes by the sounds of it.
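As a quick illustration, here is a minimal sketch (the sample values are made up, not your actual data) that round-trips a decimal through a double and prints both so you can compare the first 8 significant digits:

```csharp
using System;

class DecimalToDoubleCheck
{
    static void Main()
    {
        // Hypothetical amounts with 6-8 significant digits, similar to the
        // precision described in the question.
        decimal[] samples = { 123456.78m, 0.12345678m, 99999999m };

        foreach (decimal original in samples)
        {
            double asDouble = (double)original;       // explicit cast; may round beyond ~15-16 digits
            decimal roundTripped = (decimal)asDouble; // back to decimal for comparison

            // "R" prints the round-trippable representation of the double.
            Console.WriteLine($"{original} -> {asDouble:R} -> {roundTripped}");
        }
    }
}
```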
You should definitely put in place some sort of plan to avoid using double in the future, however - it's an unsuitable datatype for financial calculations.
If you round to 2dp, IMO the "right" way would be to store an integer that is the value scaled by 100 - i.e. for 12.34 you store the integer 1234. No more double rounding woes.
If you must use double, this still works: integers up to 2^53 in magnitude are guaranteed to be stored exactly in a double - so the same trick still applies.
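A minimal sketch of that scaled-integer approach (the variable names and the 100x scale factor are illustrative assumptions for a 2dp currency):

```csharp
using System;

class ScaledIntegerSketch
{
    static void Main()
    {
        decimal amount = 12.34m;

        // Store the amount as a scaled integer (here, cents): 12.34 -> 1234.
        long cents = (long)Math.Round(amount * 100m, MidpointRounding.AwayFromZero);

        // If the value must pass through a double, whole numbers of this size
        // are represented exactly (doubles hold integers exactly up to 2^53).
        double asDouble = cents;

        // Recover the original 2dp value when needed.
        decimal recovered = (decimal)asDouble / 100m;

        Console.WriteLine($"{amount} -> {cents} -> {recovered}"); // 12.34 -> 1234 -> 12.34
    }
}
```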