
How to safely convert from a double to a decimal in C#

Tags:

c#

We are storing financial data in a SQL Server database using the decimal data type and we need 6-8 digits of precision in the decimal. When we get this value back through our data access layer into our C# server, it is coming back as the decimal data type.

Due to some design constraints that are beyond my control, this value needs to be converted. Converting to a string isn't a problem, but converting to a double is, since the MS documentation says "[converting from decimal to double] can produce round-off errors because a double-precision floating-point number has fewer significant digits than a decimal."

Once we have the double (or string), we can round to 2 decimal places after any calculations are done, so what is the "right" way to do the decimal conversion to ensure that we don't lose any precision before the rounding?
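For context, a minimal sketch of the pipeline being asked about (the value `1234.5678m` is a hypothetical example, not from the actual database):

```csharp
using System;

class ConversionSketch
{
    static void Main()
    {
        // Hypothetical 8-significant-digit value from the data access layer.
        decimal fromDb = 1234.5678m;

        // Explicit cast: decimal -> double. This is the step the MS docs
        // warn can introduce round-off error.
        double asDouble = (double)fromDb;

        // Round to 2 decimal places after any calculations, as described above.
        double rounded = Math.Round(asDouble, 2);

        Console.WriteLine(rounded); // 1234.57
    }
}
```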

asked Feb 18 '11 by icfantv
2 Answers

The conversion won't produce errors within the first 8 significant digits. double has 15-16 significant digits of precision - fewer than decimal's 28-29, but enough for your purposes by the sounds of it.

You should definitely put in place some sort of plan to avoid using double in the future, however - it's an unsuitable data type for financial calculations.
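A quick check of the claim above, using an assumed 8-significant-digit example value: since double carries 15-16 significant digits, a decimal -> double -> decimal round trip of such a value comes back exact.

```csharp
using System;

class PrecisionCheck
{
    static void Main()
    {
        // An 8-significant-digit financial value (assumed example).
        decimal original = 123456.78m;

        // decimal -> double -> decimal round trip.
        double asDouble = (double)original;
        decimal roundTripped = (decimal)asDouble;

        // With only 8 significant digits, the round trip loses nothing:
        Console.WriteLine(original == roundTripped); // True
    }
}
```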

answered Nov 15 '22 by Jon Skeet

If you round to 2dp, IMO the "right" way would be to store a scaled integer - i.e. for 12.34 you store the integer 1234. No more double rounding woe.

If you must use double, this still works: all integers up to 2^53 are guaranteed to be represented exactly in a double - so the same trick applies.
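A minimal sketch of the scaled-integer trick described above (the names `cents` and `amount` are illustrative, not from the original):

```csharp
using System;

class ScaledInteger
{
    static void Main()
    {
        // Store 12.34 as the scaled integer 1234 (i.e. a count of cents).
        long cents = 1234;

        // Even when carried in a double, integers up to 2^53 are exact.
        double asDouble = cents;
        Console.WriteLine(asDouble == 1234.0); // True

        // Divide by the scale factor only when converting back for display.
        decimal amount = cents / 100m;
        Console.WriteLine(amount); // 12.34
    }
}
```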

answered Nov 15 '22 by Marc Gravell