Is casting narrow types to wider types to save memory while keeping calculations high-precision a terrible idea?

I'm dealing with financial data, so there's a lot of it and it needs to be relatively high-precision (64-bit floating point or wider).

The standard practice around my workplace seems to be to represent all of it with the C# decimal type, a 128-bit floating-point type specifically created to support round-off-free base-10 operations.

Since 64 bits is wide enough to maintain the required precision, is it ridiculous to cast the data to the wider type for all calculations (multiply, divide, add, etc.) and then back to 64 bits for sitting in memory (which is where it spends most of its time)?

For reference: memory is definitely the limiting resource here.
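
For concreteness, the pattern being proposed would look something like this (a sketch with illustrative values, not code from the question):

double stored = 1234.56;            // 64-bit double sitting in memory

decimal wide = (decimal)stored;     // widen before calculating
wide = wide * 1.07m + 0.05m;        // do the arithmetic in 128-bit decimal

stored = (double)wide;              // narrow again for storage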

asked May 28 '09 by Daniel Schobel

2 Answers

The point of using decimal (128 bits) over double (64 bits) and float (32 bits) isn't usually to do with the size. It's to do with the base. While double and float are floating binary point types, decimal is a floating decimal point type - and it's that feature that lets it represent numbers like 0.1 exactly where float/double can't.
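
A quick way to see the difference (a minimal sketch; the literals are arbitrary):

Console.WriteLine(0.1 + 0.2 == 0.3);     // False: binary floating point
Console.WriteLine(0.1m + 0.2m == 0.3m);  // True: decimal floating point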

There's no conceptual reason why we couldn't have a 64-bit decimal type, and in many cases that would indeed be enough - but until such a type comes along or you write it yourself, please don't use the "shorter" (and binary floating point) types of float/double for financial calculations. If you do, you're asking for trouble.

If you're suggesting writing a storage type which can convert to/from decimal and is still a floating decimal type, that sounds like a potentially good idea even without it being able to do any calculations. You'll need to be very careful about what to do if you're ever asked to convert a decimal value which you can't represent exactly, though. I'd be interested in seeing such a type, to be honest. Hmm...
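
For instance, here's a minimal sketch of what such a storage type might look like; the name DecimalStorage, the 58-bit mantissa layout, and the throw-on-inexact policy are all assumptions, not an existing API:

// A hypothetical 64-bit storage type: 1 sign bit, 5 scale bits, and a
// 58-bit integer mantissa. It does no arithmetic itself; it only
// round-trips values to and from System.Decimal.
public readonly struct DecimalStorage
{
    private const int MantissaBits = 58;
    private const ulong MantissaMask = (1UL << MantissaBits) - 1;

    private readonly ulong bits;   // [sign:1][scale:5][mantissa:58]

    public DecimalStorage(decimal value)
    {
        int[] parts = decimal.GetBits(value);   // { lo, mid, hi, flags }
        bool negative = parts[3] < 0;           // sign lives in bit 31
        int scale = (parts[3] >> 16) & 0xFF;    // 0..28, fits in 5 bits

        ulong mantissa = ((ulong)(uint)parts[1] << 32) | (uint)parts[0];

        // decimal's mantissa is 96 bits; anything that uses the high word,
        // or more than 58 bits, cannot be stored exactly in 64 bits.
        if (parts[2] != 0 || mantissa > MantissaMask)
            throw new OverflowException("Not exactly representable in 64 bits.");

        bits = (negative ? 1UL << 63 : 0UL)
             | ((ulong)scale << MantissaBits)
             | mantissa;
    }

    public decimal ToDecimal()
    {
        ulong mantissa = bits & MantissaMask;
        byte scale = (byte)((bits >> MantissaBits) & 0x1F);
        bool negative = (bits >> 63) != 0;

        return new decimal((int)(uint)mantissa,          // lo
                           (int)(uint)(mantissa >> 32),  // mid
                           0,                            // hi
                           negative,
                           scale);
    }
}

A round trip would then be var s = new DecimalStorage(123.45m); decimal d = s.ToDecimal(); - and the constructor is exactly where the "can't represent exactly" question has to be answered, whether by throwing as above or by rounding.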

(As other answers have indicated, I'd really make sure that it's the numbers which are taking up the memory before doing this, however. If you don't need to do it, there's little point in introducing the extra complexity speculatively.)

answered by Jon Skeet


64-bit binary floating point cannot maintain the precision of financial data. It is not a matter of space but of which number system the data types use: double is base 2, decimal is base 10, and base 2 cannot represent most base-10 fractions exactly even if it had 1000 bits of precision.

Don't believe me? Run this:

// 0.1 has no exact binary representation, so the error accumulates.
double d = 0.0;
for (int i = 0; i < 100; i++)
    d += 0.1;
Console.WriteLine(d);

> 9.99999999999998

If you need base-10 calculations you need the decimal type.

(Edit: damn, beaten by Jon Skeet again...)

If the decimal type really is the bottleneck, you can store a long count of pennies (or 1/8 cents, or whatever your smallest unit is) instead of decimal dollars.
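
A sketch of that approach, assuming whole cents are precise enough (the scale constant is illustrative):

const long CentsPerDollar = 100;     // use 800 for 1/8-cent units

long price = 1999;                   // $19.99 as an exact integer
long total = price * 3;              // integer arithmetic, no rounding
decimal display = (decimal)total / CentsPerDollar;
Console.WriteLine(display);          // 59.97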

answered by Dour High Arch