I am currently working on a stock market related project in C++ that involves a lot of floating-point values, such as prices and index levels.
I have read in many places that you should use decimal floating point for money-related arithmetic, for example: Why not use Double or Float to represent currency? and Difference between decimal, float and double in .NET?
To my understanding, the difference between binary float and decimal float is the base in which the exponent is interpreted: binary float uses base 2, while decimal float uses base 10. But with decimal float you still get rounding errors; you still cannot represent 1/3 exactly (correct me if I am wrong). It seems quite possible to multiply someone's account balance by 30% and hit a rounding error, and after a few more calculations the error might propagate even further. Aside from a bigger representable range, why should I use decimal floating point in financial arithmetic?
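For example, here is a quick C++ check of that 30% case (the balance value is just an arbitrary illustration):

```cpp
#include <cstdio>

int main() {
    // Neither 1000.01 nor 0.30 is exactly representable in binary,
    // so the product is only the double nearest to 300.003.
    double balance = 1000.01;
    double result = balance * 0.30;
    std::printf("%.20f\n", result);  // close to, but not exactly, 300.003
    return 0;
}
```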
Depending on what financial transactions you're performing, some rounding is likely to be unavoidable. If an item costs $1.50 with 7% sales tax, you aren't going to be charged $1.605; the price you pay will be either $1.60 or $1.61. (US currency units theoretically include "mils", or thousandths of a dollar, but the smallest denomination coin is $0.01, and almost all transactions are rounded to the nearest cent.)
If you're doing simple calculations (just adding and subtracting quantities and multiplying them by integers), all the results will be whole numbers of cents. If you use binary floating-point representing the number of dollars, most amounts will not be representable; a calculation that should yield $0.01 might yield $0.01000000000000000020816681711721685132943093776702880859375.
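To see this concretely, here is a minimal C++ sketch that prints the double nearest to 0.01, and shows that ten additions of it need not sum to exactly 0.10:

```cpp
#include <cstdio>

int main() {
    // 0.01 has no finite binary representation; printing with many
    // digits reveals the value actually stored in the double.
    double cent = 0.01;
    std::printf("%.60f\n", cent);

    // Repeated addition accumulates the representation error.
    double sum = 0.0;
    for (int i = 0; i < 10; ++i) sum += cent;
    // On typical IEEE-754 systems this prints "no".
    std::printf("sum == 0.10? %s\n", sum == 0.10 ? "yes" : "no");
    return 0;
}
```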
You can avoid that problem by using integers to represent the number of cents (or, equivalently, using fixed-point if the language supports it) or by using decimal floating-point that can represent 0.01 exactly.
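For the simple cases, a sketch of the integer-cents approach in C++ (using `int64_t` for headroom; the prices are made up):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // Store money as whole cents; addition, subtraction, and
    // multiplication by integers are then exact.
    std::int64_t price = 150;             // $1.50
    std::int64_t total = 3 * price + 99;  // three items plus a $0.99 fee
    std::printf("total: $%lld.%02lld\n",
                (long long)(total / 100), (long long)(total % 100));  // $5.49
    return 0;
}
```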
But for more complex operations, like computing 7% sales tax, dividing a sum of money into 3 equal parts, or especially compound interest, there are still going to be results that aren't exactly representable unless you use an arbitrary-precision package like GMP.
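Dividing into equal parts illustrates this well: even with integer cents you must decide where the leftover cents go. One common approach (an assumption here, not a universal rule) is to hand out the remainder one cent at a time:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // Split $1.00 into 3 parts so the parts still sum to 100 cents.
    std::int64_t total = 100;
    const int parts = 3;
    std::int64_t base = total / parts;       // 33 cents each
    std::int64_t remainder = total % parts;  // 1 cent left over
    for (int i = 0; i < parts; ++i) {
        // Give the first `remainder` parts one extra cent.
        std::int64_t share = base + (i < remainder ? 1 : 0);
        std::printf("part %d: %lld cents\n", i + 1, (long long)share);
    }
    return 0;
}
```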
As I understand it, there are laws and regulations that specify exactly how rounding errors are to be resolved. If you apply 7% sales tax to $1.50, you can't legally pick between $1.60 and $1.61; the law tells you exactly which one is legally correct.
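As a sketch of what "the law tells you" might look like in code, here is 7% tax on $1.50 in integer cents with an explicit round-half-up rule; the choice of rounding rule is an assumption for illustration, not a claim about what any jurisdiction actually requires:

```cpp
#include <cstdint>
#include <cstdio>

// Tax rate expressed in basis points (7% = 700) so all arithmetic
// stays in integers. Rounds half-up for nonnegative amounts; this is
// only one of several possible rounding rules.
std::int64_t tax_cents(std::int64_t amount_cents, std::int64_t rate_bp) {
    return (amount_cents * rate_bp + 5000) / 10000;
}

int main() {
    std::int64_t price = 150;                  // $1.50
    std::int64_t tax = tax_cents(price, 700);  // exact value is 10.5 cents
    std::printf("tax: %lld cents, total: %lld cents\n",
                (long long)tax, (long long)(price + tax));  // 11 and 161
    return 0;
}
```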
If you're writing financial software to be used by other people, you need to find out exactly what the regulations say. Once you know that, you can determine what representation (integers, fixed-point, decimal floating-point, or whatever) can best be used to get the legally required results.
(Disclaimer: I do not know what these regulations actually say.)
At least in the USA, most financial companies are required to use decimal-based math. IBM mainframes since the days of the System/360 have been able to perform math on variable-length strings of packed decimal digits. Typically some form of fixed-point number is used, with a set number of digits after the decimal point. High-level languages like COBOL support packed (or unpacked) decimal numbers. In the case of IBM mainframes, there is also a lot of legacy assembly code alongside the COBOL code, partly because at one time certain types of databases were accessed via macros in assembly (now called HLASM, High Level Assembler).