Just a general question on what sort of runtime differences I should be expecting between using these two different data types.
My test:
test = [100.0897463, 1.099999939393,1.37382829829393,29.1937462874847272,2.095478262874647474]
test2 = [decimal.Decimal('100.0897463'), decimal.Decimal('1.09999993939'), decimal.Decimal('1.37382829829'), decimal.Decimal('29.1937462875'), decimal.Decimal('2.09547826287')]
def average(numbers, ddof=0):
    return sum(numbers) / (len(numbers) - ddof)
%timeit average(test)
%timeit average(test2)
The differences in runtime are:
1000000 loops, best of 3: 364 ns per loop
10000 loops, best of 3: 80.3 µs per loop
So using decimal was about 200 times slower than using floats. Is this type of difference normal and along the lines of what I should expect when deciding which data type to use?
Decimal is used in financial applications that require a high degree of accuracy and need to avoid rounding errors, whereas float is used for scientific values and for better performance. Decimal is slower than the float data type.
The Decimal and float types differ in the way they store values. Precision is the main difference: Python's float is a double-precision (64-bit) binary floating point type, while Decimal is a decimal floating point type whose precision is configurable (28 significant digits by default) and which is implemented in software rather than hardware.
By default, Python interprets any number literal that includes a decimal point as a float. Decimal offers more precision and exact representation of decimal fractions, which makes it appropriate for financial and monetary calculations.
Decimal has higher precision than float.
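As a concrete illustration of why Decimal suits monetary calculations, here is a small sketch (the price and tax rate are made-up values):

```python
from decimal import Decimal, ROUND_HALF_UP

# A float tax computation carries binary rounding noise:
float_tax = 19.99 * 0.0825  # roughly 1.649175, but not exactly

# With Decimal, the arithmetic stays in base 10 and can be rounded
# to cents with an explicit, well-defined rounding mode:
price = Decimal("19.99")
rate = Decimal("0.0825")
tax = (price * rate).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

print(tax)  # 1.65
```

The explicit `quantize` call is the key advantage for money: rounding happens exactly where and how you choose, instead of being an accident of the binary representation.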
You get better speed with float because Python's float uses the hardware floating point registers when available (and they are available on modern computers), whereas Decimal uses a full software implementation.
However, you get better control with Decimal when you run into the classical floating point precision problems with the float type. See the classic Stack Overflow Q&A "Is floating point math broken?" for instance.
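The canonical example from that Q&A, shown for both types:

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 or 0.2 exactly:
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Decimal represents the decimal digits exactly:
print(Decimal("0.1") + Decimal("0.2"))                    # 0.3
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```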
Based on the time difference you are seeing, you are likely using Python 2.x. In Python 2.x, the decimal module is written in pure Python and is rather slow. Beginning with Python 3.3, the decimal module was rewritten in C and is much faster.
Using Python 2.7 on my system, the decimal module is ~180x slower. Using Python 3.5, the decimal module is only ~2.5x slower.
If you care about decimal performance, Python 3 is much faster.