I have the same set of data and am running the same code, but sometimes I get different results at the 19th decimal place and beyond. Although this is not a great concern to me for numbers less than 0.0001, it makes me wonder whether the 19th decimal place is Raku's limit of precision?
```
Word 104 differ:
0.04948872986571077   19 chars
0.04948872986571079   19 chars
Word 105 differ:
0.004052062278212545  20 chars
0.0040520622782125445 21 chars
```
TL;DR See the doc's outstanding Numerics page.
(I had forgotten about that page before I wrote the following answer. Consider this answer at best a brief summary of a few aspects of that page.)
There are two aspects to this. Internal precision and printing precision.
Raku supports arbitrary precision number types. Quoting Wikipedia's relevant page:
> digits of precision are limited only by the available memory of the host system
You can direct Raku to use one of its arbitrary precision types.[1] If you do so it will retain 100% precision until it runs out of RAM.
| Arbitrary precision type | Corresponding type checking[2] | Example of value of that type |
|---|---|---|
| `Int` | `my Int $foo ...` | `66174449004242214902112876935633591964790957800362273` |
| `FatRat` | `my FatRat $foo ...` | `66174449004242214902112876935633591964790957800362273 / 13234889800848443102075932929798260216894990083844716` |
Thus you can get arbitrary internal precision for integers and fractions (including arbitrary precision decimals).
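As a quick sketch of what that arbitrary precision looks like in practice (illustrative values of my own, not from the table above):

```raku
# Int arithmetic is exact no matter how large the value grows:
my Int $big = 2 ** 200;
say $big.chars;               # 61 — all 61 decimal digits are retained

# FatRat keeps exact fractions with arbitrarily large denominators:
my FatRat $third = FatRat.new(1, 3);
say $third * 3 == 1;          # True — no rounding error anywhere
```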
If you do not direct Raku to use an arbitrary precision number type, it will do its best but may ultimately switch to limited precision. For example, Raku will give up on 100% precision if a formula you use calculates a `Rat` and the number's denominator exceeds 64 bits.[1]
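A minimal sketch of that fallback, assuming current Rakudo behavior (the loop count here is just an easy way to push the denominator past 64 bits):

```raku
my $x = 1 / 3;       # an exact Rat
say $x.WHAT;         # (Rat)

# Repeated squaring grows the denominator (a power of 3) past 64 bits,
# at which point Rakudo downgrades the result to a floating-point Num:
$x *= $x for ^7;     # the exact denominator would be 3 ** 128
say $x.WHAT;         # (Num) on current Rakudo
```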
Raku's fallback limited-precision number type is `Num`:

> On most platforms, [a `Num` is] an IEEE 754 64-bit floating point number, aka "double precision".
Quoting the Wikipedia page for that standard:
> Floating point is used ... when a wider range is needed ... even if at the cost of precision.
>
> The 53-bit significand precision gives from 15 to 17 significant decimal digits precision (2⁻⁵³ ≈ 1.11 × 10⁻¹⁶).
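A small demonstration of the difference, using `e`-notation literals (which construct `Num`s) versus plain decimal literals (which construct `Rat`s):

```raku
# Nums are IEEE 754 doubles, so the classic rounding surprise applies:
say 0.1e0 + 0.2e0 == 0.3e0;   # False — none of these are exactly representable

# Plain decimal literals are Rats, so the same arithmetic stays exact:
say 0.1 + 0.2 == 0.3;         # True
```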
Separate from internal precision is stringification of numbers.
(It was at this stage that I remembered the doc page on Numerics linked at the start of this answer.)
Quoting Printing rationals:
> Keep in mind that output routines like `say` or `put` ... may choose to display a `Num` as an `Int` or a `Rat` number. For a more definitive string to output, use the `raku` method or [for a rational number] `.nude`.
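For example (output shown as on current Rakudo; formatting details may vary by version):

```raku
my $r = 1 / 3;
say $r;          # 0.333333 — say picks a short, human-friendly form
say $r.raku;     # <1/3>    — a round-trippable literal
say $r.nude;     # (1 3)    — the numerator and denominator
```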
[1] You control the type of a numeric expression via the types of individual numbers in the expression, and the types of the results of numeric operations, which in turn depend on the types of the numbers. Examples:
- `1 + 2` is `3`, an `Int`, because both `1` and `2` are `Int`s, and `a + b` is an `Int` if both `a` and `b` are `Int`s;
- `1 / 2` is not an `Int` even though both `1` and `2` are individually `Int`s, but is instead `1/2` aka `0.5`, a `Rat`;
- `1 + 4 / 2` will print out as `3`, but the `3` is internally a `Rat`, not an `Int`, due to numeric infectiousness.
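That last point can be checked directly (a one-liner of my own, not from the answer):

```raku
my $n = 1 + 4 / 2;
say $n;          # 3
say $n.WHAT;     # (Rat) — the division infected the whole expression
```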
[2] All that enforcement does is generate a run-time error if you try to assign or bind a value that is not of the numeric type you've specified as the variable's type constraint. Enforcement doesn't mean that Raku will convert numbers for you. You have to write your formulae to ensure the result you get is what you want.[1] You can use coercion -- but coercion cannot regain precision that's already been lost.