I have been doing some reading, and I'm having a tough time understanding how to interpret something declared as "digits x", e.g.
type something is digits 6
I get that it's 6 digits of precision, but what has me mixed up is what that actually means. Is it:
1) Y.XXXXXX (6 X's), or
2) XXX.XXX (any arrangement, as long as there are always 6 digits in total, counting both before and after the decimal point)?
...
I'm just trying to understand the range of something that is digits 6 (or digits n, to be more generic). Is there a formula I can simply plug into to determine the range of a type declared with some number of digits?
A type declared with digits is a floating-point type, similar to Float or Long_Float.
The 6 is "the minimum number of significant decimal digits required for the floating point type". For example, all the following will be represented reasonably accurately (but not exactly):
type My_Real is digits 6;
X: My_Real := 1.23456;
Y: My_Real := 12345.6;
Z: My_Real := 1.23456E7;
In practice, there are usually just two or three underlying floating-point types on a given system, and the compiler will choose an appropriate one as the underlying type for your declaration. As a result, two types declared with digits 2 and digits 6 will probably have exactly the same representation and precision.
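If you want to see what you actually got, you can ask the type itself through its attributes. Below is a minimal sketch; the attribute names are standard Ada, but the numbers it prints depend entirely on which underlying type your compiler picked. (If I remember the rules correctly, a plain digits D declaration is only guaranteed a range of at least roughly -10.0**(4*D) .. +10.0**(4*D); what you actually get is normally the full range of the underlying hardware type, which is far wider.)
with Ada.Text_IO; use Ada.Text_IO;

procedure Show_My_Real is
   type My_Real is digits 6;
begin
   --  Requested precision vs. what the underlying type provides
   Put_Line ("Digits requested :" & Integer'Image (My_Real'Digits));
   Put_Line ("Base digits      :" & Integer'Image (My_Real'Base'Digits));
   --  How values are actually stored
   Put_Line ("Machine radix    :" & Integer'Image (My_Real'Machine_Radix));
   Put_Line ("Machine mantissa :" & Integer'Image (My_Real'Machine_Mantissa));
   --  The range you were asking about
   Put_Line ("First :" & My_Real'Image (My_Real'First));
   Put_Line ("Last  :" & My_Real'Image (My_Real'Last));
end Show_My_Real;
On a typical desktop compiler, digits 6 will probably be mapped onto the 32-bit IEEE format (24-bit mantissa, range of about ±3.4E+38), and anything from digits 7 up to digits 15 onto the 64-bit format.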
Understanding the phrase "not exactly" requires an understanding of floating-point that's well beyond the scope of a single question, but if you're familiar with floating-point in other languages, it's the same general idea.
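If you want to see the "not exactly" part for yourself, print a value with more digits than you declared. A small sketch (the exact digits in the tail depend on the underlying type the compiler chose):
with Ada.Text_IO;

procedure Show_Rounding is
   type My_Real is digits 6;
   package Real_IO is new Ada.Text_IO.Float_IO (My_Real);

   X : My_Real := 1.23456;
begin
   --  Asking for 20 digits after the point exposes the rounding that
   --  happens when 1.23456 is converted to a binary fraction.
   Real_IO.Put (X, Fore => 2, Aft => 20, Exp => 0);
   Ada.Text_IO.New_Line;
end Show_Rounding;
With a 32-bit underlying type this typically prints something like 1.23456001281738..., which is the binary value closest to 1.23456; with a 64-bit type the stray digits just show up much further out.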
If you want a general understanding of what floating-point is and how it works, the Wikipedia article isn't bad. A much more advanced treatment is David Goldberg's classic paper "What Every Computer Scientist Should Know About Floating-Point Arithmetic", which is available online both as a web page and as a PDF.