
Input of a double precision number

I am writing some astronomical programs, and I have available the source code for Jeffrey Sax's implementation of the algorithms in Meeus' book Astronomical Algorithms.

One of the functions he has written is ReadReal(), which obtains a real number from the user (via the keyboard or terminal). An extract from this function looks like this:

scanf("%lf", &r);
return r * 1.000000000000001;

The multiplication by the constant on the second line evidently has something to do with rounding but I cannot see exactly what. I have searched for answers, and the constant appears in many places on various sites but not in this context. Does anyone have experience of this or know what is going on here? Is it important?

Thanks for any help.

asked Jun 23 '11 by jmv



1 Answer

Floating-point numbers in most architectures (which use the IEEE 754 representation) can represent exactly only those numbers that have a finite binary expansion, i.e. that can be written exactly as a binary string like 11.00100100001 (whose length is limited by the size of the floating-point type, e.g. 53 significand bits for double).

Any number not of this form, i.e. not a finite sum of powers of two, such as 1/3, 1/5 or 1/10, can never be expressed exactly by such a floating-point variable.

Since users often enter values like 0.1 rather than the exactly representable 0.125, this loss of exactness shows up quite early in settings such as yours. Multiplying by that constant is one way the author, on his platform, found to get closer to what he thought the user intended. It's all subjective, though. If you just print with short precision, printf("%0.5f", x), you shouldn't notice the lack of exactness.

answered Sep 21 '22 by Kerrek SB