 

Fortran precision default with/out compiler flag

Tags:

fortran

I'm having trouble with the precision of numeric constants in Fortran.

Do I need to write every 0.1 as 0.1d0 to get double precision? I know the compiler has a flag such as -fdefault-real-8 in gfortran that solves this kind of problem. Would that be a portable and reliable way to do it? And how could I check whether the flag actually works for my code?

I was using F2py to call Fortran code from my Python code, and it doesn't report an error even if I pass a flag it does not recognize, which is what worries me.

asked Jun 18 '18 by Mike


2 Answers

In a Fortran program, 1.0 is always a default real literal constant and 1.0d0 is always a double precision literal constant.

However, "double precision" means different things in different contexts.

In Fortran contexts "double precision" refers to a particular kind of real which has greater precision than the default real kind. In more general communication "double precision" is often taken to mean a particular real kind of 64 bits which matches the IEEE floating point specification.

gfortran's compiler flag -fdefault-real-8 means that the default real takes 8 bytes and is likely to be the representation the compiler uses for IEEE double precision.

So, 1.0 is a default real literal constant, not a double precision literal constant, but a default real may happen to be the same as IEEE double precision.
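
One simple way to check what a flag like -fdefault-real-8 actually does to your constants is to print their kind and precision and compare a build with the flag against one without it; a minimal sketch:

program check_kinds
    implicit none
    ! kind() and precision() report the kind number and decimal precision
    ! of their arguments, so this shows directly what the literals are.
    ! Compile once without and once with -fdefault-real-8 and compare.
    print *, "kind(1.0)   =", kind(1.0),   "  precision =", precision(1.0)
    print *, "kind(1.0d0) =", kind(1.0d0), "  precision =", precision(1.0d0)
end program check_kinds

Without the flag, kind(1.0) is the default real kind; with -fdefault-real-8 it changes, which is how you can verify that the flag is actually in effect for your code.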

Questions like this one reflect the implications that the precision of literal constants can have. To anyone who asks my advice about flags like -fdefault-real-8, I would say: avoid them.
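
A portable alternative to such flags is to define a named kind parameter and attach it to each literal with an underscore suffix, so the precision is chosen in the source itself rather than on the command line; a minimal sketch (the name wp is arbitrary):

program use_kind_parameter
    implicit none
    ! wp ("working precision") requests at least 15 decimal digits and a
    ! decimal exponent range of at least 307, i.e. at least IEEE double.
    integer, parameter :: wp = selected_real_kind(15, 307)
    real(wp) :: x
    x = 0.1_wp                 ! the _wp suffix gives the literal the kind wp
    print *, x, kind(x)
end program use_kind_parameter

With this convention you write 0.1_wp instead of 0.1d0, and changing wp in one place changes the precision of every constant that carries the suffix.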

answered Nov 12 '22 by francescalus


Adding to @francescalus's answer above: since the definition of double precision can change across platforms and compilers, it is, in my opinion, good practice to declare the desired kind of each constant explicitly, using the standard Fortran convention, as in the following example:

program test
    use, intrinsic :: iso_fortran_env, only: RK => real64
    implicit none
    write(*,"(*(g20.15))") "real64: ", 2._RK / 3._RK
    write(*,"(*(g20.15))") "double precision: ", 2.d0 / 3.d0
    write(*,"(*(g20.15))") "single precision: ", 2.e0 / 3.e0
end program test

Compiling this code with gfortran gives:

$gfortran -std=gnu *.f95 -o main
$main
            real64: .666666666666667    
  double precision: .666666666666667    
  single precision: .666666686534882    

Here, the results on the first two lines (the explicit request for the 64-bit real kind, and the double precision kind) are the same. In general, however, this may not be the case: the double precision result can depend on the compiler flags or the platform, whereas the real64 kind always gives 64-bit real computation, regardless of the default real kind.
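
To check whether double precision and real64 actually coincide for a particular compiler and set of flags, one can compare their kind numbers and precisions directly; a minimal sketch:

program compare_kinds
    use, intrinsic :: iso_fortran_env, only: real64
    implicit none
    ! If the kind numbers match, double precision and real64 are the same
    ! kind for this compiler/flag combination; flags such as
    ! -fdefault-real-8 can make them differ.
    print *, "kind(1.0d0) =", kind(1.0d0), "   real64 =", real64
    print *, "precision(1.0d0) =", precision(1.0d0), &
             "   precision(1._real64) =", precision(1._real64)
end program compare_kinds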

Now consider another scenario, where a real variable is declared with the 64-bit kind but the numerical computation is done in 32-bit precision:

program test
    use, intrinsic :: iso_fortran_env, only: RK => real64
    implicit none
    real(RK) :: real_64
    real_64 = 2.e0 / 3.e0
    write(*,"(*(g30.15))") "32-bit accuracy is returned: ", real_64
    real_64 = 2._RK / 3._RK
    write(*,"(*(g30.15))") "64-bit accuracy is returned: ", real_64
end program test

which gives the following output:

$gfortran -std=gnu *.f95 -o main
$main
 32-bit accuracy is returned:          0.666666686534882    
 64-bit accuracy is returned:          0.666666666666667    

Even though the variable is declared as real64, the result on the first line is still wrong, in the sense that it does not have the 64-bit (double precision) accuracy you want. The reason is that the computation is first carried out in the (default, 32-bit) precision of the literal constants and only then stored in the 64-bit variable real_64, hence the difference from the more accurate answer on the second line of the output.

So the bottom-line message is: it is always good practice to declare the kind of literal constants in Fortran explicitly, using the "underscore" convention.

answered Nov 12 '22 by Scientist