 

What is the biggest array size for double precision in Fortran 90?

Sorry if this is not the correct place to ask this question; it is not about programming as such, but rather a technical question. I need to work with enormous arrays of 2D vectors in double precision, approximately 10 million of them. In other programs I have had memory problems dealing with this kind of array. My question is whether there is some kind of limit on the array size in double precision.

I work on Linux, Intel dual core, 32-bit. Thanks.

asked Jan 11 '23 by JoeCoolman


2 Answers

OK, I will explain why the number of bytes is limited, not only the element count. During array indexing, the address of each element must be calculated, and that address must fit into a C intptr_t variable. Likewise, the size of the array in bytes must fit into a C size_t variable. On modern machines both are 32 bits wide in 32-bit programs and 64 bits wide in 64-bit programs. The same holds for the virtual memory addressable by the program, and also for the memory addressable by the OS and the CPU, though those can be 64-bit even when the program itself is 32-bit.

This is the fundamental reason why 32-bit programs and operating systems cannot address more than 4 GB of memory. Even if you could somehow compute the address using a Fortran variable wider than the chosen CPU word size, the CPU simply cannot access it.
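As a rough illustration of these limits (my own sketch, not part of the original answer; the program and constant names are mine, and it assumes a Fortran 2008 compiler providing int32/int64 from iso_fortran_env), the following snippet prints the largest byte count that fits in a signed 32-bit integer and how many 8-byte and 32-byte elements fit below it:

! Minimal sketch: print the byte-size limits discussed above.
program address_limits
  use iso_fortran_env, only: int32, int64
  implicit none
  integer(int64), parameter :: max_bytes_32 = huge(1_int32)   ! 2**31 - 1 bytes
  integer(int64), parameter :: addr_space_32 = 2_int64**32    ! 4 GiB address space

  write(*,*) 'largest signed 32-bit byte count  :', max_bytes_32
  write(*,*) 'max double precision (8 B) elems  :', max_bytes_32 / 8_int64
  write(*,*) 'max complex(16) (32 B) elems      :', max_bytes_32 / 32_int64
  write(*,*) 'full 32-bit address space in bytes:', addr_space_32
end program address_limits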

Finally, I made an experiment with Intel Fortran in 32-bit mode, using an array with 32-byte elements:

complex(16), allocatable :: a(:)   ! each element occupies 32 bytes
 do i=1,100
   allocate(a(2**i))               ! request 2**i elements
   a(size(a)) = 1                  ! touch the last element
   deallocate(a)
   write(*,*) i                    ! report the last power of two that worked
 end do
end

ifort arraysize.f90 -m32 -check -traceback -g

The output is as expected:

       1
       2
       3
       4
       5
       6
       7
       8
       9
      10
      11
      12
      13
      14
      15
      16
      17
      18
      19
      20
      21
      22
      23
      24
      25
      26
    forrtl: severe (179): Cannot allocate array - overflow on array size calculation.

As expected, the size of the array in bytes overflowed and the program crashed long before the indexing variable could overflow. This is not a compiler-specific feature; there is a fundamental reason behind it.

answered Apr 26 '23 by Vladimir F Героям слава


The Fortran language standards don't define a limit to the size of arrays that a program can (attempt to) declare or allocate. In practice you may find that your compiler limits the total number of elements in an array to 2^31-1 or 2^63-1, depending on whether your default integer size is 32- or 64-bits. You may find that the maximum size of any dimension of an array is also limited to the same values.
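A hedged sketch of that point (my own example, not from the answer; the program name, array name, and element count are arbitrary, and it assumes iso_fortran_env's int64 is available): requesting the element count with a 64-bit integer at least keeps the requested size itself from overflowing a default 32-bit integer, although the allocation may still fail because of compiler limits or lack of memory.

program big_count
  use iso_fortran_env, only: int64
  implicit none
  integer(int64) :: n
  integer :: stat
  double precision, allocatable :: x(:)

  n = 3_int64 * 10_int64**9        ! more elements than huge(1) = 2**31 - 1
  allocate(x(n), stat=stat)        ! stat= avoids a hard crash if the request fails
  if (stat /= 0) then
     write(*,*) 'allocation failed, stat =', stat
  else
     write(*,*) 'allocated', size(x, kind=int64), 'elements'
     deallocate(x)
  end if
end program big_count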

In practice the maximum size of array that you can declare will be limited by the RAM available on your computer. Since a double precision value occupies 8 bytes it's relatively easy for you to calculate the maximum bounds of arrays that you are likely to be able to work with. Any storage overhead required for an array is tiny compared with the volumes of data you seem to want to work with.
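For the data set in the question (roughly 10 million 2D vectors), a quick back-of-the-envelope check along those lines gives 10^7 vectors × 2 components × 8 bytes ≈ 160 MB, which should fit comfortably even in a 32-bit process. A small sketch (the array name and shape are my own choice, not from the question):

program estimate
  implicit none
  integer, parameter :: n = 10000000       ! number of 2D vectors
  double precision, allocatable :: v(:,:)

  allocate(v(2, n))                        ! 2 components per vector
  write(*,*) 'storage in MB:', size(v) * 8.0d0 / 1.0d6   ! ~160 MB
  deallocate(v)
end program estimate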

In response to VladimirF's comments:

  • I meant, and still mean, the number of elements, not the number of bytes. It is the number of elements which determines the maximum index value required to access an element of an array.
  • It may be that some compilers impose a limit to the number of bytes used in a single array, but that is not a point I am making.
  • Fortran arrays can, of course, be indexed from 0, indeed from any positive or negative integer within range, but that is really just a convenience for the programmer (a short sketch follows below).
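To illustrate that last point, a minimal sketch (my own example, not from the answer) of declaring arrays with non-default lower bounds:

program bounds_demo
  implicit none
  double precision :: a(0:9)                 ! indexed 0..9
  double precision, allocatable :: b(:)

  allocate(b(-5:5))                          ! indexed -5..5, 11 elements
  a(0) = 1.0d0
  b(-5) = 2.0d0
  write(*,*) lbound(a), ubound(a), size(b)   ! prints 0 9 11
end program bounds_demo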
answered Apr 27 '23 by High Performance Mark