Fortran global work array vs. local dynamically allocated arrays

I am working with an older F77 code that has been upgraded to F9X. It still has some of the older "legacy" code structure, and I'm curious about the performance implications of adding code in the legacy way versus the modern way. We have a separate F9X code that we are trying to integrate into this older code, reusing as many of its procedures as possible instead of rewriting our own versions. Also note: assume that none of these procedures are explicitly interfaced.

Specifically, the old code has one large rank-1 work array that is allocated in the main program; as this array is passed deeper into procedures, it is split apart and used wherever it is needed. Essentially there is one allocation/deallocation, and the only overhead is computing the (trivial) starting indices of the temporary arrays that are needed and passing those sections of the work array into each procedure.
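
For reference, here is a minimal sketch of that legacy pattern; the names work and solve_block, the sizes, and the three-way split are all hypothetical, just to illustrate the bookkeeping:

    program legacy_work
       implicit none
       integer, parameter :: n = 1000
       double precision, allocatable :: work(:)   ! single global work array
       integer :: i1, i2, i3

       allocate(work(3*n))            ! one allocation for the whole run

       ! trivial bookkeeping: starting indices of the scratch arrays inside work
       i1 = 1
       i2 = i1 + n
       i3 = i2 + n

       ! pass starting elements; the implicitly interfaced callee sees them
       ! as three separate length-n arrays via sequence association
       call solve_block(work(i1), work(i2), work(i3), n)

       deallocate(work)               ! one deallocation at the end
    end program legacy_work

    subroutine solve_block(a, b, c, n)
       implicit none
       integer :: n
       double precision :: a(n), b(n), c(n)
       ! ... use a, b, c as scratch space ...
       a = 1.0d0
       b = 2.0d0
       c = a + b
    end subroutine solve_block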

Our new code generally calls lower-level procedures from the old code whose multiple dummy arrays originally came from the old code's global work array. Instead of the hassle of creating our own work array, finding starting indices, and passing all of these array sections along with their starting indices, I could simply create dynamically allocated arrays where they are needed. However, these procedures can be called thousands of times (possibly millions for some lower-level routines) during a run, and I am concerned about the overhead of allocating and deallocating every time one of these procedures is used. Also, these temporary arrays can contain many millions of double precision elements.
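
A minimal sketch of that allocatable alternative, reusing the hypothetical solve_block from the sketch above (the name new_driver is also made up):

    subroutine new_driver(n)
       implicit none
       integer :: n
       ! scratch arrays allocated where they are needed instead of
       ! being carved out of a global work array
       double precision, allocatable :: a(:), b(:), c(:)

       allocate(a(n), b(n), c(n))     ! paid on every call of this routine
       call solve_block(a, b, c, n)   ! same low-level routine as before
       deallocate(a, b, c)            ! optional in F95+: non-SAVEd local
                                      ! allocatables are freed on return
    end subroutine new_driver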

I've also dabbled with automatic arrays but stopped when I started encountering stack overflow issues, and I now almost exclusively use dynamic arrays. I've heard different things about how memory for the different kinds of arrays is placed on the stack versus the heap, but I really don't know the difference or which is better (performance, efficiency, etc.).
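
For comparison, the automatic-array version of the same hypothetical driver would look like this; the arrays are sized from the dummy argument at entry and are the ones most likely to land on the stack:

    subroutine new_driver_auto(n)
       implicit none
       integer :: n
       ! automatic arrays: sized from the dummy argument n at entry,
       ! typically placed on the stack (large n can overflow it)
       double precision :: a(n), b(n), c(n)

       call solve_block(a, b, c, n)   ! same low-level routine as before
    end subroutine new_driver_auto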

Long story short: are these dynamically allocated (or automatic) arrays going to be significantly less efficient due to allocation overhead? I also realize that dynamically allocated arrays are more robust over the lifetime of the code, but what I am really after is performance. A 5% performance gain could mean many hours saved in code execution.

I realize I might not get a definitive answer to this due to differences in compiler optimizations and other factors but I'm curious if anyone might have some knowledge/experience with anything similar. Thanks for your help.

Asked Jun 17 '11 by Seth

1 Answer

I think that any answers are going to be guesses and speculation. My guess: array creation is going to be a very low CPU load. Unless these subroutines do only a negligible amount of computation, the differing overhead of the different array types won't be noticeable. But the only way to be sure is to try the different methods and time them, e.g., with the Fortran intrinsic cpu_time.
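
For example, a timing harness along these lines (reusing the hypothetical new_driver / new_driver_auto routines sketched in the question) would show whether the difference matters for your sizes and call counts:

    program time_variants
       implicit none
       integer, parameter :: n = 100000, reps = 10000
       double precision :: t0, t1
       integer :: i
       external new_driver, new_driver_auto

       call cpu_time(t0)
       do i = 1, reps
          call new_driver(n)        ! allocatable scratch arrays
       end do
       call cpu_time(t1)
       print *, 'allocatable version:', t1 - t0, 'seconds'

       call cpu_time(t0)
       do i = 1, reps
          call new_driver_auto(n)   ! automatic scratch arrays (watch the stack)
       end do
       call cpu_time(t1)
       print *, 'automatic version:  ', t1 - t0, 'seconds'
    end program time_variants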

Automatic arrays are usually placed on the stack, but some compilers place large automatic arrays on the heap, and some compilers have an option to change this behavior. Allocatable arrays are probably on the heap.

Answered Nov 14 '22 by M. S. B.