I tried to compile the following function to see what gcc made of it:
#include <stdint.h>
#include <stddef.h>
typedef struct giga
{
    uint64_t g[0x10000000];
} giga;

uint64_t addfst(giga const *gptr, size_t num)
{
    uint64_t retval = 0;
    for (size_t i = 0; i < num; i++)
    {
        retval += gptr[i].g[0];
    }
    return retval;
}
I found gcc maxing out my memory and swapping itself to death. This happens when optimizing at -O3; I haven't tried to dissect the exact flag(s) responsible. Testing the function on gcc.godbolt shows the problem to be gcc-specific, affecting both the 4.8 and 4.9 versions.
Is this a genuine compiler bug, or is my function broken?
A 32-bit version of gcc can only address roughly 2-3 GB of memory, no matter how large the swap is. Try enlarging the swap space, but keep reading if that doesn't help: for a 32-bit gcc there will always be a hard limit on the amount of memory it can use.
GCC is slower to compile than clang, so I spend a lot of time compiling, but my final system is (usually) faster with GCC, so I have set GCC as my system compiler.
The bug is on the gcc bugzilla: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65518. It has been confirmed and apparently fixed in trunk. Here's hoping the fix eventually trickles down to my distro. Thanks, everyone!