I'm using a server with 128 GB of memory to do some computation. I need to malloc() a 2D float array of size 56120 * 56120. Example code is as follows:

#include <iostream>
#include <cstdlib>
using namespace std;

int main(int argc, char const *argv[])
{
    float *ls;
    int num = 56120, i, j;
    ls = (float *)malloc((num * num) * sizeof(float));
    if (ls == NULL) {
        cout << "malloc failed !!!" << endl;
        while (1);
    }
    cout << "malloc succeeded ~~~" << endl;
    return 0;
}
The code compiles successfully, but when I run it, it prints "malloc failed !!!". By my calculation, the whole array should only take about 11 GB of memory. Before running the program I checked the server, and about 110 GB of memory was free. Why does the allocation fail?
I also found that if I reduce num to, say, 40000, then the malloc() succeeds.
Does this mean that there is a limit on the maximum amount of memory that malloc() can allocate?
Moreover, if I change the way of allocation and directly declare a float array of that size, as follows:

#include <iostream>
using namespace std;

int main(int argc, char const *argv[])
{
    int num = 56120, i, j;
    float ls[3149454400];
    if (ls == NULL) {
        cout << "malloc failed !!!" << endl;
        while (1);
    }
    cout << "malloc succeeded ~~~" << endl;
    for (i = num - 10; i < num; i++) {
        for (j = num - 10; j < num; j++) {
            ls[i * num + j] = 1;
        }
    }
    for (i = num - 11; i < num; i++) {
        for (j = num - 11; j < num; j++) {
            cout << ls[i * num + j] << endl;
        }
    }
    return 0;
}
then when I compile and run it, I get a "Segmentation fault".
How can I solve this?
The problem is that your calculation

(num * num) * sizeof(float)

starts out as 32-bit signed integer arithmetic. For num = 56120 the product num * num overflows INT_MAX (which is undefined behavior), and the whole expression effectively evaluates to

-4582051584

which, converted to size_t, becomes the very huge value

18446744069127500032

You do not have that much memory ;) That is why malloc() fails.

Cast num to size_t in the size calculation you pass to malloc(); then it should work as expected.
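For illustration, here is a minimal sketch of that fix as a standalone plain-C program (an assumption: a 64-bit platform where size_t is 64 bits wide; the cast on malloc()'s result is kept only so the same snippet also compiles as C++):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num = 56120;
    float *ls;

    /* (size_t)num forces the whole product into size_t arithmetic,
       so the 32-bit signed overflow never happens. */
    ls = (float *)malloc((size_t)num * num * sizeof(float));
    if (ls == NULL) {
        printf("malloc failed !!!\n");
        return 1;
    }
    printf("malloc succeeded ~~~\n");
    free(ls);
    return 0;
}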
As others have pointed out, 56120 * 56120 overflows int math on OP's platform. That is undefined behavior (UB).
malloc() takes a size_t argument, and the value passed to it is best calculated using at least size_t math. Reversing the multiplication order accomplishes this: sizeof(float) * num causes num to be widened to at least size_t before the multiplication.
int num = 56120, i, j;
// ls = (float *)malloc((num * num) * sizeof(float));
ls = (float *)malloc(sizeof(float) * num * num);
Even though this prevents UB, it does not prevent overflow: mathematically, sizeof(float) * 56120 * 56120 may still exceed SIZE_MAX.
Code could detect potential overflow beforehand:

if (num < 0 || SIZE_MAX / sizeof(float) / num < num) Handle_Error();
There is no need to cast the result of malloc().
Using the size of the referenced variable (sizeof *ls) is easier to code and maintain than sizing to the type.
When num == 0, malloc(0) == NULL is not necessarily an out-of-memory condition.
All together:

int num = 56120;

/* Reject negative num and products that would exceed SIZE_MAX. */
if (num < 0 || ((num > 0) && SIZE_MAX / (sizeof *ls) / num < num)) {
    Handle_Error();
}

ls = malloc(sizeof *ls * num * num);

/* malloc(0) may legitimately return NULL, so only treat NULL as
   out-of-memory when num != 0. */
if (ls == NULL && num != 0) {
    Handle_OOM();
}
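For completeness, here is a hedged sketch of how those pieces might fit together as a standalone C99 program (an assumption, not code from the answer above: Handle_Error()/Handle_OOM() are replaced with simple error returns, and size_t loop indices are used so the i * num + j indexing from the question cannot overflow int either):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num = 56120;
    float *ls;

    if (num < 0 || ((num > 0) && SIZE_MAX / (sizeof *ls) / num < (size_t)num)) {
        fprintf(stderr, "size overflow\n");
        return 1;
    }
    ls = malloc(sizeof *ls * num * num);
    if (ls == NULL && num != 0) {
        fprintf(stderr, "out of memory\n");
        return 1;
    }

    /* Touch the last few elements, indexing with size_t throughout. */
    size_t n = (size_t)num;
    for (size_t i = n - 10; i < n; i++) {
        for (size_t j = n - 10; j < n; j++) {
            ls[i * n + j] = 1.0f;
        }
    }
    printf("ls[%zu] = %g\n", n * n - 1, (double)ls[n * n - 1]);

    free(ls);
    return 0;
}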
int num = 56120, i, j;
ls = (float *)malloc((num * num) * sizeof(float));
num * num is 56120 * 56120, which is 3149454400; that overflows a signed int, which causes undefined behavior.
The reason 40000 works is that 40000 * 40000 = 1600000000 is representable as an int.
Change the type of num to long long (or even unsigned int).
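A quick sketch of that change as a standalone program (an assumption: a 64-bit platform where size_t is 64 bits wide; the cast on malloc() is kept from the question's C++ code so the snippet compiles in either language):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* With long long, num * num is a 64-bit multiplication, so
       3149454400 is computed correctly instead of wrapping a 32-bit int. */
    long long num = 56120;
    float *ls = (float *)malloc(num * num * sizeof(float));

    if (ls == NULL) {
        printf("malloc failed !!!\n");
        return 1;
    }
    printf("malloc succeeded ~~~\n");
    free(ls);
    return 0;
}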