
Does allocating memory and then releasing constitute a side effect in a C++ program?

Inspired by this question about whether the compiler can optimize away a call to a function without side effects. Suppose I have the following code:

delete[] new char[10];

It does nothing useful. But does it have a side effect? Is heap allocation immediately followed by a deallocation considered a side effect?

asked Jul 08 '11 by sharptooth



3 Answers

It's up to the implementation. Allocating and freeing memory isn't "observable behavior" unless the implementation decides that it's observable behavior.

In practice, your implementation probably links against a C++ runtime library of some sort, and when your translation unit (TU) is compiled, the compiler is forced to recognize that calls into that library may have observable effects. As far as I know, that's not mandated by the standard; it's just how things normally work. If an optimizer can somehow work out that certain calls, or combinations of calls, in fact don't affect observable behavior, then it can remove them, so I believe that a special case to spot your example code and remove it would conform.
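
For a rough idea of what the compiler actually sees here, delete[] new char[10]; for a type with no destructor boils down to two calls into the runtime's allocation functions. This is a simplified sketch; real code generation also has to deal with exceptions and, on common ABIs, array cookies for class types with non-trivial destructors:

#include <new>

int main()
{
    // Roughly what delete[] new char[10]; lowers to when the element type
    // has no destructor: two opaque calls into the runtime library.
    void *p = ::operator new[](10);
    ::operator delete[](p);
}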

Also, I can't remember how user-defined global new[] and delete[] work [I've been reminded]. Since the code might call definitions of those operators in another user-defined TU that's later linked to this one, the calls can't be optimized away at compile time. They could be removed at link time if it turns out that the operators aren't user-defined (although then the point about the runtime library applies), or are user-defined but have no side effects (once the pair of them is inlined - this seems pretty implausible in a reasonable implementation, actually[*]).
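
To illustrate the user-defined case, here is a minimal sketch (simplified: no new-handler loop; the output exists only to make the calls observable). Once the replacements have visible behavior, the pair clearly can't be dropped:

#include <cstdio>
#include <cstdlib>
#include <new>

// Replacement global array forms with observable behavior (output).
void *operator new[](std::size_t n)
{
    std::printf("operator new[](%zu)\n", n);
    if (void *p = std::malloc(n))
        return p;
    throw std::bad_alloc();
}

void operator delete[](void *p) noexcept
{
    std::puts("operator delete[]");
    std::free(p);
}

int main()
{
    delete[] new char[10]; // must print both messages
}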

I'm pretty sure that you aren't allowed to rely on the exception from new[] to "prove" whether or not you've run out of memory. In other words, just because new char[10] doesn't throw this time, doesn't mean it won't throw after you free the memory and try again. And just because it threw last time and you haven't freed anything since, doesn't mean it'll throw this time. So I don't see any reason on those grounds why the two calls can't be eliminated - there's no situation where the standard guarantees that new char[10] will throw, so there's no need for the implementation to find out whether it would or not. For all you know, some other process on the system freed 10 bytes just before the call to new[], and allocated it just after the call to delete[].
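
In code, a check like the one below proves nothing beyond the single call it wraps (just a sketch of the point, not a recommendation):

#include <cstdio>
#include <new>

int main()
{
    try {
        delete[] new char[10]; // succeeded this time...
    } catch (const std::bad_alloc &) {
        std::puts("out of memory right now");
    }
    // ...but nothing guarantees the next new char[10] behaves the same way:
    // other allocations (or other processes) may have changed the situation.
}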

[*]

Or maybe not. Suppose new doesn't check for space (perhaps relying on guard pages) and just bumps a pointer, while delete normally does nothing (relying on process exit to free memory) except that, in the special case where the block being freed is the last block allocated, it bumps the pointer back down. Then your code could be equivalent to:

// new[]  (the globals are char* bump pointers into a pre-reserved region)
global_last_allocation = global_next_allocation;
global_next_allocation += 10 + sizeof(size_t);
char *tmp = global_last_allocation;
*((size_t *)tmp) = 10; // store the size; code to handle alignment requirements is omitted
tmp += sizeof(size_t);

// delete[]
tmp -= sizeof(size_t);
if (tmp == global_last_allocation) {
    // roll back the header plus the stored size
    global_next_allocation -= sizeof(size_t) + *((size_t *)tmp);
}

Almost all of that could be removed, assuming nothing is volatile, leaving just global_last_allocation = global_next_allocation;. You could get rid of that too by storing the prior value of last in the block header along with the size, and restoring that prior value when the last allocation is freed. That's a pretty extreme memory allocator implementation, though: you'd need a single-threaded program and a speed-demon programmer who is confident the program won't churn through more memory than was made available to begin with.
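
For illustration, a toy version of that last variant might look like this (a single-threaded sketch under the assumptions above: fixed pre-reserved pool, no out-of-memory check; the names toy_new and toy_delete are made up for the example):

#include <cstddef>
#include <new>

alignas(std::max_align_t) static char arena[1 << 20];  // pre-reserved pool
static char *next_alloc = arena;                        // plays global_next_allocation
static char *last_alloc = nullptr;                      // plays global_last_allocation

struct Header { std::size_t size; char *prev_last; };

void *toy_new(std::size_t n)
{
    // round the payload up so the next Header stays suitably aligned
    n = (n + alignof(Header) - 1) & ~(alignof(Header) - 1);
    char *block = next_alloc;
    new (block) Header{n, last_alloc};  // remember the size and the previous "last"
    last_alloc = block;
    next_alloc = block + sizeof(Header) + n;
    return block + sizeof(Header);
}

void toy_delete(void *p)
{
    if (!p) return;
    char *block = static_cast<char *>(p) - sizeof(Header);
    if (block == last_alloc) {          // only the newest block can be reclaimed
        Header *h = reinterpret_cast<Header *>(block);
        next_alloc = block;
        last_alloc = h->prev_last;      // restore the prior "last allocation"
    }                                   // otherwise leave it for process exit
}

int main()
{
    void *p = toy_new(10);
    toy_delete(p);  // p was the newest block, so the bump pointer rolls back
}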

answered by Steve Jessop


new[] and delete[] could ultimately result in system calls. Additionally, new[] might throw. With this in mind, I don't see how the new-delete sequence can be legitimately considered free from side effects and optimized away.

(Here, I assume no overloading of new[] and delete[] is involved.)
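
For example, the failure path of new[] is itself observable; here is a small sketch (the handler name is made up) showing one way it can surface:

#include <cstdio>
#include <cstdlib>
#include <new>

// Hypothetical handler: runs if an allocation cannot be satisfied.
void on_out_of_memory()
{
    std::puts("allocation failed");
    std::abort();
}

int main()
{
    std::set_new_handler(on_out_of_memory);
    delete[] new char[10]; // if the allocation failed, the handler would run
}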

answered by NPE


No. It should neither be removed by the compiler nor be considered to be a side effect. Consider the following:

struct A {
  static int counter;
  A () { counter++; }
};
int A::counter = 0; // definition required for the static member

int main ()
{
  A obj[2];           // counter = 2
  delete [] new A[3]; // counter = 2 + 3 = 5
}

Now, if the compiler removed this on the grounds that it has no side effects, the logic would go wrong. So even if you are not doing anything with the result, the compiler must assume that something useful is happening (in the constructor). That's the reason why

A(); // simple construction and destruction of a temporary object

is not optimized away.
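
To make that last point concrete, here is a small sketch using the same counter (the print is only there to observe the effect):

#include <cstdio>

struct A {
  static int counter;
  A () { counter++; }
};
int A::counter = 0;

int main ()
{
  A();                              // temporary: the constructor still runs
  std::printf("%d\n", A::counter);  // prints 1
}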

answered by iammilind