In C++ it is possible to allocate a const object on the heap:
const Class* object = new const Class();
const_cast<Class*>( object )->NonConstMethod(); // UB
so that any attempt to write to the object will be UB.
I don't get how such an object is different from a heap-allocated object that is not declared const:
const Class* object = new Class();
I mean, when I allocate an object on the stack, it goes into automatic storage, which is implementation-specific, so there might be some implementation-specific means of allocating const objects in a special way that would yield UB when I write to them.
Yet whenever I use new, the compiler is required to emit an invocation of the operator new() function, and that function can't possibly do anything different - it just allocates memory in a uniform manner, regardless of whether there was const in my code.
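For instance, the global allocation function only ever receives a size; constness never reaches it. A minimal sketch (reusing the Class placeholder from above):

struct Class {};

// Both expressions below call the same allocation function, which
// receives only a byte count - const is invisible to it:
//   void* operator new(std::size_t size);

int main() {
    const Class* a = new const Class(); // operator new(sizeof(Class))
    Class* b = new Class();             // operator new(sizeof(Class))
    delete a;
    delete b;
}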
How is a const heap-allocated object different from a non-const one, and how is undefined behavior possible if I try to modify it?
It's different because the created object has a different type (const Class instead of just Class), and it is undefined behavior because the standard says so.
That's the short version. There doesn't have to be a reason. (If anything, the inverse is true: there doesn't have to be a reason for something being UB. UB is the default state. It's only when there is a reason that something becomes well-defined.)
As for what it means in practice, or whether it can actually cause problems if you treat the object as non-const, the hardware is unlikely to do anything different. The const object obviously won't be written into some kind of read-only memory (because that's not possible), and the memory page it's in probably isn't going to be flagged as read-only once the object has been allocated.
But the compiler is allowed to assume that the object is const. So it may optimize or transform the code in a way that is legal if the object is guaranteed to be unchanging, but which breaks if the object is modified halfway through.
It's really not about how the object is stored in the hardware. Const or no const rarely makes a difference on the hardware level. But it makes a difference in the type system, and it makes a difference in how the compiler is able to transform the code.
If you tell the compiler that an object is const, then the compiler believes you, and generates code on the assumption that the object is const.
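As a minimal sketch of what can go wrong (Widget is a made-up class), the compiler is entitled to fold the final read below to the constructor's value, because it may assume a const-created object never changes:

#include <iostream>

struct Widget {
    int value;
    Widget() : value(1) {}
};

int main() {
    // The object itself is created const.
    const Widget* w = new const Widget();

    // UB: casting away const and writing to an object that was created const.
    const_cast<Widget*>(w)->value = 2;

    // The compiler may assume *w never changed after construction and
    // fold this read to 1 - or it may print 2, or anything else;
    // the behavior is undefined.
    std::cout << w->value << '\n';

    delete w;
}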
There is no difference in the object. There is a difference in the (compile-time) type of the variable(s) used to refer to the memory area.
This is semantic friction only: the variable is different, the actual memory used by the data bits is const/volatile agnostic.
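A minimal sketch of that distinction (Data is a made-up type) - the same bytes viewed through differently typed pointers:

struct Data { int x; };

int main() {
    Data d{0};
    Data* p = &d;        // non-const view of the object
    const Data* cp = &d; // const view of the very same bytes

    p->x = 42;           // fine: the object itself was never created const
    return cp->x;        // returns 42; only the pointer types differed
}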
For a very amusing and enlightening story describing similar semantic friction, see this all-time-favourite answer by Eric Lippert.
Treating const data in a non-const way can lead to Undefined Behaviour, because the compiler is allowed to do certain optimizations based on the knowledge that a const variable won't change¹. Changing it nonetheless (e.g. via const_cast<>) can lead to erroneous results, because the compiler's assumptions are actively negated.
¹ Note that volatile is there to help in cases where const variables can get modified concurrently. You can see how const is a 'local' won't/can't-touch promise, whereas volatile says: 'don't assume this won't change, even though it is not being written to in this code segment'.
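A sketch of that distinction, assuming a hypothetical memory-mapped status register that hardware updates but this program never writes:

// Defined elsewhere (hypothetical hardware-backed location).
extern const volatile int status_register;

int spin_until_ready() {
    // const: this code promises not to write to the register.
    // volatile: the compiler must not assume the value is unchanging,
    // so it re-reads memory on every iteration instead of hoisting
    // the load out of the loop.
    while (status_register == 0) {
        // busy-wait
    }
    return status_register;
}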
With current compilers, there is no technical difference. Undefined Behaviour includes things miraculously working.
I dimly remember that there was a proposal for const-qualified constructors, which would allow special-casing instances where the object will be const immediately after construction; that would be useful, e.g., for string classes that could allocate less memory if they didn't need to expect the string to grow.