There is a type std::size_t. It may be used to describe the size of an object, as it is guaranteed to be able to express the maximum size of any object (so it is written here). But what does that mean? We have no actual objects in memory yet. So does that mean this type can store an integer representing the largest amount of memory we could theoretically use?
If I try to write something like
size_t maxSize = std::numeric_limits<std::size_t>::max();
new char[maxSize];
I will get an error because the total size of the array is limited to 0x7fffffff. Why?
Moreover, if I pass a non-constant expression equal to maxSize, std::bad_array_new_length will be thrown. If I pass an expression that is less than maxSize but still greater than 0x7fffffff, std::bad_alloc will be thrown. I suppose std::bad_alloc is thrown because of a lack of memory, not because the size is greater than 0x7fffffff. Why does this happen? It seems natural to throw a special exception whenever the size we want to allocate is greater than 0x7fffffff (the maximum value for a constant passed to new[] at compile time). And why is std::bad_array_new_length thrown only if I pass maxSize? Is this case special?
By the way, if I pass maxSize to the vector's constructor like this:
vector<char> vec(maxSize);
std::bad_alloc will be thrown, not std::bad_array_new_length. Does that mean that vector uses a different allocator?
I'm trying to write my own array implementation. Using unsigned int to store the size, capacity and indices seems like a bad approach. So is it a good idea to define an alias like this:
typedef std::size_t size_type;
and use size_type instead of unsigned int?
size_t is a basic unsigned integer type of the C/C++ language. It is the type of the result returned by the sizeof operator. The type's size is chosen so that it can store the maximum size of a theoretically possible array of any type. On a 32-bit system size_t will take 32 bits, on a 64-bit one, 64 bits.
Use size_t for variables that model size or index in an array. size_t conveys semantics: you immediately know it represents a size in bytes or an index, rather than just another integer. Also, using size_t to represent a size in bytes helps make the code portable.
new operator. The new operator denotes a request for memory allocation on the free store. If sufficient memory is available, the new operator initializes the memory and returns the address of the newly allocated and initialized memory to the pointer variable.
The answer lies in the process involved in creating an object of dynamic storage duration.
In short, when the program executes a new expression such as new char[size]:
It checks that s = size * sizeof(char) + x is a valid size (the limit, 0x7fffffff here, is implementation-defined and depends on the ABI; x = 0 on most platforms when you create an array of a trivially destructible type). If the size is invalid, it throws std::bad_array_new_length; otherwise,
It calls the allocation function ::operator new(s). This function's first parameter is of type std::size_t, which is why std::size_t must be large enough to express the size of any object (an array is an object).
This allocation function asks the system to reserve a region of storage of size s. If the system succeeds in reserving this space, the function returns a pointer to the beginning of the storage region. Otherwise it calls the new handler and retries the allocation; if that fails once again, it throws std::bad_alloc.
If the allocation succeeds, it default-initializes (in this case) the size chars on the allocated storage (a no-op), and it may also store the size of the array on this allocated storage (the reason for the added x). (That stored size is used when executing a delete expression, to know how many destructors must be called. If the destructor is trivial, it is not necessary.)
You will find all the details in the c++ standard (§6.7.4 [basic.stc.dynamic], §8.3 [expr.new], §8.4 [expr.delete], §21.6 [support.dynamic]).
And for the last question: you may consider using a signed type for indices and object sizes. Even though an object size or index should never be negative, the standard imposes that unsigned arithmetic follows modular arithmetic, which seriously limits optimization. Moreover, unsigned integer arithmetic and comparison are a frequent source of bugs. std::size_t is unsigned for compatibility reasons; it was chosen to be unsigned because prehistoric machines were short on bits (16 bits or fewer)!