I recently wrote a function template which takes a reference to a C-array:
template <class T, size_t N>
void foo(T(&c_array)[N]);
Assuming T is char, the length of the C-string is N - 1 due to the null terminator. I realized I should probably handle the edge case where N == 0, because then N - 1 would be std::numeric_limits<std::size_t>::max().
So, in order to avoid the chaos that might ensue in the rare case that someone passes a zero-length array to this function, I placed a check for N == 0.
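To illustrate the wrap-around the questioner is worried about, here is a minimal standalone sketch (not part of the original function): because std::size_t is unsigned, subtracting 1 from a zero value wraps to the maximum representable value.
#include <cstddef>
#include <iostream>
#include <limits>

int main()
{
    std::size_t n = 0;
    // Unsigned arithmetic wraps modulo 2^width: 0 - 1 yields the largest std::size_t.
    std::cout << n - 1 << std::endl;
    std::cout << std::numeric_limits<std::size_t>::max() << std::endl; // prints the same value
}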
However, to my surprise, it seems that a zero-length array is actually not even an array type - or at least, that's what GCC seems to believe. In fact, a zero-length array doesn't even bind to the above function signature, if a function with a pointer-type signature is available as a candidate.
Consider the following code:
#include <cstddef>
#include <iostream>

template <class T, std::size_t N>
void foo(T(&array)[N])
{
    std::cout << "Array" << std::endl;
}

void foo(const void* p)
{
    std::cout << "Pointer" << std::endl;
}

int main(int argc, char** argv)
{
    char array1[10] = { };
    const char* pointer = 0;
    char array2[0] = { }; // zero-length array: a GCC extension, not standard C++

    foo(array1);  // prints "Array"
    foo(pointer); // prints "Pointer"
    foo(array2);  // prints "Pointer" (!)
}
With GCC 4.3.2, this outputs:
Array
Pointer
Pointer
Oddly, the zero-length array prefers to bind to the function that takes a pointer type. So, is this a bug in GCC, or is there some obscure reason mandated by the C++ standard why this behavior is necessary?
As arrays must have a length greater than zero, if your compiler erroneously accepts a definition of a zero-sized array then you're "safely" outside the scope of the language standard. There's no need for you to handle the edge case of N == 0.
This is mandated by the C++ standard, 8.3.4 [dcl.array]: "If the constant-expression (5.19) is present, it shall be an integral constant expression and its value shall be greater than zero."
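If you nevertheless want the template to state this requirement explicitly, for compilers that accept zero-length arrays as an extension, one possible sketch assuming C++11's static_assert is available (on GCC, compiling with -pedantic-errors also turns the extension into a hard error):
#include <cstddef>

template <class T, std::size_t N>
void foo(T(&c_array)[N])
{
    // Redundant under a conforming compiler, which already rejects the
    // zero-sized array at its definition, but it documents the precondition.
    static_assert(N > 0, "foo() requires an array of at least one element");
    // ... N - 1 can now be used as the string length without underflow ...
}
Note that, as the example above shows, GCC may not even select the array overload for a zero-length argument, so this assertion is belt-and-braces rather than a guarantee.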