const int N = 100;
void function1(int array[]){
// ...
}
void function2(int array[N]){
// ...
}
int main(int argc, char *argv[]){
int a[N] = {1, 2, 3, ... , 100};
function1(a);
function2(a);
return 0;
}
I was wondering whether function2 has the potential to be faster than function1 due to some type of C++ compiler optimization (e.g., the compiler figuring out sizeof(array) at compile time).
For C, the same topic has been debated before here: Should I declare the expected size of an array passed as function argument?.
Thank you!
There shouldn't be any performance difference between the two versions of the function; if there is any, it's negligible. But in your function2(), N doesn't mean anything, because you can pass an array of any size. The function signature doesn't put any constraint on the array size, which means you don't know the actual size of the array passed to the function. Try passing an array of size 50; the compiler will not generate any error!
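To see why, here is a minimal sketch (the arrays a and b are just for illustration): the parameter declaration int array[N] is adjusted by the compiler to int*, so the size information is lost and sizeof(array) inside the function is merely the size of a pointer, which is why N gives the compiler nothing extra to optimize on.
const int N = 100;
void function2(int array[N])
{
    //the parameter has been adjusted to int*, so sizeof(array) here
    //is sizeof(int*), not 100 * sizeof(int)
}
//usage
int a[100];
int b[50];
function2(a); //compiles
function2(b); //also compiles - the N in the signature is ignored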
To fix that problem, you can write the function as follows (it accepts an array of type int and size exactly 100!):
const int N = 100;
void function2(int (&array)[N])
{
}
//usage
int a[100];
function2(a); //correct - size of the array is exactly 100
int b[50];
function2(b); //error - size of the array is not 100
You can generalize that by writing a function template that accepts a reference to an array of type T and size N, as:
#include <cstddef> //for size_t

template<typename T, size_t N>
void fun(T (&array)[N])
{
//here you know the actual size of the array passed to this function!
//the size of the array is N
//you can also calculate it as:
size_t size_array = sizeof(array)/sizeof(T); //size_array turns out to be N
}
//usage
int a[100];
fun(a); //T = int, N = 100
std::string s[25];
fun(s); //T = std::string, N = 25
int *b = new int[100];
fun(b); //error - b is not an array!
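As a side note, a common application of this pattern is a small compile-time size helper. The sketch below assumes C++11; the name array_size is my own (C++17 later standardized the same idea as std::size):
#include <cstddef> //for std::size_t

template<typename T, std::size_t N>
constexpr std::size_t array_size(T (&)[N])
{
    return N; //N is deduced from the array type itself, no sizeof arithmetic
}

//usage
int a[100];
std::size_t n = array_size(a); //n == 100
Because the function is constexpr, the result can also be used where a compile-time constant is required, e.g. in a static_assert on an array with static storage duration.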