Consider the following example code:
#include <iostream>
#include <inttypes.h>

using namespace std;

int f(uint32_t i)
{
    return 1;
}
int f(uint64_t i)
{
    return 2;
}

int main()
{
    cout << sizeof(long unsigned) << '\n';
    cout << sizeof(size_t) << '\n';
    cout << sizeof(uint32_t) << '\n';
    cout << sizeof(uint64_t) << '\n';
    //long unsigned x = 3;
    size_t x = 3;
    cout << f(x) << '\n';
    return 0;
}
This fails on Mac OS X with:
$ g++ --version
i686-apple-darwin10-g++-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664)
$ make test
g++ test.cc -o test
test.cc: In function 'int main()':
test.cc:23: error: call of overloaded 'f(size_t&)' is ambiguous
test.cc:6: note: candidates are: int f(uint32_t)
test.cc:10: note: int f(uint64_t)
make: *** [test] Error 1
Why? 'size_t' should be unsigned and either 32 or 64 bits wide, so where is the ambiguity?
Trying the same with 'unsigned long x' instead of 'size_t x' results in an analogous ambiguity error message.
On Linux/Solaris systems, tested with different GCC versions and on different architectures, no ambiguity is reported (and the right overload is used on each architecture).
Is this a Mac OS X bug or a feature?
Under Mac OS X, those types are defined as:
typedef unsigned int uint32_t;
typedef unsigned long long uint64_t;
whereas size_t is defined as __SIZE_TYPE__:
#if defined(__GNUC__) && defined(__SIZE_TYPE__)
typedef __SIZE_TYPE__ __darwin_size_t; /* sizeof() */
#else
typedef unsigned long __darwin_size_t; /* sizeof() */
#endif
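In other words, on a 64-bit target size_t is unsigned long, which is a distinct type from both unsigned int (uint32_t) and unsigned long long (uint64_t), even though it has the same width as the latter. Converting it to either parameter type is an integral conversion of equal rank, so neither f(uint32_t) nor f(uint64_t) is a better match and the call is ambiguous. A minimal sketch of that check (an illustration assuming a C++11 compiler; the Apple GCC 4.2.1 from the question would not accept it):
#include <cstddef>      // size_t
#include <cstdint>      // uint32_t, uint64_t
#include <type_traits>  // std::is_same

// On a 64-bit Mac OS X toolchain, size_t has the same width as uint64_t...
static_assert(sizeof(std::size_t) == sizeof(std::uint64_t),
              "size_t and uint64_t have the same width here");
// ...but it is still a different type, which is why neither overload wins.
static_assert(!std::is_same<std::size_t, std::uint64_t>::value,
              "size_t is a distinct type from uint64_t here");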
So if you change your code to:
#include <iostream>
#include <inttypes.h>

using namespace std;

int f(uint32_t i)
{
    return 1;
}
int f(uint64_t i)
{
    return 2;
}
int f(unsigned long i)
{
    return 3;
}

int main()
{
    cout << sizeof(unsigned long) << '\n';
    cout << sizeof(size_t) << '\n';
    cout << sizeof(uint32_t) << '\n';
    cout << sizeof(uint64_t) << '\n';
    //long unsigned x = 3;
    size_t x = 3;
    cout << f(x) << '\n';
    return 0;
}
And run it, you will get:
$ g++ -o test test.cpp
$ ./test
8
8
4
8
3
If you really want to, you could implement your desired semantics like this:
#include <type_traits>  // std::is_integral, std::is_signed, std::enable_if

#define IS_UINT(bits, t) (sizeof(t)==(bits/8) && \
                          std::is_integral<t>::value && \
                          !std::is_signed<t>::value)

template<class T> auto f(T) -> typename std::enable_if<IS_UINT(32,T), int>::type
{
    return 1;
}
template<class T> auto f(T) -> typename std::enable_if<IS_UINT(64,T), int>::type
{
    return 2;
}
Not saying this is a good idea; just saying you could do it.
There may be a good standard-C++ way to ask the compiler "are these two types the same, you know what I mean, don't act dumb with me", but if there is, I don't know it.
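For completeness, a hedged usage sketch (assumes a C++11 compiler and that the IS_UINT macro and the two constrained f() templates above are pasted in where the comment indicates):
#include <cstddef>
#include <iostream>

// (IS_UINT and the two SFINAE-constrained f() templates from above go here.)

int main()
{
    std::size_t x = 3;
    // Exactly one of the two templates survives substitution for size_t,
    // so this prints 1 on a 32-bit target and 2 on a 64-bit one.
    std::cout << f(x) << '\n';
    return 0;
}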
2020 UPDATE: You could have done it more idiomatically without macros. C++14 gave us the shorthand enable_if_t, and C++17 gave us is_integral_v:
template<int Bits, class T>
constexpr bool is_uint_v =
    sizeof(T)==(Bits/8) && std::is_integral_v<T> && !std::is_signed_v<T>;

template<class T> auto f(T) -> std::enable_if_t<is_uint_v<32, T>, int>
{ return 1; }
template<class T> auto f(T) -> std::enable_if_t<is_uint_v<64, T>, int>
{ return 2; }
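A quick sanity check of the trait itself, as a hedged sketch (assumes a C++17 compiler and the is_uint_v definition above):
#include <cstdint>
#include <type_traits>

// (is_uint_v as defined above is assumed to be in scope.)
static_assert(is_uint_v<32, std::uint32_t>);   // 32-bit unsigned integer
static_assert(is_uint_v<64, std::uint64_t>);   // 64-bit unsigned integer
static_assert(!is_uint_v<64, std::uint32_t>);  // wrong width
static_assert(!is_uint_v<32, std::int32_t>);   // right width, but signed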
And then in C++20 we have the even-shorter-shorthand requires:
template<int Bits, class T>
constexpr bool is_uint_v =
    sizeof(T)==(Bits/8) && std::is_integral_v<T> && !std::is_signed_v<T>;
template<class T> int f(T) requires is_uint_v<32, T> { return 1; }
template<class T> int f(T) requires is_uint_v<64, T> { return 2; }
and even-shorter-shorter-shorthand "abbreviated function templates" (although this is getting frankly obfuscated and I would not recommend it in real life):
template<class T, int Bits>
concept uint =
    sizeof(T)==(Bits/8) && std::is_integral_v<T> && !std::is_signed_v<T>;
int f(uint<32> auto) { return 1; } // still a template
int f(uint<64> auto) { return 2; } // still a template
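The concept can also be tested directly at compile time; a hedged sketch (assumes a C++20 compiler and the uint concept defined above):
#include <cstdint>
#include <type_traits>

// (The 'uint' concept from above is assumed to be in scope.)
static_assert(uint<std::uint64_t, 64>);   // satisfied: 64 bits wide and unsigned
static_assert(!uint<std::uint32_t, 64>);  // not satisfied: wrong width
static_assert(!uint<std::int64_t, 64>);   // not satisfied: signed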