According to the standard, §20.10.2/1 Header <type_traits> synopsis [meta.type.synop]:

The behavior of a program that adds specializations for any of the class templates defined in this subclause is undefined unless otherwise specified.
This clause contradicts the general notion that the STL should be extensible, and it prevents us from extending the type traits as in the example below:
#include <complex>
#include <type_traits>

namespace std {
template <class T>
struct is_floating_point<std::complex<T>>
    : std::integral_constant<
          bool,
          std::is_same<float, typename std::remove_cv<T>::type>::value ||
          std::is_same<double, typename std::remove_cv<T>::type>::value ||
          std::is_same<long double, typename std::remove_cv<T>::type>::value
      > {};
}
Here std::is_floating_point is extended to handle complex numbers with an underlying floating-point type as well.
For the primary type categories, of which is_floating_point is one, there is a design invariant:

For any given type T, exactly one of the primary type categories has a value member that evaluates to true.
Reference: §20.10.4.1 Primary type categories [meta.unary.cat]
Programmers can rely on this invariant in generic code when inspecting some unknown generic type T: if is_class<T>::value is true, then we do not need to check is_floating_point<T>::value; we are guaranteed the latter is false.
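As a minimal sketch of such generic code (the classify helper below is my own illustration, not part of any library):

#include <complex>
#include <type_traits>

// Because exactly one primary category holds for any T, each branch below
// excludes all the others; no type can reach two branches.
template <class T>
const char* classify() {
    if (std::is_class<T>::value)
        return "class type";          // is_floating_point<T> is guaranteed false here
    if (std::is_floating_point<T>::value)
        return "floating-point type";
    return "some other category";
}

int main() {
    classify<std::complex<double>>(); // "class type"
    classify<double>();               // "floating-point type"
}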
Here is a diagram representing the primary and composite type traits (the leaves at the top of this diagram are the primary categories).
http://howardhinnant.github.io/TypeHiearchy.pdf
If it were allowed to have (for example) std::complex<double> answer true to both is_class and is_floating_point, this useful invariant would be broken. Programmers would no longer be able to rely on the fact that if is_floating_point<T>::value == true, then T must be one of float, double, or long double.
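To make the invariant concrete, here is a sketch (an illustration I wrote for this answer, not library code) that counts how many primary categories claim a type. With a conforming library the count is exactly 1 for every type; the specialization from the question would push it to 2 for std::complex<double>. It uses the C++14 list of primary categories (which includes is_null_pointer):

#include <complex>
#include <type_traits>

// Illustration only: counts how many primary categories claim T.
template <class T>
struct primary_category_count : std::integral_constant<int,
    int(std::is_void<T>::value) +
    int(std::is_null_pointer<T>::value) +
    int(std::is_integral<T>::value) +
    int(std::is_floating_point<T>::value) +
    int(std::is_array<T>::value) +
    int(std::is_pointer<T>::value) +
    int(std::is_lvalue_reference<T>::value) +
    int(std::is_rvalue_reference<T>::value) +
    int(std::is_member_object_pointer<T>::value) +
    int(std::is_member_function_pointer<T>::value) +
    int(std::is_enum<T>::value) +
    int(std::is_union<T>::value) +
    int(std::is_class<T>::value) +
    int(std::is_function<T>::value)> {};

// Holds for every type, as long as nobody specializes the primary traits.
static_assert(primary_category_count<double>::value == 1, "");
static_assert(primary_category_count<std::complex<double>>::value == 1, "");
static_assert(primary_category_count<int*>::value == 1, "");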
Now there are some traits where the standard does "say otherwise", and specializations involving user-defined types are allowed; common_type<T, U> is such a trait.
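For example, a specialization like the following is permitted; metres and centimetres are hypothetical user-defined types invented for this sketch:

#include <type_traits>

// Two hypothetical user-defined unit types.
struct metres      { double value; };
struct centimetres { double value; };

// Specializing common_type is explicitly allowed when user-defined types
// are involved ("unless otherwise specified").
namespace std {
template <> struct common_type<metres, centimetres> { using type = metres; };
template <> struct common_type<centimetres, metres> { using type = metres; };
}

static_assert(std::is_same<std::common_type<metres, centimetres>::type,
                           metres>::value, "");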
For the primary and composite type traits, there are no plans to relax the restriction on specializing these traits. Doing so would compromise the ability of these traits to precisely and uniquely classify every single type that can be generated in C++.
Adding to Howard's answer (with an example).
If users were allowed to specialize type traits, they could lie (intentionally or by mistake), and the Standard Library could no longer guarantee that its behavior is correct.
For instance, when an object of type std::vector<T> is copied, a popular optimization is to call std::memcpy to copy all elements at once, provided that T is trivially copy constructible. An implementation might use std::is_trivially_copy_constructible<T> to detect whether this optimization is safe. If it is not, the implementation falls back to the safe but slower method of looping through the elements and calling T's copy constructor.
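Here is a simplified sketch of that dispatch, under the assumptions just described; real library code is more involved, but the idea is the same:

#include <cstddef>
#include <cstring>
#include <new>
#include <type_traits>

// Fast path: for trivially copy constructible T, a bitwise copy is valid.
template <class T>
void copy_construct_range(const T* src, T* dst, std::size_t n, std::true_type) {
    std::memcpy(dst, src, n * sizeof(T));
}

// Slow path: otherwise each element must be copy-constructed in place.
template <class T>
void copy_construct_range(const T* src, T* dst, std::size_t n, std::false_type) {
    for (std::size_t i = 0; i != n; ++i)
        ::new (static_cast<void*>(dst + i)) T(src[i]);
}

// Tag dispatch on the trait selects one of the two overloads above.
template <class T>
void copy_construct_range(const T* src, T* dst, std::size_t n) {
    copy_construct_range(src, dst, n, std::is_trivially_copy_constructible<T>{});
}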
Now, if one specializes std::is_trivially_copy_constructible for T = std::shared_ptr<my_type> like this:
#include <memory>
#include <type_traits>

struct my_type {};  // some user-defined type

namespace std {
// Note: struct with public inheritance, so that ::value is accessible
// and the lie actually takes effect.
template <>
struct is_trivially_copy_constructible<std::shared_ptr<my_type>>
    : std::true_type {};
}
Then copying a std::vector<std::shared_ptr<my_type>> would be disastrous: std::memcpy would duplicate the raw pointers without incrementing the reference counts, leading to premature destruction and double deletion.
This would not be the Standard Library implementation's fault but rather the specialization writer's. To some extent, that's what the quote provided by the OP says: "It's your fault, not mine."