Is there any argument for using the numeric limits macros (e.g. INT64_MAX) over std::numeric_limits<T>? From what I understand, numeric_limits is in the C++ standard, but the macros come from C99 and are therefore non-standard.
numeric_limits::min returns the minimum finite value representable by the numeric type T. For floating-point types with denormalization, min instead returns the minimum positive normalized value. Note that this behavior may be unexpected, especially when compared to the behavior of min for integral types.
For the types that support std::numeric_limits in C++: std::numeric_limits<int>::max() gives the maximum value that can be stored in an int, and std::numeric_limits<unsigned int>::max() gives the maximum value that can be stored in an unsigned int.
The other answers mostly have correct information, but it seems that this needs updating for C++11.
In C++11, std::numeric_limits<T>::min(), std::numeric_limits<T>::max(), and std::numeric_limits<T>::lowest() are all declared constexpr, so they are usable in most of the same contexts as INT_MIN and company. The only exception I can think of is compile-time string processing using the # stringification token.
This means that numeric_limits can be used for case labels, template parameters, etc., and you get the benefit of using it in generic code (try using INT_MIN vs. LONG_MIN in template<typename T> T get_min(T t);).
C++11 also brings a solution to the issue James Kanze talks about, by adding std::numeric_limits<T>::lowest(), which gives the lowest finite value for all types, rather than the lowest value for integer types and the lowest positive normalized value for floating-point types.