I was looking at some code that used duration_cast. Looking at it, I wondered why a static_cast was not used, since static_cast's purpose in life is to convert between types. Why did C++ need a new operator to convert between times? Why was static_cast not used?
Maybe I don't appreciate the distinction C++ makes between milliseconds, microseconds, nanoseconds, etc. For some reason, I thought the answer would be obvious or already discussed on Stack Overflow, but I have not found it (yet).
There is already direct (implicit) conversion between time intervals when there is no risk of loss of precision; duration_cast is required only when there is a risk of loss of precision.
duration_cast is therefore not so much an operator as a deliberate, explicit conversion.
static_cast is not suitable because the different duration types are not related to each other. They are entirely distinct classes that happen to model the same concept.
e.g.:
#include <chrono>

int main()
{
    using namespace std::literals;

    // milliseconds
    auto a = 10ms;

    // this requires a duration_cast
    auto lossy = std::chrono::duration_cast<std::chrono::seconds>(a);

    // but this does not
    auto not_lossy = std::chrono::nanoseconds(a);
}
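To spell out the rule (my own sketch, not part of the original answer): the implicit converting constructor of std::chrono::duration participates only when the conversion is exact or the target representation is floating-point, so the lossy direction simply does not compile without duration_cast:
#include <chrono>

int main()
{
    using namespace std::literals;
    auto a = 10ms;

    // auto bad = std::chrono::seconds(a);   // does not compile: the converting
    //                                       // constructor rejects conversions
    //                                       // that would silently truncate
    auto ok = std::chrono::duration_cast<std::chrono::seconds>(a); // truncation made explicit

    // With a floating-point representation, precision loss is expected,
    // so the implicit conversion is allowed:
    std::chrono::duration<double> d = a;     // 0.01 seconds
    (void)ok; (void)d;
}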
I have revisited that question a lot over the years, and I now think that design decision may have been a mistake on my part.
I am currently experimenting with relying more on explicit conversion syntax, rather than "named conversion syntax", for conversions that should not happen implicitly.
For example:
https://howardhinnant.github.io/date/date.html#year
year y = 2017_y;
int iy = int{y}; // instead of iy = y.to_int()
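As a self-contained illustration (my sketch; it assumes Howard Hinnant's date library from the link above is available on the include path as "date/date.h"), the conversion back to int is an explicit conversion operator rather than a named to_int() member:
#include "date/date.h"   // assumption: Howard Hinnant's date library

int main()
{
    using namespace date;        // the _y literal lives in an inline literals namespace
    year y = 2017_y;
    int iy = int{y};             // explicit conversion syntax
    // int implicit_iy = y;      // does not compile: operator int() is explicit
    (void)iy;
}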