In code created by Apple, there is this line:
CMTimeMakeWithSeconds( newDurationSeconds, 1000*1000*1000 )
Is there any reason to express 1,000,000,000 as 1000*1000*1000? Why not 1000^3, for that matter?
One reason to write constants in a multiplicative way is readability; run-time performance is unaffected, since the compiler folds the expression into a single constant. It also signals that the author was thinking about the number multiplicatively.
Consider this:
double memoryBytes = 1024 * 1024 * 1024;
It's clearly better than:
double memoryBytes = 1073741824;
since the latter doesn't look, at first glance, like the third power of 1024.
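The multiplication really does cost nothing at run time: 1024 * 1024 * 1024 is an integer constant expression, folded by the compiler during compilation. A minimal sketch (identifiers are illustrative):

// An integer constant expression can go anywhere a constant is required,
// which shows it is evaluated at compile time, not at run time:
enum { BYTES_PER_GIB = 1024 * 1024 * 1024 };  // folded to 1073741824
static char buffer[1024 * 1024];              // array size must be a constant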
As Amin Negm-Awad mentioned, the ^ operator is the binary XOR, not exponentiation. Many languages lack a built-in, compile-time exponentiation operator, hence the multiplication.
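A quick illustration of what ^ actually does in C:

#include <stdio.h>

int main(void) {
    // ^ is bitwise XOR, not exponentiation:
    // 1000 is 0b1111101000, 3 is 0b0000000011, so 1000 ^ 3 is 0b1111101011.
    printf("%d\n", 1000 ^ 3);            // prints 1003
    printf("%d\n", 1000 * 1000 * 1000);  // prints 1000000000
    return 0;
}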
There are reasons not to use 1000 * 1000 * 1000.
With 16-bit int, 1000 * 1000 overflows, so using 1000 * 1000 * 1000 reduces portability.
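A 16-bit int is hard to reproduce on a desktop compiler, but narrowing the 32-bit result down to 16 bits sketches what goes wrong (on a real 16-bit platform the signed overflow itself would be undefined behavior):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    // Where int is 16 bits, 1000 * 1000 exceeds INT_MAX (32767).
    // Simulated here by narrowing the result to 16 bits:
    int16_t narrowed = (int16_t)(1000 * 1000);  // commonly wraps to 16960
    printf("%d\n", narrowed);
    printf("%ld\n", 1000L * 1000 * 1000);       // long is at least 32 bits: portable
    return 0;
}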
With 32-bit int, the first of the two lines below overflows.
long long Duration = 1000 * 1000 * 1000 * 1000; // overflow
long long Duration = 1000000000000; // no overflow, hard to read
Suggest that the leading value match the type of the destination, for readability, portability, and correctness.
double Duration = 1000.0 * 1000 * 1000;
long long Duration = 1000LL * 1000 * 1000 * 1000;
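A quick check that the suffix does what it should: multiplication is left-associative, so 1000LL * 1000 is computed in long long, and every later product stays in long long.

#include <stdio.h>

int main(void) {
    long long Duration = 1000LL * 1000 * 1000 * 1000;
    printf("%lld\n", Duration);  // prints 1000000000000, no overflow
    return 0;
}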
Code could also simply use e notation for values that are exactly representable as a double. Of course, this requires knowing whether double can exactly represent the whole-number value, something of concern with values much greater than 1e9: a typical IEEE 754 double represents every integer up to 2^53 exactly, but not beyond. (See DBL_EPSILON and DBL_DIG.)
long Duration = 1000000000;
// vs.
long Duration = 1e9;
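Assuming IEEE 754 binary64 doubles (near-universal, though not mandated by C), every integer with magnitude up to 2^53 is exactly representable; above that, gaps appear. A small demonstration:

#include <stdio.h>

int main(void) {
    long long exact  = (long long)1e9;                 // 1e9 is exact
    long long limit  = (long long)9007199254740992.0;  // 2^53, still exact
    long long beyond = (long long)9007199254740993.0;  // 2^53 + 1 rounds to 2^53
    printf("%lld %lld %lld\n", exact, limit, beyond);
    // prints: 1000000000 9007199254740992 9007199254740992
    return 0;
}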