
Why write 1,000,000,000 as 1000*1000*1000 in C?

In code created by Apple, there is this line:

CMTimeMakeWithSeconds( newDurationSeconds, 1000*1000*1000 )

Is there any reason to express 1,000,000,000 as 1000*1000*1000?

Why not 1000^3 for that matter?

asked Oct 17 '22 by Duck

2 Answers

One reason to write constants in a multiplicative way is to improve readability, at no run-time cost. It also signals that the writer was thinking about the number multiplicatively.

Consider this:

double memoryBytes = 1024 * 1024 * 1024;

It's clearly better than:

double memoryBytes = 1073741824;

as the latter doesn't look, at first glance, like the third power of 1024.
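
And because the product is an integer constant expression, the compiler folds it at build time; there is no run-time multiplication. A minimal sketch of one way to convince yourself (C11 or later, assuming the usual 32-bit int so the product fits):

#include <assert.h>

/* Accepted only because 1024 * 1024 * 1024 is evaluated entirely
   at compile time; static_assert requires a constant expression. */
static_assert(1024 * 1024 * 1024 == 1073741824, "third power of 1024");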

As Amin Negm-Awad mentioned, in C the ^ operator is bitwise XOR, not exponentiation. Many languages lack a built-in, compile-time exponentiation operator, hence the multiplication.
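
A minimal sketch of what 1000^3 actually computes in C:

#include <stdio.h>

int main(void) {
    /* ^ is bitwise XOR: 0x3E8 ^ 0x3 == 0x3EB == 1003. */
    printf("%d\n", 1000 ^ 3);  /* prints 1003, not 1000000000 */
    return 0;
}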

answered Oct 23 '22 by Piotr Falkowski


There are reasons not to use 1000 * 1000 * 1000.

With a 16-bit int, 1000 * 1000 overflows (INT_MAX may be as small as 32767), so using 1000 * 1000 * 1000 reduces portability.
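
If that matters for your targets, one option (a sketch, not part of the original answer; the variable name is illustrative) is a compile-time guard plus a suffix on the leading operand, in the same spirit as the suggestion further below:

#include <limits.h>

/* Guard: on a 16-bit-int target, the intermediate 1000 * 1000
   would overflow int, which is undefined behavior. */
#if INT_MAX < 1000000
#error "1000 * 1000 overflows int on this target"
#endif

/* The L suffix keeps all the arithmetic in long, which must be
   able to hold 1000000000 (LONG_MAX >= 2147483647). */
long nanoseconds_per_second = 1000L * 1000 * 1000;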

With a 32-bit int, the first of the following lines overflows.

long long Duration = 1000 * 1000 * 1000 * 1000;  // overflow
long long Duration = 1000000000000;  // no overflow, hard to read

Suggestion: make the leading value match the type of the destination, for readability, portability, and correctness.

double Duration = 1000.0 * 1000 * 1000;
long long Duration = 1000LL * 1000 * 1000 * 1000;

Code could also simply use e notation for values that are exactly representable as a double. Of course, this requires knowing whether double can exactly represent the whole-number value, something of concern with values greater than 1e9 (see DBL_EPSILON and DBL_DIG).

long Duration = 1000000000;
// vs.
long Duration = 1e9;
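
A quick check (a minimal sketch) that 1e9 is still in the safe range; DBL_DIG is the number of decimal digits a double round-trips exactly, typically 15:

#include <float.h>
#include <stdio.h>

int main(void) {
    printf("DBL_DIG = %d\n", DBL_DIG);    /* typically 15 */
    /* 1e9 needs 10 decimal digits, within DBL_DIG, so the
       two spellings compare equal. */
    printf("%d\n", 1e9 == 1000000000.0);  /* prints 1 */
    return 0;
}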

answered Oct 23 '22 by chux - Reinstate Monica