Suppose I have the following:
#define MAX (16 * 1024 * 1024 * 1024)
#define MIN (1 * 1024 * 1024 * 1024)
This will give MAX = 0. I assume that this is because the define is only using 32 bits for the define. Is there a way to use 64 bits for this, or do I need to rework my code so that the define can handle a smaller value?
This will give MAX = 0
No, this will replace MAX with the literal tokens (16 * 1024 * 1024 * 1024) during the preprocessing phase.
I assume that this is because the define is only using 32 bits for the define
The define isn't using any bits, it's just a text substitution.
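To make that concrete, here is a minimal sketch (assuming a platform where int is 32 bits) of what the compiler evaluates after the text substitution; the exact result is hedged because signed overflow is undefined behaviour, but 0 is the common outcome:

#include <stdio.h>

#define MAX (16 * 1024 * 1024 * 1024)   /* every operand is an int */

int main(void)
{
    /* The multiplication is done in int arithmetic before the assignment,
     * so it overflows (undefined behaviour for signed int; in practice it
     * commonly wraps to 0) and only then is converted to long long. */
    long long value = MAX;
    printf("%lld\n", value);            /* typically prints 0 */
    return 0;
}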
Is there a way to use 64 bits for this
Yes: you can force 64-bit arithmetic either with a cast to a 64-bit type or with an integer literal suffix. Using the type explicitly is perhaps nicer than the suffix, because it says exactly how many bits you get:
#define MAX ((uint64_t)16 * 1024 * 1024 * 1024)
or
#define MAX (16ll * 1024 * 1024 * 1024)
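As a quick sanity check, a minimal sketch of the cast form (uint64_t comes from <stdint.h>, and the PRIu64 format macro from <inttypes.h>):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#define MAX ((uint64_t)16 * 1024 * 1024 * 1024)

int main(void)
{
    printf("%" PRIu64 "\n", MAX);   /* prints 17179869184 */
    return 0;
}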
The reason this is happening is that all of those constants are implicitly of type int. In your case, that appears to be a 32-bit type, so the product overflows before it is ever stored anywhere. You need to make sure you're working with a 64-bit type if that's the behaviour you want.
You can typecast it to make sure it's a 64-bit type:
#define MAX ((int64_t)16 * 1024 * 1024 * 1024)
Or just expand the math yourself:
#define MAX 17179869184
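A short sketch (the MAX_CAST and MAX_EXPANDED names are just for illustration) confirming the two forms agree; the unsuffixed decimal literal is safe here because an unsuffixed decimal constant takes the first signed type that can hold it (int, long, or long long), so it never overflows the way the all-int expression did:

#include <stdio.h>
#include <stdint.h>

#define MAX_CAST     ((int64_t)16 * 1024 * 1024 * 1024)
#define MAX_EXPANDED 17179869184

int main(void)
{
    /* Both macros expand to the same 64-bit value. */
    printf("%d\n", MAX_CAST == MAX_EXPANDED);   /* prints 1 */
    return 0;
}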