I'm seeing strange behavior with the MAX macro in Objective-C. Specifically, I have this code in my main function:
NSArray* array = [NSArray array];
NSLog(@"[array count] - 1 = %d", [array count] - 1);
NSLog(@"MAX(0, [array count] - 1) = %d", MAX(0, [array count] - 1));
NSLog(@"MAX(0, -1) = %d", MAX(0, -1));
The output is:
[array count] - 1 = -1
MAX(0, [array count] - 1) = -1
MAX(0, -1) = 0
I saved the preprocessor output with -save-temps, and it looks like this:
NSArray* array = [NSArray array];
NSLog(@"[array count] - 1 = %d", [array count] - 1);
NSLog(@"MAX(0, [array count] - 1) = %d", ((0) > ([array count] - 1) ? (0) : ([array count] - 1)));
NSLog(@"MAX(0, -1) = %d", ((0) > (-1) ? (0) : (-1)));
All the necessary parentheses are there, and [array count] - 1 has no side effects, so the usual macro issues shouldn't apply. Any idea what's going on?
[array count] returns an NSUInteger -- in other words, an unsigned integer. So [array count] - 1 is not -1; it is ((NSUInteger)-1), which wraps around to 0xFFFFFFFFFFFFFFFF on a 64-bit platform (0xFFFFFFFF on 32-bit) -- a value that is greater than zero, so MAX correctly returns it.
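Here is a minimal, self-contained sketch that makes the wraparound visible; the explicit casts and the %lu/%ld format specifiers are mine, not from the original code:

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        NSArray *array = [NSArray array];

        // [array count] is an NSUInteger, so 0 - 1 wraps to NSUIntegerMax.
        NSUInteger wrapped = [array count] - 1;

        // Printed as unsigned, the wraparound is obvious:
        NSLog(@"as unsigned: %lu", (unsigned long)wrapped);  // 18446744073709551615 on 64-bit

        // Reinterpreted as signed (which is what %d effectively did), it looks like -1:
        NSLog(@"as signed:   %ld", (long)wrapped);           // -1
    }
    return 0;
}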
But when you then take that 0xFFFF... value and pass it as an argument to NSLog(@"%d", ...), NSLog treats it as a signed integer (because you used %d), which is why it prints -1 even though MAX actually returned a huge positive number.
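The practical fix is either to print with an unsigned format specifier or to do the arithmetic as signed before comparing. A sketch, assuming the goal is "index of the last element, but never below 0":

NSInteger lastIndex = MAX(0, (NSInteger)[array count] - 1);  // cast before subtracting, so 0 - 1 really is -1
NSLog(@"lastIndex = %ld", (long)lastIndex);                  // prints 0 for an empty array

The cast has to happen before the subtraction so the comparison inside MAX sees a genuine signed -1 rather than a huge unsigned value.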