 

MAX / MIN function in Objective C that avoid casting issues

I had code in my app that looked like the following. I got some feedback about a bug, and to my horror, when I put a debugger on it I found that the MAX of -5 and 0 came out as -5!

NSString *test = @"short";
int calFailed = MAX(test.length - 10, 0);                      // returns -5

After looking at the MAX macro, I see that it requires both parameters to be of the same type. In my case, "test.length" is an unsigned int and 0 is a signed int. So a simple cast of test.length to a signed type fixes the problem.

NSString *test = @"short";
int calExpected = MAX((int)test.length - 10, 0);                    // returns 0

This seems like a nasty and unexpected side effect of this macro. Is there another method built into iOS for performing MIN/MAX where the compiler would have warned about the mismatched types? It seems like this SHOULD have been a compile-time issue, not something that required a debugger to figure out. I can always write my own, but wanted to see if anybody else had run into similar issues.

raider33 asked Apr 29 '13 01:04



1 Answer

Enabling -Wsign-compare, as suggested by FDinoff's answer, is a good idea, but I thought it might be worth explaining the reason behind this in some more detail, as it's quite a common pitfall.

The problem isn't really with the MAX macro in particular, but with a) subtracting from an unsigned integer in a way that leads to an overflow, and b) (as the warning suggests) with how the compiler handles the comparison of signed and unsigned values in general.

The first issue is pretty easy to explain: When you subtract from an unsigned integer and the result would be negative, the result "overflows" to a very large positive value, because an unsigned integer cannot represent negative values. So [@"short" length] - 10 will evaluate to 4294967291.

What might be more surprising is that even without the subtraction, something like MAX([@"short" length], -10) will not yield the correct result (it would evaluate to -10, even though [@"short" length] would be 5, which is obviously larger). This has nothing to do with the macro; something like if ([@"short" length] > -10) { ... } would lead to the same problem (the code in the if block would not execute).

So the general question is: What happens exactly when you compare an unsigned integer with a signed one (and why is there a warning for that in the first place)? The compiler will convert both values to a common type, according to certain rules that can lead to surprising results.

Quoting from Understand integer conversion rules [cert.org]:

  • If the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, the operand with unsigned integer type is converted to the type of the operand with signed integer type.
  • Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type.

(emphasis mine)

Consider this example:

int s = -1;
unsigned int u = 1;
NSLog(@"%i", s < u);
// -> 0

The result will be 0 (false), even though s (-1) is clearly less than u (1). This happens because both values are converted to unsigned int, as int cannot represent all values that can be contained in an unsigned int.

It gets even more confusing if you change the type of s to long. Then you'd get the same (incorrect) result on a 32-bit platform (iOS), but in a 64-bit Mac app it would work just fine! (Explanation: long is a 64-bit type there, so it can represent all 32-bit unsigned int values.)

So, long story short: Don't compare unsigned and signed integers, especially if the signed value is potentially negative.

omz answered Mar 16 '23 04:03