I thought I understood basic Objective-C as far as alloc and init... methods go, but apparently I don't. I've boiled the problem I encountered down to the minimal example below. (For the example I put all the source into one file, but the problem happens just the same when the source is split into separate source and header files, as it normally would be.)
Here is a synopsis of what the code does and what happens when I run it.
I define two classes, MyInteger and MyFloat, that are nearly identical except that one deals with int and the other with float. Both have an initializer method called initWithValue:, but with different argument types. A #define controls whether the MyInteger class is defined at all; the point is that its mere presence changes the behavior of the program, even though the class is never used.
main() uses only MyFloat. Its first two lines allocate an instance of MyFloat, initialize it with a value of 50.0, and print that value. Depending on whether MyInteger is defined, I get two different outputs.
Without MyInteger defined, just as I would expect:
[4138:903] float parameter value:50.000000
[4138:903] Value:50.000000
(next two output lines omitted)
With MyInteger defined, much to my surprise:
[4192:903] float parameter value:0.000000
[4192:903] Value:0.000000
(next two output lines omitted)
It seems to me that the compiler treats the call to initWithValue: as if it belonged to the MyInteger class. The next two lines of main() test this by casting the result of [MyFloat alloc] to MyFloat *, and that does produce the expected output, even when MyInteger is defined:
[4296:903] float parameter value:0.000000
[4296:903] Value:0.000000
[4296:903] float parameter value:50.000000
[4296:903] Value with cast:50.000000
Please explain what's going on! I've struggled with this for more than 24 hours now, even to the point of opening the doors to let some heat out so my computer might cool down :-) Thanks!
Another oddity: if I move the definition of MyInteger below the definition of MyFloat, everything works as I would expect. History has proven me wrong too often for me to suspect the compiler is to blame. In any case, here is the compiler and project information: Xcode 4.0.2, tried with all three compiler options (GCC 4.2, LLVM GCC 4.2, and LLVM Compiler 2.0). The Xcode project for this example was set up using the standard configuration for a Mac OS X command-line tool based on Foundation.
#import <Foundation/Foundation.h>

#define DO_DEFINE_MYINTEGER 1

//------------- define MyInteger --------------
#if DO_DEFINE_MYINTEGER
@interface MyInteger : NSObject {
    int _value;
}
- (id)initWithValue:(int)value;
@end

@implementation MyInteger
- (id)initWithValue:(int)value {
    self = [super init];
    if (self) {
        _value = value;
    }
    return self;
}
@end
#endif

//------------- define MyFloat --------------
@interface MyFloat : NSObject {
    float _value;
}
- (id)initWithValue:(float)value;
- (float)theValue;
@end

@implementation MyFloat
- (id)initWithValue:(float)value {
    self = [super init];
    if (self) {
        NSLog(@"float parameter value:%f", value);
        _value = value;
    }
    return self;
}

- (float)theValue {
    return _value;
}
@end

//--------------- main ------------------------
int main(int argc, const char *argv[])
{
    MyFloat *mf1 = [[MyFloat alloc] initWithValue:50.0f];
    NSLog(@"Value:%f", [mf1 theValue]);

    MyFloat *mf2 = [((MyFloat *)[MyFloat alloc]) initWithValue:50.0f];
    NSLog(@"Value with cast:%f", [mf2 theValue]);

    return 0;
}
+alloc is prototyped as returning id, and when the compiler is faced with a choice of multiple -initWithValue: methods for an id receiver, it generates code for calling the first one it finds. When MyInteger is defined (and declared first), that means the compiler generates code that converts 50.0f to an int and passes it as an integer argument. Note that integer and floating-point arguments are passed in different places (different registers or stack slots, depending on the platform ABI).
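To make that concrete, here is a sketch (my illustration, not the actual compiler output) of the call the compiler effectively emits when MyInteger's declaration wins; casting objc_msgSend to a typed function pointer is the standard way to express an Objective-C message send in C:

#import <objc/message.h>

// Sketch: the call site compiled against MyInteger's -initWithValue:(int).
// 50.0f is converted to the int 50 and passed in an integer slot;
// MyFloat's implementation will later look for a float somewhere else.
id obj = ((id (*)(id, SEL, int))objc_msgSend)(
    [MyFloat alloc], @selector(initWithValue:), (int)50.0f);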
At run time, because message dispatch is handled dynamically, the correct method (MyFloat's) is called - but that method expects the value argument in a floating-point register/slot. That's not where the calling code put it, so the method reads a location that was never set and gets an incorrect result (here, 0.0).
The type cast works because it explicitly tells the compiler which of the two methods will be resolved at run time, which allows it to generate the correct calling code, passing value as a float.
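And here is the corresponding sketch of the call the cast produces - the same dynamic dispatch, but with the argument marshalled the way MyFloat's implementation expects:

// Sketch: the call site compiled against MyFloat's -initWithValue:(float).
// The argument now travels in a floating-point slot, where the
// implementation will actually look for it.
id obj = ((id (*)(id, SEL, float))objc_msgSend)(
    [MyFloat alloc], @selector(initWithValue:), 50.0f);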
Edit: All of that being said, NSResponder's answer makes some very good points too. In Objective-C it's a very bad idea to declare methods that share the same name but have different signatures (i.e., argument and return types), and a method named -initWithValue: implies that its argument is an NSValue object.
It's a bad idea to have two method signatures where the method names are identical but the types are not identical, especially if they're being compiled in the same file. The compiler's getting confused here over whether the message expression should be passing an int or a float.
If you look at NSNumber, you'll see that there are separate -initWithFloat:, -initWithDouble:, and -initWithInt: methods, for example.
Also, if you have a method with the word "value" in its name, most Cocoa developers would assume that it expects an NSValue parameter.
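A minimal sketch of that convention applied to the classes from the question (the initWithInt:/initWithFloat: names are my suggestion, not part of the original code); once the selectors differ, declaration order no longer matters:

@interface MyInteger : NSObject {
    int _value;
}
- (id)initWithInt:(int)value;     // argument type is explicit in the selector
@end

@interface MyFloat : NSObject {
    float _value;
}
- (id)initWithFloat:(float)value;
@end

// Unambiguous call site, with or without a cast:
// MyFloat *mf = [[MyFloat alloc] initWithFloat:50.0f];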