No doubt every other student of C has noticed this; it's new to me.
If I declare:
int xlate( void *, ... );
and then define xlate( ) in several different ways (maybe all definitions but one are #ifdef-ed out):
int xlate ( char *arg1 ) { ... }
int xlate ( int arg1, char *arg2, int arg3 ) { ... }
int xlate ( char arg1, int *arg2 ) { ... }
and omit any mention of va_list in every definition of xlate( ); and then call xlate( ) in a way that matches one of its several definitions, it seems that every compiled version of xlate( ) works just the way I want, at least under gcc and msvc.
Is this relaxed, undemanding, generous compiler behavior guaranteed under C99?
Thanks!
-- Pete
No, it's more of a poor man's overloading. Polymorphism (the ability to perform an action on multiple object types and have each do its correct thing) in C is usually done with structures containing function pointers.
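For illustration, here is a minimal sketch of that function-pointer style (the shape struct and the area functions are invented for the example):

#include <stdio.h>

/* Each object carries a pointer to the function that implements
   its behaviour -- a hand-rolled "virtual table". */
struct shape {
    const char *name;
    double (*area)(const struct shape *self);
};

static double square_area(const struct shape *self) { (void)self; return 2.0 * 2.0; }
static double circle_area(const struct shape *self) { (void)self; return 3.14159 * 1.0 * 1.0; }

int main(void)
{
    struct shape shapes[] = {
        { "square", square_area },
        { "circle", circle_area },
    };

    /* The same call works on every shape; each dispatches to its own code. */
    for (size_t i = 0; i < sizeof shapes / sizeof shapes[0]; i++)
        printf("%s: %f\n", shapes[i].name, shapes[i].area(&shapes[i]));
    return 0;
}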
And you can't just blindly use as many arguments as you like. Either you have a fixed minimum number of arguments which can inform the function how many variable ones to expect, or you have a sentinel argument at the end to indicate you're done.
In other words, something like:
printf ("%d: %s\n", int1, charpointer2);
x = sum_positive_values (1, 2, 3, 4, 5, -1);
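For example, the sentinel style above might be implemented like this (the function name follows the example call; the implementation is just a sketch):

#include <stdarg.h>
#include <stdio.h>

/* Sums its int arguments until it sees a negative sentinel.
   The caller must terminate the argument list, e.g. with -1. */
static int sum_positive_values(int first, ...)
{
    va_list ap;
    int sum = 0;
    int v = first;

    va_start(ap, first);
    while (v >= 0) {
        sum += v;
        v = va_arg(ap, int);
    }
    va_end(ap);
    return sum;
}

int main(void)
{
    printf("%d\n", sum_positive_values(1, 2, 3, 4, 5, -1));  /* prints 15 */
    return 0;
}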
In addition to caf's answer:
This can't work, for several reasons, and the standard could do nothing other than forbid such a thing:
Prototypes tell the calling side what argument conversions have to be performed when calling the function. The example you give already fails for the first parameter: you declare it void * in the prototype and int in one of the definitions. Since the two may have different widths, your code is bound to fail on most 64-bit architectures.
Even worse, the ... notation tells the calling side to apply the default argument promotions to the remaining arguments. E.g., if your implementation expected a float, the calling side would always pass a double, and again your code would fail badly (and perhaps only much later).
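A small sketch of that promotion rule in practice (names invented for the example): inside a variadic function you must retrieve the promoted type, not the type the caller wrote.

#include <stdarg.h>
#include <stdio.h>

/* Any float passed through ... arrives as a double, and any
   char/short arrives as an int, so va_arg must name the promoted type. */
static void show_floats(int count, ...)
{
    va_list ap;
    va_start(ap, count);
    for (int i = 0; i < count; i++) {
        double d = va_arg(ap, double);  /* va_arg(ap, float) would be wrong */
        printf("%f\n", d);
    }
    va_end(ap);
}

int main(void)
{
    float f = 1.5f;
    show_floats(2, f, 2.5f);  /* both floats are promoted to double by the caller */
    return 0;
}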
Then, modern architectures have complicated rules about which arguments are put on the stack and which are kept in registers. This depends on the type of the argument; e.g., integers and floating-point values use different sets of registers. So a mismatched prototype will get your arguments completely wrong.
If you declare xlate as taking void *, you can't just go and implement it with int. Even poor man's overloading has to be done properly, and might look something like this:
#include <stdarg.h>

enum tp { T_FOO, T_BAR };

void xlate(enum tp type, ...)
{
    struct foo *foo;
    struct bar *bar;
    va_list argp;

    va_start(argp, type);
    if (type == T_FOO) {
        foo = va_arg(argp, struct foo *);
        /* do something with foo */
    } else if (type == T_BAR) {
        bar = va_arg(argp, struct bar *);
        /* do something with bar */
    }
    va_end(argp);
}
Though I guess that's more like overloading than polymorphism.
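For reference, a call under that scheme might look like this (assuming struct foo is defined somewhere and f is an instance of it):

struct foo f = { 0 };
xlate(T_FOO, &f);   /* the tag tells xlate which pointer type to pull out of the va_list */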
No, such behaviour is not guaranteed by the standard. The relevant text is in §6.5.2.2:
9 If the function is defined with a type that is not compatible with the type (of the expression) pointed to by the expression that denotes the called function, the behavior is undefined.
Some platforms need to use a different calling convention when calling varargs functions, because their usual calling convention requires the callee to know how many actual arguments were passed. The C standard was written with this specifically in mind - so varargs functions can only be called through a correctly typed varargs declaration, and expressions denoting varargs functions can only be used to call varargs functions.
You can do what you want by creating a matching declaration of each function, wrapped in the same #ifdef magic that is used to select the correct function definition.
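For instance (macro names invented for the example), the shared header could read:

/* xlate.h -- the same macro selects both the declaration and the
   matching definition, so callers always see the real signature. */
#if defined(XLATE_TAKES_STRING)
int xlate(char *arg1);
#elif defined(XLATE_TAKES_THREE_ARGS)
int xlate(int arg1, char *arg2, int arg3);
#else
int xlate(char arg1, int *arg2);
#endif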