For example, if I had a struct FooSpec:
struct FooSpec { /* lots of data */ };
Is it bad practice to provide both of the following as overloaded functions? Or should I just choose one and stick with it?
Foo *createFoo(const FooSpec *spec) { /* ... */ }
Foo *createFoo(const FooSpec &spec) { return createFoo(&spec); }
Could having both cause unexpected problems?
Difference between a reference variable and a pointer variable: a reference is the same object under a different name, and a reference must refer to an object. Since references cannot be null, they are arguably safer to use. A pointer can be reassigned, while a reference cannot be: it must be bound at initialization.
Function overloading is a C++ feature whereby two or more functions can share the same name but take different parameter lists; the compiler picks the right one through overload resolution.
A pointer is an object in its own right: it can be assigned and copied, and you can even pass a reference to a pointer as a function parameter (e.g. FooSpec *&p). In that case the & symbol is part of the type, denoting a reference, not the address-of operator used to retrieve an object's location in memory.
There should be only one way to do a specific thing. This will make the code clearer for its users. Therefore, you should pick only one, depending on the semantics:
If a nullptr is a meaningful value to be passed to createFoo, then declare the function to take a pointer.
If a nullptr shouldn't be passed to createFoo, then declare the parameter a reference to make the compiler yell at people who try to pass in a nullptr.
Now the users of your code will go "Ah, I can pass a null to that function, cool!" or "Okay this function needs an actual object to work properly".
Stick to this convention throughout a project to make the interface of the code more coherent.
Or as WhozCraig put it,
Pick the appropriate model and tell the clients "this is your interface."
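To make the two conventions concrete, here is a minimal sketch (the members of FooSpec and Foo are invented for illustration): the pointer-taking version treats nullptr as a meaningful "use defaults" request, while the reference-taking alternative simply cannot receive a null.

```cpp
struct FooSpec { int value = 0; /* lots of data */ };
struct Foo { int value; };

// Convention A: pointer parameter -- nullptr is a meaningful value,
// here interpreted as "use the default spec". Caller owns the result.
Foo *createFoo(const FooSpec *spec) {
    static const FooSpec defaults{};
    if (spec == nullptr)
        spec = &defaults;          // null means "use defaults"
    return new Foo{spec->value};
}

// Convention B (the alternative, not to be combined with A):
// reference parameter -- the caller must supply a real object,
// so no null check is needed inside the function.
// Foo *createFoo(const FooSpec &spec) { return new Foo{spec.value}; }
```

Whichever convention you pick, the signature itself now documents the null-handling contract.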
I can think of one marginally useful reason to do this. The FooSpec* version can check for null, handle that case, and otherwise forward to the reference version. The FooSpec& version then need not check for null. This lets callers say "I know this is not null because I checked", saving a branch inside the function.
However, this benefit (saving a branch) becomes pointless once the cost of the function passes a very low threshold.
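That delegation pattern might be sketched like this (the struct members are invented for illustration): the reference overload does the real work without a null check, and the pointer overload checks for null and then forwards to it.

```cpp
struct FooSpec { int value = 0; /* lots of data */ };
struct Foo { int value; };

// Reference overload: does the real work, never checks for null.
// Callers who have already verified non-null call this directly.
Foo *createFoo(const FooSpec &spec) {
    return new Foo{spec.value};
}

// Pointer overload: handles the null case, then delegates.
Foo *createFoo(const FooSpec *spec) {
    if (spec == nullptr)
        return nullptr;        // handle null however is appropriate
    return createFoo(*spec);   // known non-null: delegate, no further check
}
```

A caller holding a reference (or a pointer it has already tested) skips the branch entirely by calling the reference overload.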
A software codebase is not a static object; it evolves† with the needs of those who use it. If a confusing interface is "bad practice" (I would rather call it a silly thing to do), a codebase may end up in such a situation as it evolves through refactoring and code improvement.
In this instance, it's not bad practice if you're in the process of migrating your coding conventions to use references instead of const pointers (or the opposite, though that makes less sense to me). The bad practice here would probably be to make these changes without documenting them, leaving other people unaware that one of the methods is deprecated, or simply unsure which one to use.
There's also the possibility of passing a nullptr (of type std::nullptr_t, or a NULL pointer for old-timers like me) with const pointers. I am not sure of the benefit of this, given that you can overload functions on their arguments, but again, in the context of a changing API it makes sense.
† it's alive!!!
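One thing overloading on std::nullptr_t can buy you, sketched here with invented struct members: deleting that overload makes a literal nullptr argument a compile-time error, while ordinary (possibly null) pointer variables still compile and are checked at run time.

```cpp
#include <cstddef>  // std::nullptr_t

struct FooSpec { int value = 0; /* lots of data */ };
struct Foo { int value; };

// Pointer overload: accepts any const FooSpec*, checks at run time.
Foo *createFoo(const FooSpec *spec) {
    return spec ? new Foo{spec->value} : nullptr;
}

// Deleting the std::nullptr_t overload makes `createFoo(nullptr)` a
// compile-time error: a literal nullptr matches this overload exactly.
Foo *createFoo(std::nullptr_t) = delete;
```

A pointer variable that merely happens to hold null still selects the first overload, so this only guards against the literal.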