I often see code that checks for errors from POSIX functions by testing for a return value less than zero, instead of comparing explicitly against -1, which is usually the only error code defined. That is
ret = function();
if (ret < 0) {
...
}
vs
ret = function();
if (ret == -1) {
...
}
Is there any purpose to the first practice? Is comparing with 0 faster than comparing with -1 on some architectures? Does the C or POSIX standard make any guarantee that the first alternative won't break in the future if the only error code defined now is -1? If not, should it be considered bad practice? (I guess it's unlikely that many functions would change in a way that breaks a lot of already written code.)
EDIT: To make the question clearer: I am talking only about functions that, as defined by the standard, return -1 exclusively as an error code. I know there are some that don't. I have seen the < 0 check on these functions in many places instead of == -1. Hence the question.
Firstly, some functions have only one error code, while others have several. For one example, pthread_create is a POSIX function that can fail with more than one error code. When I don't care about the specific error code, I also don't want to care about how many of them there are. In such cases, checking for a negative return is just a reliable, universal, no-maintenance approach that covers all cases. Additionally, errors in the POSIX specification are typically described in terms of manifest constants, not in terms of specific numerical values.
Secondly, I'm not sure what exactly POSIX says about the extensibility of error returns for existing functions, but naturally I would prefer to anticipate such possibility in my code.
Thirdly, in general, checking for a "negative" value is more efficient on many (if not most or all) processors than checking for a specific value. Many CPUs have a dedicated status flag that designates a negative result, which means that testing for a negative value does not have to involve a literal operand. For example, on the x86 platform a test for negativity does not require a comparison with a literal 0. Comparisons with specific values will generally require embedding that specific value into the machine instruction, resulting in a longer, slower instruction. However, in any case "special" values like 0, -1, 1, etc. can be tested for using more clever and efficient techniques. So I doubt that the comparisons in question are done that way for performance reasons.
It's mostly a historical accident. There used to be (pre-POSIX) UNIX-like systems that returned -errno on error instead of -1, so you didn't have to use a global errno variable. Checking for < 0 would therefore work on such systems.
You can still see this history in Linux: system calls return -errno, and the C library checks for values in the range [-4095..-1]. If the return value is in that range, the library negates it, stores it in errno (which the kernel knows nothing about), and then returns -1.
If a function is defined to return -1 on error and 0 on success, then use -1 to test for error and 0 to test for success.
Doing anything else is inaccurate and increases the probability of malfunction.
If one wants to really be on the safe side, one should try to detect all the cases where a function returns a value not defined as a return value, and add an additional assert for those cases. For my introductory example this would be all values > 0 and < -1.
That's it.
int func(void); /* Returns 0 on success and -1 on error. */

int main(void)
{
    int result = func();

    switch (result)
    {
    case 0:
        /* Success. */
        break;
    case -1:
        /* Failure. */
        break;
    default:
        /* Unexpected result! */
        break;
    }
    return 0;
}