
Why does glibc's fclose(NULL) cause a segmentation fault instead of returning an error?


According to the fclose(3) man page:

RETURN VALUE

Upon successful completion 0 is returned. Otherwise, EOF is returned and the global variable errno is set to indicate the error. In either case any further access (including another call to fclose()) to the stream results in undefined behavior.

ERRORS

EBADF The file descriptor underlying fp is not valid.

The fclose() function may also fail and set errno for any of the errors specified for the routines close(2), write(2) or fflush(3).

Of course fclose(NULL) should fail, but I expected it to return normally with errno set, instead of dying directly with a segmentation fault. Is there a reason for this behavior?

Thanks in advance.

UPDATE: Here is my code (I'm trying out strerror() in particular).

    #include <stdio.h>
    #include <errno.h>

    FILE *not_exist = NULL;

    not_exist = fopen("nonexist", "r");
    if (not_exist == NULL) {
        printError(errno);   /* printError() is my helper that prints strerror(errno) */
    }

    /* not_exist is still NULL here, so this call crashes instead of setting errno */
    if (fclose(not_exist) == EOF) {
        printError(errno);
    }
asked Jun 04 '13 by Vdragon




2 Answers

fclose requires as its argument a FILE pointer obtained either from fopen, from one of the standard streams stdin, stdout, or stderr, or in some other implementation-defined way. A null pointer is not one of these, so the behavior is undefined, just like fclose((FILE *)0xdeadbeef) would be. NULL is not special in C; aside from the fact that it's guaranteed to compare not-equal to any valid pointer, it's just like any other invalid pointer, and using it invokes undefined behavior except when the interface you're passing it to documents, as part of its contract, that NULL has some special meaning to it.
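
To make that concrete, here is a minimal sketch (the file name is taken from the question; the guard is the point): the only FILE pointer it ever passes to fclose is one that actually came from a successful fopen.

    #include <stdio.h>

    int main(void)
    {
        FILE *fp = fopen("nonexist", "r");  /* may legitimately fail */
        if (fp == NULL) {
            perror("fopen");
            return 1;   /* nothing was opened, so there is nothing to close */
        }
        /* fp was obtained from fopen, so passing it to fclose is defined;
         * passing NULL (or any other junk pointer) is not. */
        if (fclose(fp) == EOF)
            perror("fclose");
        return 0;
    }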

Further, returning with an error would be valid (since the behavior is undefined anyway) but harmful for an implementation, because it hides the undefined behavior. The preferable result of invoking undefined behavior is always a crash, because it highlights the error and enables you to fix it. Most users of fclose do not check for an error return value, and I'd wager that most people foolish enough to pass NULL to fclose are not going to be smart enough to check its return value either. An argument could be made that people should check the return value of fclose in general, since the final flush could fail, but this is not necessary for files that are opened only for reading, or if fflush was called manually before fclose (which is the smarter idiom anyway, because it's easier to handle the error while you still have the file open).
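
A minimal sketch of that fflush-before-fclose idiom (the function name save_file and the error handling are illustrative, not part of the answer):

    #include <stdio.h>

    /* Flush while the stream is still open, so a write error can be
     * reported (or retried) before the FILE is torn down. */
    int save_file(FILE *fp)
    {
        if (fflush(fp) == EOF) {
            perror("fflush");   /* buffered data may not have been written */
            return -1;          /* fp is still open; the caller can retry */
        }
        if (fclose(fp) == EOF) {
            perror("fclose");
            return -1;
        }
        return 0;
    }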

answered Sep 30 '22 by R.. GitHub STOP HELPING ICE


fclose(NULL) should succeed, just as free(NULL) succeeds, because that makes it easier to write cleanup code.

Regrettably, that's not how it was defined. Therefore you can't use fclose(NULL) in portable programs. (E.g. see http://pubs.opengroup.org/onlinepubs/9699919799/).
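
To illustrate the asymmetry this creates in cleanup code, a sketch (the function process and the buffer size are made up for illustration): free needs no guard, but fclose does.

    #include <stdio.h>
    #include <stdlib.h>

    int process(const char *path)
    {
        int ret = -1;
        char *buf = NULL;
        FILE *fp = fopen(path, "r");
        if (fp == NULL)
            goto out;
        buf = malloc(4096);
        if (buf == NULL)
            goto out;
        /* ... read and process ... */
        ret = 0;
    out:
        free(buf);          /* free(NULL) is defined as a no-op */
        if (fp != NULL)     /* fclose(NULL) is not, so it needs a guard */
            fclose(fp);
        return ret;
    }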

As others have mentioned, you don't generally want an error return when you pass NULL where it doesn't belong. You want a warning message, at least in debug/test builds. Dereferencing NULL gives you an immediate warning, and the opportunity to collect a backtrace that identifies the programming error :). While you're programming, a segfault is about the best error you can get. C has many more subtle errors, which take much longer to debug...

It is possible to abuse error returns to increase robustness against programming errors. However, if you're worried that a software crash would lose data, note that exactly the same can happen if, say, your hardware loses power. That's why we have autosave (ever since the Unix text editors with two-letter names like ex and vi). It'd still be preferable for your software to crash visibly, rather than continue with an inconsistent state.

answered Sep 30 '22 by sourcejedi