
Is it a good practice to close file descriptors on exit?

If, for some reason, I discover a fatal situation in my program, I would like to exit with an error code. Sometimes the fatal error occurs outside the scope where other file descriptors were opened. Is it good practice to close these file descriptors before exiting? As far as I know, these files are automatically closed when the process dies.

asked Mar 06 '13 by stdcall

People also ask

What happens if you don't close a file descriptor?

As long as your program is running, if you keep opening files without closing them, the most likely result is that you will run out of file descriptors/handles available for your process, and attempting to open more files will fail eventually.
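As a rough illustration (my own sketch, not from the original page), the program below keeps calling open() on /dev/null without ever closing; once the per-process descriptor limit is reached, open() returns -1 with EMFILE ("Too many open files"):

/* Sketch: exhaust the per-process file descriptor limit.
 * Keeps opening /dev/null without closing; open() eventually
 * fails with EMFILE. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int count = 0;

    for (;;) {
        int fd = open("/dev/null", O_RDONLY);
        if (fd == -1) {
            printf("open() failed after %d descriptors: %s\n",
                   count, strerror(errno));
            break;
        }
        count++;
    }
    return 0;
}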

Are file descriptors closed automatically?

Files are automatically closed when the process exits, but explicitly closing them is still good practice.

Why is it important to close a file, and how is it closed?

Because files are limited resources managed by the operating system, making sure files are closed after use will protect against hard-to-debug issues like running out of file handles or experiencing corrupted data.
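To make the "corrupted data" point concrete, here is a small sketch of my own (it assumes a writable file named out.txt): data written through stdio sits in a user-space buffer, and if the process terminates with _exit() before fclose() or fflush(), that buffered data never reaches the file:

/* Sketch: buffered data is lost when the process terminates
 * without closing (or flushing) the stream. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    FILE *f = fopen("out.txt", "w");
    if (!f)
        return 1;

    fprintf(f, "important data\n");  /* still in the stdio buffer */

    /* _exit() skips stdio cleanup, so out.txt ends up empty.
     * With fclose(f) (or a normal return from main) the data
     * would be flushed to the file. */
    _exit(0);
}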

Why is it important to ensure that you close any file that has been opened before terminating a program?

It is important to close files you are working with as soon as you are finished with them. This minimizes the amount of user virtual machine resources required and helps keep shared files and directories available for other users. Closing a file means logically disconnecting it from the application program.
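As a side note (my own sketch, POSIX-specific), the per-process limit on open files that the answers above refer to can be inspected with getrlimit():

/* Sketch: query how many file descriptors this process may hold open. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
        printf("soft limit: %llu, hard limit: %llu\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);
    return 0;
}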


1 Answer

Files are automatically closed, but it's a good practice.

See what valgrind reports for this example:

david@debian:~$ cat demo.c
#include <stdio.h>

int main(void)
{
    FILE *f;

    f = fopen("demo.c", "r");
    return 0;
}
david@debian:~$ valgrind ./demo
==3959== Memcheck, a memory error detector
==3959== Copyright (C) 2002-2010, and GNU GPL'd, by Julian Seward et al.
==3959== Using Valgrind-3.6.0.SVN-Debian and LibVEX; rerun with -h for copyright info
==3959== Command: ./demo
==3959==
==3959==
==3959== HEAP SUMMARY:
==3959==     in use at exit: 568 bytes in 1 blocks
==3959==   total heap usage: 1 allocs, 0 frees, 568 bytes allocated
==3959==
==3959== LEAK SUMMARY:
==3959==    definitely lost: 0 bytes in 0 blocks
==3959==    indirectly lost: 0 bytes in 0 blocks
==3959==      possibly lost: 0 bytes in 0 blocks
==3959==    still reachable: 568 bytes in 1 blocks
==3959==         suppressed: 0 bytes in 0 blocks
==3959== Rerun with --leak-check=full to see details of leaked memory
==3959==
==3959== For counts of detected and suppressed errors, rerun with: -v
==3959== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 4 from 4)

As you can see, valgrind reports a memory leak: the buffer allocated by fopen() is still reachable at exit.
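For comparison (my own addition, not part of the original answer), closing the stream before returning removes the "still reachable" block; valgrind should then report "in use at exit: 0 bytes in 0 blocks":

/* Sketch: same program with the stream closed before exit. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("demo.c", "r");
    if (f)
        fclose(f);   /* releases the buffer fopen() allocated */
    return 0;
}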

In some circumstances you can make use of atexit():

#include <stdio.h>
#include <stdlib.h>

static FILE *f;

static void free_all(void)
{
    if (f)          /* guard: fopen() may have failed */
        fclose(f);
}

static int check(void)
{
    return 0;       /* simulate a fatal condition */
}

int main(void)
{
    atexit(free_all);           /* registered before any early exit */
    f = fopen("demo.c", "r");
    if (!check()) exit(EXIT_FAILURE);   /* free_all() still runs */
    /* more code */
    return 0;
}
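One caveat worth adding (my note, not from the answer): handlers registered with atexit() run on exit() and on a normal return from main(), but not when the process terminates via _exit() or a fatal signal, so free_all() would be skipped in those cases. A minimal sketch:

/* Sketch: atexit() handlers run on exit()/return from main(),
 * but are skipped by _exit(). */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void cleanup(void)
{
    puts("cleanup handler ran");
}

int main(int argc, char **argv)
{
    atexit(cleanup);

    if (argc > 1)
        _exit(0);   /* handler does NOT run */

    return 0;       /* handler runs */
}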
answered Sep 18 '22 by David Ranieri