Of course, the immediate answer for most situations is "yes", and I am a firm believer that a process should correctly clean up any resources it has allocated. But what I have in my situation is a long-running system daemon that opens a fixed number of file descriptors at startup and closes them all before exiting.
This is an embedded platform, and I'm trying to make the code as compact as possible without introducing any bad style. But since the file descriptors are closed before exit anyway, does this cleanup code serve any purpose? Do you always close all your file descriptors?
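To make the question concrete, here is a minimal sketch of the kind of code I mean (the paths and names are just illustrative):

    #include <fcntl.h>
    #include <unistd.h>

    /* A fixed set of descriptors, opened once at startup. */
    static int cfg_fd = -1;
    static int log_fd = -1;

    static int startup(void)
    {
        cfg_fd = open("/etc/mydaemon.conf", O_RDONLY);
        if (cfg_fd < 0)
            return -1;
        log_fd = open("/var/log/mydaemon.log", O_WRONLY | O_APPEND | O_CREAT, 0644);
        if (log_fd < 0) {
            close(cfg_fd);
            return -1;
        }
        return 0;
    }

    /* The cleanup code in question: is this worth keeping when the
     * process is about to exit and the kernel will close these anyway? */
    static void shutdown_cleanup(void)
    {
        close(cfg_fd);
        close(log_fd);
    }

    int main(void)
    {
        if (startup() < 0)
            return 1;
        /* ... daemon main loop would run here ... */
        shutdown_cleanup();
        return 0;
    }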
Yes, close your file descriptors and free all heap memory, even if you know the OS will clean it up. That way, when you run valgrind or a similar tool, you don't get a lot of noise in the results, and genuine fd leaks are easy to recognize.
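For instance, valgrind's --track-fds=yes option lists descriptors still open at exit. A deliberately leaky toy program (the file path is arbitrary) shows the kind of noise you want to avoid:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Opened but never closed: valgrind will report this descriptor
         * as still open at exit, alongside stdin/stdout/stderr. */
        int fd = open("/etc/hostname", O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        char buf[64];
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0)
            fwrite(buf, 1, (size_t)n, stdout);
        return 0;
    }

Run it as valgrind --track-fds=yes ./a.out and the report lists the descriptor left open at exit; once your program closes everything it opened, anything that still shows up there is a real leak.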
As long as your program keeps running, if you keep opening files without closing them, you will eventually exhaust the file descriptors available to your process, and further attempts to open files will fail.
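You can see both the limit and the failure with a small test program; it opens /dev/null in a loop without closing, until open() fails (typically with EMFILE, "too many open files"):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;
        if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
            printf("per-process fd limit: %llu\n",
                   (unsigned long long)rl.rlim_cur);

        /* Keep opening without closing; eventually open() fails. */
        int count = 0;
        for (;;) {
            int fd = open("/dev/null", O_RDONLY);
            if (fd < 0) {
                printf("open failed after %d fds: %s\n",
                       count, strerror(errno));
                break;
            }
            count++;
        }
        return 0;
    }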
It is important to close files as soon as you are finished with them. This minimizes the resources your process holds and helps keep shared files and directories available to other users. Closing a file logically disconnects it from the application program.
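One common way to follow this advice in C is to keep the open/close pair inside a single function, so the descriptor never outlives the work it was opened for (the helper name and path here are just illustrative):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Read at most len bytes from the named file. The descriptor is
     * closed on every return path, so the file is disconnected from
     * the program as soon as the work is done. */
    static ssize_t read_small_file(const char *path, char *buf, size_t len)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;
        ssize_t n = read(fd, buf, len);
        close(fd);
        return n;
    }

    int main(void)
    {
        char buf[128];
        ssize_t n = read_small_file("/etc/hostname", buf, sizeof buf - 1);
        if (n < 0) {
            perror("read_small_file");
            return 1;
        }
        buf[n] = '\0';
        printf("%s", buf);
        return 0;
    }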
Because files are limited resources managed by the operating system, making sure they are closed after use protects against hard-to-debug issues like running out of file handles or corrupted data.
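The data-corruption risk is easiest to see with buffered I/O: bytes written through a stdio stream can sit in a user-space buffer, and a deferred write error is only reported when the stream is flushed or closed. A short sketch (the output path is arbitrary):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        FILE *f = fopen("/tmp/out.txt", "w");
        if (!f) {
            perror("fopen");
            return EXIT_FAILURE;
        }

        /* This data may live only in a stdio buffer until the stream
         * is flushed or closed; exiting via _exit() here could lose it. */
        fprintf(f, "important record\n");

        /* fclose() flushes the buffer and reports any deferred write
         * error (e.g. a full disk); ignoring its return value can
         * silently drop data. */
        if (fclose(f) != 0) {
            perror("fclose");
            return EXIT_FAILURE;
        }
        return EXIT_SUCCESS;
    }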
Closing file descriptors when you are done with them makes your code more reusable and easier to extend. That said, this sounds to me like a case where you have a valid reason to let them be closed automatically on exit.