
Guaranteeing that destructors are called on process termination

After reading A LOT on the subject I still couldn't find an actual solution to my problem (there might not be one).

My problem is as follows:

In my project I have multiple drivers working with various pieces of hardware (IO managers, programmable loads, power supplies and more).

Initializing a connection to this hardware is costly (in time), and I can't open and then close the connection for every communication iteration.

Meaning I can't do this (assuming programmable_load implements __enter__/__exit__):

start of code...

with programmable_load(args) as program_instance:
    program_instance.do_something()

rest of code...

So I went for a different solution:

class programmable_load:
    def __init__(self):
        # handler_creator() opens the (expensive) connection to the instrument
        self.handler = handler_creator()

    def close_connection(self):
        self.handler.close_connection()
        self.handler = None

    def __del__(self):
        # fallback in case close_connection() was never called explicitly
        if self.handler is not None:
            self.close_connection()

For obvious reasons I don't 'trust' the destructor to actually get called, so I explicitly call close_connection() on all drivers when I want to end my program.
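
Roughly, the shutdown path today looks like this (just a sketch; io_manager, power_supply and run_all_tests stand in for my real drivers and test code):

drivers = [programmable_load(), io_manager(), power_supply()]
try:
    run_all_tests(drivers)              # the actual work
finally:
    for driver in drivers:
        driver.close_connection()       # explicit cleanup instead of relying on __del__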

The problem happens when I abruptly terminate the process, for example when I run in debug mode and quit debugging.

In these cases the process terminates without running any destructors. I understand that the OS will reclaim all of the process's memory at that point, but is there any way to perform the cleanup in an organized manner?

And if not, is there a way to make the quit-debugging action pass through a certain set of functions? Does the Python process know it received a quit-debugging event, or does it treat it as a normal termination?

Operating system: Windows

Rohi asked Jun 10 '18


2 Answers

According to this documentation:

If a process is terminated by TerminateProcess, all threads of the process are terminated immediately with no chance to run additional code.

(Emphasis mine.) This implies that there is nothing you can do in this case.

As detailed here, signals don't work very well on MS Windows.

Roland Smith answered Sep 29 '22


As was mentioned in a comment, you could use atexit to do the cleanup. But that only works if the process is asked to close (e.g. a QUIT signal on Linux) and not when it is simply killed (as is likely the case when stopping the debugging session). Similarly, if you force your computer to turn off (e.g. long-press the power button or pull the power), it won't be called either.

There is no 'solution' to that, for obvious reasons. Your program can't expect to be called when the power suddenly goes off or when it is forcefully killed. The point of forcefully killing a process is to definitely kill it now; if it first had to run your clean-up code, that could delay the kill, which defeats the purpose. That is why there are signals to ask your process to stop. None of this is Python specific; the same concept applies across operating systems.
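
A minimal sketch of the atexit approach, assuming programmable_load is the class from your question (again, this only runs on a normal interpreter exit, not on a hard kill):

import atexit

load = programmable_load()
atexit.register(load.close_connection)   # called on normal interpreter shutdown only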

Bonus (design suggestion, not a solution): I would argue that you can still make use of a context manager (using with). Your problem is not unique; database connections are usually kept alive for longer as well. It is a question of scope: move the context further up, to the application level. Then it is clear where the boundary is, and you don't need any magic (you are probably also aware of @contextmanager to make that a breeze).
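
A rough sketch of what I mean, using contextlib.contextmanager (the helper names here are made up for illustration; programmable_load is the class from your question):

from contextlib import contextmanager

@contextmanager
def application():
    load = programmable_load()       # open the costly connection once
    try:
        yield load                   # hand it to the rest of the program
    finally:
        load.close_connection()      # single, well-defined cleanup point

with application() as load:
    load.do_something()              # the whole program runs inside this scope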

de1 answered Sep 29 '22