I have a web service that runs long-running jobs (on the order of several hours). I am developing it using Flask, Gunicorn, and nginx.
What I am thinking of doing is to have the route that takes a long time to complete call a function that creates a thread. That function will return a GUID to the route, and the route will return a URL (containing the GUID) that the user can use to check progress. I am making the thread a daemon (thread.daemon = True) so that the thread exits if my calling code exits unexpectedly.
Is this the correct approach to use? It works, but that doesn't mean that it is correct.
my_thread = threading.Thread(target=self._run_audit, args=())
my_thread.daemon = True
my_thread.start()
We can configure a new daemon thread to execute a custom function that performs a long-running task, such as monitoring a resource or data. For example, we might define a function named background_task(), then configure a new threading.Thread instance to execute it via the "target" argument.
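A minimal sketch of that idea (background_task and its polling loop are placeholders, not code from the question):

import threading
import time

def background_task():
    # Placeholder long-running work: poll a resource periodically.
    while True:
        # ... check the resource or data here ...
        time.sleep(60)

# Daemon threads do not keep the process alive; they are terminated
# when the main thread exits.
monitor = threading.Thread(target=background_task, daemon=True)
monitor.start()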
Celery and RQ are overengineering for a simple task. Take a look at these docs: https://docs.python.org/3/library/concurrent.futures.html
Also check this example of how to run long-running jobs in the background of a Flask app: https://stackoverflow.com/a/39008301/5569578
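A minimal sketch of how concurrent.futures could fit the pattern described in the question (run_audit and the route names are placeholders, not taken from the linked answer). Note that with multiple Gunicorn workers each process would have its own executor and jobs dict, so in practice job state would need shared storage:

import uuid
from concurrent.futures import ThreadPoolExecutor

from flask import Flask, jsonify, url_for

app = Flask(__name__)
executor = ThreadPoolExecutor(max_workers=2)
jobs = {}  # maps job id -> Future

def run_audit():
    # Placeholder for the real multi-hour job.
    return "audit complete"

@app.route("/audits", methods=["POST"])
def start_audit():
    job_id = str(uuid.uuid4())
    jobs[job_id] = executor.submit(run_audit)
    # Return a URL the client can poll to check progress.
    return jsonify(status_url=url_for("audit_status", job_id=job_id)), 202

@app.route("/audits/<job_id>")
def audit_status(job_id):
    future = jobs.get(job_id)
    if future is None:
        return jsonify(error="unknown job id"), 404
    if future.done():
        return jsonify(state="finished", result=future.result())
    return jsonify(state="running")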