What I have in mind is a very generic BackgroundTask class that can be used within webservers or standalone scripts, to schedule away tasks that don't need to be blocking.
I don't want to use any task queues (celery, rabbitmq, etc.) here because the tasks I'm thinking of are too small and quick to justify that kind of infrastructure. I just want to get them done and out of the way as quickly as possible. Would that be an async approach? Throwing them onto another process?
First solution I came up with that works:
import asyncio
import threading
import time
import typing
from typing import ParamSpec

# Need ParamSpec to get correct type hints in BackgroundTask init
P = ParamSpec("P")


class BackgroundTask(metaclass=ThreadSafeSingleton):  # ThreadSafeSingleton: our own singleton metaclass, defined elsewhere
    """Easy way to create a background task that is not dependent on any webserver internals.

    Usage:
        async def sleep(t):
            time.sleep(t)

        BackgroundTask(sleep, 10) <- Creates async task and executes it separately (nonblocking, works with coroutines)
        BackgroundTask(time.sleep, 9) <- Creates async task and executes it separately (nonblocking, works with normal functions)
    """

    background_tasks = set()
    lock = threading.Lock()

    def __init__(self, func: typing.Callable[P, typing.Any], *args: P.args, **kwargs: P.kwargs) -> None:
        """Uses singleton instance of BackgroundTask to add a task to the async execution queue.

        Args:
            func (typing.Callable[P, typing.Any]): the function or coroutine function to run in the background
        """
        self.func = func
        self.args = args
        self.kwargs = kwargs
        self.is_async = asyncio.iscoroutinefunction(func)

    async def __call__(self) -> None:
        if self.is_async:
            with self.lock:
                task = asyncio.create_task(self.func(*self.args, **self.kwargs))
                self.background_tasks.add(task)
                print(len(self.background_tasks))
                task.add_done_callback(self.background_tasks.discard)

        # TODO: Create sync task (this will follow a similar pattern)


async def create_background_task(func: typing.Callable[P, typing.Any], *args: P.args, **kwargs: P.kwargs) -> None:
    b = BackgroundTask(func, *args, **kwargs)
    await b()


# Usage (inside an async context):
async def sleep(t):
    time.sleep(t)

await create_background_task(sleep, 5)
I think I missed the point by doing this though. If I ran this code along with some other async code, then yes, I would get a performance benefit since blocking operations aren't blocking the main thread anymore.
I'm thinking I maybe need something more like a separate process to handle such background tasks without blocking the main thread at all (the above async code will still be run on the main thread).
Does it make sense to have a separate thread that handles background jobs? Like a simple job queue, but very lightweight and without requiring additional infrastructure?
Or does it make sense to create a solution like the one above?
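Roughly, the thread-based version I have in mind would be something like this (just a sketch of the idea, not production code; the names are made up):

import queue
import threading

job_queue = queue.Queue()

def worker():
    # Drain the queue forever; each item is (func, args, kwargs)
    while True:
        func, args, kwargs = job_queue.get()
        try:
            func(*args, **kwargs)
        finally:
            job_queue.task_done()

# One daemon thread handles all background jobs
threading.Thread(target=worker, daemon=True).start()

def submit(func, *args, **kwargs):
    """Fire-and-forget: enqueue the job and return immediately."""
    job_queue.put((func, args, kwargs))

# e.g. submit(time.sleep, 5) returns right away while the worker thread sleeps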
I've seen that Starlette does something like this (https://github.com/encode/starlette/blob/decc5279335f105837987505e3e477463a996f3e/starlette/background.py#L15) but they await the background tasks AFTER a response is returned.
This makes their solution dependent on a web server design (i.e. doing things after response is sent is OK). I'm wondering if we can build something more generic where you can run background tasks in scripts or webservers alike, without sacrificing performance.
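For reference, their pattern is used roughly like this (my paraphrase of the Starlette docs, not code from our project):

from starlette.applications import Starlette
from starlette.background import BackgroundTask
from starlette.responses import JSONResponse
from starlette.routing import Route

def write_log(message):
    print(message)

async def endpoint(request):
    # The task runs only after the response has been sent to the client
    task = BackgroundTask(write_log, "request handled")
    return JSONResponse({"ok": True}, background=task)

app = Starlette(routes=[Route("/", endpoint)])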
Not that familiar with async/concurrency features, so don't really know how to compare these solutions. Seems like an interesting problem!
Here is what I came up with trying to perform the tasks on another process:
import concurrent.futures
import functools


class BackgroundTask(metaclass=ThreadSafeSingleton):
    """Easy way to create a background task that is not dependent on any webserver internals.

    Usage:
        async def sleep(t):
            time.sleep(t)

        BackgroundTask(sleep, 10) <- Creates async task and executes it separately (nonblocking, works with coroutines)
        BackgroundTask(time.sleep, 9) <- Creates async task and executes it separately (nonblocking, works with normal functions)
        BackgroundTask(es.transport.close) <- Probably most common use in our codebase
    """

    background_tasks = set()
    executor = concurrent.futures.ProcessPoolExecutor(max_workers=2)
    lock = threading.Lock()

    def __init__(self, func: typing.Callable[P, typing.Any], *args: P.args, **kwargs: P.kwargs) -> None:
        """Uses singleton instance of BackgroundTask to add a task to the async execution queue.

        Args:
            func (typing.Callable[P, typing.Any]): the function or coroutine function to run in the background
        """
        self.func = func
        self.args = args
        self.kwargs = kwargs
        self.is_async = asyncio.iscoroutinefunction(func)

    async def __call__(self) -> None:
        if self.is_async:
            with self.lock:
                loop = asyncio.get_running_loop()
                with self.executor as pool:
                    result = await loop.run_in_executor(
                        pool, functools.partial(self.func, *self.args, **self.kwargs))
Your questions are so abstract that I'll try to give common answers to all of them.
How can I "fire and forget" a task without blocking main thread?
It depends on what you mean by saying forget.
I don't want to use any task queues (celery, rabbitmq, etc.) here because the tasks I'm thinking of are too small and quick to justify that kind of infrastructure. I just want to get them done and out of the way as quickly as possible. Would that be an async approach? Throwing them onto another process?
If the task contains loops or other CPU-bound operations, then it is right to use a subprocess. If the task makes a request (async), reads files, logs to stdout, or does other I/O-bound operations, then it is right to use coroutines or threads.
Does it make sense to have a separate thread that handles background jobs? Like a simple job queue, but very lightweight and without requiring additional infrastructure?
We can't just use a thread, as it can be blocked by another task that performs CPU-bound operations. Instead, we can run a background process and use pipes, queues, and events to communicate between processes. Unfortunately, we cannot share complex objects between processes, but we can pass basic data structures to report status changes of the tasks running in the background.
Starlette is a lightweight ASGI framework/toolkit, which is ideal for building async web services in Python. (README description)
It is based on concurrency, so even this is not a generic solution for all kinds of tasks. NOTE: concurrency differs from parallelism.
I'm wondering if we can build something more generic where you can run background tasks in scripts or webservers alike, without sacrificing performance.
The above-mentioned solution suggests using a background process. Still, it will depend on the application design, as you must do the things (emit an event, add an indicator to the queue, etc.) that are needed for communication and synchronization between running processes (tasks). There is no generic tool for that, but there are situation-dependent solutions.
Suppose we have a request function that should call an API without blocking the work of other tasks, and a sleep function that should not block anything.
import asyncio
import aiohttp


async def request(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            try:
                return await response.json()
            except aiohttp.ContentTypeError:
                return await response.read()


async def sleep(t):
    await asyncio.sleep(t)


async def main():
    background_task_1 = asyncio.create_task(request("https://google.com/"))
    background_task_2 = asyncio.create_task(sleep(5))
    ...  # here we can do even CPU-bound operations
    result1 = await background_task_1
    ...  # use the 'result1', etc.
    await background_task_2


if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
    loop.close()
In this situation, we use asyncio.create_task to run a coroutine concurrently (like in the background). Sure, we could run it in a subprocess, but there is no reason to, as that would use more resources without improving performance.

Unlike the first situation, where the functions were already asynchronous, here they are synchronous but not CPU-bound (they are I/O-bound). This gives us the ability to run them in threads, or to make them asynchronous (using asyncio.to_thread) and run them concurrently.
import time
import asyncio
import requests


def asynchronous(func):
    """
    This decorator converts a synchronous function into an asynchronous one.

    Usage:
        @asynchronous
        def sleep(t):
            time.sleep(t)

        async def main():
            await sleep(5)
    """
    async def wrapper(*args, **kwargs):
        return await asyncio.to_thread(func, *args, **kwargs)
    return wrapper


@asynchronous
def request(url):
    with requests.Session() as session:
        response = session.get(url)
        try:
            return response.json()
        except requests.JSONDecodeError:
            return response.text


@asynchronous
def sleep(t):
    time.sleep(t)


async def main():
    background_task_1 = asyncio.create_task(request("https://google.com/"))
    background_task_2 = asyncio.create_task(sleep(5))
    ...
Here we used a decorator to convert a synchronous (I/O-bound) function into an asynchronous one and then used it like in the first situation.

To run CPU-bound tasks in parallel in the background, we have to use multiprocessing. And to make sure the task is done, we use the join method.
import time
import multiprocessing


def task():
    for i in range(10):
        time.sleep(0.3)


def main():
    background_task = multiprocessing.Process(target=task)
    background_task.start()
    ...  # do the rest of the stuff that does not depend on the background task
    background_task.join()  # wait until the background task is done
    ...  # do stuff that depends on the background task


if __name__ == "__main__":
    main()
Suppose the main application depends on individual parts of the background task. In this case, we need an event-driven design, since join only tells us that the whole process has finished, not that an intermediate stage is done.
import multiprocessing

event = multiprocessing.Event()


def task():
    ...  # synchronous operations
    event.set()  # notify the main function that the first part of the task is done
    ...  # synchronous operations
    event.set()  # notify the main function that the second part of the task is also done
    ...  # synchronous operations


def main():
    background_task = multiprocessing.Process(target=task)
    background_task.start()
    ...  # do the rest of the stuff that does not depend on the background task
    event.wait()  # wait until the first part of the background task is done
    ...  # do stuff that depends on the first part of the background task
    event.wait()  # wait until the second part of the background task is done
    ...  # do stuff that depends on the second part of the background task
    background_task.join()  # wait until the background task is finally done
    ...  # do stuff that depends on the whole background task


if __name__ == "__main__":
    main()
As you may have noticed, events can only convey binary information, and they are not effective when more than two processes are involved (it becomes impossible to know which process the event was emitted from). So we use pipes, queues, and managers to pass non-binary information between processes.
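For example, a queue lets the background process send labeled, non-binary messages back to the parent. A minimal sketch (the message contents here are made up for illustration):

import multiprocessing


def task(queue):
    queue.put(("step", 1))     # report that the first part is done
    queue.put(("step", 2))     # report that the second part is done
    queue.put(("result", 42))  # send a final (picklable) result back


def main():
    queue = multiprocessing.Queue()
    background_task = multiprocessing.Process(target=task, args=(queue,))
    background_task.start()
    for _ in range(3):
        kind, value = queue.get()  # blocks until the next message arrives
        print(kind, value)         # we know exactly which step or result this is
    background_task.join()


if __name__ == "__main__":
    main()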
I'll answer "what you've asked", but I'll preface that you may be asking the wrong question due to a lack of understanding.
In the Python stdlib, subprocess can spin up separate, independent processes that behave like "fire and forget". Here's a couple of examples:
import os, subprocess
subprocess.Popen(['mkdir', 'foo'])
os.popen('touch answer_is_$((1 + 2))')
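A more typical fire-and-forget variant is launching a separate worker script and never waiting for it (a sketch; worker.py and its arguments are hypothetical):

import subprocess
import sys

# Start a worker script in the background and return immediately;
# we never call wait(), so the main program is not blocked.
subprocess.Popen(
    [sys.executable, "worker.py", "--job", "cleanup"],  # hypothetical script/args
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
    start_new_session=True,  # POSIX: detach from the parent's session
)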
It'd be much better to provide concrete examples of these "small and fast non-blocking tasks" you'd like to have, complete with the environment you'll want them to be running in. You're missing some understanding, which is evident because some of your statements conflict with others. For example, asyncio and threading don't operate like "fire and forget" at all.
Also, there's not going to be a good way to "background within any context" b/c the differences between different contexts matter, and "what's best" depends on many factors.