I'm writing a rather complex script that uses
asyncio.create_subprocess_exec(sub_cmd, *sub_cmd_args, stdout=PIPE, stderr=PIPE)
to wrap another Python program -- one I can't modify permanently or otherwise include directly -- so I can capture its stdout/stderr for logging. The wrapped Python script is not run with the -u
(unbuffered) option, so the wrapper tends to receive its output in big buffered blocks. If this were the regular subprocess.Popen I could just pass bufsize=1
to get what I want, namely line buffering. However, asyncio.create_subprocess_exec() traps that argument specifically, and I get:
<snip>
File "/usr/lib64/python3.4/asyncio/subprocess.py", line 193, in create_subprocess_exec
stderr=stderr, **kwds)
File "/usr/lib64/python3.4/asyncio/base_events.py", line 642, in subprocess_exec
raise ValueError("bufsize must be 0")
ValueError: bufsize must be 0
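For comparison, here is what the blocking API allows: a minimal sketch (the child command is just an inline script I made up for illustration) showing bufsize=1 with subprocess.Popen, which is exactly what asyncio refuses:

```python
import subprocess
import sys

# With the blocking subprocess API, bufsize=1 requests line buffering.
# It only takes effect in text mode, hence universal_newlines=True
# (Python 3.7+ also accepts the clearer spelling text=True).
proc = subprocess.Popen(
    [sys.executable, "-c", "print('hello'); print('world')"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    bufsize=1,
    universal_newlines=True,
)
lines = []
for line in proc.stdout:  # lines arrive as the child flushes them
    lines.append(line.rstrip())
    print("child:", line.rstrip())
proc.wait()
```

Note that bufsize here controls the parent's side of the pipe; it doesn't stop the child from buffering internally, which is why the environment-variable approach below is still needed for a child that block-buffers its own stdout.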
I assume their trap is there for good reason, so I wonder if there's some other way I can affect the transport buffering.
I first proved to myself that this was indeed a pipe-buffering issue by adding -u
to the wrapped program's shebang line. I couldn't rely on that as a solution, though, because such a change would eventually get clobbered by OS updates.
I was able to resolve the issue in a similar fashion, however, by setting:
PYTHONUNBUFFERED=1
in its inherited environment. asyncio.create_subprocess_exec()
does support an env=
argument and most everything else that can be passed to subprocess.Popen()
; perhaps a little under-documented, but looking at the code makes this quite obvious. So I changed my call to:
asyncio.create_subprocess_exec(sub_cmd, *sub_cmd_args, stdout=PIPE, stderr=PIPE, env=dict(os.environ, PYTHONUNBUFFERED='1'))
Note the merge into a copy of os.environ: env= replaces the child's entire environment rather than augmenting it, so passing only {'PYTHONUNBUFFERED': '1'} would strip PATH and everything else the child inherits.
This worked perfectly and credit goes to my good friend and technical guru.
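Putting it all together, here is a minimal self-contained sketch of the pattern (run_logged and the OUT/ERR labels are my own names, and the child command is a stand-in inline script; my original code targets Python 3.4, which spells coroutines with @asyncio.coroutine and yield from, whereas this sketch uses current async/await syntax and asyncio.run):

```python
import asyncio
import os
import sys
from asyncio.subprocess import PIPE

async def run_logged(*cmd):
    # Merge PYTHONUNBUFFERED=1 into a copy of the current environment so
    # the child keeps PATH etc.; env= replaces the environment outright.
    proc = await asyncio.create_subprocess_exec(
        *cmd, stdout=PIPE, stderr=PIPE,
        env=dict(os.environ, PYTHONUNBUFFERED='1'))

    captured = []

    async def drain(stream, label):
        # With the child unbuffered, readline() yields each line as soon
        # as the child emits it, instead of after a big buffered block.
        while True:
            line = await stream.readline()
            if not line:
                break
            captured.append((label, line.decode().rstrip()))

    await asyncio.gather(drain(proc.stdout, 'OUT'),
                         drain(proc.stderr, 'ERR'))
    return await proc.wait(), captured

rc, lines = asyncio.run(run_logged(
    sys.executable, '-c', "print('one'); print('two')"))
for label, text in lines:
    print(label, text)
```

Draining stdout and stderr concurrently matters: reading them one after the other can deadlock if the child fills the unread pipe's OS buffer.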