I'm making a small ncurses application in Rust that needs to communicate with a child process. I already have a prototype written in Common Lisp. I'm trying to rewrite it because CL uses a huge amount of memory for such a small tool.
I'm having some trouble figuring out how to interact with the sub-process.
What I'm currently doing is roughly this:
Create the process:
    let mut program = match Command::new(command)
        .args(arguments)
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .stderr(Stdio::piped())
        .spawn()
    {
        Ok(child) => child,
        Err(_) => {
            println!("Cannot run program '{}'.", command);
            return;
        }
    };
Then I pass it to an infinite loop (it runs until the user exits), which reads and handles input and listens for output like this, writing it to the screen:
    fn listen_for_output(program: &mut Child, output_viewer: &TextViewer) {
        match program.stdout {
            Some(ref mut out) => {
                let mut buf_string = String::new();
                match out.read_to_string(&mut buf_string) {
                    Ok(_) => output_viewer.append_string(buf_string),
                    Err(_) => return,
                };
            }
            None => return,
        };
    }
The call to read_to_string, however, blocks the program until the process exits. From what I can see, read_to_end and read also seem to block. If I try running something like ls, which exits right away, it works, but with something that doesn't exit, like python or sbcl, it only continues once I kill the subprocess manually.
Based on this answer, I changed the code to use BufReader:
    fn listen_for_output(program: &mut Child, output_viewer: &TextViewer) {
        match program.stdout.as_mut() {
            Some(out) => {
                let buf_reader = BufReader::new(out);
                for line in buf_reader.lines() {
                    match line {
                        Ok(l) => {
                            output_viewer.append_string(l);
                        }
                        Err(_) => return,
                    };
                }
            }
            None => return,
        }
    }
However, the problem remains the same: it reads all the lines that are available and then blocks. Since the tool is supposed to work with any program, there is no way to guess when the output will end before trying to read. There doesn't appear to be a way to set a timeout for BufReader either.
The examples in the Rust documentation for the process module all block and wait for the child's output, but a quick search gives you a few options: callbacks, channels, or tokio (streams and async/await). For simple cases, callbacks should be the first thing you reach for. Here's an example where I grab a line from stdout and send it back through my callback.
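A minimal sketch of that; the ping command and the println! callback are just placeholders:

    use std::io::{BufRead, BufReader};
    use std::process::{Command, Stdio};
    use std::thread;

    fn main() -> std::io::Result<()> {
        // Placeholder long-running child; swap in your own command.
        let mut child = Command::new("ping")
            .arg("localhost")
            .stdout(Stdio::piped())
            .spawn()?;
        let stdout = child.stdout.take().expect("stdout was not captured");

        // The "callback": any FnMut(String) works here.
        let mut callback = |line: String| println!("child said: {}", line);

        // A dedicated thread does the blocking reads and invokes the
        // callback as each line arrives, so the main thread stays free.
        let reader = thread::spawn(move || {
            for line in BufReader::new(stdout).lines() {
                match line {
                    Ok(l) => callback(l),
                    Err(_) => break, // the pipe closed or errored out
                }
            }
        });

        reader.join().expect("reader thread panicked");
        Ok(())
    }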
Handling I/O
The stdout, stdin, and stderr of a child process can be configured by passing an Stdio to the corresponding method on Command. Once spawned, they can be accessed from the Child. For example, piping output from one command into another command can be done like so:
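A sketch of such a pipe, modeled on the example in the std::process docs:

    use std::process::{Command, Stdio};

    fn main() {
        // `echo`'s stdout becomes `rev`'s stdin.
        let echo_child = Command::new("echo")
            .arg("Oh no, a tpyo!")
            .stdout(Stdio::piped())
            .spawn()
            .expect("failed to start echo");

        let echo_out = echo_child.stdout.expect("failed to open echo stdout");

        let rev_child = Command::new("rev")
            .stdin(Stdio::from(echo_out))
            .stdout(Stdio::piped())
            .spawn()
            .expect("failed to start rev");

        let output = rev_child.wait_with_output().expect("failed to wait on rev");
        assert_eq!(b"!oypt a ,on hO\n", output.stdout.as_slice());
    }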
There is no implementation of Drop for child processes, so if you do not ensure the Child has exited then it will continue to run, even after the Child handle to the child process has gone out of scope. Calling wait (or other functions that wrap around it) will make the parent process wait until the child has actually exited before continuing.
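In other words, something as small as this is enough to reap the child (the sleep command is a placeholder):

    use std::process::Command;

    fn main() {
        let mut child = Command::new("sleep").arg("1").spawn().expect("failed to spawn");
        // Without this wait() the child would keep running (and linger
        // as a zombie on Unix) after `child` goes out of scope.
        let status = child.wait().expect("child process was not running");
        println!("child exited with: {}", status);
    }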
Streams are blocking by default. TCP/IP streams, filesystem streams, pipe streams: they are all blocking. When you tell a stream to give you a chunk of bytes, it will stop and wait until it has the given amount of bytes or until something else happens (an interrupt, an end of stream, an error).
The operating systems are eager to return data to the reading process, so if all you want is to wait for the next line and handle it as soon as it comes in, then the method suggested by Shepmaster in Unable to pipe to or from spawned child process more than once (and also in his answer here) works. In theory it doesn't have to work, because an operating system is allowed to make the BufReader wait for more data in read, but in practice operating systems prefer early "short reads" to waiting.
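That approach boils down to something like this (a sketch; ping is a placeholder for a long-running child):

    use std::io::{BufRead, BufReader};
    use std::process::{Command, Stdio};

    fn main() {
        let mut child = Command::new("ping")
            .arg("localhost")
            .stdout(Stdio::piped())
            .spawn()
            .expect("failed to spawn");
        let stdout = child.stdout.take().expect("stdout was not captured");
        // With an eager OS each line is returned as soon as the child
        // flushes it, even though the stream itself is blocking.
        for line in BufReader::new(stdout).lines() {
            println!("got: {}", line.expect("failed to read a line"));
        }
    }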
This simple BufReader-based approach becomes even more dangerous when you need to handle multiple streams (like the stdout and stderr of a child process) or multiple processes. For example, a BufReader-based approach might deadlock when a child process waits for you to drain its stderr pipe while your process is blocked waiting on its empty stdout.
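One common way out, sketched below, is to drain stderr on its own thread so the child can never stall on a full stderr pipe while you block on stdout (the command name is a placeholder):

    use std::io::{BufRead, BufReader, Read};
    use std::process::{Command, Stdio};
    use std::thread;

    fn main() {
        let mut child = Command::new("some-noisy-tool")
            .stdout(Stdio::piped())
            .stderr(Stdio::piped())
            .spawn()
            .expect("failed to spawn");

        // Drain stderr in the background.
        let mut stderr = child.stderr.take().expect("stderr was not captured");
        let stderr_thread = thread::spawn(move || {
            let mut buf = String::new();
            stderr.read_to_string(&mut buf).expect("failed to read stderr");
            buf
        });

        // Meanwhile the main thread is free to block on stdout.
        let stdout = child.stdout.take().expect("stdout was not captured");
        for line in BufReader::new(stdout).lines() {
            println!("{}", line.expect("failed to read a line"));
        }

        let errors = stderr_thread.join().expect("stderr thread panicked");
        eprintln!("stderr: {}", errors);
    }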
Similarly, you can't use BufReader when you don't want your program to wait on the child process indefinitely. Maybe you want to display a progress bar or a timer while the child is still working and giving you no output.
You also can't use a BufReader-based approach if your operating system happens not to be eager in returning data to the process (prefers "full reads" to "short reads"), because in that case the last few lines printed by the child process might end up in a gray zone: the operating system got them, but they're not large enough to fill the BufReader's buffer.
BufReader is limited to what the Read interface allows it to do with the stream; it is no less blocking than the underlying stream is. To be efficient, it reads the input in chunks, telling the operating system to fill as much of its buffer as it has available.
You might be wondering why reading data in chunks is so important here, and why the BufReader can't just read the data byte by byte. The problem is that to read data from a stream we need the operating system's help, and we are not the operating system: we work isolated from it, so as not to mess with it if something goes wrong in our process. So in order to call the operating system there needs to be a transition to "kernel mode", which might also incur a "context switch". That is why calling the operating system to read every single byte is expensive. We want as few OS calls as possible, and so we get the stream data in batches.
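To make the trade-off concrete, here is a small sketch of what the buffering buys you:

    use std::io::{BufReader, Read};

    // Every read() here is served from BufReader's in-memory buffer;
    // the operating system is only consulted (one syscall for up to
    // 64 KiB) when the buffer runs dry, instead of once per byte.
    fn read_byte_by_byte<R: Read>(stream: R) {
        let mut reader = BufReader::with_capacity(64 * 1024, stream);
        let mut byte = [0u8; 1];
        while let Ok(1) = reader.read(&mut byte) {
            // handle byte[0], e.g. check for b'\n'
        }
    }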
To wait on a stream without blocking you'd need a non-blocking stream. MIO promises to have the required non-blocking stream support for pipes, most probably with PipeReader, but I haven't checked it out so far.
The non-blocking nature of a stream makes it possible to read data in chunks regardless of whether the operating system prefers "short reads" or not, because a non-blocking stream never blocks: if there is no data in the stream, it simply tells you so.
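In code that looks roughly like this (a sketch; stream stands for any reader already switched into non-blocking mode through a platform API or mio, since std's pipe types don't expose this directly):

    use std::io::{ErrorKind, Read};

    fn drain_nonblocking<R: Read>(stream: &mut R, sink: &mut Vec<u8>) {
        let mut buf = [0u8; 4096];
        loop {
            match stream.read(&mut buf) {
                Ok(0) => break, // end of stream
                Ok(n) => sink.extend_from_slice(&buf[..n]),
                // "No data right now", reported instead of blocking:
                Err(ref e) if e.kind() == ErrorKind::WouldBlock => break,
                Err(e) => panic!("read failed: {}", e),
            }
        }
    }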
In the absence of a non-blocking stream you'll have to resort to spawning threads, so that the blocking reads are performed in a separate thread and thus don't block your primary thread. You might also want to read the stream byte by byte in order to react to the line separator immediately, in case the operating system does not prefer "short reads". Here's a working example: https://gist.github.com/ArtemGr/db40ae04b431a95f2b78.
P.S. Here's an example of a function that allows you to monitor the standard output of a program via a shared vector of bytes:
    use std::io::Read;
    use std::process::{Command, Stdio};
    use std::sync::{Arc, Mutex};
    use std::thread;

    /// Pipe streams are blocking, we need separate threads to monitor them without blocking the primary thread.
    fn child_stream_to_vec<R>(mut stream: R) -> Arc<Mutex<Vec<u8>>>
    where
        R: Read + Send + 'static,
    {
        let out = Arc::new(Mutex::new(Vec::new()));
        let vec = out.clone();
        thread::Builder::new()
            .name("child_stream_to_vec".into())
            .spawn(move || loop {
                let mut buf = [0];
                match stream.read(&mut buf) {
                    Err(err) => {
                        println!("{}] Error reading from stream: {}", line!(), err);
                        break;
                    }
                    Ok(got) => {
                        if got == 0 {
                            break;
                        } else if got == 1 {
                            vec.lock().expect("!lock").push(buf[0])
                        } else {
                            println!("{}] Unexpected number of bytes: {}", line!(), got);
                            break;
                        }
                    }
                }
            })
            .expect("!thread");
        out
    }

    fn main() {
        let mut cat = Command::new("cat")
            .stdin(Stdio::piped())
            .stdout(Stdio::piped())
            .stderr(Stdio::piped())
            .spawn()
            .expect("!cat");
        let out = child_stream_to_vec(cat.stdout.take().expect("!stdout"));
        let err = child_stream_to_vec(cat.stderr.take().expect("!stderr"));
        let mut stdin = match cat.stdin.take() {
            Some(stdin) => stdin,
            None => panic!("!stdin"),
        };
    }
With a couple of helpers I'm using it to control an SSH session:
    try_s! (stdin.write_all (b"echo hello world\n"));
    try_s! (wait_forˢ (&out, 0.1, 9., |s| s == "hello world\n"));
P.S. Note that await on a read call in async-std is blocking as well. It's just that instead of blocking a system thread it only blocks a chain of futures (essentially a stack-less green thread). poll_read is the non-blocking interface. In async-std#499 I've asked the developers whether there's a short read guarantee from these APIs.
P.S. There might be a similar concern in Nom: "we would want to tell the IO side to refill according to the parser's result (Incomplete or not)"
P.S. It might be interesting to see how stream reading is implemented in crossterm. For Windows, in poll.rs, they are using the native WaitForMultipleObjects. In unix.rs they are using mio poll.