 

C++: using cin on a child process

Tags:

c++

cin

I'm writing a C++ program that forks early on and where I use std::cout and std::cin in both the child and parent processes. For some reason, on Linux, cin doesn't seem to be working in the child process; it never prompts for any input. The funny thing is that this same program works just fine on a Mac. Does anyone know why this is happening? Thanks.

metalhead696 asked Jan 13 '17 18:01


1 Answer

What you observed is because of the fork and exec model [1]. All the file descriptors are copied, as you expected, but the order of precedence when two processes read from a single descriptor is undefined [2]. That they are parent and child is irrelevant once fork() returns.

As a result, your situation is even worse than simply being implementation-dependent: you could get two different results on the same system.

Your scenario is called a race condition. Which of the two program copies (parent or child) acquires which character depends on a number of timing-related details. Even the resources that other processes on your system are demanding can play into the observed behavior.

Read operations are not atomic by nature [2].
If both parent and child can read the same character from the same stream on any OS, that OS is not properly protecting against this race condition, and it is a kernel issue that should be reported as a possible bug [3].

You can resolve this functional ambiguity with a semaphore or another synchronization mechanism. If such a mechanism were employed properly to guarantee atomic reads, you would achieve thread safety (or process safety, in this case), but you still may not have what you want.

The classic solution is to decide which of the two processes you want to read std::cin, and close std::cin in the other process. The standard mechanism for this is to test the integer returned by the fork call: if fork() returns 0, you are in the child. (Examples are given in the fork() documentation.)

If you need the value in both processes, you can use pipe() and dup2() before the fork and the correct close() in each process to stream a copy of the characters from the primary reader to the secondary one. This is the proxy design pattern. If different message types should be handled by different processes, you may also want to implement the chain of responsibility design pattern.

It is interesting to note that the output file descriptors wrapped by std::cout and std::cerr do not carry the same race condition risks, and you can intermingle output from parent and child [4].

.......

[1] POSIX standards dating back to early UNIX, as far back as the PDP11.

[2] The Open Group Base Specifications Issue 7 IEEE Std 1003.1-2008, 2016 Edition manual page for pread and read states, "The standard developers considered adding atomicity requirements to a pipe or FIFO, but recognized that due to the nature of pipes and FIFOs there could be no guarantee of atomicity of reads of {PIPE_BUF} or any other size that would be an aid to applications portability."

[3] Contiguous reads of the same message bytes likely violate standards criteria. There should be a system semaphore or other critical code protection around the acquisition of a byte or character from an input stream, so that the byte or character read is discarded before it can be read again.

[4] Writing to the same stream from both parent and child is risky when a message may exceed the POSIX guarantee that low-level writes of <= 512 bytes enter the output stream atomically (according to Linux "man 7 pipe"). Also, to maintain chronology with multiple writers, one would need to flush the higher-level C functions or C++ methods after each write. It is completely safe, however, to have multiple writers if the messages are known to be within the limit and only low-level write() operations are performed.

Douglas Daseeco answered Nov 13 '22 05:11