This is my simple code that opens a named pipe, writes a string to it, and then closes the pipe. The pipe is created in another function, as mentioned below.
const char *ipcnm = "./jobqueue";
std::cout << "opening job queue" << std::endl;
// ensure the jobqueue is opened
if ((jobq = open(ipcnm, O_WRONLY)) < 0) {
    perror("open");
    exit(-1);
}
std::cout << "queue opened" << std::endl;
// record the number of bytes written to the queue
// (write returns ssize_t, and -1 on error)
ssize_t written = write(jobq, ptr, size * nmemb);
if (written < 0) {
    perror("write");
    exit(-1);
}
// close fifo
if (close(jobq) < 0) {
    perror("close");
    exit(-1);
}
// need to report to other agents the size of the job that was written
jobSizes.push_back(written);
but the call to open() hangs. I've made sure that no other process is using the fifo "jobqueue" at the time of the call, and the file permissions for the queue once it's created are prwxrwxr-x (I'm just using mkfifo(ipcnm, 0777) to create the pipe).
I thought at first the problem was that the o (others) class is missing w permission on this pipe, so I manually changed the permissions with chmod, but it still hangs: "queue opened" never gets printed, and neither does the error message from perror("open").
What am I missing?
When you open a FIFO for writing, the writer is blocked until there is a reader.
You are probably missing the reader.
You cannot write to a pipe, close it, and then have the reader come along later. That kind of storage semantics is accomplished by using a regular file.
Pipes are an inter-process communication mechanism; a pipe created by opening a FIFO is similar to the object returned by the pipe() POSIX C library function, except that pipe() returns an object which is already prepared for I/O, since there are two descriptors: opposite ends open for opposite directions of I/O. A FIFO's endpoints, by contrast, are opened separately, one at a time.
The FIFO object in the filesystem is only a contact point which allows multiple processes to attach to the same pipe.
Initially, no pipe object exists. When the first process executes an open() on the FIFO object in the filesystem, a pipe is created. Any additional open() requests, from the same process or another, attach to the same pipe object held in the kernel. I/O cannot take place until the pipe is opened at least once for reading and at least once for writing. The actual pipe I/O goes through the kernel; it is not stored in the filesystem. When all processes close the pipe, the object goes away.
A FIFO could be designed such that I/O can begin before any process has the object open for reading. That is to say, a write request could be allowed to proceed and then block only when the pipe fills up. That design would have issues. For instance, what if the write is small, so that the pipe does not fill up? The writer will write the data and proceed in its execution. If it simply exits before a reader has read the data, the data has disappeared forever! The blocking behavior ensures that a reader is there to catch the data; when the writer is unblocked, it can be sure that a reader has the pipe open, and so it can safely close its end of the pipe without the data being lost.
A design which does not block writes even when no reader is available would have to keep the pipe object around inside the kernel even when no process has it open, so that a writer could open a pipe, put data in it, then go away, and a reader could pick up the data later. Or else the design would have to provide, to the writer, a blocking close() (similar to the SO_LINGER-arranged behavior on a socket) which waits for previously written data to be removed.
Use O_RDWR instead of O_WRONLY for open(). This opens the FIFO without blocking, even if no reader has opened the other end yet.