
Which IPC is more efficient here?

I have a system application that runs as a collection of 12 processes on Unix. There is a monitor process, which exchanges data with the 11 other processes.

The IPC requirement is for these 11 processes to communicate with the monitor process in a way that is most efficient in terms of execution. Can you weigh the two options below, or suggest a better one?

1) UDP socket communication, where the 11 processes push data to the monitor process at periodic intervals. The monitor process just listens and captures the info, which is good enough.

OR

2) A shared memory implementation, with 11 shared memory segments, each shared between two processes (process i and the monitor process).

Shared memory seems faster, but it requires locking/synchronization, whereas with UDP the kernel copies the data from the memory space of one process to the other.

Can anyone provide more input to help evaluate the two methods? Thanks.

asked Mar 03 '11 by sbr

2 Answers

Coordinating shared memory is tricky. The monitor has to know when to read which part of each of the 11 shared memory segments, and has to let each child know when the data has been read so that part of the shared memory can be reused, and so on. So, although the copying may be quicker, the rest of the coordination (perhaps semaphore sets, with 22 semaphores, one for each direction of the 11 communication channels) means you will almost certainly find a file-descriptor-based mechanism much easier to code. The select() or poll() system calls (or variants) can tell you when there is data for the master to read, and the kernel deals with all the nasty issues of scheduling, flow control, and so on.

So, use Unix-domain sockets unless you can really demonstrate that you'll get a performance benefit out of the shared memory version. But expect to lose some hair (and some data) getting the shared memory implementation correct. (You can demonstrate whether there is a performance benefit by measuring a crude, improperly synchronized shared-memory prototype; you just won't go into production with one.)

answered Oct 30 '22 by Jonathan Leffler


It depends a lot on how much data the processes need to share with each other. If it's going to be a lot of data (e.g. megabytes or gigabytes) passed back and forth then shared memory will be the more efficient way to go. If there's only going to be a relatively small amount of data (kilobytes or perhaps a few megabytes) then a sockets-based approach is likely preferable, because efficiency won't matter very much, and avoiding shared memory will make your system more robust and easier to develop and debug.

Also, some kernels support zero-copy networking, in which case sending UDP packets from one process to another might not actually require the kernel to copy the data at all; instead it merely remaps the underlying MMU pages into the destination process. If that's the case, the sockets approach would give you the best of both worlds (efficiency AND robustness).

answered Oct 30 '22 by Jeremy Friesner