The context is inter-process communication where one process ("Server") has to send fixed-size structs to many listening processes ("Clients") running on the same machine.
I am very comfortable doing this with socket programming. To make the communication between the Server and the Clients faster and to reduce the number of copies, I want to try out shared memory (shm) or mmap.
The OS is 64-bit RHEL.
Since I am a newbie, please suggest which I should use. I'd appreciate it if someone could point me to a book or online resource on the subject.
Thanks for the answers. I wanted to add that the Server (a Market Data Server) will typically be receiving multicast data, which will cause it to be "sending" about 200,000 structs per second to the Clients, where each struct is roughly 100 bytes. Does a shm_open/mmap implementation outperform sockets only for large blocks of data, or for a large volume of small structs as well?
The main difference between System V shared memory (shmem) and memory-mapped I/O (mmap) is that System V shared memory is persistent: unless explicitly removed by a process, it is kept in memory and remains available until the system is shut down.
Each process has its own address space, so for one process to hand information to another you need an IPC (inter-process communication) mechanism. Shared memory is the fastest such mechanism, because the copying of message data is eliminated, and it is easier to use than pipes when more than two processes must communicate over a single medium. The usual mechanism for synchronizing shared memory access is semaphores, which the IPC facility also provides.
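To illustrate the persistence point, here is a minimal System V sketch; the key value is an arbitrary assumption (real code would typically derive one with ftok). The segment created by shmget outlives the process until someone removes it with shmctl(IPC_RMID) or the system reboots.

    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <sys/types.h>

    int main(void) {
        key_t key = 0x1234;   /* hypothetical key for this sketch */

        /* Create (or attach to) a 4 KiB segment. It stays in the
         * kernel after this process exits, until explicitly removed. */
        int id = shmget(key, 4096, IPC_CREAT | 0600);
        if (id == -1) { perror("shmget"); return 1; }

        char *mem = shmat(id, NULL, 0);   /* attach to our address space */
        if (mem == (void *)-1) { perror("shmat"); return 1; }

        mem[0] = 42;                      /* visible to other attachers */
        shmdt(mem);                       /* detach; the segment persists */

        /* Explicit removal -- without this the segment outlives us. */
        shmctl(id, IPC_RMID, NULL);
        return 0;
    }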
I'd use mmap together with shm_open to map shared memory into the virtual address space of the processes. This is relatively direct and clean:

- you identify your shared memory segment with some kind of symbolic name, something like "/myRegion"
- with shm_open you open a file descriptor on that region
- with ftruncate you enlarge the segment to the size you need
- with mmap you map it into your address space (see the sketch below)

The shmat and Co interfaces have (at least historically) the disadvantage that they may have a restriction in the maximal amount of memory that you can map.
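Putting those three calls together, a minimal server-side sketch might look like this. The struct layout and the ring capacity are assumptions based on the ~100-byte structs mentioned in the question; "/myRegion" is the name from above. On older glibc you also need to link with -lrt for shm_open.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct tick { char payload[100]; };   /* assumed ~100-byte struct */

    enum { CAPACITY = 4096 };             /* assumed ring size, in structs */

    int main(void) {
        /* 1. shm_open: create/open the named region. */
        int fd = shm_open("/myRegion", O_CREAT | O_RDWR, 0600);
        if (fd == -1) { perror("shm_open"); return 1; }

        /* 2. ftruncate: size it for CAPACITY structs. */
        size_t size = CAPACITY * sizeof(struct tick);
        if (ftruncate(fd, size) == -1) { perror("ftruncate"); return 1; }

        /* 3. mmap: map it into our address space, shared with others. */
        struct tick *ring = mmap(NULL, size, PROT_READ | PROT_WRITE,
                                 MAP_SHARED, fd, 0);
        if (ring == MAP_FAILED) { perror("mmap"); return 1; }
        close(fd);                         /* the mapping stays valid */

        memcpy(ring[0].payload, "hello", 6);   /* publish one struct */

        munmap(ring, size);
        /* shm_unlink("/myRegion") would remove the name when done. */
        return 0;
    }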
Then, all the POSIX thread synchronization tools (pthread_mutex_t, pthread_cond_t, sem_t, pthread_rwlock_t, ...) have initialization interfaces that allow you to use them in a process-shared context, too. All modern Linux distributions support this.
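For example, a mutex placed inside the mapped segment can be marked process-shared at initialization time. This is only a sketch; the header struct and its placement at the start of the segment are assumptions.

    #include <pthread.h>

    /* Assumed header of the shared segment: one mutex guarding the data. */
    struct shared_hdr {
        pthread_mutex_t lock;
        /* ... the data structs follow ... */
    };

    /* Call once, in the process that creates the segment, with hdr
     * pointing into the mmap'ed region. */
    int init_shared_lock(struct shared_hdr *hdr) {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        /* The crucial step: allow use across process boundaries. */
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        int rc = pthread_mutex_init(&hdr->lock, &attr);
        pthread_mutexattr_destroy(&attr);
        return rc;
    }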
Whether or not this is preferable over sockets? Performance-wise it could make a bit of a difference, since you don't have to copy things around. But the main point, I guess, is that once you have initialized your segment, this is conceptually a bit simpler: to access an item, you just take the shared lock, read the data, and unlock again.
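On the client side, that access pattern might look like the sketch below. It assumes, for simplicity, a single-slot variant of the header from the previous sketch; a real client would map the region once at startup, not per read.

    #include <fcntl.h>
    #include <pthread.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct tick { char payload[100]; };      /* same assumed layout */
    struct shared_hdr { pthread_mutex_t lock; struct tick slot; };

    int read_one(struct tick *out) {
        /* Open the existing region -- no O_CREAT, no ftruncate needed. */
        int fd = shm_open("/myRegion", O_RDWR, 0);
        if (fd == -1) return -1;

        struct shared_hdr *hdr = mmap(NULL, sizeof *hdr,
                                      PROT_READ | PROT_WRITE,
                                      MAP_SHARED, fd, 0);
        close(fd);
        if (hdr == MAP_FAILED) return -1;

        pthread_mutex_lock(&hdr->lock);      /* take the shared lock */
        *out = hdr->slot;                    /* read the data */
        pthread_mutex_unlock(&hdr->lock);    /* unlock again */

        munmap(hdr, sizeof *hdr);
        return 0;
    }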
As @R suggests, if you have multiple readers, pthread_rwlock_t would probably be the best lock structure to use.
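Initialized the same way as the mutex above, a rwlock lets the many Clients read concurrently while the Server takes the write lock only to publish. A sketch under the same assumptions:

    #include <pthread.h>

    /* Process-shared rwlock living inside the mapped segment. */
    int init_shared_rwlock(pthread_rwlock_t *rw) {
        pthread_rwlockattr_t attr;
        pthread_rwlockattr_init(&attr);
        pthread_rwlockattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
        int rc = pthread_rwlock_init(rw, &attr);
        pthread_rwlockattr_destroy(&attr);
        return rc;
    }

    /* Server: exclusive access while publishing a struct. */
    void publish(pthread_rwlock_t *rw /* , data... */) {
        pthread_rwlock_wrlock(rw);
        /* ... copy the new struct into the segment ... */
        pthread_rwlock_unlock(rw);
    }

    /* Clients: many readers may hold the lock simultaneously. */
    void consume(pthread_rwlock_t *rw /* , data... */) {
        pthread_rwlock_rdlock(rw);
        /* ... read the struct out of the segment ... */
        pthread_rwlock_unlock(rw);
    }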