I hear frequently that accessing a shared memory segment between processes has no performance penalty compared to accessing process memory between threads. In other words, a multi-threaded application will not be faster than a set of processes using shared memory (excluding locking or other synchronization issues).
But I have my doubts:
1) shmat() maps the local process virtual memory to the shared segment. This translation has to be performed for each shared memory address and can represent a significant cost. In a multi-threaded application there is no extra translation required: all VM addresses are converted to physical addresses, just like in a regular process that does not access shared memory.
2) The shared memory segment must be maintained somehow by the kernel. For example, when all processes attached to the shm are taken down, the shm segment is still up and can be eventually re-accessed by newly started processes. There could be some overhead related to kernel operations on the shm segment.
Is a multi-process shared memory system as fast as a multi-threaded application?
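For reference, a minimal sketch of the System V calls being discussed (the key 0x1234 and the 4KB size are arbitrary choices for this example, and error handling is kept short):

```c
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    /* Create (or open) a 4KB System V shared memory segment.
       The key 0x1234 is an arbitrary value for this sketch. */
    int shmid = shmget(0x1234, 4096, IPC_CREAT | 0600);
    if (shmid == -1) { perror("shmget"); return 1; }

    /* Map the segment into this process's address space. After this
       one-time call, accesses go through the normal MMU translation,
       just like any other memory. */
    char *p = shmat(shmid, NULL, 0);
    if (p == (void *)-1) { perror("shmat"); return 1; }

    strcpy(p, "hello from shm");   /* plain memory access, no syscall */

    shmdt(p);                      /* unmap; the segment itself persists */
    return 0;
}
```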
The fastest IPC mechanism an OS offers is shared memory. Shared memory is faster because the data is not copied from one address space to another, memory allocation is done only once, and synchronisation is up to the processes sharing the memory.
With message queues, the kernel delivers data all-or-nothing: a read returns either an entire message or nothing, and every message is copied through the kernel. With shared memory, part of a segment is mapped into both processes, which apply some synchronization technique of their own and share the data directly. Since there is no need to copy the data to the other process, shared memory is faster.
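To make the copying visible, here is a hedged sketch using System V message queues (the key and message layout are arbitrary choices for this example): each msgsnd()/msgrcv() pair copies the payload into and back out of the kernel, which the shared-memory write above avoids entirely.

```c
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

/* System V message layout: mandatory type field followed by payload. */
struct msgbuf_demo {
    long mtype;
    char mtext[64];
};

int main(void)
{
    int qid = msgget(0x4321, IPC_CREAT | 0600);   /* arbitrary key */
    if (qid == -1) { perror("msgget"); return 1; }

    struct msgbuf_demo out = { .mtype = 1 };
    strcpy(out.mtext, "copied into the kernel");
    /* msgsnd copies mtext into a kernel buffer ... */
    msgsnd(qid, &out, sizeof out.mtext, 0);

    struct msgbuf_demo in;
    /* ... and msgrcv copies it back out: two copies per message,
       which shared memory does not need. */
    msgrcv(qid, &in, sizeof in.mtext, 1, 0);
    printf("received: %s\n", in.mtext);

    msgctl(qid, IPC_RMID, NULL);   /* clean up the queue */
    return 0;
}
```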
1) shmat() maps the local process virtual memory to the shared segment. This translation has to be performed for each shared memory address and can represent a significant cost, relative to the number of shm accesses. In a multi-threaded application there is no extra translation required: all VM addresses are converted to physical addresses, as in a regular process that does not access shared memory.
There is no overhead compared to regular memory access, aside from the initial cost of setting up the shared pages: populating the page table in the process that calls shmat(). In most flavours of Linux that is one page-table entry (4 or 8 bytes) per 4KB of shared memory. It is, for any relevant comparison, the same cost whether the pages are mapped shared or within the same process.
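As a worked example under those figures: mapping a 1 GiB segment takes 1 GiB / 4 KiB = 262,144 page-table entries, i.e. about 2 MiB of page tables at 8 bytes per entry - a one-time setup cost, after which every access goes through the ordinary MMU translation.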
2) The shared memory segment must be maintained somehow by the kernel. I do not know what that 'somehow' means in terms of performance, but, for example, when all processes attached to the shm are taken down, the shm segment is still up and can eventually be re-accessed by newly started processes. There must be at least some degree of overhead related to the things the kernel needs to check during the lifetime of the shm segment.
Whether shared or not, each page of memory has a "struct page" attached to it, with some data about the page. One of the items is a reference count. When a page is given out to a process [whether it is through "shmat" or some other mechanism], the reference count is incremented. When it is freed through some means, the reference count is decremented. If the decremented count is zero, the page is actually freed - otherwise "nothing more happens to it".
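To make that concrete, a toy sketch of the reference-counting idea (the demo_page names are invented for illustration; the real kernel keeps the count in struct page and manipulates it with helpers such as get_page()/put_page()):

```c
#include <stdbool.h>
#include <stdlib.h>

/* Illustration only: a toy analogue of the kernel's per-page refcount. */
struct demo_page {
    int refcount;   /* how many users (processes, kernel) hold the page */
    void *data;     /* the 4KB of backing memory */
};

static void demo_page_get(struct demo_page *p)
{
    p->refcount++;              /* a process attaches, e.g. via shmat */
}

/* Returns true if this put actually freed the page. */
static bool demo_page_put(struct demo_page *p)
{
    if (--p->refcount == 0) {
        free(p->data);          /* last user gone: really free the page */
        return true;
    }
    return false;               /* others still use it: nothing more happens */
}

int main(void)
{
    struct demo_page pg = { .refcount = 0, .data = malloc(4096) };
    demo_page_get(&pg);   /* process A attaches */
    demo_page_get(&pg);   /* process B attaches */
    demo_page_put(&pg);   /* A detaches: refcount 1, page stays */
    demo_page_put(&pg);   /* B detaches: refcount 0, page freed */
    return 0;
}
```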
The overhead is basically zero compared to any other memory allocated. The same mechanism is used for pages for other purposes anyway. Say, for example, you have a page that is also used by the kernel, and your process dies: the kernel needs to know not to free that page until it has been released by the kernel as well as by the user process.
The same thing happens when a process is forked. The entire page table of the parent process is essentially copied into the child process, and all pages are made read-only. Whenever a write happens, the kernel takes a fault, which leads to that page being copied - so there are now two copies of that page, and the process doing the writing can modify its page without affecting the other process. Once the child (or parent) process dies, all pages still owned by BOTH processes [such as the code space that never gets written, and probably a bunch of common data that never got touched, etc.] obviously can't be freed until BOTH processes are "dead". So again, the reference-counted pages come in useful here: we only count down the ref-count on each page, and when the ref-count reaches zero - that is, when all processes using that page have freed it - the page is actually returned as a "useful page".
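A minimal demonstration of that copy-on-write behaviour on a POSIX system - the child's write triggers the fault and lands in its own private copy, so the parent still sees the original value:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int *value = malloc(sizeof *value);
    *value = 42;

    pid_t pid = fork();   /* parent's page table is copied, pages marked read-only */
    if (pid == 0) {
        /* Child: this write faults, the kernel copies the page,
           and the store goes into the child's private copy. */
        *value = 99;
        printf("child sees %d\n", *value);    /* 99 */
        exit(0);
    }

    wait(NULL);
    /* The parent's page was never modified, so it still holds 42. */
    printf("parent sees %d\n", *value);       /* 42 */
    free(value);
    return 0;
}
```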
Exactly the same thing happens with shared libraries. If one process uses a shared library, its pages will be freed when that process ends. But if two, three or 100 processes use the same shared library, the code obviously has to stay in memory until no process needs the pages any longer.
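You can observe those mappings directly. A small sketch, assuming a Linux system with /proc mounted, that prints this process's libc mappings; every process that loaded libc has equivalent entries (addresses may differ under ASLR), all backed by the same underlying page-cache pages:

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* /proc/self/maps lists every mapping in this process. */
    FILE *maps = fopen("/proc/self/maps", "r");
    if (!maps) { perror("fopen"); return 1; }

    char line[512];
    while (fgets(line, sizeof line, maps)) {
        if (strstr(line, "libc"))     /* keep only the libc mappings */
            fputs(line, stdout);
    }
    fclose(maps);
    return 0;
}
```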
So, basically, all pages in the whole kernel are already reference counted. There is very little overhead.