I am trying to create a shared memory segment that will be used by multiple processes. These processes communicate with each other using MPI calls (MPI_Send, MPI_Recv).
I need a mechanism to control access to this shared memory. I asked a question yesterday to see if MPI provides any facility for that (Shared memory access control mechanism for processes created by MPI), but it seems there is no such provision in MPI.
So I have to choose between a named semaphore and flock.
With a named semaphore, if any of the processes dies abruptly without calling sem_close(), the semaphore persists and can still be seen with ll /dev/shm/. This sometimes results in a deadlock (if I run the same code again!), which is why I am currently leaning towards flock.
I just wanted to confirm: is flock best suited for this type of operation? Are there any disadvantages to using flock? Is there anything else apart from a named semaphore and flock that can be used here?
I am working in C under Linux.
You can also use a POSIX mutex in shared memory; you just have to set the "pshared" attribute on it first. See pthread_mutexattr_setpshared. This is arguably the most direct way to do what you want.
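A minimal sketch of that approach might look like the following (the segment name "/my_shm" and the struct layout are just illustrative; one process creates and initializes the segment, the others would open it with shm_open(name, O_RDWR, 0) and skip the init; link with -pthread and -lrt):

```c
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

struct shared_region {
    pthread_mutex_t lock;   /* protects the data below */
    int             data;
};

int main(void)
{
    /* Creating process: make the segment and size it. */
    int fd = shm_open("/my_shm", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, sizeof(struct shared_region)) < 0) {
        perror("ftruncate"); return 1;
    }

    struct shared_region *region =
        mmap(NULL, sizeof *region, PROT_READ | PROT_WRITE,
             MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    /* Mark the mutex as usable across processes, then initialize it
     * inside the shared segment (creating process only). */
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&region->lock, &attr);
    pthread_mutexattr_destroy(&attr);

    /* Critical section around the shared data. */
    pthread_mutex_lock(&region->lock);
    region->data++;
    pthread_mutex_unlock(&region->lock);

    return 0;
}
```

Since you are worried about processes dying while holding the lock, you could also look at pthread_mutexattr_setrobust, which lets the next process that locks the mutex detect that the previous owner died and recover.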
That said, you can also call sem_unlink on your named semaphore while you are still using it. This will remove it from the file system, but the underlying semaphore object will continue to exist until the last process calls sem_close on it (which happens automatically if the process exits or crashes).
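In an MPI setting that pattern could look roughly like this (the name "/my_sem" is illustrative; every rank opens the semaphore, and one rank unlinks it once everybody holds a handle, e.g. after an MPI_Barrier, so a crashed run never leaves a stale entry in /dev/shm):

```c
#include <fcntl.h>
#include <mpi.h>
#include <semaphore.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Every rank opens (or creates) the semaphore, initial count 1. */
    sem_t *sem = sem_open("/my_sem", O_CREAT, 0600, 1);
    if (sem == SEM_FAILED) { perror("sem_open"); MPI_Abort(MPI_COMM_WORLD, 1); }

    /* Wait until every rank holds a handle, then remove the name.
     * The semaphore itself lives on until the last sem_close(). */
    MPI_Barrier(MPI_COMM_WORLD);
    if (rank == 0)
        sem_unlink("/my_sem");

    sem_wait(sem);              /* enter critical section */
    /* ... access the shared memory ... */
    sem_post(sem);              /* leave critical section */

    sem_close(sem);
    MPI_Finalize();
    return 0;
}
```

The barrier is just one way to make sure no rank tries to sem_open the name after it has already been unlinked; this assumes all the ranks sharing the memory run on the same node.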
I can think of two minor disadvantages to using flock. First, it is not POSIX, so it makes your code somewhat less portable, although I believe most Unixes implement it in practice. Second, it is implemented as a system call, so it will be slower. Both pthread_mutex_lock and sem_wait use the "futex" mechanism on Linux, which only does a system call when you actually have to wait. This is only a concern if you are grabbing and releasing the lock a lot.