There is lots of information about how to use the Erlang mailbox, but it is hard to find a paper or document describing how Erlang actually accesses the mailbox concurrently inside the VM.
To my understanding, the Erlang VM must perform locking or CAS operations to protect message integrity. Is there some sophisticated method behind Erlang's curtain?
By mailbox I'm assuming you mean the process mailbox, the one messages are inserted into. Fun question!
There's some conversation here about the locking characteristics of the Erlang process message queue:
Just a curiosity: currently there is some kind of locking when sending messages. Has anybody tried to implement a lock-free linked list: http://www.amd64.org/fileadmin/user_upload/pub/epham08-asf-eval.pdf
Or am I just looking in the wrong place, and erts_smp_proc_lock is already using something like this?
The message queue already has this, sort of. The process that owns the mailbox has an "inner box" that it holds a lock on and an "outer box" that all senders compete for. So when many processes send to the same process, the lock contention is on the tail of the queue in the "outer box". The mailbox owner is not affected by it, though.
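As a rough illustration of that inner/outer box scheme, here is a minimal sketch in C with pthreads. All names here (mailbox_t, mbox_send, mbox_take) are hypothetical, and the real BEAM structures and locks are more elaborate:

    #include <pthread.h>
    #include <stddef.h>

    typedef struct msg {
        struct msg *next;
        void *payload;
    } msg_t;

    typedef struct {
        /* "Outer box": every sender appends here and contends on this lock. */
        pthread_mutex_t outer_lock;
        msg_t *outer_head, *outer_tail;
        /* "Inner box": private to the owning process, so no lock is needed. */
        msg_t *inner_head;
    } mailbox_t;

    /* Sender side: only the outer lock is taken, and only for the append,
     * so contention is limited to the tail of the outer queue. */
    void mbox_send(mailbox_t *mb, msg_t *m)
    {
        m->next = NULL;
        pthread_mutex_lock(&mb->outer_lock);
        if (mb->outer_tail)
            mb->outer_tail->next = m;
        else
            mb->outer_head = m;
        mb->outer_tail = m;
        pthread_mutex_unlock(&mb->outer_lock);
    }

    /* Owner side: when the inner box runs dry, splice the whole outer box
     * into it in one short critical section, then consume lock-free. */
    msg_t *mbox_take(mailbox_t *mb)
    {
        if (!mb->inner_head) {
            pthread_mutex_lock(&mb->outer_lock);
            mb->inner_head = mb->outer_head;
            mb->outer_head = mb->outer_tail = NULL;
            pthread_mutex_unlock(&mb->outer_lock);
        }
        msg_t *m = mb->inner_head;
        if (m)
            mb->inner_head = m->next;
        return m;
    }

The point of the split is that senders only ever contend on outer_lock, while the owner consumes its inner list with no locking at all, taking the lock only for the occasional splice.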
You might find reading the implementation of the BEAM process illustrative.
Short answer: yes, locking is done on the message queue, but it's complicated and optimized to reduce contention between scheduler threads.
Several locks protect the process structure. The most important ones for sending messages are the MSGQ lock and the MAIN lock. The MAIN lock guards the process structure's fields while the process is operating, including its own message queue. The MSGQ lock covers the linked list of incoming messages.
So, to send a message, we acquire the recipient's MSGQ lock and copy the message from our side (guarded by our MAIN lock) into the recipient's incoming-message queue.
Note how asynchronous this send operation is: processes do not block each other (most of the time ;)).
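To make the MAIN/MSGQ split concrete, here is a similarly hedged sketch; proc_t, send_message, fetch_incoming, and copy_term are made-up names, and the real erts_smp_proc_lock machinery is considerably more involved:

    #include <pthread.h>
    #include <stdlib.h>

    typedef struct msg { struct msg *next; void *term; } msg_t;

    typedef struct proc {
        pthread_mutex_t main;      /* MAIN: guards the process structure     */
        pthread_mutex_t msgq;      /* MSGQ: guards the incoming-message list */
        msg_t *in_head, *in_tail;  /* in-transit messages, under MSGQ        */
        msg_t *private_q;          /* the receiver's own queue, under MAIN   */
    } proc_t;

    /* Illustrative stand-in for BEAM's deep copy: the message is copied
     * so the two processes never share mutable data. */
    static msg_t *copy_term(void *term)
    {
        msg_t *m = malloc(sizeof *m);
        m->next = NULL;
        m->term = term;  /* the real VM copies the term data itself */
        return m;
    }

    /* Sender side: copy first (no lock on the recipient needed), then
     * take only the recipient's MSGQ lock, and only for the append. */
    void send_message(proc_t *to, void *term)
    {
        msg_t *m = copy_term(term);
        pthread_mutex_lock(&to->msgq);
        if (to->in_tail) to->in_tail->next = m;
        else             to->in_head = m;
        to->in_tail = m;
        pthread_mutex_unlock(&to->msgq);
    }

    /* Receiver side: runs under its own MAIN lock and grabs MSGQ just
     * long enough to splice the in-transit list into its private queue. */
    void fetch_incoming(proc_t *p)  /* caller holds p->main */
    {
        pthread_mutex_lock(&p->msgq);
        msg_t *batch = p->in_head;
        p->in_head = p->in_tail = NULL;
        pthread_mutex_unlock(&p->msgq);

        if (!p->private_q) { p->private_q = batch; return; }
        msg_t *t = p->private_q;
        while (t->next) t = t->next;
        t->next = batch;
    }

Because the sender's critical section is just a list append, a receiver busy working through its private queue almost never contends with senders, which is what makes the send effectively non-blocking for both sides.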