I've recently asked a number of questions regarding TVar, and I still have concerns about livelock.
So I thought of this structure: transactions are assigned priorities, and when two transactions conflict over a TVar, the lower-priority one (call it B) is rolled back and retried. B however keeps its current priority for the retry. This system I believe prevents deadlocks, but also prevents starvation (unlike plain TVar). I was wondering if anyone has implemented such a system, as it seems fairly obvious and I don't want to reinvent the wheel.
Of course, such an approach could easily be extended to allow the user to specify priorities.
Priority could be the pair (user_supplied_prio, auto_increment), with user_supplied_prio taking precedence, but equal-priority tasks resolving in FIFO order.
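As a sketch of that pair (the Prio name and Int fields are my choice, and I'm assuming a smaller value means higher priority), a derived Ord instance already gives exactly this behaviour, since derived Ord compares fields left to right:

```haskell
-- A transaction priority: user_supplied_prio takes precedence, and
-- auto_increment (arrival order) breaks ties, giving FIFO among equals.
-- Derived Ord compares userPrio first, then autoInc, which is that rule.
data Prio = Prio { userPrio :: Int, autoInc :: Int }
  deriving (Eq, Ord, Show)
```

For example, minimum [Prio 1 5, Prio 1 3, Prio 2 0] picks Prio 1 3: the two user-priority-1 tasks beat the priority-2 task, and among those the earlier arrival wins.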
Comment/Solution:
Actually, when I think about it, what I describe already exists in Haskell, simply by using one IORef wrapped around all the data, and only using atomicModifyIORef. The atomicModifyIORef will ensure transactions are applied in sequence. One might think this means the data structure is sequential (i.e. effectively limited to one thread), but it is actually parallel due to laziness.
To explain this, consider an expensive function f. We are going to apply it, via atomicModifyIORef, to the value stored under the key "foo" in a Data.Map. This replaces (foo -> x) with (foo -> future (f x)). The thread that demands the result will work out what (f x) actually is, but in the meantime we can apply another function g to "bar". Since applying g to "bar" does not need the result for "foo", we can work out both at the same time.
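The laziness argument above can be sketched like this (the concrete map contents and the functions f and g are my own illustration):

```haskell
import Data.IORef
import qualified Data.Map as Map

main :: IO ()
main = do
  ref <- newIORef (Map.fromList [("foo", 1 :: Integer), ("bar", 2)])
  let f x = x * x   -- stands in for an expensive function
      g x = x + 1
  -- Each call installs an unevaluated thunk and returns immediately,
  -- so neither update waits for the other's (possibly expensive) result.
  atomicModifyIORef ref (\m -> (Map.adjust f "foo" m, ()))
  atomicModifyIORef ref (\m -> (Map.adjust g "bar" m, ()))
  -- The thunks are only forced when the values are actually demanded:
  m <- readIORef ref
  print (Map.lookup "foo" m, Map.lookup "bar" m)
```

The two atomicModifyIORef calls are serialized, but the expensive work inside f and g is not; it happens lazily, potentially on different threads, whenever each result is first demanded.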
No deadlocks, no starvation, eventually all transactions will be processed (roughly in the order they are received).
A deadlock is a state in which each member of a group of actions is waiting for some other member to release a lock. A livelock is similar, except that the states of the processes involved constantly keep changing with regard to one another, with none progressing.
Starvation is the situation in which a transaction has to wait for an indefinite period of time to acquire a lock.
Livelock is a risk with some algorithms that detect and recover from deadlock. If more than one process takes action, the deadlock detection algorithm can repeatedly trigger. This can be avoided by ensuring that only one process (chosen randomly or by priority) takes action.
The simplest example of livelock is two people who meet face-to-face in a corridor, and both move aside to let the other pass. They end up swaying from side to side without making any progress, because they both move the same way at the same time, and so they never get past each other.
You can set up a worker thread to process all requests in a deterministic way, so nobody gets starved. This strategy would be reasonably efficient and immune to livelock.
```haskell
-- yes, this is a horrible name
createManagerFactory :: a -> IO (IO a, IO ((a -> a) -> IO a))
```
IO a is an action that safely and quickly queries the value with a read-only STM action. (a -> a) is a pure function that modifies the value, so ((a -> a) -> IO a) is an action that takes a modifier function, safely modifies the value, and returns the new value.
Run this once to initialize the factory.
```haskell
(query, modifierFactory) <- createManagerFactory initValue
```
Then for each thread run this to generate a new modifier.
```haskell
myModify <- modifierFactory
```
createManagerFactory would do the following:
modifierFactory would do this:
The worker thread would run this loop:
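The bodies of those three steps are not shown above. A minimal sketch consistent with the type signature might look like the following; the choice of a TChan request queue and a TMVar per reply is my assumption, not necessarily the original author's implementation:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.STM
import Control.Monad (forever)

createManagerFactory :: a -> IO (IO a, IO ((a -> a) -> IO a))
createManagerFactory initValue = do
  valVar  <- newTVarIO initValue   -- the shared value
  reqChan <- newTChanIO            -- FIFO queue of pending modifications
  -- The worker thread pops one modifier at a time and applies it, so
  -- requests are processed deterministically in arrival order.
  _ <- forkIO $ forever $ do
    (f, replyVar) <- atomically (readTChan reqChan)
    atomically $ do
      x <- readTVar valVar
      let x' = f x
      writeTVar valVar x'
      putTMVar replyVar x'
  let query = readTVarIO valVar    -- read-only query of the current value
      modifierFactory = return $ \f -> do
        replyVar <- newEmptyTMVarIO
        atomically (writeTChan reqChan (f, replyVar))
        atomically (takeTMVar replyVar)  -- block until the worker applied f
  return (query, modifierFactory)
```

With this sketch, myModify (+1) enqueues the increment and blocks only until the worker has applied it, returning the new value; since the worker drains the queue in FIFO order, no request can be starved.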