While reading the ZooKeeper recipe for locks, I got confused. It seems that this recipe for distributed locks cannot guarantee that "at any snapshot in time no two clients think they hold the same lock". But since ZooKeeper is so widely adopted, if there were such a mistake in the reference documentation, someone would have pointed it out long ago. So what did I misunderstand?
Quoting the recipe for distributed locks:
Locks
Fully distributed locks that are globally synchronous, meaning at any snapshot in time no two clients think they hold the same lock. These can be implemented using ZooKeeper. As with priority queues, first define a lock node.
- Call create( ) with a pathname of "locknode/guid-lock-" and the sequence and ephemeral flags set.
- Call getChildren( ) on the lock node without setting the watch flag (this is important to avoid the herd effect).
- If the pathname created in step 1 has the lowest sequence number suffix, the client has the lock and the client exits the protocol.
- The client calls exists( ) with the watch flag set on the path in the lock directory with the next lowest sequence number.
- If exists( ) returns false, go to step 2. Otherwise, wait for a notification for the pathname from the previous step before going to step 2.
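To make the steps concrete, here is a rough sketch of the recipe against the plain ZooKeeper Java API. The class name SimpleZkLock, the /locknode path, and the fixed guid-lock- prefix are placeholders of mine; error handling, reconnection, and the per-client GUID from the recipe are left out.

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Hypothetical sketch of the recipe above; the parent "/locknode" must already exist.
// A fixed "guid-lock-" prefix is used here so plain lexicographic sorting works;
// real clients embed a per-session GUID and would sort by the numeric suffix instead.
public class SimpleZkLock {
    private final ZooKeeper zk;
    private String myPath;   // full path of the node created in step 1

    public SimpleZkLock(ZooKeeper zk) {
        this.zk = zk;
    }

    public void lock() throws KeeperException, InterruptedException {
        // Step 1: create an ephemeral, sequential node under the lock node.
        myPath = zk.create("/locknode/guid-lock-", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        String myName = myPath.substring("/locknode/".length());

        while (true) {
            // Step 2: list the children without setting a watch (avoids the herd effect).
            List<String> children = zk.getChildren("/locknode", false);
            Collections.sort(children);

            // Step 3: lowest sequence number -> this client holds the lock.
            if (myName.equals(children.get(0))) {
                return;
            }

            // Step 4: watch only the node with the next lowest sequence number.
            String previous = children.get(children.indexOf(myName) - 1);
            CountDownLatch gone = new CountDownLatch(1);

            // Step 5: if that node is already gone, loop and re-check;
            // otherwise wait for the notification before re-checking.
            if (zk.exists("/locknode/" + previous,
                    (WatchedEvent e) -> gone.countDown()) != null) {
                gone.await();
            }
        }
    }

    public void unlock() throws KeeperException, InterruptedException {
        zk.delete(myPath, -1);
    }
}
```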
Consider the following case:
- Client 1 acquires the lock (its ephemeral node has the lowest sequence number).
- Client 2 fails to acquire the lock and watches Client 1's node.
- Later, Client 1 fails to send its heartbeats in time (say, because of network congestion or a long GC pause), but it keeps working, assuming it still holds the lock.
But ZooKeeper may think Client 1's session has timed out, and then delete Client 1's ephemeral node and notify Client 2 before Client 1 learns that its session has expired. Client 2 then acquires the lock, so for a while both clients think they hold the lock.
Is this a valid scenario?
What are ZooKeeper locks? They are fully distributed locks that are globally synchronous: at any snapshot in time, no two clients should think they hold the same lock. Such locks can be implemented with ZooKeeper.
Distributed locks provide mutually exclusive access to shared resources in a distributed environment. They are used to improve the efficiency of services or to enforce absolute mutual exclusion between accesses.
With distributed locking we still have the familiar acquire, operate, release cycle, but instead of a lock that is only known to threads within the same process, or to processes on the same machine, we use a lock that clients on different machines (for example, different Redis or ZooKeeper clients) can acquire and release.
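In ZooKeeper terms, that acquire/operate/release cycle is usually written with a higher-level client such as Apache Curator and its InterProcessMutex recipe. Below is a minimal sketch, assuming a Curator dependency; the connection string and lock path are placeholders.

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class AcquireOperateRelease {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string and lock path for this sketch.
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk1:2181,zk2:2181,zk3:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        InterProcessMutex lock = new InterProcessMutex(client, "/locks/my-resource");
        lock.acquire();                 // acquire
        try {
            // ... operate on the shared resource ...
        } finally {
            lock.release();             // release
        }
        client.close();
    }
}
```

The try/finally block keeps the release on every code path, which is the main point of the acquire/operate/release pattern.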
The scenario you describe could arise. Client 1 thinks it has the lock, but in fact its session has timed out, and Client 2 acquires the lock.
The ZooKeeper client library will inform Client 1 that its connection has been lost (though the client does not learn that the session has actually expired until it reconnects to a server), so the client code can assume its lock is gone once it has been disconnected for too long. Still, the thread that uses the lock needs to check periodically that the lock is still valid, and that check is inherently racy.
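To illustrate that last point, a client might track connection state through the default watcher and treat the lock as suspect while disconnected. This is only a sketch under my own assumptions (class name, a 15-second session timeout, a single flag); it narrows the window described above but does not remove the race.

```java
import java.util.concurrent.atomic.AtomicBoolean;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

// Hypothetical sketch: the default watcher marks the lock as "possibly lost"
// on Disconnected and as lost on Expired. The worker thread should check
// lockStillValid() before touching the protected resource, but a small race
// window between the check and the actual work remains.
public class SessionAwareClient implements Watcher {
    private final AtomicBoolean lockMaybeLost = new AtomicBoolean(false);
    private final ZooKeeper zk;

    public SessionAwareClient(String connectString) throws Exception {
        this.zk = new ZooKeeper(connectString, 15000, this);
    }

    @Override
    public void process(WatchedEvent event) {
        switch (event.getState()) {
            case Disconnected:   // connection lost: the session may still expire
            case Expired:        // session expired: the ephemeral lock node is gone
                lockMaybeLost.set(true);
                break;
            case SyncConnected:  // same session resumed, so the ephemeral node still exists
                lockMaybeLost.set(false);
                break;
            default:
                break;
        }
    }

    public boolean lockStillValid() {
        return !lockMaybeLost.get();
    }
}
```

Even with such a check, the session can expire between lockStillValid() returning true and the actual use of the resource, which is exactly the race mentioned above.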