Is it correct that ZooKeeper is always CP (in terms of the CAP theorem)? Or is there any way to use it as AP for service discovery needs?
ZAB consensus algorithm (2007): ZAB (ZooKeeper Atomic Broadcast) is the consensus protocol used by ZooKeeper. ZAB is a protocol dedicated to ZooKeeper, so its usage is limited to ZooKeeper. ZAB was born in 2007 along with ZooKeeper. It works on the leader-follower principle.
ZooKeeper is a CP system with regard to the CAP theorem. This implies that it sacrifices availability in order to achieve consistency and partition tolerance. In other words, if it cannot guarantee correct behaviour it will not respond to queries.
Read operations in ZooKeeper are not linearizable since they can return potentially stale data. This is because a read in ZooKeeper is not a quorum operation, and a server will respond immediately to a client that is performing a read.
ZooKeeper is a distributed, open-source coordination service for distributed applications. It exposes a simple set of primitives that distributed applications can build upon to implement higher level services for synchronization, configuration maintenance, and groups and naming.
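Since the question is about service discovery: a common way to build it on these primitives is to have each instance register itself as an ephemeral znode and have consumers list the children. Below is only a minimal sketch with the Java client; the connection string localhost:2181, the /services/my-service path and the address are made-up placeholders, and the parent path is assumed to already exist.

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Sketch: register a service instance as an ephemeral znode and list all
// registered instances. Ephemeral nodes are removed when the session ends,
// which is what makes this pattern work for discovery.
public class DiscoverySketch {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 15000, event -> { });

        // Assumed to exist already; create it once beforehand if it does not.
        String parent = "/services/my-service";

        // Register this instance; the znode disappears automatically when
        // this client's session expires.
        String me = zk.create(parent + "/instance-", "10.0.0.5:8080".getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        System.out.println("registered as " + me);

        // Consumers list the children to see the currently live instances.
        for (String child : zk.getChildren(parent, false)) {
            byte[] addr = zk.getData(parent + "/" + child, false, null);
            System.out.println(child + " -> " + new String(addr));
        }

        zk.close();
    }
}
```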
ZooKeeper is not A, and it can't drop P, so it's called CP. In terms of the CAP theorem, "C" actually means linearizability.
Linearizability: if operation B started after operation A successfully completed, then operation B must see the system in the same state as it was on completion of operation A, or a newer state.
But ZooKeeper has sequential consistency: updates from a client will be applied in the order that they were sent.
ZooKeeper is not, in fact, simultaneously consistent across client views. http://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#ch_zkGuarantees
ZooKeeper does not guarantee that at every instance in time, two different clients will have identical views of ZooKeeper data. Due to factors like network delays, one client may perform an update before another client gets notified of the change. Consider the scenario of two clients, A and B. If client A sets the value of a znode /a from 0 to 1, then tells client B to read /a, client B may read the old value of 0, depending on which server it is connected to. If it is important that Client A and Client B read the same value, Client B should call the sync() method from the ZooKeeper API before it performs its read.
ZooKeeper provides "sequential consistency". This is weaker than linearizability but is still very strong, much stronger than "eventual consistency". ZooKeeper also provides a sync command. If you invoke a sync command and then a read, the read is guaranteed to see at least the last write that completed before the sync started.
Linearizability: writes should appear to be instantaneous. Imprecisely, once a write completes, all later reads (where "later" is defined by wall-clock start time) should return the value of that write or the value of a later write. Once a read returns a particular value, all later reads should return that value or the value of a later write.
In ZooKeeper, the sync() method is what you use where you need something like linearizability.
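A minimal sketch of that "sync then read" pattern with the Java client, reusing the /a znode from the scenario above (the connection string is a placeholder). Note that sync() is asynchronous in the Java API, so the sketch waits for its callback before reading:

```java
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

// Sketch: sync() then read, so the read sees at least every write that
// completed before the sync started (client B's side of the A/B scenario).
public class SyncThenRead {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 15000, event -> { });

        // sync() only has an asynchronous form; block until its callback fires.
        CountDownLatch synced = new CountDownLatch(1);
        zk.sync("/a", (rc, path, ctx) -> synced.countDown(), null);
        synced.await();

        // This read now reflects at least the last write that completed
        // before the sync started.
        Stat stat = new Stat();
        byte[] value = zk.getData("/a", false, stat);
        System.out.println("read /a = " + new String(value) + " at zxid " + stat.getMzxid());

        zk.close();
    }
}
```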
Serializability is a guarantee about transactions, or groups of one or more operations over one or more objects. It guarantees that the execution of a set of transactions (usually containing read and write operations) over multiple items is equivalent to some serial execution (total ordering) of the transactions.
No, you cannot change consistency guarantees in current versions of ZooKeeper like you can in some other systems.
You can add a local cache to your clients which will let them serve read-only data if the cluster goes down, but in terms of CAP that is still not A, because it needs to be available for updates as well as reads.
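As an illustration only (this class is not part of ZooKeeper, and all names here are made up), such a cache could read through to ZooKeeper and fall back to the last value it saw when the cluster is unreachable:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

// Hypothetical sketch: keep the last value seen per path and serve it when
// the cluster cannot be reached. This only keeps stale reads working; writes
// still need a quorum, so the system as a whole is still not "A" in CAP terms.
public class CachingReader {
    private final ZooKeeper zk;
    private final Map<String, byte[]> lastSeen = new ConcurrentHashMap<>();

    public CachingReader(ZooKeeper zk) {
        this.zk = zk;
    }

    public byte[] read(String path) throws InterruptedException, KeeperException {
        try {
            byte[] value = zk.getData(path, false, null);
            lastSeen.put(path, value);          // remember the freshest value we saw
            return value;
        } catch (KeeperException.ConnectionLossException
                | KeeperException.SessionExpiredException e) {
            byte[] cached = lastSeen.get(path); // cluster unreachable: serve stale data
            if (cached != null) {
                return cached;
            }
            throw e;                            // nothing cached for this path
        }
    }
}
```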
If ZK offers too strong levels of consistency for your service discovery needs, you should try researching other options, e.g. Eureka, Consul or etcd.