Why does AWS say "strong consistency" for DynamoDB but "read-after-write consistency" for S3? Do the two terms mean the same thing?
Some background first. Traditional relational databases were designed around strong consistency, also called immediate consistency: data read immediately after an update is consistent for every observer of the entity. Under eventual consistency, by contrast, a write lands on one node first and is propagated to the replicas asynchronously, so a read may briefly return stale data before the replicas converge. Strong consistency requires every node to return the same value for an entity at any point in time, which in practice means coordinating (for example, locking) the replicas while they are updated. When you request a strongly consistent read, DynamoDB returns the most up-to-date data, reflecting all prior successful write operations; however, this comes with some disadvantages, discussed below.
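The difference can be sketched with a toy key-value store that has one primary node and one asynchronously updated replica. Everything here is invented for illustration; it is not how DynamoDB is actually implemented:

```python
import random

class ToyStore:
    """Toy key-value store: one primary node, one lagging replica."""

    def __init__(self):
        self.primary = {}
        self.replica = {}      # updated asynchronously
        self.pending = []      # writes not yet applied to the replica

    def write(self, key, value):
        self.primary[key] = value
        self.pending.append((key, value))  # replication happens later

    def replicate(self):
        """Apply pending writes to the replica (convergence)."""
        for key, value in self.pending:
            self.replica[key] = value
        self.pending.clear()

    def read(self, key, consistent=False):
        if consistent:
            return self.primary.get(key)   # always current, costs more
        # Eventually consistent: the read may land on the stale replica.
        node = random.choice([self.primary, self.replica])
        return node.get(key)

store = ToyStore()
store.write("k", "v2")
# Before replication catches up, an eventually consistent read may
# return None (stale); a strongly consistent read always sees "v2".
assert store.read("k", consistent=True) == "v2"
store.replicate()
assert store.read("k") == "v2"  # after convergence, both nodes agree
```

The toy makes the trade-off visible: the strongly consistent path must go to the primary, so it cannot spread load across replicas the way the eventually consistent path can.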
The two terms essentially mean the same thing, in the sense that read-after-write is one type of strong consistency.
The noteworthy difference is that DynamoDB's strong consistency includes read-after-update and read-after-delete, as well as read-after-write. S3 only offers read-after-write... so we could say read-after-write is a subset of strong consistency.
In S3, everything is eventually consistent with one exception: if you create an object and you have not previously tried to fetch that object (such as to check whether the object already existed before creating it) then fetching that object after creating it will always return the object you created. That's the read-after-write consistency in S3, and it's always available in the circumstance described -- you don't have to ask S3 for a strongly-consistent read-after-write on a new object, because it's always provided.
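One way to picture that caveat is a cache of 404 responses. The negative-cache mechanism below is a hypothetical simplification, not S3's actual internals, but it reproduces the documented behavior: read-after-write held only for keys that had never been fetched before:

```python
class OldS3:
    """Toy model of S3's pre-2020 read-after-write guarantee for new keys.

    Assumption for illustration only: a GET for a missing key caches
    the 404, and a later PUT does not purge that cache entry.
    """

    def __init__(self):
        self.objects = {}
        self.negative_cache = set()

    def get(self, key):
        if key in self.negative_cache:
            return None                    # stale 404 still served
        if key not in self.objects:
            self.negative_cache.add(key)   # remember the miss
            return None
        return self.objects[key]

    def put(self, key, value):
        self.objects[key] = value          # negative cache untouched

s3 = OldS3()
s3.put("fresh", b"data")
assert s3.get("fresh") == b"data"  # never fetched before: guaranteed

s3.get("checked-first")            # existence check caches the miss
s3.put("checked-first", b"data")
assert s3.get("checked-first") is None  # guarantee lost by the prior GET
```

This is why "check whether the object exists, then create it" was exactly the pattern that voided the guarantee.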
Any other operation in S3 does not have that consistency guarantee. Examples:

- Overwrite an existing object with PUT, then GET it: the GET may return the old version for a while.
- DELETE an object, then GET it: the GET may still return the deleted object.
- Create or delete objects, then LIST the bucket: the listing may omit new objects or still include deleted ones.
All of these are aspects of the S3 Consistency Model which are the result of optimizations for performance.
DynamoDB is also optimized for performance, and as a result, it defaults to eventual (not strong) consistency, for the same reasons... but you can specify strongly-consistent reads in DynamoDB if you need them. These come with caveats:
- A strongly consistent read might not be available if there is a network delay or outage; in this case, DynamoDB may return a server error (HTTP 500).
- Strongly consistent reads may have higher latency than eventually consistent reads.
- Strongly consistent reads are not supported on global secondary indexes.
- Strongly consistent reads use more throughput capacity than eventually consistent reads (twice the read capacity units).
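The capacity cost in the last point follows DynamoDB's published billing model: one read capacity unit covers one strongly consistent read of up to 4 KB, and an eventually consistent read of the same size costs half of that. A quick calculator (the function name is mine):

```python
import math

def read_capacity_units(item_size_bytes, strongly_consistent):
    """RCUs consumed by one read, per DynamoDB's documented model:
    1 RCU per 4 KB strongly consistent, 0.5 RCU eventually consistent."""
    units = math.ceil(item_size_bytes / 4096)
    return units if strongly_consistent else units / 2

assert read_capacity_units(4096, True) == 1     # one 4 KB strong read
assert read_capacity_units(4096, False) == 0.5  # same read, eventual
assert read_capacity_units(10_000, True) == 3   # rounds up to 4 KB units
```

So at the same provisioned throughput, switching a workload from strong to eventual consistency doubles the reads you can serve.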
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadConsistency.html
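In code, the choice is a single parameter on the read call. The sketch below assumes a boto3 DynamoDB Table resource is passed in; the table's key schema and the helper name are invented for illustration, but `ConsistentRead` is the real GetItem parameter:

```python
def get_user(table, user_id, strong=False):
    """Read one item; `table` is assumed to be a boto3 Table resource.

    ConsistentRead=False (the default) requests an eventually
    consistent read; True requests a strongly consistent one,
    subject to the caveats listed above.
    """
    resp = table.get_item(
        Key={"user_id": user_id},   # hypothetical key schema
        ConsistentRead=strong,
    )
    return resp.get("Item")
```

Query accepts the same flag; a query against a global secondary index with `ConsistentRead=True` is rejected, per the caveats above.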
For those seeing this post in December 2020 and later, an update – AWS S3 now delivers strong consistency automatically for GET, PUT, and LIST operations, as well as operations that change object tags, ACLs, or metadata. Bucket configurations have an eventual consistency model.