
DynamoDB consistent reads for Global Secondary Index

Why can't I get consistent reads for global secondary indexes?

I have the following setup:

The table: tblUsers (id as hash)

Global Secondary Index: tblUsersEmailIndex (email as hash, id as attribute)

Global Secondary Index: tblUsersUsernameIndex (username as hash, id as attribute)

I query the indexes to check if a given email or username is already present, so I don't create a duplicate user.

Now, the problem is that I can't do consistent reads for queries on the indexes. But why not? This is one of the few occasions where I actually need up-to-date data.

According to AWS documentation:

Queries on global secondary indexes support eventual consistency only.

Changes to the table data are propagated to the global secondary indexes within a fraction of a second, under normal conditions. However, in some unlikely failure scenarios, longer propagation delays might occur. Because of this, your applications need to anticipate and handle situations where a query on a global secondary index returns results that are not up-to-date.

But how do I handle this situation? How can I make sure that a given email or username is not already present in the database?
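For illustration (assuming boto3; only the table and index names come from the setup above), a query against one of these indexes looks roughly like the sketch below, and passing ConsistentRead=True on it is rejected by DynamoDB with a ValidationException:

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Check whether an email is already taken by querying the GSI.
    response = dynamodb.query(
        TableName="tblUsers",
        IndexName="tblUsersEmailIndex",
        KeyConditionExpression="email = :e",
        ExpressionAttributeValues={":e": {"S": "user@example.com"}},
        # ConsistentRead=True,  # not supported on a global secondary index;
        #                       # DynamoDB rejects the request outright
    )

    email_taken = response["Count"] > 0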

asked Feb 15 '16 by Jpst

People also ask

Why strongly consistent reads are not supported on global secondary indexes?

Strongly consistent reads may have higher latency than eventually consistent reads. Strongly consistent reads are not supported on global secondary indexes. Strongly consistent reads use more throughput capacity than eventually consistent reads. For details, see Read/write capacity mode.

Which consistency models are supported by DynamoDB for data reads?

When reading data from DynamoDB, users can specify whether they want the read to be eventually or strongly consistent; these are the two consistency models in DynamoDB. Eventually Consistent Reads (Default) – the eventual consistency option is used to maximize read throughput.
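As a small boto3 sketch (table and key names are hypothetical), the choice is made per request:

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Eventually consistent read (the default)
    dynamodb.get_item(TableName="tblUsers", Key={"id": {"S": "123"}})

    # Strongly consistent read on the base table (not available on a GSI)
    dynamodb.get_item(
        TableName="tblUsers",
        Key={"id": {"S": "123"}},
        ConsistentRead=True,
    )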

How many reads per second DynamoDB?

Transactional read requests require two read capacity units to perform one read per second for items up to 4 KB. If you need to read an item that is larger than 4 KB, DynamoDB must consume additional read capacity units.
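As a rough sketch of the arithmetic (the 4 KB rounding and per-mode costs follow the capacity rules quoted above; the helper itself is illustrative):

    import math

    def read_capacity_units(item_size_kb: float, mode: str) -> float:
        # Reads are billed in 4 KB blocks; each block costs 0.5 RCU for
        # eventually consistent, 1 RCU for strongly consistent, and 2 RCUs
        # for transactional reads.
        blocks = math.ceil(item_size_kb / 4)
        per_block = {"eventual": 0.5, "strong": 1, "transactional": 2}[mode]
        return blocks * per_block

    read_capacity_units(6, "transactional")  # 2 blocks * 2 RCUs = 4 RCUs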


1 Answer

You probably already went through this: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html

The short answer is that you cannot do what you want with Global Secondary Indexes (i.e., reads on them are always eventually consistent).

A solution here would be to have a separate table with the attribute you're interested in as its key and do consistent reads there. You would need to ensure you update that table whenever you insert new entities, and you would also have to handle the edge case in which the insert there succeeds but the insert into the main table does not (i.e., you need to keep the two tables in sync).
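A minimal sketch of that pattern with boto3, assuming a second table named tblUserEmails keyed on email (the table name and helper are illustrative, not from the question):

    import boto3
    from botocore.exceptions import ClientError

    dynamodb = boto3.client("dynamodb")

    def create_user(user_id: str, email: str) -> bool:
        try:
            # Reserve the email first; the condition fails if an item
            # already exists under this email, so duplicates are rejected.
            dynamodb.put_item(
                TableName="tblUserEmails",
                Item={"email": {"S": email}, "id": {"S": user_id}},
                ConditionExpression="attribute_not_exists(email)",
            )
        except ClientError as err:
            if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
                return False  # email already taken
            raise

        # Then write the user itself. If this second put fails you must
        # delete the reservation above (or wrap both writes in a
        # transaction); this is the sync edge case described above.
        dynamodb.put_item(
            TableName="tblUsers",
            Item={"id": {"S": user_id}, "email": {"S": email}},
        )
        return True

Because the reservation is a conditional write against that table's own hash key, it is evaluated against the latest data, which is exactly the guarantee the GSI query cannot give you.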

Another solution would be to scan the whole table, but that would probably be overkill if the table is large.

Why do you care if somebody creates two accounts with the same email? You could just use the username as the primary hash key and not enforce email uniqueness at all.

answered Sep 29 '22 by Mircea