I'm starting to use Redis, and I've run into the following problem. I have a bunch of objects, let's say Messages, in my system. Each time a new User connects, I do the following:

1. INCR some global variable, let's say g_message_id, and save INCR's return value (the current value of g_message_id).
2. LPUSH the new message (including the id and the actual message) into a list.

Other clients use the value of g_message_id to check if there are any new messages to get.

The problem is that one client could INCR g_message_id, but not have time to LPUSH the message before another client tries to read it, assuming that there is a new message.
In other words, I'm looking for a way to do the equivalent of adding rows in SQL, and having an auto-incremented index to work with.
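To make the race concrete, here is a minimal sketch of the flow above, assuming the redis-py client (g_msg_queue is a placeholder name for the list):

# A sketch of the racy flow described above, assuming redis-py.
import json
import redis

r = redis.Redis()

def publish_message(text):
    # Step 1: atomically increment the global counter.
    msg_id = r.incr("g_message_id")
    # <-- Another client reading g_message_id here already sees the
    #     new id, even though the message has not been pushed yet.
    # Step 2: push the message (id + body) onto the list.
    r.lpush("g_msg_queue", json.dumps({"id": msg_id, "msg": text}))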
Notes:

- I can't use the list indexes, since I often have to delete parts of the list, making them invalid.
- My real situation is a bit more complex; this is a simplified version.
Current solution:

The best solution I've come up with, and what I plan to do, is to use WATCH and transactions to perform an "autoincrement" myself.

But this is such a common use case in Redis that I'm surprised there is no existing answer for it, so I'm worried I'm doing something wrong.
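For reference, a sketch of that WATCH-based approach, assuming redis-py: read the current id under WATCH, build the message, then INCR and LPUSH in one transaction; if another client touched g_message_id in the meantime, EXEC aborts and the loop retries.

import json
import redis

r = redis.Redis()

def publish_message(text):
    with r.pipeline() as pipe:
        while True:
            try:
                pipe.watch("g_message_id")
                current = int(pipe.get("g_message_id") or 0)
                msg_id = current + 1
                body = json.dumps({"id": msg_id, "msg": text})
                pipe.multi()
                pipe.incr("g_message_id")
                pipe.lpush("g_msg_queue", body)
                pipe.execute()  # raises WatchError if g_message_id changed
                return msg_id
            except redis.WatchError:
                continue  # lost the race; retry with the fresh id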
Redis stores integers in their integer representation, so for string values that actually hold an integer, there is no overhead for storing the string representation of the integer.
"You might not know it, but Redis is actually single-threaded, which is how every command is guaranteed to be atomic. While one command is executing, no other command will run."
WATCH is used to provide a check-and-set (CAS) behavior to Redis transactions. WATCHed keys are monitored in order to detect changes against them. If at least one watched key is modified before the EXEC command, the whole transaction aborts, and EXEC returns a Null reply to notify that the transaction failed.
Redis sets are unordered collections of unique strings that act like the sets from your favorite programming language (for example, Java HashSets, Python sets, and so on). With a Redis set, you can add, remove, and test for existence in O(1) time (in other words, regardless of the number of set elements).
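A quick illustration of those set operations, assuming redis-py ("online_users" is a made-up key):

import redis

r = redis.Redis()
r.sadd("online_users", "alice", "bob")   # add members, O(1) each
r.sismember("online_users", "alice")     # -> True
r.srem("online_users", "bob")            # remove a member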
If I'm reading correctly, you are using g_message_id both as an id sequence and as a flag to indicate new message(s) are available. One option is to split this into two variables: one to assign message identifiers and the other as a flag to signal to clients that a new message is available.
Clients can then compare the current / prior value of g_new_message_flag to know when new messages are available:
> INCR g_message_id
(integer) 123
# construct the message with id=123 in code
> MULTI
OK
> INCR g_new_message_flag
QUEUED
> LPUSH g_msg_queue "{\"id\": 123, \"msg\": \"hey\"}"
QUEUED
> EXEC
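And a sketch of the reader side under this scheme, assuming redis-py: each client remembers the last g_new_message_flag value it saw and fetches the list only when the flag has moved.

import redis

r = redis.Redis()

def poll_for_messages(last_seen_flag):
    flag = int(r.get("g_new_message_flag") or 0)
    if flag == last_seen_flag:
        return last_seen_flag, []        # nothing new
    messages = r.lrange("g_msg_queue", 0, -1)
    return flag, messages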
Possible alternative, if your clients can support it: you might want to look into the Redis publish/subscribe commands, e.g. clients could publish notifications of new messages and subscribe to one or more message channels to receive notifications. You could keep the g_msg_queue to maintain a backlog of N messages for new clients, if necessary.
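For example, a minimal pub/sub sketch assuming redis-py ("new_messages" is a made-up channel name):

import redis

r = redis.Redis()

# Producer side:
r.publish("new_messages", "123")  # e.g. announce the new message id

# Consumer side:
p = r.pubsub()
p.subscribe("new_messages")
for event in p.listen():          # blocks, yielding events as they arrive
    if event["type"] == "message":
        print("new message id:", event["data"])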
Update based on comment: If you want each client to detect there are available messages, pop all that are available, and zero out the list, one option is to use a transaction to read the list:
# assuming the message queue contains "123", "456", "789"..
# a client detects there are new messages, then runs this:
> WATCH g_msg_queue
OK
> MULTI
OK
> LRANGE g_msg_queue 0 100000
QUEUED
> DEL g_msg_queue
QUEUED
> EXEC
1) 1) "789"
   2) "456"
   3) "123"
2) (integer) 1
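The same read-and-clear transaction in client code, assuming redis-py (LRANGE 0 -1 takes the whole list, and WatchError signals that another client touched the queue first):

import redis

r = redis.Redis()

def drain_queue():
    with r.pipeline() as pipe:
        while True:
            try:
                pipe.watch("g_msg_queue")
                pipe.multi()
                pipe.lrange("g_msg_queue", 0, -1)
                pipe.delete("g_msg_queue")
                messages, _ = pipe.execute()
                return messages
            except redis.WatchError:
                continue  # queue changed under us; try again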
Update 2: Given the new information, here's what I would do:
Do you really need unique sequential IDs? You can use UUIDs for uniqueness and timestamps to check for new messages. If you keep the clocks on all your servers properly synchronized, then timestamps with one-second resolution should work just fine.
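A sketch of that alternative in Python (uuid4 gives uniqueness without any central counter, and the timestamp is what clients compare to spot new messages):

import json
import time
import uuid

message = json.dumps({
    "id": str(uuid.uuid4()),   # globally unique, no coordination needed
    "ts": int(time.time()),    # one-second resolution timestamp
    "msg": "hey",
})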
If you really do need unique sequential IDs, then you'll probably have to set up a Flickr-style ticket server to properly manage the central list of IDs. This would, essentially, move your g_message_id into a database with proper transaction handling.