 

Redis CRUD patterns

Tags: php, redis, nosql

I've recently started learning Redis and am currently building an app that uses it as its sole datastore. I'd like to check with other Redis users whether some of my conclusions are correct, and ask a few questions. I'm using phpredis, if that's relevant, but I guess the questions apply to any language since this is more of a pattern question.

As an example, consider a CRUD interface to save websites (name and domain) with the following requirements:

  • Check for existing names/domains when saving/validating a new site (duplicate check)
  • Listing all websites with sorting and pagination

I have initially chosen the following "schema" to save this information:

  • A key "prefix:website_ids" on which I use INCR to generate new website ids
  • A set "prefix:wslist" to which I add each website id generated above
  • A hash for each website, "prefix:ws:ID", with the fields name and domain

The saving/validation issue

With the above information alone I was unable (as far as I know) to check for duplicate names or domains when adding a new website. To solve this issue I've done the following:

  • Two sets with keys "prefix:wsnames" and "prefix:wsdomains", to which I also SADD the website name and domain.

This way, when adding a new website I can check whether the submitted name or domain already exists in either of these sets with SISMEMBER, and fail validation if needed. However, if I were saving data with 50 fields instead of just 2 and wanted to prevent duplicates, I'd have to create a similar set for each field I wanted to validate.
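The duplicate check and save described above could be sketched like this (a rough sketch, assuming a connected phpredis instance in $redis; the key names follow the schema from the question):

```php
<?php
// Sketch of the SISMEMBER duplicate check, assuming $redis is a
// connected phpredis instance and the keys match the schema above.
function websiteIsDuplicate($redis, $name, $domain) {
    // SISMEMBER returns true if the member is already in the set
    return $redis->sIsMember('prefix:wsnames', $name)
        || $redis->sIsMember('prefix:wsdomains', $domain);
}

if (!websiteIsDuplicate($redis, $name, $domain)) {
    $id = $redis->incr('prefix:website_ids');   // generate a new website id
    $redis->sAdd('prefix:wslist', $id);         // index of all website ids
    $redis->hMSet("prefix:ws:$id", array('name' => $name, 'domain' => $domain));
    $redis->sAdd('prefix:wsnames', $name);      // register for future duplicate checks
    $redis->sAdd('prefix:wsdomains', $domain);
}
```

Note that check-then-write like this is not atomic; wrapping it in MULTI/EXEC with WATCH, or a Lua script, would close the race window.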

QUESTION 1: Is the above a common pattern to solve this problem or is there any other/better way people use to solve this type of issue?

The listing/sorting issue

To list websites and sort by name or domain (ascending or descending) as well as limiting results for pagination I use something like:

SORT prefix:wslist BY prefix:ws:*->name ALPHA ASC LIMIT 0 10

This gives me 10 website ids ordered by name. To fetch the actual data for those ids, I came up with the following options (examples in PHP):

Option 1:

$wslist = $redis->sort('prefix:wslist', array(   // the SORT command from above
    'by'    => 'prefix:ws:*->name',
    'alpha' => true,
    'sort'  => 'asc',
    'limit' => array(0, 10),
));
$websites = array();
foreach ($wslist as $ws) {
    $websites[$ws] = $redis->hGetAll('prefix:ws:' . $ws); // one round trip per website
}

The above gives me a usable array with website ids as keys and an array of fields as values. Unfortunately, it makes multiple requests to Redis inside a loop, and common sense (at least coming from RDBMSs) tells me that's not optimal. The better approach would seem to be to use Redis pipelining/MULTI and send all requests in a single go:

Option 2:

$wslist = $redis->sort('prefix:wslist', array(   // the SORT command from above
    'by'    => 'prefix:ws:*->name',
    'alpha' => true,
    'sort'  => 'asc',
    'limit' => array(0, 10),
));
$redis->multi();
foreach ($wslist as $ws) {
    $redis->hGetAll('prefix:ws:' . $ws);   // queued, not executed yet
}
$websites = $redis->exec();                // replies come back in the same order as $wslist

The problem with this approach is that the results no longer carry each website's id, unless I loop over the $websites array again to associate each one. Another option would be to also store an "id" field with the website id inside the hash itself, along with name and domain.
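Since exec() returns replies in the same order as the queued commands, one way to re-key the results without a second manual loop is array_combine(). A small sketch with hypothetical sample data standing in for the real Redis replies:

```php
<?php
// $wslist stands in for the ids returned by SORT, and $websites for the
// replies from exec(); both are hypothetical sample values here.
$wslist   = array(3, 1, 7);
$websites = array(
    array('name' => 'Acme',  'domain' => 'acme.test'),
    array('name' => 'Beta',  'domain' => 'beta.test'),
    array('name' => 'Gamma', 'domain' => 'gamma.test'),
);

// array_combine() pairs each id with its hash, preserving order
$byId = array_combine($wslist, $websites);   // 3 => Acme, 1 => Beta, 7 => Gamma
```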

QUESTIONS 2/3: What's the best way to get these results into a usable array without looping multiple times? Is it correct or good practice to also save the id as a field inside the hash just so I can get it back with the results?

Disclaimer: I understand that the coding and schema-building paradigms of a key-value datastore like Redis differ from those of RDBMSs and document stores, so notions of "the best way to do X" will vary with the data and application at hand. I also understand that Redis might not even be the most suitable datastore for mostly CRUD-type apps, but I'd still like insights from more experienced developers, since CRUD interfaces are very common in most apps.

asked Nov 01 '22 by dev

1 Answer

Answer 1

Your proposal looks pretty common. I'm not sure why you need an auto-incrementing ID though. I imagine the domain name has to be unique, or the website name has to be unique, or at the very least the combination of the two has to be unique. If this is the case it sounds like you already have a perfectly good key, so why invent an integer key when you don't need it?

Having a SET for domains and a SET for website names is a perfect solution for quickly checking whether a specific domain or website name already exists. Though, if one of those (domain or website name) is your key, you might not even need these SETs, since you could just check whether the key prefix:ws:domain-or-ws-name-here exists.
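The natural-key idea above could look roughly like this (a sketch, assuming a connected phpredis $redis and that the domain is the unique key; the key names are illustrative):

```php
<?php
// Using the domain itself as the hash key instead of a synthetic
// integer id; EXISTS then doubles as the duplicate check.
// Assumes $redis is a connected phpredis instance.
$key = 'prefix:ws:' . $domain;
if (!$redis->exists($key)) {
    $redis->hMSet($key, array('name' => $name, 'domain' => $domain));
    $redis->sAdd('prefix:wslist', $domain);   // the index now stores domains, not ids
}
```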

Also, using a HASH for each website so you can store your 50 fields of details for the website inside is perfect. That is what hashes are for.

Answer 2

First, let me point out that if your websites and domain names are stored in SORTED SETs instead of SETs, they will already be alphabetized (assuming they are given the same score). If you are trying to support other sort options this might not help much, but wanted to point it out.
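The sorted-set idea can be sketched as follows (assuming a connected phpredis $redis; the key name is illustrative and would clash with the plain SET of the same name from the question):

```php
<?php
// With identical scores (0 here), a SORTED SET orders its members
// lexicographically, so ZRANGE gives alphabetized, paginated results.
// Assumes $redis is a connected phpredis instance.
$redis->zAdd('prefix:wsnames-z', 0, 'example.org');
$redis->zAdd('prefix:wsnames-z', 0, 'acme.test');
$redis->zAdd('prefix:wsnames-z', 0, 'beta.test');

$pageAsc  = $redis->zRange('prefix:wsnames-z', 0, 9);    // first 10, A-Z
$pageDesc = $redis->zRevRange('prefix:wsnames-z', 0, 9); // first 10, Z-A
```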

Your Option 1 and Option 2 are actually both relatively reasonable. Redis is lightning fast, so Option 1 isn't as unreasonable as it seems at first. Option 2 is clearly more optimal from Redis's perspective, since all the commands are buffered and executed at once. Though, as you noted, it will require additional processing in PHP afterwards if you want the array indexed by id.

There is a 3rd option: Lua scripting. You can have Redis execute a Lua script that returns both the ids and the hash values in one shot. But, not being super familiar with PHP anymore or with how Redis's multi-bulk replies map to PHP arrays, I'm not 100% sure what the Lua script would look like. You'll need to look for examples or do some trial and error. It should be a pretty simple script, though.
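An untested sketch of what such a script might look like, assuming a connected phpredis $redis and the key layout from the question (how the nested reply maps to PHP arrays would still need trial and error):

```php
<?php
// Run SORT server-side, then HGETALL each id, returning [id, fields]
// pairs in a single round trip. Untested sketch; assumes $redis is a
// connected phpredis instance.
$script = <<<'LUA'
local ids = redis.call('SORT', KEYS[1],
    'BY', 'prefix:ws:*->name', 'ALPHA', 'ASC',
    'LIMIT', ARGV[1], ARGV[2])
local result = {}
for i, id in ipairs(ids) do
    result[i] = {id, redis.call('HGETALL', 'prefix:ws:' .. id)}
end
return result
LUA;

// eval(script, args, numKeys): the first arg is treated as KEYS[1],
// the remaining two as ARGV[1] (offset) and ARGV[2] (count)
$reply = $redis->eval($script, array('prefix:wslist', 0, 10), 1);
```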

Conclusion

I think Redis sounds like a decent solution for your problem. Just keep in mind that the dataset must always stay small enough to fit in memory. Unless your fields are huge, you should be able to fit thousands of websites into only a few MB; if that's not really a concern, or if you don't mind upgrading your RAM to grow your DB, then Redis is perfectly suitable.

Be familiar with Redis's various persistence options and configurations and what they mean for availability and reliability. Also, make sure you have a backup solution in place. I would recommend both a secondary Redis instance that replicates from your main instance, and a recurring process that backs up your Redis database file at least daily.

answered Nov 09 '22 by Carl Zulauf