I'm encountering a problem where data in my database is getting reverted to an old state. I think I have narrowed the problem down to this situation.
Imagine a sequence of two purchases occurring like this:
We have now lost data because the database record was overwritten with partially out-of-date information.
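The failure mode above can be simulated with a toy two-node pool (plain Python; the node names and simple modular hashing are stand-ins for what a real memcached client does with its server list):

```python
# Toy simulation: ejecting a failed node remaps keys onto a node
# that still holds a stale copy, which then overwrites fresh data.

class Pool:
    def __init__(self, nodes):
        self.nodes = list(nodes)               # live nodes, in order
        self.stores = {n: {} for n in nodes}   # per-node key/value stores

    def node_for(self, key):
        # simple modular hashing over the *live* node list
        return self.nodes[hash(key) % len(self.nodes)]

    def set(self, key, value):
        self.stores[self.node_for(key)][key] = value

    def get(self, key):
        return self.stores[self.node_for(key)].get(key)

    def eject(self, node):
        self.nodes.remove(node)                # what automatic failover does

pool = Pool(["node-a", "node-b"])
pool.set("record:42", {"total": 10})           # purchase 1 cached on the live node
fresh_node = pool.node_for("record:42")
other = "node-a" if fresh_node == "node-b" else "node-b"
pool.stores[other]["record:42"] = {"total": 5}  # an older copy left on the other node

pool.eject(fresh_node)                         # node holding fresh data fails
stale = pool.get("record:42")                  # key remaps to the other node
assert stale == {"total": 5}                   # purchase 2 now reads stale data
```

If the node simply stayed in the pool and errored, the read would miss and fall through to the database instead of returning the stale copy.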
How can I prevent this using PHP5 and libmemcached with persistent connections? I think what I want is for a failed cache node not to fail over at all: reads and writes to that node should simply fail, but the node should stay in the pool, so that I don't end up with duplicate records.
This will increase load on my database by roughly 1/n (where n is the total number of cache nodes) while a node is down, but that's better than ending up with inconsistent data.
Unfortunately I'm having trouble understanding what settings I should change to get this behavior.
Caches are generally small stores of temporary data. If they grow too large, they can degrade performance, and they can consume memory that other applications need, hurting those applications in turn.
Caching as a solution to performance, latency, or throughput problems adds complexity, and complexity leads to bugs. Cache bugs can be subtle and difficult to debug, and they can also cause live site outages.
`checkperiod` (default: `600`): the period in seconds, as a number, used for the automatic delete check interval. `0` = no periodic check.

```javascript
const NodeCache = require("node-cache");
const myCache = new NodeCache({ checkperiod: 120 }); // will check every 120 seconds
```
I like the versioning and optimistic locking approach implemented in Doctrine ORM. You can do the same. It won't increase load on your database, but it will require some refactoring.
Basically, you add a version number to every table you are caching, change your update queries to increment it (`version = version + 1`), and add a `WHERE version = $version` condition (note that `$version` comes from your PHP/memcache layer). You will then need to check the number of affected rows and throw an exception if it is 0.
It is up to you how to handle such an exception. You can simply invalidate the cache for this record and ask the user to re-submit the form, or you can try to merge the changes. At that point you have the stale data from the cache, the update from the user input, and the fresh data from the DB, so the only unrecoverable case is when you have three different values for the same column.
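A minimal sketch of the versioned-update pattern described above (Python with sqlite3 purely for illustration, since the SQL is what matters; the table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE purchases (id INTEGER PRIMARY KEY, total INTEGER, version INTEGER)"
)
conn.execute("INSERT INTO purchases VALUES (1, 10, 3)")

def update_total(conn, record_id, new_total, cached_version):
    # Increment the version, and only match the version we read from the
    # cache; zero affected rows means someone else updated the row first.
    cur = conn.execute(
        "UPDATE purchases SET total = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_total, record_id, cached_version),
    )
    if cur.rowcount == 0:
        raise RuntimeError("stale version: invalidate cache and retry")

update_total(conn, 1, 25, 3)       # cached version matches -> succeeds
try:
    update_total(conn, 1, 99, 3)   # version is now 4 -> stale, raises
except RuntimeError:
    pass                           # invalidate cache / re-prompt user here
```

The stale write never reaches the database; only the writer holding the current version succeeds.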
You are making the problem more complex. A simpler approach would be to mark the cache entry dirty and rebuild it, rather than putting it back in service with inconsistent data on it.
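The mark-dirty approach this comment suggests can be sketched as: on any suspected inconsistency, drop the cached entry and repopulate it from the database on the next read (a plain dict stands in for the cache here, and the key and helper names are hypothetical):

```python
cache = {}                            # stands in for memcached
db = {"record:42": {"total": 25}}     # stands in for the database (source of truth)

def invalidate(key):
    # "mark dirty" = simply drop the cached copy
    cache.pop(key, None)

def get_record(key):
    if key not in cache:              # cache miss -> rebuild from the database
        cache[key] = db[key]
    return cache[key]

cache["record:42"] = {"total": 5}     # suspect, possibly stale entry
invalidate("record:42")
assert get_record("record:42") == {"total": 25}   # rebuilt from fresh data
```

This trades one extra database read per invalidation for never serving the inconsistent copy.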