I'm looking for suggestions for an efficient way to handle opening memcached connections, given this quote from the FAQ:
Remember nothing is stopping you from accidentally connecting many times. If you instantiate a memcached client object as part of the object you're trying to store, don't be surprised when 1,000 objects in one request create 1,000 parallel connections. Look carefully for bugs like this before hopping on the list.
See also: Initializing a Memcached Client and Managing Connection Objects.
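To make the failure mode concrete, here is a sketch of the bug the FAQ describes (MemcachedClient here stands in for whichever client library you use):

```csharp
// Anti-pattern: every instance news up its own client, so 1,000 of
// these objects in one request open 1,000 parallel connections.
// "MemcachedClient" stands in for whichever client library you use.
public class CacheableWidget
{
    private readonly MemcachedClient _cache = new MemcachedClient(); // BUG

    public object Load(string key)
    {
        return _cache.Get(key);
    }
}
```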
I considered using a singleton in our caching assembly to provide the memcached client, though I'm sure there must be better approaches, as the locking would introduce (unneeded?) overhead.
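Roughly what I had in mind: a minimal sketch, assuming the Enyim.Caching client (any client designed to be shared across threads would do), using Lazy&lt;T&gt; so the initialization lock is only ever taken once:

```csharp
using System;
using System.Threading;
using Enyim.Caching; // assumption: the Enyim memcached client

// One shared client for the whole process. ExecutionAndPublication
// means the construction lock is paid exactly once; after that,
// callers read the cached instance with no locking at all.
public static class Cache
{
    private static readonly Lazy<MemcachedClient> _client =
        new Lazy<MemcachedClient>(
            () => new MemcachedClient(), // reads server list from app/web.config
            LazyThreadSafetyMode.ExecutionAndPublication);

    public static MemcachedClient Client
    {
        get { return _client.Value; }
    }
}
```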
I am clear on the usage patterns for the client; what I'm not clear on is how to use it efficiently with regard to scalability and performance. How do other people deal with using memcached clients?
There's a bounty of 50 in it for you.
We had a similar scenario with a redis client, and originally our solution was to have a common single instance that we synchronised access to via lock. This was fine, but to avoid the latency and blocking we eventually wrote a thread-safe pipelined client, which allows concurrent use without any blocking. I don't know as much about the memcached protocol, but I wonder whether something similar could apply here. I'm actually tempted to investigate whether I could add this to BookSleeve (our custom OSS redis client), if you can wait a little while.
But we were generally able to keep up just using a synchronised shared instance (pretty much the same thing as a singleton, depending on how purist you are).
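In sketch form, the synchronised-shared-instance approach is just this (I'm using memcached client names for illustration, assuming something like Enyim's API; with our redis client the shape was the same):

```csharp
using Enyim.Caching;           // assumption: the Enyim memcached client
using Enyim.Caching.Memcached; // for StoreMode

// One client, one lock object, every caller serialised through it.
// Simple and correct, at the cost of blocking under contention.
public static class SharedCache
{
    private static readonly object _sync = new object();
    private static readonly MemcachedClient _client = new MemcachedClient();

    public static object Get(string key)
    {
        lock (_sync) { return _client.Get(key); }
    }

    public static void Set(string key, object value)
    {
        lock (_sync) { _client.Store(StoreMode.Set, key, value); }
    }
}
```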
Glancing at the FAQ, pipelining is indeed a possibility; and I'm entirely open to the option of writing an async/pipelined memcached client inside BookSleeve. Most of the raw IO / multiplexing would be pretty common with redis. The other trick you can consider is using get_multi etc. rather than separate gets where possible - I don't know whether your current client supports this, though (I haven't looked).
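For example, assuming a client that exposes a batched get (Enyim's MemcachedClient has a Get(IEnumerable&lt;string&gt;) overload; check what yours offers), one round trip replaces N:

```csharp
// client: the shared MemcachedClient instance from earlier.
// One round trip instead of N, assuming a multi-get such as
// Enyim's Get(IEnumerable<string>) overload.
var keys = new[] { "user:1", "user:2", "user:3" };

// Instead of: foreach (var k in keys) { client.Get(k); }  // N round trips
IDictionary<string, object> values = client.Get(keys);     // 1 round trip

object user1;
if (values.TryGetValue("user:1", out user1))
{
    Console.WriteLine(user1);
}
```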
But: I don't know how memcached compares to redis here; in our case, switching to a pipelined/multiplexed API meant we didn't need much pooling (many connections) - a single connection (properly pipelined) is capable of supporting lots of concurrent usage from a single node.
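The core of that, very loosely (all names here are hypothetical; BookSleeve's actual internals are more involved), is that requests are written down one socket back-to-back, and a FIFO queue of pending completions matches the in-order replies back to their callers:

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Loose sketch of a pipelined/multiplexed client: requests are written
// back-to-back without waiting for replies; because the protocol
// answers in order, a FIFO of pending completions routes each reply
// to the right caller. All names here are hypothetical.
public class PipelinedClient
{
    private readonly ConcurrentQueue<TaskCompletionSource<byte[]>> _pending =
        new ConcurrentQueue<TaskCompletionSource<byte[]>>();
    private readonly object _writeLock = new object();

    public Task<byte[]> SendAsync(byte[] request)
    {
        var tcs = new TaskCompletionSource<byte[]>();
        lock (_writeLock) // keep queue order in step with wire order
        {
            _pending.Enqueue(tcs);
            WriteToSocket(request); // no blocking read; just append bytes
        }
        return tcs.Task;
    }

    // A dedicated reader loop calls this as each reply is parsed:
    // complete the oldest outstanding request.
    private void OnReplyReceived(byte[] reply)
    {
        TaskCompletionSource<byte[]> tcs;
        if (_pending.TryDequeue(out tcs))
        {
            tcs.SetResult(reply);
        }
    }

    private void WriteToSocket(byte[] request)
    {
        // actual socket I/O elided
    }
}
```

Many callers can share the one connection this way; the lock is held only for as long as it takes to append to the outbound buffer, not for the round trip.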