My app is query-only and has m:n MySQL tables:
father(id, name)
child(id)
join_f_c(fatherId, childId): join table between father and child
There are three query scenarios:
(a): select * from father f, child c, join_f_c jfc where f.name = 'xxx' and jfc.fatherId = f.id and jfc.childId = c.id
(b): select * from father f where f.id = xxx
(c): select * from father f where f.name = 'xxx'
The father table has 100,000 rows, the child table has 1,000,000+ rows, and the grandChild table has 1,000,000+ rows.
Query performance matters for every query in this app, not only the point lookup for a certain row, so I want to load all table data into memcached at app startup. My questions:
1. Each (a) query will hit memcached three times: get the father id by name, then get the childIds for that fatherId from join_f_c, and finally get each child row by childId. This takes about 0.5 ms. Is memcached appropriate for this kind of join query? If I instead cache whole query results, a lot of space is wasted, because many child rows end up with many copies, e.g. father1-child1, father2-child1.
2. For queries (b) and (c) I need to put each father row into memcached twice: once keyed by id and once keyed by name, which wastes a lot of cache space. Is there a better way to do this?
Any help will be appreciated.
Keep in mind that memcached is a simple key/value store, not a database capable of any search or join logic.
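To make that concrete, the three-step lookup from query (a) can be sketched against a plain key/value interface. Here a Python dict stands in for a memcached client, and the key names (`father:name:`, `father:id:`, `join:`, `child:`) are illustrative assumptions, not a fixed convention; note that the name key stores only the father's id, so each row is cached exactly once:

```python
import json

cache = {}  # stand-in for a memcached client

# Load phase (app startup): each row is stored exactly once; the
# name key holds only the father's id, not a second copy of the row.
cache["father:id:1"] = json.dumps({"id": 1, "name": "father1"})
cache["father:name:father1"] = "1"                 # alias key -> id only
cache["join:1"] = json.dumps([10, 11])             # fatherId -> childIds
cache["child:10"] = json.dumps({"id": 10})
cache["child:11"] = json.dumps({"id": 11})

def query_a(name):
    """Three round trips: name -> fatherId -> childIds -> child rows."""
    father_id = cache.get("father:name:" + name)
    if father_id is None:
        return []
    child_ids = json.loads(cache.get("join:" + father_id, "[]"))
    return [json.loads(cache["child:%d" % cid]) for cid in child_ids]

print(query_a("father1"))  # -> [{'id': 10}, {'id': 11}]
```

The alias-key trick also addresses question 2: the full row lives only under the id key, and the name key is a small pointer to it, at the cost of one extra round trip for name lookups.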
Redis and Memcached are both in-memory data stores. Memcached is a high-performance distributed memory cache service, and Redis is an open-source key-value store.
Redis is single-threaded and tends to outperform Memcached on small datasets when measured per core. Memcached has a multi-threaded architecture and can utilize multiple cores, so for larger datasets Memcached can perform better than Redis.
Memcached is a key/value store. You cannot query the contents of a Memcached value; you can only request the full value.
Obviously, storing an entire table (100,000 rows) in Memcached and reading the whole value back into PHP would be a huge memory and computation effort for PHP that would slow you down significantly.
Storing one table record per key would be manageable for PHP and would give a performance boost, but it would be fairly wasteful in terms of round trips between PHP and Memcached.
Memcached is generally used to store things bigger than one table record. For example, if you are building a menu based on categories/subcategories, you can store the entire category tree under one key. You will likely need that tree rendered on every page (or fetched via Ajax on every page), so keeping it in a cache is practical.
Memcached is also often used to store ready-made output, such as entire HTML pages or HTML fragments of a page (blocks) rendered from some data plus a template; this saves the computation effort of rebuilding that block.
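A minimal sketch of the one-key approach (the category names are made up, and a dict stands in for the memcached client):

```python
import json

cache = {}  # stand-in for a memcached client

# Build the category tree once (normally from category/subcategory
# tables) and store the whole structure under a single key.
tree = {
    "Electronics": ["Phones", "Laptops"],
    "Books": ["Fiction", "Technical"],
}
cache["menu:category_tree"] = json.dumps(tree)

# Every page render is then one get plus one deserialize.
menu = json.loads(cache["menu:category_tree"])
print(menu["Books"])  # -> ['Fiction', 'Technical']
```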
Key naming conventions will help you correctly retrieve the data you've put into Memcached.
When data changes in the database, you can either update it in Memcached as well, delete it from Memcached so that the next process that needs it rebuilds it, or leave the cached version untouched (stale data) if your application can accept older data being displayed for a certain "acceptable" period.
Naming conventions are also very important when updating or deleting stale data from Memcached: one tiny change in the database can affect many values stored in Memcached, and being able to find out what those values are is important for accurate cache busting.
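A minimal sketch of convention-based cache busting, assuming the `father:id:`/`father:name:` key scheme from the question and a dict in place of memcached:

```python
import json

cache = {
    "father:id:1": json.dumps({"id": 1, "name": "father1"}),
    "father:name:father1": "1",
}

def update_father_name(fid, old_name, new_name):
    """On a database change, touch every cached key derived from the
    row: rewrite the id key and move the name alias key."""
    cache["father:id:%d" % fid] = json.dumps({"id": fid, "name": new_name})
    cache.pop("father:name:" + old_name, None)   # bust the stale alias
    cache["father:name:" + new_name] = str(fid)

update_father_name(1, "father1", "father1b")
print("father:name:father1" in cache)            # -> False
```

Because the key names are derived mechanically from the table and column, the update code can always compute exactly which cached entries a row change invalidates.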
It looks to me like you are just trying to duplicate a database in memory, and taking far more space and effort to do it. I would just make sure the database server has enough memory to hold the working set (if not the whole dataset) in memory, then do a join to get all the information you need in a single request.
Databases are good at their job. Use them until you run into their limits, and only then start working around them by rewriting your queries to avoid joins and by adding caching.