Apologies for the long question, but I'd be really grateful for some thoughts/help on the best strategy for cache invalidation & refetching queries in Apollo Client 3.
First, some information about the scenario I'm imagining:

- an Account component (example below) which uses the useQuery hook from @apollo/client to fetch and display some basic information about an account, along with a list of transactions for that account
- a CreateTransactionForm component that uses a mutation to insert a new transaction. This is a separate component which lives at a different location in the component tree and is not necessarily a child of the Account component.

A simplistic version of my Account component might look something like this:
import { gql, useQuery } from '@apollo/client';
import React from 'react';
import { useParams } from 'react-router-dom';
const GET_ACCOUNT_WITH_TRANSACTIONS = gql`
  query getAccountWithTransactions($accountId: ID!) {
    account(accountId: $accountId) {
      _id
      name
      description
      currentBalance
      transactions {
        _id
        date
        amount
        runningBalance
      }
    }
  }
`;
export const Account: React.FunctionComponent = () => {
  const { accountId } = useParams();
  const { loading, error, data } = useQuery(GET_ACCOUNT_WITH_TRANSACTIONS, {
    variables: { accountId },
  });

  if (loading) { return <p>Loading...</p>; }
  if (error) { return <p>Error</p>; }

  return (
    <div>
      <h1>{data.account.name}</h1>
      {data.account.transactions.map(transaction => (
        <TransactionRow key={transaction._id} transaction={transaction} />
      ))}
    </div>
  );
};
I'm evaluating the various options for invalidating parts of the Apollo Client cache and refetching the appropriate data after inserting a transaction. From what I've learned so far, there are a few potential strategies:
a) call the refetch method returned by useQuery to force the Account component to reload its data

CreateTransactionForm would need to be (directly or indirectly) coupled to the Account component, because something needs to trigger that call to refetch.
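A rough sketch of how that coupling might look, assuming the refetch function is shared through a React context (AccountRefetchContext is my own invention here, not part of Apollo):

```typescript
import { createContext, useContext } from 'react';

// Hypothetical context used to share the Account query's refetch function
// with components elsewhere in the tree.
export const AccountRefetchContext = createContext<() => void>(() => {});

// In Account:
//   const { refetch } = useQuery(GET_ACCOUNT_WITH_TRANSACTIONS, { variables: { accountId } });
//   <AccountRefetchContext.Provider value={refetch}>...</AccountRefetchContext.Provider>

// In CreateTransactionForm, something like:
export function useRefetchAccount() {
  // Call the returned function in the mutation's onCompleted callback
  // to force Account to reload its data.
  return useContext(AccountRefetchContext);
}
```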
b) pass the query name (getAccountWithTransactions) into the refetchQueries option of the mutation

CreateTransactionForm would need to have knowledge about every other component/query that exists in the app and could be affected by the mutation (and if more are added in the future, it would mean remembering to update CreateTransactionForm).

c) manually modify the contents of the cache after performing mutations
CreateTransactionForm would need to know exactly what data has changed as a result of the server's actions. As mentioned, this might not be a trivial amount of data: after performing the mutation we'd need to retrieve updated data not only about the transaction that was inserted, but also about any others that have been updated as a side effect, plus affected accounts, etc. It also might not be very efficient, because some of that information may never be viewed in the client again.

My intuition is that none of the options above feels ideal. In particular, I am worried about maintainability as the app grows: if components need explicit knowledge of exactly which other components/queries may be affected by changes made to the data graph, it feels like it would be very easy to miss one and introduce subtle bugs once the app grows larger and more complex.
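For reference, options (b) and (c) might look roughly like this inside CreateTransactionForm. The mutation document INSERT_TRANSACTION, its result shape, and the accountId variable are placeholders I've assumed, not part of the question:

```typescript
import { gql, useMutation } from '@apollo/client';

// Hypothetical mutation; the real schema may differ.
const INSERT_TRANSACTION = gql`
  mutation insertTransaction($input: TransactionInput!) {
    insertTransaction(input: $input) {
      _id
      date
      amount
      runningBalance
    }
  }
`;

const [insertTransaction] = useMutation(INSERT_TRANSACTION, {
  // Option (b): name every query that might be affected.
  refetchQueries: ['getAccountWithTransactions'],

  // Option (c): surgically update the cached Account instead.
  update(cache, { data }) {
    cache.modify({
      id: `Account:${accountId}`,
      fields: {
        transactions(existing = []) {
          // Simplified: a real implementation would write the new
          // transaction as a normalized reference (cache.writeFragment)
          // and keep the list sorted.
          return [...existing, data.insertTransaction];
        },
      },
    });
  },
});
```

In practice you would pick one of the two options rather than combining them as shown.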
I am very interested in the new evict and gc methods introduced in Apollo Client 3 and am wondering whether they could provide a neater solution.
What I'm considering is, after calling the mutation, using these new capabilities to evict:

- the transactions array on all accounts that are involved in the transaction
- the currentBalance field on any affected accounts

For example:
const { cache } = useApolloClient();
...
// after calling the mutation:
cache.evict({ id: `Account:${accountId}`, fieldName: 'transactions' });
cache.evict({ id: `Account:${accountId}`, fieldName: 'currentBalance' });
cache.gc();
The above provides an easy way to remove stale data from the cache and ensures that the components will go to the network the next time queries for those fields are executed. This works well if, for example, I navigate away to a different page and then back to the Account page.
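As an aside, rather than constructing the cache ID string by hand, the cache.identify helper can compute it from an object carrying the type's key fields (by default Apollo Client recognises both id and _id):

```typescript
// identify() builds the normalized cache ID from __typename plus the
// key field, e.g. "Account:abc123" for the default key configuration.
const id = cache.identify({ __typename: 'Account', _id: accountId });

cache.evict({ id, fieldName: 'transactions' });
cache.evict({ id, fieldName: 'currentBalance' });
cache.gc();
```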
This leads onto the main piece of the puzzle that I'm unsure about:
is there any way to detect that some or all of the data referenced in a query has been evicted from the cache?
I'm not sure if this is a feasible thing to expect of the library, but if it's possible I think it could result in simpler code and less coupling between different parts of the app.
My thinking is that this would allow each component to become more "reactive": each component simply knows which data it depends on, and whenever that data goes missing from the underlying cached graph it could immediately react by refetching its own query. It would be nice for components to react declaratively to changes in the data they depend on, rather than imperatively triggering actions on each other, if that makes sense.
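For what it's worth, my reading of the Apollo Client 3 documentation is that something close to this already happens for actively watched queries: evicting fields makes the watched query's cached result incomplete, and with the default cache-first fetch policy an incomplete cache read causes the query to go back to the network. Please verify this against your Apollo Client version; a sketch of the flow:

```typescript
// Account stays mounted, watching GET_ACCOUNT_WITH_TRANSACTIONS.
// Elsewhere, after the mutation succeeds:
cache.evict({ id: `Account:${accountId}`, fieldName: 'transactions' });
cache.evict({ id: `Account:${accountId}`, fieldName: 'currentBalance' });
cache.gc();
// The eviction is broadcast to active watchers. The Account query's
// cached result is now incomplete, so (with the default cache-first
// policy) useQuery refetches from the network and re-renders with
// fresh data -- no explicit coupling to CreateTransactionForm needed.
```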
There are two answers here: what is the standard-practice way to do this, and how might we do it better?
Standard Practice
https://www.apollographql.com/blog/when-to-use-refetch-queries-in-apollo-client/
Apollo state on their blog and in their documentation that if your mutation has side effects beyond the data it returns, you must use an update function if you want to maintain cache consistency. This is similar to using refetchQueries, but is more explicit, since you are doing the updating of the cache yourself (it also requires fewer network requests). You are right, though, that this approach leads to coupling between your transaction mutation's update and all other dependent components.

I believe the best way to structure it would be to define an update function for every component whose data the mutation affects, and have the transaction mutation call the update functions of each component it affects; these in turn would call their own dependencies. This could get messy, so let's see whether you could do the cache control yourself.
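A sketch of that structure, with each feature exporting its own cache-update helper. The function names and shapes here are my own, not an Apollo convention:

```typescript
import { ApolloCache } from '@apollo/client';

// accountUpdates.ts -- owned by the account feature; the only place
// that knows how a new transaction affects cached Account data.
export function applyTransactionToAccount(
  cache: ApolloCache<unknown>,
  accountId: string,
  transaction: { _id: string; amount: number },
) {
  cache.modify({
    id: `Account:${accountId}`,
    fields: {
      currentBalance: (balance: number) => balance + transaction.amount,
    },
  });
}

// CreateTransactionForm -- calls the helper of every feature it affects.
const [insertTransaction] = useMutation(INSERT_TRANSACTION, {
  update(cache, { data }) {
    applyTransactionToAccount(cache, accountId, data.insertTransaction);
    // ...call other features' update helpers here as they are added.
  },
});
```

The coupling still exists, but it is reduced to a list of helper calls, each owned and maintained by the feature whose data it updates.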
The Better Way
https://www.apollographql.com/docs/apollo-server/data/data-sources/#using-memcachedredis-as-a-cache-storage-backend
I don't believe you can ask the Apollo cache whether a given piece of data is present, but you could check a Redis cache for that same information, and Apollo allows you to plug in other cache implementations. You could even write your own cache implementation with Redis as a backend, which would give you the finer control over the cache that you want. But this comes at a massive upfront engineering cost.
Conclusion:
Since your side effects seem to be confined to the transaction and account components (only two components), I would write a custom update function for now. Once you have a more complex cache-coherency problem, I would think about diving into a custom caching implementation.