Response Cache Eviction

Advanced cache eviction patterns using custom cache keys


tip
For a runnable example of this cache eviction solution, see the Response Cache Eviction repo.

Apollo Server's Full Response Cache plugin (@apollo/server-plugin-response-cache) caches the results of operations for a period of time (time-to-live or TTL). After that time expires, the results are evicted from the cache, and the server fully resolves the operation the next time a client executes it.

The most straightforward way to avoid stale data in the response cache is to set a short default TTL. However, this limits the cache's effectiveness for responses that rarely (or never) change.

The Full Response Cache plugin supports advanced cache eviction patterns via custom cache keys in versions 3.7.0 and later. This enables you to set a longer default TTL and increase the cache's hit rate, because you can selectively evict cached responses when relevant events occur.
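
The examples that follow assume a setup roughly like this one, with the response cache backed by Redis through Keyv. This is a minimal sketch for Apollo Server 4; typeDefs, resolvers, and the Redis URL are placeholders, and raising the default TTL via the cache control plugin is optional:

TypeScript
import { ApolloServer } from '@apollo/server';
import { ApolloServerPluginCacheControl } from '@apollo/server/plugin/cacheControl';
import responseCachePlugin from '@apollo/server-plugin-response-cache';
import { KeyvAdapter } from '@apollo/utils.keyvadapter';
import Keyv from 'keyv';

const server = new ApolloServer({
  typeDefs,   // placeholder: your schema
  resolvers,  // placeholder: your resolvers
  // Store cached responses in Redis via Keyv (requires @keyv/redis);
  // entries end up under keys prefixed with "keyv:fqc:".
  cache: new KeyvAdapter(new Keyv('redis://localhost:6379')),
  plugins: [
    // With selective eviction in place, a longer default TTL (here, one hour) becomes viable.
    ApolloServerPluginCacheControl({ defaultMaxAge: 60 * 60 }),
    responseCachePlugin(),
  ],
});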

Customizing the cache key

This approach works by defining custom response cache keys that follow a pattern you can later search for in the cache. What the key should comprise and how it should be structured depends on your use case and on the search patterns your cache implementation supports.

Ensuring cache key uniqueness

Keep in mind that each key links to a full response object, so if your key is too generic, you risk returning the wrong data for some queries. For example, generating a cache key based solely on the operation name would yield the same response for every operation with that name, even if the underlying queries are entirely different. Make sure your keys are unique for each incoming operation execution that returns different data.

Defining a custom cache key

As noted above, version 3.7.0 of the Full Response Cache plugin introduced the generateCacheKey configuration method. The value returned by this function is used as the cache key for storing the current operation's response.

Here's the method signature:

TypeScript
generateCacheKey(
  requestContext: GraphQLRequestContext<Record<string, any>>,
  keyData: unknown,
): string;

The requestContext parameter holds data about the running GraphQL request, such as the request and response objects as well as the context object that is passed to your resolver functions. Any portion of this data can be used as part of your cache key.

The keyData parameter can be used to ensure the uniqueness of your key. In most cases, hashing this value is enough to generate a unique key per operation. In fact, the default implementation uses a hash of JSON.stringify(keyData) as the cache key.

In this example, we prefix the default key with the name of the incoming operation:

TypeScript
import { createHash } from 'crypto';

function sha(s: string) {
  return createHash('sha256').update(s).digest('hex');
}

generateCacheKey(requestContext, keyData) {
  const operationName = requestContext.request.operationName ?? 'unnamed';
  const key = operationName + ':' + sha(JSON.stringify(keyData));
  return key;
}
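
To show where this function plugs in, here's a minimal sketch that passes it through the plugin's options object (assuming the sha helper from the example above; confirm the exact option shape against your plugin version):

TypeScript
import responseCachePlugin from '@apollo/server-plugin-response-cache';
import { createHash } from 'crypto';

function sha(s: string) {
  return createHash('sha256').update(s).digest('hex');
}

const cachePlugin = responseCachePlugin({
  // Prefix the default hashed key with the operation name so entries can
  // later be found with a "keyv:fqc:<operationName>:*" pattern.
  generateCacheKey(requestContext, keyData) {
    const operationName = requestContext.request.operationName ?? 'unnamed';
    return `${operationName}:${sha(JSON.stringify(keyData))}`;
  },
});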

An example key for the named operation “MyOpName”:

Bash
keyv:fqc:MyOpName:e7eed80930547ed4ab4ece81a18955967831ff4c40757eda9bf1f0de84e042f8

This approach keeps cache keys unique enough to store distinct responses, while giving us a pattern we can use to selectively remove cache entries based on operation names.

Evicting cache entries

There are two main strategies for evicting response cache entries: evicting manually from a shell prompt, or evicting in response to some event, like a mutation.

How you actually remove entries from the cache once a custom cache key is in place depends on your caching backend, as each offers different ways to list and remove keys. We'll explore both strategies using Redis.

Evicting manually

If you need to evict cache entries for local testing or debugging, it might suffice to define a custom cache key pattern and delete entries as needed with redis-cli.

Here's an example of removing all keys in a Redis instance that match a given pattern:

Bash
redis-cli --raw KEYS "$PATTERN" | xargs redis-cli del

This command lists every key matching the glob-style pattern in "$PATTERN" and passes them to DEL for removal.

Here's an example using the operation name prefix pattern described above to remove all entries with unnamed operations:

Bash
redis-cli --raw KEYS "keyv:fqc:unnamed*" | xargs redis-cli del

The Redis docs for the KEYS command recommend not using KEYS in production application code, and only running it against production with "extreme care." Redis instead recommends using SCAN, which is described in Event-based eviction below.
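
If you'd rather stay at the shell but avoid KEYS, redis-cli can drive the iteration itself with its --scan and --pattern options, which use SCAN under the hood. For example, to sketch the same unnamed-operation cleanup:

Bash
redis-cli --scan --pattern "keyv:fqc:unnamed*" | xargs redis-cli del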

The utility of this approach depends on the number of records stored in your cache, how performant the pattern search is, and how many records need to be removed. If your searches scan or return millions of records, avoid this approach in a production environment.

Event-based eviction

Most other use cases need to evict cache entries in response to certain events. The Response Cache Eviction repo provides a full walkthrough of evicting certain operation responses from the cache when a specific mutation is executed.

Redis clients currently offer no built-in way to batch-delete entries that match a pattern. As a result, our event-based solution follows a similar algorithm: look up keys by a pattern, then remove those keys.

The following snippet is from the repo mentioned above:

TypeScript
import { createClient } from 'redis';

const deleteByPrefix = async (prefix: string) => {
  const client = createClient({ url: 'redis://localhost:6379' });
  await client.connect();

  const scanIterator = client.scanIterator({
    MATCH: `keyv:fqc:${prefix}*`,
    COUNT: 2000,
  });

  const keys: string[] = [];

  for await (const key of scanIterator) {
    keys.push(key);
  }

  if (keys.length > 0) {
    await client.del(keys); // This is blocking; consider handling it asynchronously in production if the number of keys is large
  }

  await client.quit();

  return keys;
};

This solution uses the scanIterator function (which uses the Redis SCAN command under the hood) to iterate through cache entries in a memory-efficient way, as opposed to the KEYS approach shown above. SCAN is the more appropriate choice for a production environment.

The deleteByPrefix method can be added to your context object and then executed in your mutation resolvers to remove certain operations from the cache.
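
For example, here's a minimal sketch of calling it from a mutation resolver. The updateProduct mutation, the GetProduct operation name, and the updateProductInDatabase data-layer call are hypothetical; adapt them to your schema and context wiring:

TypeScript
// Assumes deleteByPrefix was attached to the context when creating the server.
interface MyContext {
  deleteByPrefix: (prefix: string) => Promise<string[]>;
}

const resolvers = {
  Mutation: {
    updateProduct: async (_parent: unknown, args: { id: string }, context: MyContext) => {
      const product = await updateProductInDatabase(args.id); // hypothetical data-layer call
      // Evict every cached response whose key starts with "GetProduct",
      // i.e. entries stored under "keyv:fqc:GetProduct:*".
      await context.deleteByPrefix('GetProduct');
      return product;
    },
  },
};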

Final thoughts

Either of the eviction solutions described above should be used with caution. Make sure you understand the effect your setup will have on your cache. It's a good idea to monitor your caching server while testing your different use cases to ensure that you aren't overloading it.
