Customizing the behavior of cached fields
You can customize how a particular field in your Apollo Client cache is read and written. To do so, you define a field policy for the field. A field policy can include:
- A `read` function that specifies what happens when the field's cached value is read
- A `merge` function that specifies what happens when the field's cached value is written
- An array of key arguments that help the cache avoid storing unnecessary duplicate data
You provide field policies to the constructor of `InMemoryCache`. Each field policy is defined inside whichever `TypePolicy` object corresponds to the field's parent type.
The following example defines a field policy for the `name` field of a `Person` type:
```ts
const cache = new InMemoryCache({
  typePolicies: {
    Person: {
      fields: {
        name: {
          read(name) {
            // Return the cached name, transformed to upper case
            return name.toUpperCase();
          }
        }
      },
    },
  },
});
```
This field policy defines a `read` function that specifies what the cache returns whenever `Person.name` is queried.
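As a rough illustration of the effect, here's a sketch of reading that field back out of the cache defined above (the `Person:1` cache ID and the cached name are assumptions for the example):

```ts
import { gql } from "@apollo/client";

// Sketch: read the name field back from the cache configured above.
// The read function runs on the way out, so the result is upper-cased.
const data = cache.readFragment({
  id: "Person:1", // assumes Person uses the default `id` key field
  fragment: gql`
    fragment PersonName on Person {
      name
    }
  `,
});
// If the cached name is "Ada Lovelace", data?.name is "ADA LOVELACE".
```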
The `read` function
If you define a `read` function for a field, the cache calls that function whenever your client queries for the field. In the query response, the field is populated with the `read` function's return value, instead of the field's cached value.
Every `read` function is passed two parameters:

- The first parameter is the field's currently cached value (if one exists). You can use this to help calculate the value to return.
- The second parameter is an object that provides access to several properties and helper functions, including any arguments passed to the field. See the fields of the `FieldFunctionOptions` type in the `FieldPolicy` API reference.
The following `read` function returns a default value of `UNKNOWN NAME` for the `name` field of a `Person` type whenever a value isn't available in the cache. If a cached value is available, it's returned unmodified.
```ts
const cache = new InMemoryCache({
  typePolicies: {
    Person: {
      fields: {
        name: {
          read(name = "UNKNOWN NAME") {
            return name;
          }
        },
      },
    },
  },
});
```
Handling field arguments
If a field accepts arguments, the `read` function's second parameter includes an `args` object that contains the values provided for those arguments.
For example, the following `read` function checks whether the `maxLength` argument was provided for the `name` field. If it was, the function returns only the first `maxLength` characters of the person's name. Otherwise, the person's full name is returned.
```ts
const cache = new InMemoryCache({
  typePolicies: {
    Person: {
      fields: {
        // If a field's TypePolicy would only include a read function,
        // you can optionally define the function like so, instead of
        // nesting it inside an object as shown in the previous example.
        name(name: string, { args }) {
          if (args && typeof args.maxLength === "number") {
            return name.substring(0, args.maxLength);
          }
          return name;
        },
      },
    },
  },
});
```
If a field's value is an object with multiple subfields, you can supply a default for each subfield by giving the `read` function's parameter a default object and returning a copy of it. Each subfield is then defaulted individually.
The following `read` function assigns a default value of `UNKNOWN FIRST NAME` to the `firstName` subfield of a `fullName` field, and `UNKNOWN LAST NAME` to its `lastName` subfield.
```ts
const cache = new InMemoryCache({
  typePolicies: {
    Person: {
      fields: {
        fullName: {
          read(fullName = {
            firstName: "UNKNOWN FIRST NAME",
            lastName: "UNKNOWN LAST NAME",
          }) {
            return { ...fullName };
          },
        },
      },
    },
  },
});
```
The following query returns the `firstName` and `lastName` subfields from the `fullName` field:
```graphql
query personWithFullName {
  fullName {
    firstName
    lastName
  }
}
```
You can define a `read` function for a field that isn't even defined in your schema. For example, the following `read` function enables you to query a `userId` field that is always populated with locally stored data:
```ts
const cache = new InMemoryCache({
  typePolicies: {
    Person: {
      fields: {
        userId() {
          return localStorage.getItem("loggedInUserId");
        },
      },
    },
  },
});
```
Note that to query for a field that is only defined locally, your query should include the `@client` directive on that field so that Apollo Client doesn't include it in requests to your GraphQL server.
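For example, a query that selects the locally defined `userId` field might look like the following (a sketch; the `person` field and its `id` argument are illustrative and not part of any schema shown above):

```graphql
query PersonWithLocalUserId {
  person(id: "123") {
    name
    # Resolved by the cache's read function, never sent to the server.
    userId @client
  }
}
```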
Other use cases for a `read` function include:

- Transforming cached data to suit your client's needs, such as rounding floating-point values to the nearest integer
- Deriving local-only fields from one or more schema fields on the same object, such as deriving an `age` field from a `birthDate` field (sketched below)
- Deriving local-only fields from one or more schema fields across multiple objects
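For instance, an `age` field could be derived from a cached `birthDate` field with a `read` function like this (a sketch; it assumes the `Person` type has a `birthDate` field containing an ISO-8601 date string):

```ts
import { InMemoryCache } from "@apollo/client";

const cache = new InMemoryCache({
  typePolicies: {
    Person: {
      fields: {
        age: {
          read(_, { readField }) {
            // Derive the local-only age field from the cached birthDate field.
            const birthDate = readField<string>("birthDate");
            if (!birthDate) return undefined;
            const msPerYear = 1000 * 60 * 60 * 24 * 365.25;
            return Math.floor((Date.now() - Date.parse(birthDate)) / msPerYear);
          },
        },
      },
    },
  },
});
```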
For a full list of the options provided to the `read` function, see the API reference. You will almost never need to use all of these options, but each one has an important role when reading fields from the cache.
The `merge` function
If you define a `merge` function for a field, the cache calls that function whenever the field is about to be written with an incoming value (such as from your GraphQL server). When the write occurs, the field's new value is set to the `merge` function's return value, instead of the original incoming value.
Merging arrays
A common use case for a `merge` function is to define how to write to a field that holds an array. By default, the field's existing array is completely replaced by the incoming array. In many cases, it's preferable to concatenate the two arrays instead, like so:
```ts
const cache = new InMemoryCache({
  typePolicies: {
    Agenda: {
      fields: {
        tasks: {
          merge(existing = [], incoming: any[]) {
            return [...existing, ...incoming];
          },
        },
      },
    },
  },
});
```
This pattern is especially common when working with paginated lists.
Note that `existing` is undefined the very first time this function is called for a given instance of the field, because the cache does not yet contain any data for the field. Providing the `existing = []` default parameter is a convenient way to handle this case.
Your `merge` function cannot push the `incoming` array directly onto the `existing` array. It must instead return a new array to prevent potential errors. In development mode, Apollo Client prevents unintended modification of the `existing` data with `Object.freeze`.
Merging non-normalized objects
You can use a `merge` function to intelligently combine nested objects that are not normalized in your cache, assuming those objects are nested within the same normalized parent.
Example
Let's say our graph's schema includes the following types:
```graphql
type Book {
  id: ID!
  title: String!
  author: Author!
}

type Author { # Has no key fields
  name: String!
  dateOfBirth: String!
}

type Query {
  favoriteBook: Book!
}
```
With this schema, our cache can normalize `Book` objects because they have an `id` field. However, `Author` objects have no `id` field, and they also have no other fields that can uniquely identify a particular instance. Therefore, the cache can't normalize `Author` objects, and it can't tell when two different `Author` objects actually represent the same author.
Now, let's say our client executes the following two queries, in order:
```graphql
query BookWithAuthorName {
  favoriteBook {
    id
    author {
      name
    }
  }
}

query BookWithAuthorBirthdate {
  favoriteBook {
    id
    author {
      dateOfBirth
    }
  }
}
```
When the first query returns, Apollo Client writes a `Book` object like the following to the cache:
```json
{
  "__typename": "Book",
  "id": "abc123",
  "author": {
    "__typename": "Author",
    "name": "George Eliot"
  }
}
```
Remember that because `Author` objects can't be normalized, they're nested directly within their parent object.
Now, when the second query returns, the cached `Book` object is updated to the following:
```json
{
  "__typename": "Book",
  "id": "abc123",
  "author": {
    "__typename": "Author",
    "dateOfBirth": "1819-11-22"
  }
}
```
The `Author`'s `name` field has been removed! This is because Apollo Client can't be sure that the `Author` objects returned by the two queries actually refer to the same author. So instead of merging fields of the two objects, Apollo Client completely overwrites the object (and logs a warning).
However, we are confident that these two objects represent the same author, because a book's author virtually never changes. Therefore, we can tell the cache to treat `Book.author` objects as the same object as long as they belong to the same `Book`. This enables the cache to merge the `name` and `dateOfBirth` fields returned by the different queries above.
To achieve this, we can define a custom `merge` function for the `author` field within the type policy for `Book`:
```ts
const cache = new InMemoryCache({
  typePolicies: {
    Book: {
      fields: {
        author: {
          merge(existing, incoming, { mergeObjects }) {
            return mergeObjects(existing, incoming);
          },
        },
      },
    },
  },
});
```
Here, we use the `mergeObjects` helper function to merge values from the `existing` and `incoming` `Author` objects. It's important to use `mergeObjects` here instead of merging the objects with object spread syntax, because `mergeObjects` makes sure to call any defined `merge` functions for subfields of `Book.author`.
Notice that this `merge` function has zero `Book`- or `Author`-specific logic in it! This means you can reuse it for any number of non-normalized object fields. And because this exact `merge` function definition is so common, you can also define it with the following shorthand:
```ts
const cache = new InMemoryCache({
  typePolicies: {
    Book: {
      fields: {
        author: {
          // Equivalent to options.mergeObjects(existing, incoming).
          merge: true,
        },
      },
    },
  },
});
```
In summary, the `Book.author` policy above enables the cache to intelligently merge all of the `author` objects associated with any particular normalized `Book` object.
Remember that for `merge: true` to merge two non-normalized objects, all of the following must be true:

- The two objects must occupy the exact same field of the exact same normalized object in the cache.
- The two objects must have the same `__typename`. This is important for fields with an interface or union return type, which might return one of multiple object types.

If you require behavior that violates either of these rules, you need to write a custom `merge` function instead of using `merge: true`.
Merging arrays of non-normalized objects
Make sure you've read Merging arrays and Merging non-normalized objects first.
Consider what happens if a `Book` can have multiple `authors`:
```graphql
query BookWithAuthorNames {
  favoriteBook {
    isbn
    title
    authors {
      name
    }
  }
}

query BookWithAuthorLanguages {
  favoriteBook {
    isbn
    title
    authors {
      language
    }
  }
}
```
The `favoriteBook.authors` field contains a list of non-normalized `Author` objects. In this case, we need to define a more sophisticated `merge` function to make sure the `name` and `language` fields returned by the two queries above are correctly associated with each other.
```ts
const cache = new InMemoryCache({
  typePolicies: {
    Book: {
      fields: {
        authors: {
          merge(existing: any[], incoming: any[], { readField, mergeObjects }) {
            const merged: any[] = existing ? existing.slice(0) : [];
            const authorNameToIndex: Record<string, number> = Object.create(null);
            if (existing) {
              existing.forEach((author, index) => {
                authorNameToIndex[readField<string>("name", author)] = index;
              });
            }
            incoming.forEach(author => {
              const name = readField<string>("name", author);
              const index = authorNameToIndex[name];
              if (typeof index === "number") {
                // Merge the new author data with the existing author data.
                merged[index] = mergeObjects(merged[index], author);
              } else {
                // First time we've seen this author in this array.
                authorNameToIndex[name] = merged.length;
                merged.push(author);
              }
            });
            return merged;
          },
        },
      },
    },
  },
});
```
Instead of replacing the existing `authors` array with the incoming array, this code concatenates the arrays together, while also checking for duplicate author names. Whenever a duplicate name is found, the fields of the repeated `Author` objects are merged.
The `readField` helper function is more robust than using `author.name` directly, because it tolerates the possibility that the `author` is a `Reference` object referring to data elsewhere in the cache. This is important if the `Author` type eventually defines `keyFields` and therefore becomes normalized.
As this example suggests, `merge` functions can become quite sophisticated. When this happens, you can often extract the generic logic into a reusable helper function:
```ts
const cache = new InMemoryCache({
  typePolicies: {
    Book: {
      fields: {
        authors: {
          merge: mergeArrayByField<AuthorType>("name"),
        },
      },
    },
  },
});
```
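Apollo Client doesn't ship a `mergeArrayByField` helper; it stands in for whatever abstraction you extract yourself. One possible sketch, generalizing the `authors` merge function above to any key field, might look like this:

```ts
// Sketch of a reusable helper (hypothetical, not an Apollo Client export):
// merge arrays of non-normalized objects by matching elements on a key field.
function mergeArrayByField<T>(keyFieldName: string) {
  return (
    existing: readonly T[] | undefined,
    incoming: readonly T[],
    // In a real project you'd type this parameter with Apollo's
    // FieldFunctionOptions; `any` keeps the sketch short.
    { readField, mergeObjects }: any,
  ): T[] => {
    const merged: T[] = existing ? existing.slice(0) : [];
    const keyToIndex: Record<string, number> = Object.create(null);
    merged.forEach((item, index) => {
      keyToIndex[readField(keyFieldName, item)] = index;
    });
    incoming.forEach(item => {
      const key = readField(keyFieldName, item);
      const index = keyToIndex[key];
      if (typeof index === "number") {
        // We've seen this key before: merge the two elements' fields.
        merged[index] = mergeObjects(merged[index], item);
      } else {
        // First occurrence of this key in the array.
        keyToIndex[key] = merged.length;
        merged.push(item);
      }
    });
    return merged;
  };
}
```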
Now that you've hidden the details behind a reusable abstraction, it no longer matters how complicated the implementation gets. This is liberating, because it allows you to improve your client-side business logic over time, while keeping related logic consistent across your entire application.
Defining a `merge` function at the type level
In Apollo Client 3.3 and later, you can define a default `merge` function for a non-normalized object type. If you do, every field that returns that type uses your default `merge` function unless it's overridden on a field-by-field basis.
You define this default `merge` function in the type policy for the non-normalized type. Here's what that looks like for the non-normalized `Author` type from Merging non-normalized objects:
```ts
const cache = new InMemoryCache({
  typePolicies: {
    Book: {
      fields: {
        // No longer required!
        // author: {
        //   merge: true,
        // },
      },
    },

    Author: {
      merge: true,
    },
  },
});
```
As shown above, the field-level `merge` function for `Book.author` is no longer required. The net result in this basic example is identical, but this strategy automatically applies the default `merge` function to any other `Author`-returning fields you might add in the future (such as `Essay.author`).
Handling pagination
When a field holds an array, it's often useful to paginate that array's results, because the total result set can be arbitrarily large.
Typically, a query includes pagination arguments that specify:

- Where to start in the array, using either a numeric offset or a starting ID
- The maximum number of elements to return in a single "page"
If you implement pagination for a field, it's important to keep pagination arguments in mind if you then implement `read` and `merge` functions for the field:
```ts
const cache = new InMemoryCache({
  typePolicies: {
    Agenda: {
      fields: {
        tasks: {
          merge(existing: any[], incoming: any[], { args }) {
            const merged = existing ? existing.slice(0) : [];
            // Insert the incoming elements in the right places, according to args.
            const end = args.offset + Math.min(args.limit, incoming.length);
            for (let i = args.offset; i < end; ++i) {
              merged[i] = incoming[i - args.offset];
            }
            return merged;
          },

          read(existing: any[], { args }) {
            // If we read the field before any data has been written to the
            // cache, this function will return undefined, which correctly
            // indicates that the field is missing.
            const page = existing && existing.slice(
              args.offset,
              args.offset + args.limit,
            );
            // If we ask for a page outside the bounds of the existing array,
            // page.length will be 0, and we should return undefined instead of
            // the empty array.
            if (page && page.length > 0) {
              return page;
            }
          },
        },
      },
    },
  },
});
```
As this example shows, your `read` function often needs to cooperate with your `merge` function, by handling the same arguments in the inverse direction.
If you want a given "page" to start after a specific entity ID instead of starting from `args.offset`, you can implement your `merge` and `read` functions as follows, using the `readField` helper function to examine existing task IDs:
```ts
const cache = new InMemoryCache({
  typePolicies: {
    Agenda: {
      fields: {
        tasks: {
          merge(existing: any[], incoming: any[], { args, readField }) {
            const merged = existing ? existing.slice(0) : [];
            // Obtain a Set of all existing task IDs.
            const existingIdSet = new Set(
              merged.map(task => readField("id", task)));
            // Remove incoming tasks already present in the existing data.
            incoming = incoming.filter(
              task => !existingIdSet.has(readField("id", task)));
            // Find the index of the task just before the incoming page of tasks.
            const afterIndex = merged.findIndex(
              task => args.afterId === readField("id", task));
            if (afterIndex >= 0) {
              // If we found afterIndex, insert incoming after that index.
              merged.splice(afterIndex + 1, 0, ...incoming);
            } else {
              // Otherwise insert incoming at the end of the existing data.
              merged.push(...incoming);
            }
            return merged;
          },

          read(existing: any[], { args, readField }) {
            if (existing) {
              const afterIndex = existing.findIndex(
                task => args.afterId === readField("id", task));
              if (afterIndex >= 0) {
                const page = existing.slice(
                  afterIndex + 1,
                  afterIndex + 1 + args.limit,
                );
                if (page && page.length > 0) {
                  return page;
                }
              }
            }
          },
        },
      },
    },
  },
});
```
Note that if you call `readField(fieldName)`, it returns the value of the specified field from the current object. If you pass an object as a second argument to `readField` (e.g., `readField("id", task)`), `readField` instead reads the specified field from the specified object. In the above example, reading the `id` field from existing `Task` objects allows us to deduplicate the `incoming` task data.
The pagination code above is complicated, but after you implement your preferred pagination strategy, you can reuse it for every field that uses that strategy, regardless of the field's type. For example:
```ts
function afterIdLimitPaginatedFieldPolicy<T>() {
  return {
    merge(existing: T[], incoming: T[], { args, readField }): T[] {
      ...
    },
    read(existing: T[], { args, readField }): T[] {
      ...
    },
  };
}

const cache = new InMemoryCache({
  typePolicies: {
    Agenda: {
      fields: {
        tasks: afterIdLimitPaginatedFieldPolicy<Reference>(),
      },
    },
  },
});
```
Disabling `merge` functions
In some cases, you might want to completely disable `merge` functions for certain fields. To do so, pass `merge: false`, like so:
```ts
const cache = new InMemoryCache({
  typePolicies: {
    Book: {
      fields: {
        // No longer necessary!
        // author: {
        //   merge: true,
        // },
      },
    },

    Author: {
      merge: false,
    },
  },
});
```
Specifying key arguments
If a field accepts arguments, you can specify an array of `keyArgs` in the field's `FieldPolicy`. This array indicates which arguments are key arguments that affect the field's return value. Specifying this array can help reduce the amount of duplicate data in your cache.
Example
Let's say your schema's `Query` type includes a `monthForNumber` field. This field returns the details of a particular month, given a provided `number` argument (January for `1`, and so on). The `number` argument is a key argument for this field, because its value affects the field's return value:
```ts
const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        monthForNumber: {
          keyArgs: ["number"],
        },
      },
    },
  },
});
```
An example of a non-key argument is an access token, which is used to authorize a query but not to calculate its result. If `monthForNumber` also accepts an `accessToken` argument, the value of that argument does not affect which month's details are returned.
By default, all of a field's arguments are key arguments. This means that the cache stores a separate value for every unique combination of argument values you provide when querying a particular field.
If you specify a field's key arguments, the cache understands that the rest of that field's arguments aren't key arguments. This means that the cache doesn't need to store a completely separate value when a non-key argument changes.
For example, let's say you execute two different queries with the `monthForNumber` field, passing the same `number` argument but different `accessToken` arguments. In this case, the second query response will overwrite the first, because both invocations use an identical value for the only key argument.
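For instance (a sketch; the month's subfields are illustrative), these two operations differ only in `accessToken`, so with `keyArgs: ["number"]` they read from and write to the same cache entry:

```graphql
query MonthQueryA {
  monthForNumber(number: 3, accessToken: "token-A") {
    name
  }
}

query MonthQueryB {
  monthForNumber(number: 3, accessToken: "token-B") {
    name
  }
}
```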
Providing a `keyArgs` function
If you need more control over a particular field's `keyArgs`, you can pass a function instead of an array of argument names. This `keyArgs` function takes two parameters:

- An `args` object containing all argument values provided for the field
- A `context` object providing other relevant details

For details, see `KeyArgsFunction` in the API reference below.
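As a minimal sketch (reusing the `monthForNumber` field from the example above), a `keyArgs` function that behaves like `keyArgs: ["number"]` could look like this:

```ts
import { InMemoryCache } from "@apollo/client";

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        monthForNumber: {
          keyArgs(args, context) {
            // args: the argument values provided for the field (or null).
            // context: includes typename, fieldName, field, and variables.
            // Returning the KeySpecifier ["number"] here is equivalent to
            // writing keyArgs: ["number"] directly.
            return ["number"];
          },
        },
      },
    },
  },
});
```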
`FieldPolicy` API reference
Here are the definitions for the `FieldPolicy` type and its related types:
```ts
// These generic type parameters will be inferred from the provided policy in
// most cases, though you can use this type to constrain them more precisely.
type FieldPolicy<
  TExisting,
  TIncoming = TExisting,
  TReadResult = TExisting,
> = {
  keyArgs?: KeySpecifier | KeyArgsFunction | false;
  read?: FieldReadFunction<TExisting, TReadResult>;
  merge?: FieldMergeFunction<TExisting, TIncoming> | boolean;
};

type KeySpecifier = (string | KeySpecifier)[];

type KeyArgsFunction = (
  args: Record<string, any> | null,
  context: {
    typename: string;
    fieldName: string;
    field: FieldNode | null;
    variables?: Record<string, any>;
  },
) => string | KeySpecifier | null | void;

type FieldReadFunction<TExisting, TReadResult = TExisting> = (
  existing: Readonly<TExisting> | undefined,
  options: FieldFunctionOptions,
) => TReadResult;

type FieldMergeFunction<TExisting, TIncoming = TExisting> = (
  existing: Readonly<TExisting> | undefined,
  incoming: Readonly<TIncoming>,
  options: FieldFunctionOptions,
) => TExisting;

// These options are common to both read and merge functions:
interface FieldFunctionOptions {
  cache: InMemoryCache;

  // The final argument values passed to the field, after applying variables.
  // If no arguments were provided, this property will be null.
  args: Record<string, any> | null;

  // The name of the field, equal to options.field.name.value when
  // options.field is available. Useful if you reuse the same function for
  // multiple fields, and you need to know which field you're currently
  // processing. Always a string, even when options.field is null.
  fieldName: string;

  // The FieldNode object used to read this field. Useful if you need to
  // know about other attributes of the field, such as its directives. This
  // option will be null when a string was passed to options.readField.
  field: FieldNode | null;

  // The variables that were provided when reading the query that contained
  // this field. Possibly undefined, if no variables were provided.
  variables?: Record<string, any>;

  // Easily detect { __ref: string } reference objects.
  isReference(obj: any): obj is Reference;

  // Returns a Reference object if obj can be identified, which requires,
  // at minimum, a __typename and any necessary key fields. If true is
  // passed for the optional mergeIntoStore argument, the object's fields
  // will also be persisted into the cache, which can be useful to ensure
  // the Reference actually refers to data stored in the cache. If you
  // pass an ID string, toReference will make a Reference out of it. If
  // you pass a Reference, toReference will return it as-is.
  toReference(
    objOrIdOrRef: StoreObject | string | Reference,
    mergeIntoStore?: boolean,
  ): Reference | undefined;

  // Helper function for reading other fields within the current object.
  // If a foreign object or reference is provided, the field will be read
  // from that object instead of the current object, so this function can
  // be used (together with isReference) to examine the cache outside the
  // current object. If a FieldNode is passed instead of a string, and
  // that FieldNode has arguments, the same options.variables will be used
  // to compute the argument values. Note that this function will invoke
  // custom read functions for other fields, if defined. Always returns
  // immutable data (enforced with Object.freeze in development).
  readField<T = StoreValue>(
    nameOrField: string | FieldNode,
    foreignObjOrRef?: StoreObject | Reference,
  ): T;

  // Returns true for non-normalized StoreObjects and non-dangling
  // References, indicating that readField(name, objOrRef) has a chance of
  // working. Useful for filtering out dangling references from lists.
  canRead(value: StoreValue): boolean;

  // A handy place to put field-specific data that you want to survive
  // across multiple read function calls. Useful for field-level caching,
  // if your read function does any expensive work.
  storage: Record<string, any>;

  // Instead of just merging objects with { ...existing, ...incoming }, this
  // helper function can be used to merge objects in a way that respects any
  // custom merge functions defined for their fields.
  mergeObjects<T extends StoreObject | Reference>(
    existing: T,
    incoming: T,
  ): T | undefined;
}
```