Cursor-based pagination
We recommend reading Core pagination API before learning about considerations specific to cursor-based pagination.
Using list element IDs as cursors
Since numeric offsets within paginated lists can be unreliable, a common improvement is to identify the beginning of a page using some unique identifier that belongs to each element of the list.
If the list represents a set of elements without duplicates, this identifier could simply be the unique ID of each object, allowing additional pages to be requested using the ID of the last object in the list, together with some limit argument. With this approach, the requested cursor ID should not appear in the new page, since it identifies the item just before the beginning of the page.
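For example, here's a minimal sketch of the client side of this scheme, assuming a feed field that accepts cursor and limit arguments (the Feed and Loading components, the FeedType enum, and the argument names are illustrative, not part of any particular API):

const FEED_QUERY = gql`
  query Feed($type: FeedType!, $cursor: ID, $limit: Int!) {
    feed(type: $type, cursor: $cursor, limit: $limit) {
      id
      # ... other feed item fields ...
    }
  }
`;

function FeedWithData() {
  const { data, loading, fetchMore } = useQuery(FEED_QUERY, {
    variables: { type: "PUBLIC", limit: 10 },
  });

  if (loading) return <Loading/>;

  const items = data.feed;
  return (
    <Feed
      entries={items}
      onLoadMore={() => fetchMore({
        // fetchMore reuses the original variables (type and limit),
        // so we only override the cursor, passing the id of the last
        // item we currently have.
        variables: {
          cursor: items.length ? items[items.length - 1].id : null,
        },
      })}
    />
  );
}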
Since the elements of the list could be normalized Reference objects, you will probably want to use the options.readField helper function to read the id field in your merge and/or read functions:
const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          keyArgs: ["type"],

          merge(existing, incoming, {
            args: { cursor },
            readField,
          }) {
            const merged = existing ? existing.slice(0) : [];
            let offset = offsetFromCursor(merged, cursor, readField);
            // If we couldn't find the cursor, default to appending to
            // the end of the list, so we don't lose any data.
            if (offset < 0) offset = merged.length;
            // Now that we have a reliable offset, the rest of this logic
            // is the same as in offsetLimitPagination.
            for (let i = 0; i < incoming.length; ++i) {
              merged[offset + i] = incoming[i];
            }
            return merged;
          },

          // If you always want to return the whole list, you can omit
          // this read function.
          read(existing, {
            // Optional chaining guards the first read, when existing is
            // still undefined and the query provided no limit argument.
            args: { cursor, limit = existing?.length },
            readField,
          }) {
            if (existing) {
              let offset = offsetFromCursor(existing, cursor, readField);
              // If we couldn't find the cursor, default to reading the
              // entire list.
              if (offset < 0) offset = 0;
              return existing.slice(offset, offset + limit);
            }
          },
        },
      },
    },
  },
});

function offsetFromCursor(items, cursor, readField) {
  // Search from the back of the list because the cursor we're
  // looking for is typically the ID of the last item.
  for (let i = items.length - 1; i >= 0; --i) {
    const item = items[i];
    // Using readField works for both non-normalized objects
    // (returning item.id) and normalized references (returning
    // the id field from the referenced entity object), so it's
    // a good idea to use readField when you're not sure what
    // kind of elements you're dealing with.
    if (readField("id", item) === cursor) {
      // Add one because the cursor identifies the item just
      // before the first item in the page we care about.
      return i + 1;
    }
  }
  // Report that the cursor could not be found.
  return -1;
}
Since items can be removed from, added to, or moved around within the list without altering their id fields, this pagination strategy tends to be more resilient to list mutations than the offset-based strategy we saw above.

However, this strategy works best when your merge function always appends new pages to the existing data, since it doesn't take any precautions to avoid overwriting elements if the cursor falls somewhere in the middle of the existing data.
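If that matters for your use case, one possible precaution (a sketch, not part of the example above) is to discard everything after the cursor before appending the incoming page, so a refetched page replaces stale items instead of partially overwriting them:

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          keyArgs: ["type"],
          merge(existing, incoming, { args: { cursor }, readField }) {
            const existingItems = existing ? existing.slice(0) : [];
            // offsetFromCursor is the helper function defined above.
            let offset = offsetFromCursor(existingItems, cursor, readField);
            // If we couldn't find the cursor, append to the end as before.
            if (offset < 0) offset = existingItems.length;
            // Truncate at the offset so the incoming page replaces anything
            // previously stored after the cursor, instead of interleaving
            // with it.
            return existingItems.slice(0, offset).concat(incoming);
          },
        },
      },
    },
  },
});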
Using a map to store unique items
If your paginated field logically represents a set of unique items, you can store it internally using a more convenient data structure than an array.
In fact, your merge function can return internal data in any format you like, as long as your read function cooperates by turning that internal representation back into a list:
const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          keyArgs: ["type"],

          // While args.cursor may still be important for requesting
          // a given page, it no longer has any role to play in the
          // merge function.
          merge(existing, incoming, { readField }) {
            const merged = { ...existing };
            incoming.forEach(item => {
              merged[readField("id", item)] = item;
            });
            return merged;
          },

          // Return all items stored so far, to avoid ambiguities
          // about the order of the items.
          read(existing) {
            return existing && Object.values(existing);
          },
        },
      },
    },
  },
});
With this internal representation, you no longer have to worry about incoming items overwriting unrelated existing items, because an assignment to the map can only replace an item with the same id field.

However, this approach leaves an important question unanswered: what cursor should we use when requesting the next page? Thanks to the predictable ordering of JavaScript object keys by insertion order, you should be able to use the id field of the last element returned by the read function as the cursor for the next request, though you're not alone if relying on this behavior makes you nervous. In the next section we'll see a slightly different approach that makes the next cursor more explicit.
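Before moving on, here's a minimal sketch of deriving the cursor that way, assuming the feed field policy above and a query that selects the id field (note that object keys that look like array indices are iterated in ascending numeric order rather than insertion order, one concrete reason for that nervousness):

// The list returned by the read function reflects the map's
// insertion order, so its last element is the most recently
// merged item.
const items = data.feed;

fetchMore({
  variables: {
    cursor: items.length ? items[items.length - 1].id : null,
  },
});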
Keeping cursors separate from items
Pagination cursors are often derived from ID fields of list items, but not always. In cases where the list could have duplicates, or is sorted or filtered according to some criteria, the cursor may need to encode not just a position within the list but also the sorting/filtering logic that produced the list. In such situations, the cursor does not logically belong to the elements of the list, so it makes sense to return it separately from the list:
const MORE_COMMENTS_QUERY = gql`
  query MoreComments($cursor: String, $limit: Int!) {
    moreComments(cursor: $cursor, limit: $limit) {
      cursor
      comments {
        id
        author
        text
      }
    }
  }
`;

function CommentsWithData() {
  const {
    data,
    loading,
    fetchMore,
  } = useQuery(MORE_COMMENTS_QUERY, {
    variables: { limit: 10 },
  });

  if (loading) return <Loading/>;

  return (
    <Comments
      entries={data.moreComments.comments || []}
      onLoadMore={() => fetchMore({
        variables: {
          cursor: data.moreComments.cursor,
        },
      })}
    />
  );
}
To demonstrate the flexibility of the field policy system, here's an implementation of the Query.moreComments field that uses a map internally, but returns an array of unique comments:
const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        moreComments: {
          keyArgs: false,
          merge(existing, incoming, { readField }) {
            const comments = existing ? { ...existing.comments } : {};
            incoming.comments.forEach(comment => {
              comments[readField("id", comment)] = comment;
            });
            return {
              cursor: incoming.cursor,
              comments,
            };
          },

          read(existing) {
            if (existing) {
              return {
                cursor: existing.cursor,
                comments: Object.values(existing.comments),
              };
            }
          },
        },
      },
    },
  },
});
Now there's less ambiguity about where the next cursor comes from, because it is explicitly stored and returned as part of the query.
Relay-style cursor pagination
The InMemoryCache field policy API allows for any conceivable style of pagination, even though some of the simpler approaches have known drawbacks.
If you were designing a GraphQL client without the flexibility that read and merge functions provide, you would most likely attempt to standardize around a one-size-fits-all style of pagination that you felt was sophisticated enough to support most use cases. That's the path Relay, another popular GraphQL client, has chosen with their Cursor Connections Specification. As a consequence, a number of public GraphQL APIs have adopted the Relay connection specification to be maximally compatible with Relay clients.
Using Relay-style connections is similar to cursor-based pagination, but differs in the format of the query response, which affects the way cursors are managed. In addition to connection.edges, which is a list of { cursor, node } objects, where each edge.node is a list item, Relay provides a connection.pageInfo object which gives the cursors of the first and last items in connection.edges as connection.pageInfo.startCursor and connection.pageInfo.endCursor, respectively. The pageInfo object also contains the boolean properties hasPreviousPage and hasNextPage, which can be used to determine if there are more results available (both forwards and backwards):
const COMMENTS_QUERY = gql`
  query Comments($cursor: String) {
    comments(first: 10, after: $cursor) {
      edges {
        node {
          author
          text
        }
      }
      pageInfo {
        endCursor
        hasNextPage
      }
    }
  }
`;

function CommentsWithData() {
  const { data, loading, fetchMore } = useQuery(COMMENTS_QUERY);

  if (loading) return <Loading />;

  const nodes = data.comments.edges.map((edge) => edge.node);
  const pageInfo = data.comments.pageInfo;

  return (
    <Comments
      entries={nodes}
      onLoadMore={() => {
        if (pageInfo.hasNextPage) {
          fetchMore({
            variables: {
              cursor: pageInfo.endCursor,
            },
          });
        }
      }}
    />
  );
}
Fortunately, Relay-style pagination can be implemented in Apollo Client using merge and read functions, which means all the thorny details of connections and edges and pageInfo can be abstracted away into a single, reusable helper function:
import { relayStylePagination } from "@apollo/client/utilities";

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        comments: relayStylePagination(),
      },
    },
  },
});
Whenever you need to consume a Relay pagination API using Apollo Client, relayStylePagination is a great tool to try first, even if you end up copy/pasting its code and making changes to suit your specific needs.
Note that the relayStylePagination function generates a field policy with a read function that simply returns all available data, ignoring args, which makes relayStylePagination easier to use with fetchMore. This is a non-paginated read function. There's nothing stopping you from adapting this read function to use args to return individual pages, as long as you remember to update the variables of your original query after calling fetchMore.
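For example, here's one way you might sketch that adaptation. Rather than copying the helper's code, this wraps the policy it generates; it assumes each edge returned by the generated read function carries its cursor, and every name other than relayStylePagination and InMemoryCache is illustrative:

import { InMemoryCache } from "@apollo/client";
import { relayStylePagination } from "@apollo/client/utilities";

const basePolicy = relayStylePagination();

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        comments: {
          ...basePolicy,
          read(existing, options) {
            // Let the generated read function assemble the full connection.
            const connection = basePolicy.read(existing, options);
            if (!connection) return connection;

            const { after, first } = options.args || {};
            let edges = connection.edges;

            // Keep only the edges after the requested cursor...
            if (after) {
              const index = edges.findIndex(edge => edge.cursor === after);
              if (index >= 0) edges = edges.slice(index + 1);
            }
            // ...limited to the requested page size.
            if (typeof first === "number") {
              edges = edges.slice(0, first);
            }

            return { ...connection, edges };
          },
        },
      },
    },
  },
});

With a paginated read function like this, remember that the original query keeps its original variables, so after fetchMore completes you need to re-render the query with the new cursor variables, or it will keep displaying the first page.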