Offset-based pagination
We recommend reading Core pagination API before learning about considerations specific to offset-based pagination.
With offset-based pagination, a list field accepts an offset argument that indicates where in the list the server should start when returning items for a particular query. The field usually also accepts a limit argument that indicates the maximum number of items to return:
type Query {
  feed(offset: Int, limit: Int): [FeedItem!]
}

type FeedItem {
  id: ID!
  message: String!
}
This pagination strategy works well for immutable lists, or for lists where each item's index never changes. In other cases, you should avoid it in favor of cursor-based pagination, because moving or removing items can shift offsets. This causes items to be skipped or duplicated if changes occur between paginated queries.
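To see how that happens, consider this purely illustrative timeline (the item labels and page size are assumptions for the example, not part of the schema above):
// Server-side list when page 1 is requested:  [A, B, C, D, E, F]
// feed(offset: 0, limit: 3) returns:          [A, B, C]
//
// Item B is then deleted, so the list becomes: [A, C, D, E, F]
// feed(offset: 3, limit: 3) returns:           [E, F]
//
// Item D shifted from index 3 to index 2 between the two queries,
// so it's never returned (skipped). An insertion at the front of the
// list would instead cause an item to appear twice (duplicated).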
Although it has limitations, offset-based pagination is a common pattern in many applications, in part because it's relatively straightforward to implement.
The offsetLimitPagination helper
Apollo Client provides an offsetLimitPagination helper function that you can use to generate a field policy for every relevant list field.
This example uses offsetLimitPagination to generate a field policy for Query.feed:
import { InMemoryCache } from "@apollo/client";
import { offsetLimitPagination } from "@apollo/client/utilities";

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: offsetLimitPagination(),
      },
    },
  },
});
This defines a merge function for the field that handles merging paginated results in the cache for you (see the source).
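If you're curious what that generated policy roughly does, here's a simplified sketch of its behavior (not a verbatim copy of the helper's source): it copies the existing cached items and writes each incoming item at args.offset plus its index.
// Simplified sketch of the field policy that offsetLimitPagination generates.
// See the helper's source for the exact implementation.
const feedPolicy = {
  keyArgs: false,
  merge(existing, incoming, { args }) {
    // Copy the existing items so the cached array isn't mutated in place.
    const merged = existing ? existing.slice(0) : [];
    const offset = args?.offset ?? 0;
    // Write each incoming item at its absolute position in the full list.
    for (let i = 0; i < incoming.length; ++i) {
      merged[offset + i] = incoming[i];
    }
    return merged;
  },
};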
Using with fetchMore
If you use offsetLimitPagination to set your feed policy as shown above, then you can use fetchMore with useQuery like so:
function FeedData() {
  const { loading, data, fetchMore } = useQuery(FEED_QUERY, {
    variables: {
      offset: 0,
      limit: 10
    },
  });

  // If you want your component to rerender with loading:true whenever
  // fetchMore is called, add notifyOnNetworkStatusChange:true to the
  // options you pass to useQuery above.
  if (loading) return <Loading/>;

  return (
    <Feed
      entries={data.feed || []}
      onLoadMore={() => fetchMore({
        variables: {
          offset: data.feed.length
        },
      })}
    />
  );
}
By default, fetchMore uses the original query and variables, so we only need to pass the variable that's changing: offset. When new data is returned from the server, it's automatically merged with any existing Query.feed data in the cache. This causes useQuery to rerender with the expanded list of data.
In this example, the Feed component receives the entire cached list (data.feed) every time it renders, which includes data from all pages received so far. This is a non-paginated read function.
Using with a paginated read function
In the example above, the GraphQL server returns individual pages of results, but each query then returns all cached results received so far. To limit each query's result to only the items you requested, you can include a paginated read function in your field policy.
Because the offsetLimitPagination helper is currently defining your field policy, you combine your read function with the helper's result, like so:
import { InMemoryCache } from "@apollo/client";
import { offsetLimitPagination } from "@apollo/client/utilities";

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          ...offsetLimitPagination(),
          read(existing, { args }) {
            // Implement here
          }
        }
      },
    },
  },
});
For example implementations, see Paginated read functions.
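As a starting point, a minimal sketch of such a read function (one reasonable implementation, not the only one) slices the cached list according to the offset and limit arguments:
feed: {
  ...offsetLimitPagination(),
  read(existing, { args: { offset = 0, limit = existing?.length } = {} }) {
    // Return only the requested window of the cached list. If nothing has
    // been cached yet, return undefined so the field is fetched from the server.
    return existing && existing.slice(offset, offset + limit);
  },
},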
If you use a paginated read function, you probably need to update your offset and limit variables as required by your use case after you call fetchMore. Otherwise, you'll continue rendering only the first page of results.
For example, to display all the data received so far, you could modify the previous example as follows:
const FeedData = () => {
  const [limit, setLimit] = useState(10);
  const { loading, data, fetchMore } = useQuery(FEED_QUERY, {
    variables: {
      offset: 0,
      limit,
    },
  });

  if (loading) return <Loading/>;

  return (
    <Feed
      entries={data.feed || []}
      onLoadMore={() => {
        const currentLength = data.feed.length;
        fetchMore({
          variables: {
            offset: currentLength,
            limit: 10,
          },
        }).then(fetchMoreResult => {
          // Update variables.limit for the original query to include
          // the newly added feed items.
          setLimit(currentLength + fetchMoreResult.data.feed.length);
        });
      }}
    />
  );
};
This code uses a React useState Hook to store the current limit value, which it updates by calling setLimit in a callback attached to the Promise returned by fetchMore.
You could store offset in a React useState Hook as well, if you need the offset to change. Exactly when and how these variables change is up to your component, and may not always be the result of calling fetchMore, so it makes sense to use React component state to store these variable values.
If you are not using React and useQuery, the ObservableQuery object returned by client.watchQuery has a method called setVariables that you can call to update the original variables.
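For example, a minimal sketch outside React might look like the following (FEED_QUERY and the page sizes here are illustrative assumptions):
const observable = client.watchQuery({
  query: FEED_QUERY,
  variables: { offset: 0, limit: 10 },
});

// After fetching another page, widen the original query's window so the
// observable result includes the newly cached items.
observable.fetchMore({ variables: { offset: 10, limit: 10 } }).then(() => {
  observable.setVariables({ offset: 0, limit: 20 });
});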
Because fetchMore requires some extra work to update the original variables if you're using a read function that is sensitive to those variables (the second kind of read function), it's fair to say fetchMore encourages the first kind of read function, which simply returns all available data.
However, now that you understand your options, there's nothing wrong with moving read-time pagination logic out of your application code and into your field read functions. Both kinds of read functions have their uses, and both can be made to work with fetchMore.
Setting keyArgs with offsetLimitPagination
If a paginated field accepts arguments besides offset and limit, you might need to specify the key arguments that indicate whether two result sets belong to the same list or different lists.
To set keyArgs for the field policy generated by offsetLimitPagination, provide an array of argument names to the function as a parameter:
fields: {
  // Results belong to the same list only if both the type
  // and userId arguments match exactly
  feed: offsetLimitPagination(["type", "userId"])
}
By default, offsetLimitPagination uses keyArgs: false (no key arguments).
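Because the helper simply returns a field policy object, passing key arguments this way should be roughly equivalent to spreading the helper's result and overriding keyArgs yourself:
fields: {
  feed: {
    // Roughly equivalent to offsetLimitPagination(["type", "userId"])
    ...offsetLimitPagination(),
    keyArgs: ["type", "userId"],
  },
}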