3. In-memory caching
10m

Overview

The router comes equipped to do a lot of its own caching, without any additional configuration required from us. From query plans to automatic persisted queries, the cache accelerates our router's ability to execute the same queries over and over again.

In this lesson, we will:

  • Learn about query plans
  • Inspect the query plans the router generates
  • Learn how to warm up the cache on schema changes

Query plans

When the router receives an operation for the first time, it creates a plan for how to resolve it: the query plan, the set of instructions the router follows to fetch and assemble data from different subgraphs.

Starting from the top-level fields of the operation, the router consults the supergraph schema to determine which subgraph is responsible for resolving each field, and starts to build its query plan. The router continues like this, checking each field in the query against the supergraph schema and adding it to the query plan.

A diagram showing the router receiving a query and building a query plan for executing it

After all the fields are accounted for, the router goes on to execute the plan, making requests to subgraphs in the order outlined in the plan, and assembling the data before returning it to the client.

Caching query plans

The router uses a query plan for every operation it resolves, but it doesn't necessarily generate the plan each time. When it receives an operation, the router first consults its own in-memory cache for the corresponding query plan.

A diagram showing the router locating a previously cached query plan

If it finds it, then great: clearly, it's done this work before! It plucks the plan from the cache and follows the instructions to resolve the operation.

A diagram showing the router executing a query using a cached query plan

If the router does not find the query plan, it generates one from scratch. The router also stores the new plan in its in-memory cache, to be used next time.
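The router manages this cache for us out of the box, but its size is configurable. As a minimal sketch (the exact key paths should be confirmed against your router version's documentation), the in-memory query plan cache limit lives under the query planning settings in router-config.yaml:

```yaml
# router-config.yaml (sketch; confirm key names against your router version)
supergraph:
  query_planning:
    cache:
      in_memory:
        # Maximum number of query plans to keep in the in-memory cache
        limit: 512
```

When the cache is full, the router evicts older entries to make room for new query plans.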

A diagram showing the router generating, executing, and caching a query plan

Query plans in action

Alright, let's see this in action! First, let's take a look at the query plan for one of our operations.

Jump into Explorer and paste the same operation from the previous lesson into the Operation panel.

query GetFeaturedListings {
  featuredListings {
    id
    title
    description
    numOfBeds
  }
}

Screenshot of Explorer with the operation

In the Response panel, we'll find a dropdown. Here we can select the option Query Plan Preview. This updates the content of the Response panel with a representation of how the operation will be resolved.


Screenshot of Explorer with the operation, with query plan open

There are two options here: we can view the plan as a chart, or as text. Click Show plan as text, as this gives us a little more information about what the router does under the hood.

QueryPlan {
  Fetch(service: "listings") {
    {
      featuredListings {
        id
        title
        description
        numOfBeds
      }
    }
  },
}

This looks a lot like our original query, doesn't it? The biggest difference is the Fetch statement we see at the top. This tells us which service (namely, "listings") the router plans to fetch the following types and fields from. Because our operation involves a single service, the router can fetch all of the data in a single request to the listings service.

Now let's look at an operation that involves both subgraphs. In this case, we'll see that the response from one subgraph depends on the response from another.

In a new tab, paste the following operation.

query GetListingAndReviews($listingId: ID!) {
  listing(id: $listingId) {
    title
    description
    numOfBeds
    overallRating
    reviews {
      id
      text
    }
  }
}

And in the Variables panel:

{
  "listingId": "listing-1"
}

To resolve this operation, the router has to coordinate data between the two services. In order for the reviews service to provide the reviews and overallRating fields, it first needs to know which listing object it's providing data for. We see this reflected in the query plan:

QueryPlan {
  Sequence {
    Fetch(service: "listings") {
      {
        listing(id: $listingId) {
          __typename
          id
          title
          description
          numOfBeds
        }
      }
    },
    Flatten(path: "listing") {
      Fetch(service: "reviews") {
        {
          ... on Listing {
            __typename
            id
          }
        } =>
        {
          ... on Listing {
            overallRating
            reviews {
              id
              text
            }
          }
        }
      },
    },
  },
}

Here the router defines a Sequence of steps. It first conducts a Fetch to the listings service for all the listing-specific types and fields. Next, it conveys the information the reviews subgraph needs to provide data for a particular listing: __typename and id, the Listing entity's primary key fields. With this information, the router can then request the corresponding reviews data from the reviews subgraph. All the data is packaged together and returned!

Note: If the response from the reviews subgraph did not depend on the response from the listings subgraph, the query plan would reflect that the two requests run in Parallel, rather than as part of a Sequence.

Now that we've seen what query plans look like in action, let's take a closer look at the router's side of things.

Query plans & the router

To investigate what the router is doing, we'll need to provide a new flag to our router start command: --log. This lets us specify the level of verbosity we want in our logs from the router. For our purposes, we'll set the level to trace to better follow what the router is doing when it comes to query planning and caching.

APOLLO_KEY=<APOLLO_KEY> \
APOLLO_GRAPH_REF=<APOLLO_GRAPH_REF> \
./router --config router-config.yaml \
--log trace

Note: Check out the official Apollo documentation to review other log level options.

Restart the router using the command given above, substituting in your own APOLLO_KEY and APOLLO_GRAPH_REF values. When we run the command, we'll see our terminal fill up with many more output messages from the router.

Back in Explorer, let's run the GetListingAndReviews operation again.

Jumping back to the terminal logs, we can scroll to find the output. (You can also search for "query plan" in the logs, and pan through until you find the outputted query plan.)

Router query plan output
query plan
Sequence { nodes: [Fetch(FetchNode { service_name: "listings",
requires: [], variable_usages: ["listingId"],
operation: "query GetListingAndReviews__listings__0($listingId:ID!){listing(id:$listingId){__typename id title description numOfBeds}}",
// ...etc

Just below this, we can see the more explicit path the router takes to execute its plan. It narrates its first request to the listings subgraph and logs the response. Then it's able to use this information about the listing we're querying to fetch the corresponding data with a new request to the reviews subgraph!

Diagram showing the query plan steps: fetching from listings, then using the listing data to fetch the corresponding reviews

To see more detail about how often the router locates its stored query plans, we'll set up a Prometheus dashboard later on. Stay tuned!
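As a small preview, exposing the router's metrics for Prometheus is a matter of telemetry configuration. This is a hedged sketch only; the exact key paths vary between router versions (older 1.x releases nested these under telemetry.metrics.prometheus), so confirm against your version's docs:

```yaml
# router-config.yaml (sketch; key paths vary by router version)
telemetry:
  exporters:
    metrics:
      prometheus:
        # Serve metrics at http://127.0.0.1:9090/metrics
        enabled: true
        listen: 127.0.0.1:9090
        path: /metrics
```

We'll walk through the actual dashboard setup in a later lesson.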

Warming up the cache

When we push updates to our schema, it's likely that the query plans for many of our operations will change. We might have migrated a field from one subgraph to another, for instance. This means that when the router loads a new schema, some of its cached query plans might no longer be usable.

To counteract this, the router automatically pre-computes query plans for:

  • The most used queries in its cache
  • The entire list of persisted queries (more on this in the next lesson!)

To learn more about configuring the number of queries to "warm up" on schema reload, visit the official Apollo documentation.
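The number of cached queries to warm up is set with the warmed_up_queries key under the router's query planning settings. A minimal sketch of the relevant piece of router-config.yaml (confirm the exact shape against your router version's docs):

```yaml
# router-config.yaml (sketch)
supergraph:
  query_planning:
    # Pre-compute query plans for up to 100 of the most used
    # cached queries whenever a new schema loads
    warmed_up_queries: 100
```

With this in place, the router recomputes plans for those queries against the incoming schema before it switches over, so hot operations don't pay the planning cost on their next request.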


Key takeaways

  • The router generates a query plan to describe the steps it needs to take to resolve each query it receives.
  • Before generating a query plan, the router first checks its in-memory cache. If it finds the plan there, it uses the cached query plan instead of generating a new one.
  • When the router receives an updated supergraph schema, it automatically pre-computes query plans for the most used queries in its cache before switching to the new schema.
  • We can use the router's config file, along with the warmed_up_queries key, to define a custom number of query plans to pre-compute.

Up next

There's another category of work that the router takes care of caching in memory: the hashes for automatic persisted queries (APQ). Let's take a closer look at this in the next lesson.
