In-Memory Caching
Configure router caching for query plans and automatic persisted queries
Both GraphOS Router and Apollo Router Core use an in-memory LRU cache to store the following data:

- Generated query plans
- Automatic persisted queries (APQ)
- Introspection responses

You can configure certain caching behaviors for generated query plans and APQ (but not introspection responses).
Performance improvements vs stability
The router is a highly scalable, low-latency runtime. Even with all caching disabled, the time to process operations and generate query plans is minimal (nanoseconds to milliseconds) relative to the overall supergraph request, except in edge cases involving extremely large operations and supergraphs. For those running a large graph, caching primarily offers stability: the overhead for a given operation stays consistent, rather than dramatically improving. If you would like to validate the performance wins of operation caching, use the router's traces and metrics to take measurements before and after enabling it. In extreme edge cases, we have seen the cache reduce query plan creation time by 2-10x, though this remains a small part of the overall request.
Caching query plans
Whenever your router receives an incoming GraphQL operation, it generates a query plan to determine which subgraphs it needs to query to resolve that operation.
By caching previously generated query plans, your router can skip generating them again if a client later sends the exact same operation. This improves your router's responsiveness.
The GraphOS Router enables query plan caching by default. In your router's YAML config file, you can configure the maximum number of query plan entries in the cache like so:
```yaml
supergraph:
  query_planning:
    cache:
      in_memory:
        limit: 512 # This is the default value.
```
Cache warm-up
When the router loads a new schema, the query plans for some queries might change, so their cached entries cannot be reused.
To prevent increased latency when the query plan cache is invalidated, the router precomputes query plans for the most-used queries in the cache whenever a new schema is loaded.
Precomputed plans are cached before the router switches traffic over to the new schema.
By default, the router warms up the cache with 30% of the queries already in the cache, but this can be configured as follows:
```yaml
supergraph:
  query_planning:
    # Pre-plan the 100 most used operations when the supergraph changes
    warmed_up_queries: 100
```
(In addition, the router can use the contents of the persisted query list to prewarm the cache. By default, it does this when loading a new schema but not on startup; you can configure it to change either of these defaults.)
To get more information on the planning and warm-up process, use the following metrics (where `<storage>` can be `redis` for the distributed cache or `memory`):

Counters:

- `apollo_router_cache_hit_count{kind="query planner", storage="<storage>"}`
- `apollo_router_cache_miss_count{kind="query planner", storage="<storage>"}`

Histograms:

- `apollo.router.query_planning.plan.duration`: time spent planning queries
- `apollo_router_schema_loading_time`: time spent loading a schema
- `apollo_router_cache_hit_time{kind="query planner", storage="<storage>"}`: time to get a value from the cache
- `apollo_router_cache_miss_time{kind="query planner", storage="<storage>"}`

Gauges:

- `apollo_router_cache_size{kind="query planner", storage="memory"}`: current size of the cache (only for in-memory cache)
- `apollo.router.cache.storage.estimated_size{kind="query planner", storage="memory"}`: estimated storage size of the cache (only for in-memory query planner cache)

Typically, we would look at `apollo_router_cache_size` and the cache hit rate to determine the right size for the in-memory cache, then look at `apollo_router_schema_loading_time` and `apollo.router.query_planning.plan.duration` to decide how much time we want to spend warming up queries.
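As a small worked example, the cache hit rate mentioned above can be derived from the two counter metrics. The sketch below uses made-up sample readings, not real router output:

```python
# Illustrative sketch: deriving the query-plan cache hit rate from the
# apollo_router_cache_hit_count / apollo_router_cache_miss_count counters.
def hit_rate(hits: int, misses: int) -> float:
    """Cache hit rate = hits / (hits + misses)."""
    total = hits + misses
    return hits / total if total else 0.0

# Hypothetical sample readings scraped from the router's metrics endpoint:
print(round(hit_rate(hits=9_500, misses=500), 2))  # 0.95
```

A hit rate that stays low while `apollo_router_cache_size` sits at its configured limit suggests the in-memory cache is too small for your operation mix.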
Cache warm-up with distributed caching
If the router is using distributed caching for query plans, the warm-up phase also stores the new query plans in Redis. Since all router instances might have the same distribution of queries in their in-memory caches, the list of queries is shuffled before warm-up, so each router instance plans queries in a different order and shares its results with the others through the distributed cache.
Schema-aware query hashing
The query plan cache key uses a hashing algorithm specifically designed for GraphQL queries, which takes the schema into account. If a schema update does not affect a query (for example, a field the query does not use was added), then the query's hash stays the same. During warm-up, the query plan cache can use that key to check whether a cached entry can be reused instead of planning the query again.
This behavior can be activated through the following option:
```yaml
supergraph:
  query_planning:
    warmed_up_queries: 100
    experimental_reuse_query_plans: true
```
Caching automatic persisted queries (APQ)
Automatic Persisted Queries (APQ) enable GraphQL clients to send a server the hash of their query string, instead of sending the query string itself. When query strings are very large, this can significantly reduce network usage.
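Concretely, the APQ protocol has the client send the SHA-256 hash of its query string in the `persistedQuery` extension; if the server has not seen that hash yet, the client retries with both the full query string and the hash. A minimal sketch of building the hash-only request body (the HTTP transport is omitted):

```python
# Illustrative sketch of the client side of APQ: send only the SHA-256
# hash of the query string inside the persistedQuery extension.
import hashlib

def apq_request(query: str) -> dict:
    """Build the hash-only APQ request body for a GraphQL query."""
    sha256 = hashlib.sha256(query.encode("utf-8")).hexdigest()
    return {
        "extensions": {
            "persistedQuery": {"version": 1, "sha256Hash": sha256}
        }
    }

body = apq_request("{ __typename }")
# The hash is a 64-character hex digest:
print(len(body["extensions"]["persistedQuery"]["sha256Hash"]))  # 64
```

The network savings come from the body containing only the fixed-size hash instead of a potentially very large query string.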
The router supports using APQ in its communications with both clients and subgraphs:
- In its communications with clients, the router acts as a GraphQL server, because it receives queries from clients.
- In its communications with subgraphs, the router acts as a GraphQL client, because it sends queries to subgraphs.
Because the router's role differs between these two interactions, you configure these APQ settings separately.
APQ with clients
The router enables APQ caching for client operations by default. In your router's YAML config file, you can configure the maximum number of APQ entries in the cache like so:
```yaml
apq:
  router:
    cache:
      in_memory:
        limit: 512 # This is the default value.
```
You can also disable client APQ support entirely like so:
```yaml
apq:
  enabled: false
```
APQ with subgraphs
By default, the router does not use APQ when sending queries to its subgraphs.
In your router's YAML config file, you can configure this APQ support with a combination of global and per-subgraph settings:
```yaml
apq:
  subgraph:
    # Disables subgraph APQ globally except where overridden per-subgraph
    all:
      enabled: false
    # Override global APQ setting for individual subgraphs
    subgraphs:
      products:
        enabled: true
```
In the example above, subgraph APQ is disabled except for the `products` subgraph.