Normalized caches in Apollo Kotlin
Apollo Kotlin provides two built-in normalized caches for storing and reusing the results of GraphQL operations:
- An in-memory cache (MemoryCache)
- A SQLite-backed cache (SqlNormalizedCache)
You can use one (or both!) of these caches in your app to improve its responsiveness for most operations.
To get started with a coarser caching strategy that's faster to set up, take a look at the HTTP cache.
What is a normalized cache?
In a GraphQL client, a normalized cache breaks each of your GraphQL operation responses into the individual objects it contains. Then, each object is cached as a separate entry based on its cache ID. This means that if multiple responses include the same object, that object can be deduplicated into a single cache entry. This reduces the overall size of the cache and helps keep your cached data consistent and fresh.
You can also use a normalized cache as a single source of truth for your UI, enabling it to react to changes in the cache. To learn more about the normalization process, see this blog post.
Normalizing responses
Look at this example query:
query GetFavoriteBook {
  favoriteBook { # Book object
    id
    title
    author { # Author object
      id
      name
    }
  }
}
This query returns a Book object, which in turn includes an Author object. An example response from the GraphQL server looks like this:
{
  "favoriteBook": {
    "id": "bk123",
    "title": "Les Guerriers du silence",
    "author": {
      "id": "au456",
      "name": "Pierre Bordage"
    }
  }
}
A normalized cache does not store this response directly. Instead, it breaks it up into the following entries by default:
1"favoriteBook": {"id": "bk123", "title": "Les guerriers du silence", "author": "ApolloCacheReference{favoriteBook.author}"}
2"favoriteBook.author": {"id": "au456", "name": "Pierre Bordage"}
3"QUERY_ROOT": {"favoriteBook": "ApolloCacheReference{favoriteBook}"}
⚠️ These default generated cache IDs (favoriteBook and favoriteBook.author) are undesirable for data deduplication. See Specifying cache IDs.
- Notice that the author field of the Book entry now contains the string ApolloCacheReference{favoriteBook.author}. This is a reference to the Author cache entry.
- Notice also the QUERY_ROOT entry, which is always present if you've cached results from at least one query. This entry contains a reference for each top-level field you've included in a query (e.g., favoriteBook).
Provided caches
In-memory cache
Apollo Kotlin's MemoryCache is a normalized, in-memory cache for storing objects from your GraphQL operations. To use it, first add the apollo-normalized-cache artifact to your dependencies in your build.gradle[.kts] file:
dependencies {
  implementation("com.apollographql.apollo3:apollo-normalized-cache:3.8.5")
}
Then include the cache in your ApolloClient initialization, like so:
// Creates a 10MB MemoryCacheFactory
val cacheFactory = MemoryCacheFactory(maxSizeBytes = 10 * 1024 * 1024)

// Build the ApolloClient
val apolloClient = ApolloClient.Builder()
    .serverUrl("https://...")
    // normalizedCache() is an extension function on ApolloClient.Builder
    .normalizedCache(cacheFactory)
    .build()
Because the normalized cache is optional, normalizedCache() is an extension function on ApolloClient.Builder() that's defined in the apollo-normalized-cache artifact. It takes a NormalizedCacheFactory as a parameter so that it can create the cache outside the main thread if needed.
A MemoryCache is a least recently used (LRU) cache. It keeps entries in memory according to the following conditions:
| Name | Description |
|---|---|
| maxSizeBytes | The cache's maximum size, in bytes. |
| expireAfterMillis | The timeout for expiring existing cache entries, in milliseconds. By default, there is no timeout. |
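For example, a factory that caps the cache at 10 MB and treats entries older than an hour as expired can be configured like this (the values are illustrative):

// Entries are evicted beyond 10 MB (LRU) and considered expired after one hour
val cacheFactory = MemoryCacheFactory(
    maxSizeBytes = 10 * 1024 * 1024,
    expireAfterMillis = 60 * 60 * 1000L
)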
When your app is stopped, data in the MemoryCache is lost forever. If you need to persist data, you can use the SQLite cache.
SQLite cache
Apollo Kotlin's SQLite cache uses SQLDelight to store data persistently. You can use it to persist data across app restarts, or if your cached data becomes too large to fit in memory.
To enable SQLite cache support, add the apollo-normalized-cache-sqlite dependency to your project's build.gradle file:
dependencies {
  implementation("com.apollographql.apollo3:apollo-normalized-cache-sqlite:3.8.5")
}
Then include the SQLite cache in your ApolloClient initialization according to your platform target (different platforms use different drivers):
// Android
val sqlNormalizedCacheFactory = SqlNormalizedCacheFactory("apollo.db")

// JVM
val sqlNormalizedCacheFactory = SqlNormalizedCacheFactory("jdbc:sqlite:apollo.db")

// iOS
val sqlNormalizedCacheFactory = SqlNormalizedCacheFactory("apollo.db")

// Build the ApolloClient
val apolloClient = ApolloClient.Builder()
    .serverUrl("https://...")
    .normalizedCache(sqlNormalizedCacheFactory)
    .build()
You can then use the SQLite cache just like you'd use the MemoryCache.
Chaining caches
To get the most out of both normalized caches, you can chain a MemoryCacheFactory with a SqlNormalizedCacheFactory:
val memoryFirstThenSqlCacheFactory = MemoryCacheFactory(10 * 1024 * 1024)
    .chain(SqlNormalizedCacheFactory(context, "db_name"))
Whenever Apollo Kotlin attempts to read cached data, it checks each chained cache in order until it encounters a hit. It then immediately returns that cached data without reading any additional caches.
Whenever Apollo Kotlin writes data to the cache, those writes propagate down all caches in the chain.
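You then pass the chained factory to the client builder exactly like a single factory. A minimal sketch (the server URL is a placeholder):

val apolloClient = ApolloClient.Builder()
    .serverUrl("https://...")
    // Reads check the memory cache first, then SQLite; writes go to both
    .normalizedCache(memoryFirstThenSqlCacheFactory)
    .build()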
Setting a fetch policy
After you add a normalized cache to your ApolloClient initialization, Apollo Kotlin automatically uses FetchPolicy.CacheFirst as the default (client-wide) fetch policy for all queries. To change the default, you can call fetchPolicy on the client builder:
val apolloClient = ApolloClient.Builder()
    .serverUrl("https://...")
    .fetchPolicy(FetchPolicy.NetworkOnly)
    .build()
You can also customize how the cache is used for a particular query by setting a fetch policy for that query.
The following snippet shows how to set each available fetch policy and describes its behavior (in practice, you set only one policy per query):
val response = apolloClient.query(query)

    // (Default) Check the cache, then only use the network if data isn't present
    .fetchPolicy(FetchPolicy.CacheFirst)

    // Check the cache and never use the network, even if data isn't present
    .fetchPolicy(FetchPolicy.CacheOnly)

    // Always use the network, then check the cache if network fails
    .fetchPolicy(FetchPolicy.NetworkFirst)

    // Always use the network and never check the cache, even if network fails
    .fetchPolicy(FetchPolicy.NetworkOnly)

    // Execute the query
    .execute()
The CacheAndNetwork policy can emit multiple values, so you call toFlow() instead of execute():
apolloClient.query(query)

    // Check the cache and also use the network (1 or 2 values can be emitted)
    .fetchPolicy(FetchPolicy.CacheAndNetwork)

    // Execute the query and collect the responses
    .toFlow().collect { response ->
      // ...
    }
As with normalizedCache(NormalizedCacheFactory), fetchPolicy(FetchPolicy) is an extension function on ApolloClient.Builder(), so you need apollo-normalized-cache in your classpath for this to work.
Because the normalized cache deduplicates data, it enables you to react to cache changes. You do this with watchers that listen for cache changes. Learn more about query watchers.
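As a quick illustration, here is a minimal sketch of a watcher, using the GetFavoriteBook query from earlier and the watch() extension provided by the normalized cache (collected from a coroutine; names are illustrative):

apolloClient.query(GetFavoriteBookQuery())
    // Emits a new response whenever the cached data this query depends on changes
    .watch()
    .collect { response ->
      // Update the UI with response.data
    }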
Specifying cache IDs
By default, Apollo Kotlin uses an object's GraphQL field path as its cache ID. For example, recall the following query and its resulting cache entries from earlier:
query GetFavoriteBook {
  favoriteBook { # Book object
    id
    title
    author { # Author object
      id
      name
    }
  }
}
1"favoriteBook": {"id": "bk123", "title": "Les guerriers du silence", "author": "ApolloCacheReference{favoriteBook.author}"}
2"favoriteBook.author": {"id": "au456", "name": "Pierre Bordage"}
3"QUERY_ROOT": {"favoriteBook": "ApolloCacheReference{favoriteBook}"}
Now, what happens if we execute a different query to fetch the same Author object with id au456?
query AuthorById($id: String!) {
  author(id: $id) {
    id
    name
  }
}
After executing this query, our cache looks like this:
1"favoriteBook": {"id": "bk123", "title": "Les guerriers du silence", "author": "ApolloCacheReference{favoriteBook.author}"}
2"favoriteBook.author": {"id": "au456", "name": "Pierre Bordage"}
3"author(\"id\": \"au456\")": {"id": "au456", "name": "Pierre Bordage"}
4"QUERY_ROOT": {"favoriteBook": "ApolloCacheReference{favoriteBook}", "author(\"id\": \"au456\")": "ApolloCacheReference{author(\"id\": \"au456\")}"}
We're now caching two identical entries for the same Author object! This is undesirable for a few reasons:
- It takes up more space.
- Modifying one of these objects does not notify any watchers of the other object.
We want to deduplicate entries like these by making sure they're assigned the same cache ID when they're written, resulting in a cache that looks more like this:
1"Book:bk123": {"id": "bk123", "title": "Les guerriers du silence", "author": "ApolloCacheReference{Author:au456}"}
2"Author:au456": {"id": "au456", "name": "Pierre Bordage"}
3"QUERY_ROOT": {"favoriteBook": "ApolloCacheReference(Book:bk123)", "author(\"id\": \"au456\")": "ApolloCacheReference{Author:au456}"}
Fortunately, all of our objects have an id field that we can use for this purpose. If an id is unique across all objects in your graph, you can use its value directly as a cache ID. Otherwise, if it's unique per object type, you can prefix it with the type name (as shown above).
Methods
There are two methods for specifying an object's cache ID:
- Declaratively (recommended). You can specify schema extensions that tell the codegen where to find the ID and make sure at compile time that all the id fields are requested so that all objects can be identified. Declarative IDs also prefix each ID with the typename to ensure global uniqueness.
- Programmatically. You can implement custom APIs that retrieve the ID for an object. Because you can execute arbitrary code, this solution is more flexible, but it's also more error prone and requires that you manually request id fields. A sketch of this approach follows this list.
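As an illustration of the programmatic approach, here is a rough sketch of a cache key generator that builds IDs like Book:bk123 from an object's id field and its typename. It assumes the CacheKeyGenerator API from apollo-normalized-cache (exact signatures may vary between versions), and the declarative approach remains the recommended one:

import com.apollographql.apollo3.ApolloClient
import com.apollographql.apollo3.cache.normalized.api.CacheKey
import com.apollographql.apollo3.cache.normalized.api.CacheKeyGenerator
import com.apollographql.apollo3.cache.normalized.api.CacheKeyGeneratorContext
import com.apollographql.apollo3.cache.normalized.normalizedCache

// Builds cache IDs of the form "TypeName:id", falling back to the default
// path-based IDs when an object has no id field
val idBasedCacheKeyGenerator = object : CacheKeyGenerator {
  override fun cacheKeyForObject(obj: Map<String, Any?>, context: CacheKeyGeneratorContext): CacheKey? {
    val typename = context.field.type.rawType().name
    val id = obj["id"]?.toString() ?: return null
    return CacheKey(typename, id)
  }
}

val apolloClient = ApolloClient.Builder()
    .serverUrl("https://...")
    .normalizedCache(cacheFactory, cacheKeyGenerator = idBasedCacheKeyGenerator)
    .build()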