

Introduction To Caching With Next.js

"In computing, a cache is a high-speed data storage layer which stores a subset of data, typically transient in nature, so that future requests for that data are served up faster than is possible by accessing the data’s primary storage location. Caching allows you to efficiently reuse previously retrieved or computed data."

The above is a great definition from AWS's caching overview that captures the role of caching mechanisms in our applications.
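To make that definition concrete before we bring in any libraries, here is a minimal sketch of "reusing previously computed data" in plain JavaScript (the names and the squaring function are my own, purely for illustration):

```javascript
// A tiny in-memory cache: compute a value once, then serve it from the Map.
const cache = new Map();

let computations = 0;

function expensiveSquare(n) {
  computations += 1; // track how often we actually do the work
  return n * n;
}

function cachedSquare(n) {
  if (cache.has(n)) {
    return cache.get(n); // cache hit: no recomputation
  }
  const result = expensiveSquare(n); // cache miss: compute...
  cache.set(n, result); // ...and store for future requests
  return result;
}

console.log(cachedSquare(4)); // 16 (computed)
console.log(cachedSquare(4)); // 16 (served from the cache)
console.log(computations); // 1
```

The rest of this post applies the same hit/miss idea at two different layers: Redis on the server and the HTTP cache plus React Query on the client.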

Today's post is an introduction to how caching works with Next.js, with a focus on caching at different layers of our application using Redis and React Query.

Source code can be found here.

Prerequisites

  1. Basic familiarity with Create Next App.
  2. Basic familiarity with Next.js.
  3. Basic familiarity with Redis.
  4. Basic familiarity with React Query.

Getting started

We will let create-next-app create the project directory introduction-to-caching-with-nextjs for us:

$ npx create-next-app introduction-to-caching-with-nextjs
# ... creates Next.js app for us
$ cd introduction-to-caching-with-nextjs
$ npm i redis react-query axios

At this stage, a working Next.js app is ready for us.

Server-side caching with Redis

This post won't be diving too deep into Redis, but the expectation is that you have Redis installed locally and can connect to it from your application on port 6379. See the Redis download page for installation instructions. Once downloaded and set up, you can run redis-server to start it locally.

Once Redis is up and running, we can return to our repo. From the terminal, run mv pages/api/hello.js pages/api/cache-example.js and update the pages/api/cache-example.js file to be the following:

// Next.js API route support: https://nextjs.org/docs/api-routes/introduction
import { performance } from "perf_hooks";
import redis from "redis";
import util from "util";

const redisPort = 6379;
const client = redis.createClient(redisPort);
const key = "example-key";

// Promisify the get command so we can use async/await.
client.get = util.promisify(client.get);

export default async function handler(req, res) {
  try {
    const startTime = performance.now();
    const response = await client.get(key);

    if (response) {
      res.status(200).json(JSON.parse(response));
    } else {
      // Waiting 1 second to simulate a slow response from another DB query
      await new Promise((resolve) => setTimeout(resolve, 1000));

      // As a contrived example, let's say this is the expected result from the database
      const data = { name: "John Doe" };

      // Here we are caching the result for 15 seconds to Redis
      client.setex(key, 15, JSON.stringify(data));

      res.status(200).json(data);
    }

    const endTime = performance.now();
    console.log(`Call took ${endTime - startTime} milliseconds`);
  } catch (err) {
    res.status(500).json({ error: "Server error" });
  }
}

Let's explain the above code in pieces. For starters, let's go over our imports.

  1. We are importing the perf_hooks module to assist in measuring the time for our Next.js lambda to complete an invocation.
  2. We are importing redis to connect to Redis.
  3. We are importing util to "promisify" the Redis get command. This will enable us to use it with async/await notation in our lambda.

After the imports, we define some variables for our Redis configuration. The key variable is the key we will use to store our data in Redis. Today's example only ever uses that one key, which is contrived, but it will help demonstrate when we do and do not hit the Redis cache.

We then reassign client.get to be the promise-based version of the Redis get command.

Finally, we declare our exported handler function for our lambda. The catch block will simply respond with a server error, while the try block does the following:

  1. We clock a startTime to be used for measuring our invocation time.
  2. We attempt to get a response from the Redis cache. The get function uses a provided key to retrieve the data from Redis.
  3. If we get a response, we send the response to the client. This means that we hit the cache and we don't need to hit the database.
  4. If we don't get a response, we wait 1 second to simulate a slow response from the database. This is contrived, but you could replace the setTimeout promise with any fetch to a database of your choice.
  5. Once we have our data (which we manually assign in this case to simulate a database query result), we cache the data for 15 seconds in Redis.

We return the response to the client (which I have purposely done in both the if/else blocks to help with clarity) before finally clocking an endTime and logging the time it took to complete the lambda invocation.

The aim of the logging is to show that the lambda invocation takes about 1 second when we miss the Redis cache on the server, and significantly less time when we hit it.

Start up the Next.js application with npm run dev. Once the app is up and running, we can start running curl in another terminal to test our lambda.

$ curl http://localhost:3000/api/cache-example
{"name":"John Doe"}

On the first invocation, check the logs in the terminal running the Next.js application. The first time we hit the lambda, the call takes >1000 milliseconds.

Our Redis cache has been set to hold the value for 15 seconds (for demonstration purposes), so if we keep running curl in the other terminal, each call takes only a few milliseconds until the cached value expires.

Here are some example logs from me running the lambda on my local machine with some comment annotations:

# Our first call which does not hit the Redis cache
Call took 1004.2627627849579 milliseconds

# Our subsequent calls that hit the Redis cache (all made within the first 15 seconds)
Call took 1.8681368827819824 milliseconds
Call took 0.5662209987640381 milliseconds
Call took 0.57741379737854 milliseconds
Call took 1.16709303855896 milliseconds

# After waiting for 15 seconds for the result to be evicted from the cache.
Call took 1001.504725933075 milliseconds

To put this into layperson terms, I could say the following:

  1. On our first invocation, we miss the cache, hit the database and the lambda takes >1000 milliseconds.
  2. On our second to fifth invocations, we hit the cache and the calls take between 0.5 and 1.9 milliseconds (an improvement of up to 2000x!).
  3. After waiting 15 seconds, our sixth call misses the cache, hits the database, and the lambda again takes >1000 milliseconds.

With our server caching optimized for performance on this endpoint (lol), we can shift our focus to the client-side.

This example has been contrived but at least demonstrates the pattern of writing back a result to the cache. It is entirely up to your application and use-case to decide how long something should live in the cache. In-memory caches are expensive and should be used appropriately.
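The write-back pattern described above is often called "cache-aside". It can be distilled into a small helper; this sketch uses an in-memory Map standing in for Redis and is synchronous for brevity (in the handler above the Redis read and the database query are awaited), and all the names here are my own:

```javascript
// Cache-aside with expiry: try the cache first, fall back to the data source,
// then write the result back with a time-to-live (TTL).
const store = new Map(); // key -> { value, expiresAt }

let databaseReads = 0;

function readFromDatabase(key) {
  databaseReads += 1; // stand-in for the slow database query
  return { name: "John Doe" };
}

function cacheAside(key, ttlMs, now) {
  const entry = store.get(key);
  if (entry && entry.expiresAt > now) {
    return entry.value; // hit: still within its TTL
  }
  const value = readFromDatabase(key); // miss or expired: do the real work
  store.set(key, { value, expiresAt: now + ttlMs }); // write back with expiry
  return value;
}

console.log(cacheAside("example-key", 15000, 0).name); // "John Doe" (miss, reads the database)
console.log(cacheAside("example-key", 15000, 5000).name); // "John Doe" (hit)
console.log(databaseReads); // 1
```

Once `now` passes the stored expiresAt, the next call falls through to the database again, which mirrors the 15-second expiry behaviour we saw in the logs.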

Setting up our client-side application

Before we can start diving into client-side caching, we need to set up our client-side application to be able to operate in a way that we can see re-fetching and caching by example.

To manage server state on the client, we are using the React Query library. What is the difference between server state and global state, you may ask? Server state assumes that the frontend application does not "own" the data it displays; instead, it displays a "snapshot" of the data retrieved at a given point in time. React Query provides us with tools to define whether that server state is "fresh" or "stale" and to make operations and calls based on that configuration.

I won't be doing a deep dive on React Query, but later in this tutorial we will touch on configuring stale time and cache time in our application to prevent unnecessary network requests.

We already have all the packages that we need installed, so we can begin by updating our pages/_app.js file to set up the React Query provider and pages/index.js to display our simple example.

Inside of pages/_app.js, add the following:

import { QueryClient, QueryClientProvider } from "react-query";
import { ReactQueryDevtools } from "react-query/devtools";

const queryClient = new QueryClient();

function MyApp({ Component, pageProps }) {
  return (
    <QueryClientProvider client={queryClient}>
      <Component {...pageProps} />
      <ReactQueryDevtools initialIsOpen={true} />
    </QueryClientProvider>
  );
}

export default MyApp;

In the above code, we add the QueryClientProvider and ReactQueryDevtools to each page.

QueryClientProvider is a provider for the server state stored in React Query.

The ReactQueryDevtools provides us with a useful debugger to see the state of our in-memory store for React Query at any given time and displays what state our data is in (i.e. fresh, fetching, stale, or inactive).

Inside of pages/index.js, update the code to be the following:

import axios from "axios";
import { useQuery } from "react-query";
import * as React from "react";

function CacheExample() {
  const { status, data } = useQuery("nameData", () =>
    axios.get("/api/cache-example").then(({ data }) => data)
  );

  switch (status) {
    case "loading":
      return <p>Loading...</p>;
    case "error":
      return <p>Error!</p>;
    case "success":
      return <p>Name: {data.name}</p>;
    default:
      return null;
  }
}

export default function Home() {
  const [showCacheExample, setShowCacheExample] = React.useState(true);
  const handleClick = React.useCallback(
    () => setShowCacheExample(!showCacheExample),
    [showCacheExample, setShowCacheExample]
  );

  return (
    <div>
      <h1>Caching example</h1>
      <button onClick={handleClick}>Toggle hide/show</button>
      {showCacheExample && <CacheExample />}
    </div>
  );
}

Our home page is now configured to do the following:

  1. The default Home component manages a local toggle to display/hide the CacheExample component. This is invoked by the handleClick callback that is toggled by our button.
  2. The CacheExample component is a simple example of how to use the useQuery hook to fetch data from the server endpoint we made. We are destructuring the status and data from the useQuery hook and using a switch statement to display the data based on the status.

Before we add the headers to help with our client caching, let's explore the current state of the application.

Exploring the frontend application

Firstly, head to http://localhost:3000 in your browser and you should see the following:

  1. A Home component that displays a button to toggle the CacheExample component.
  2. A CacheExample component that displays a p tag with the name of the user we get back from the lambda function we wrote earlier.
  3. The ReactQueryDevtools component that displays the state of our in-memory store for React Query.

We can notice in our ReactQueryDevtools component that there is one entry for nameData. On the right-hand side of the dev tools, we can see the value. On the top toolbar, we can see the state of that value (stale).

Home page initial data


With the default configurations for React Query, we have a stale time of 0. That means that as soon as we fetch our data, it is considered stale and we should make another request to update the data.

React Query has a number of options to configure when to refetch in the useQuery reference. For example, there is a refetchOnWindowFocus configuration that defaults to true. If you click off the browser and click back on, you will actually see that React Query will refetch the data for us when the browser is re-focused. You can confirm this in the network tab of your developer tools. This is incredibly powerful behavior, but also one that is susceptible to misuse and over-fetching. There is also another configuration option refetchOnMount that defaults to true. This will refetch the data when the component is mounted (which we will use to our advantage with the toggle).

Let's quickly recap our current situation with the server-side caching within our browser to see that in action. When we first opened the browser (in a state where Redis had no cached value), we can see in our network tab a call to /api/cache-example that was made and took > 1s to complete:

Uncached network request


If we then use our button to hide and show the CacheExample component within the 15 second cache time we set for Redis, we can see that the network tab now shows a call to /api/cache-example that was made and took 15ms to complete.

Cached network request


Note: We are of course running on local. In production, we would expect the network request to take longer based on how long it takes to communicate with the server from the client.

As we make the network request, we can also see in our React Query dev tools that our nameData entry changes state to fetching:

Fetching state


Amazing, so far we can see our server-side caching in play from the browser. Let's now see how we can cache the response to our local machine on the client-side.

Caching on the client-side

We can control the lifecycle of the assets we cache on the client-side by setting a Cache-Control header.

When in development mode, the default cache control headers are overwritten to prevent caching locally and are set to Cache-Control: no-cache, no-store, max-age=0, must-revalidate.

The default cache time for Next.js can be set in the next.config.js file; however, we can also set it on a per-request basis by setting the cache control header in our response from the lambda. This is what we will do to demonstrate the client-side caching mechanism in our example.
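For reference, the global variant can be sketched in next.config.js like so. The source pattern and header value here are assumptions for illustration; adjust them to your own routes:

```javascript
// next.config.js — sketch of a global Cache-Control header for all API routes.
const nextConfig = {
  async headers() {
    return [
      {
        source: "/api/:path*", // assumed pattern: apply to every route under /api
        headers: [{ key: "Cache-Control", value: "public, max-age=5" }],
      },
    ];
  },
};

module.exports = nextConfig;
```

We won't use this file in the walkthrough; the per-request header set from the lambda below is enough for our purposes.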

Firstly, let's update our pages/api/cache-example.js file to add the cache control headers prior to responses.

// Next.js API route support: https://nextjs.org/docs/api-routes/introduction
import { performance } from "perf_hooks";
import redis from "redis";
import util from "util";

const redisPort = 6379;
const client = redis.createClient(redisPort);
const key = "example-key";

// Promisify the get command so we can use async/await.
client.get = util.promisify(client.get);

export default async function handler(req, res) {
  try {
    const startTime = performance.now();
    const response = await client.get(key);

    if (response) {
      // This value is considered fresh for five seconds (max-age=5).
      // If a request is repeated within the next 5 seconds, the previously
      // cached value will still be fresh. If the request is repeated after,
      // it will request the data again from the server.
      res.setHeader("Cache-Control", "public, max-age=5, immutable");
      res.setHeader("X-Cache", "HIT");
      res.status(200).json(JSON.parse(response));
    } else {
      // Waiting 1 second to simulate a slow response from another DB query
      await new Promise((resolve) => setTimeout(resolve, 1000));

      // As a contrived example, let's say this is the expected result from the database
      const data = { name: "John Doe" };

      // Here we are caching the result for 15 seconds to Redis
      client.setex(key, 15, JSON.stringify(data));

      // Set the cache-control header
      res.setHeader("Cache-Control", "public, max-age=5, immutable");
      res.setHeader("X-Cache", "MISS");
      res.status(200).json(data);
    }

    const endTime = performance.now();
    console.log(`Call took ${endTime - startTime} milliseconds`);
  } catch (err) {
    res.status(500).json({ error: "Server error" });
  }
}

I am being verbose here and adding the header prior to both responses in the if/else block just to colocate the code for example's sake.

The comment left above the first res.setHeader call explains how the cache control header works for our example. We are telling the browser that the response is fresh for 5 seconds. Any repeat request made within that window is served straight from the client cache without contacting the server; once the 5 seconds are up, the browser will request the data from the server again.
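To make the freshness window concrete, here is a small sketch of the max-age check a cache performs (this is my own helper modelling the rule, not a browser API):

```javascript
// Given when a response was stored and its Cache-Control max-age (in seconds),
// decide whether the cached response is still fresh at time `now` (in ms).
function isFresh(storedAtMs, maxAgeSeconds, nowMs) {
  const ageSeconds = (nowMs - storedAtMs) / 1000;
  return ageSeconds < maxAgeSeconds;
}

console.log(isFresh(0, 5, 3000)); // true  — 3s old, within max-age=5
console.log(isFresh(0, 5, 6000)); // false — 6s old, must go back to the server
```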

To see this in action, let's now toggle our CacheExample component on and off over 20 seconds and see how the network tab changes.

Network tab changes over the toggled 20 second period


You will see some interesting things happen:

  • One request to the server is made and takes > 1s. This returned a 304, as the data was unmodified since our last request.
  • The following 5 seconds of requests are served from the client-cache. This means no network request to the server is made, as we set the Cache-Control header of 5 seconds. This is our client-side caching in action.
  • The first request made after the 5 seconds is served from the server Redis cache. This is because we have gone beyond the 5 second client-cache limit that we set but we have not exceeded the 15 second expiry time of the Redis cache (after the first request missed the cache).

We added the X-Cache header to indicate whether the data was served from the server cache, so if we explore the network tab, we can see which requests to the server came from the Redis cache and which did not.

Request served from Redis cache


Request missing the Redis cache and being served from our "else" statement


Geez! There is a lot going on here so feel free to read over that a couple of times. We are now making the most of both a client-side and server-side caching mechanism within our application.

In our contrived example, we are constantly serving back the same data. If we know we will constantly serve the same data for "X" amount of time thanks to our system design, then we may be making redundant network calls to the server (regardless of it being served from the client-cache or server).

How do we rectify this? Let's take a look at how React Query empowers us to make these decisions based on the design of our application.

Optimizing network calls on the client-side with React Query

The useQuery function takes a third argument that allows us to modify the options. Two such options are the cacheTime and staleTime.

According to the docs, those two options operate as so:

  • cacheTime: The time in milliseconds that unused/inactive cache data remains in memory. When a query's cache becomes unused or inactive, that cache data will be garbage collected after this duration. When different cache times are specified, the longest one will be used.
  • staleTime: The time in milliseconds after data is considered stale. This value only applies to the hook it is defined on. Defaults to 0.
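To make those two timers concrete, here is a simplified model of the states a query's data passes through (my own sketch of the rules above, not React Query's internals):

```javascript
// Simplified model of React Query's freshness and garbage-collection timers.
// fetchedAt / inactiveAt / now are timestamps in milliseconds; inactiveAt is
// null while some component is still using the query.
function queryState(fetchedAt, inactiveAt, now, staleTime, cacheTime) {
  if (inactiveAt !== null && now - inactiveAt >= cacheTime) {
    return "garbage-collected"; // unused for longer than cacheTime
  }
  return now - fetchedAt < staleTime ? "fresh" : "stale";
}

const staleTime = 60 * 1000; // 1 minute
const cacheTime = 60 * 1000 * 10; // 10 minutes

console.log(queryState(0, null, 30 * 1000, staleTime, cacheTime)); // "fresh"
console.log(queryState(0, null, 90 * 1000, staleTime, cacheTime)); // "stale"
console.log(queryState(0, 0, cacheTime, staleTime, cacheTime)); // "garbage-collected"
```

The key point is that staleTime governs whether a refetch is warranted, while cacheTime governs how long inactive data survives in memory.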

We have already seen the default stale time in action. In our ReactQueryDevtools component, we see that the value of nameData is always stale. This means that by default, we will always attempt to make the network request for the data.

Let us adjust these times in our pages/index.js file and see what happens:

import axios from "axios";
import { useQuery } from "react-query";
import * as React from "react";

function CacheExample() {
  const { status, data } = useQuery(
    "nameData",
    () => axios.get("/api/cache-example").then(({ data }) => data),
    // NEW: Added options
    {
      staleTime: 60 * 1000, // 1 minute
      cacheTime: 60 * 1000 * 10, // 10 minutes
    }
  );

  switch (status) {
    case "loading":
      return <p>Loading...</p>;
    case "error":
      return <p>Error!</p>;
    case "success":
      return <p>Name: {data.name}</p>;
    default:
      return null;
  }
}

export default function Home() {
  const [showCacheExample, setShowCacheExample] = React.useState(true);
  const handleClick = React.useCallback(
    () => setShowCacheExample(!showCacheExample),
    [showCacheExample, setShowCacheExample]
  );

  return (
    <div>
      <h1>Caching example</h1>
      <button onClick={handleClick}>Toggle hide/show</button>
      {showCacheExample && <CacheExample />}
    </div>
  );
}

If we now toggle to show and hide the CacheExample component, we will see that the nameData is now toggling between inactive and fresh. With our new times set, React Query knows that the data we have stored is currently considered valid and does not make any network requests.

Inactive state in our React Query dev tools


We can actually see this in the network tab itself. Open the network tab and begin toggling the CacheExample component. No network requests are being made at all! This is because React Query knows that the data is still fresh and does not need to make any network requests.

No unnecessary network requests after our initial request


React Query itself stores the server state in-memory, so the data itself needs to be re-fetched if you do something such as reload the page (we will touch on this next). That being said, if you reload the page within the 5 second interval after fetching the data, you will see our client-cache is still in play and the result is served from the client-cache.

Awesome! So now we have seen how to optimize the database query on the server-side with the Redis cache, how to optimize network calls with the client-side cache using cache control headers and how to manage an in-memory cache with React Query.

Confused by the use-cases of React Query vs the client-side cache? Our example is contrived as we are serving data that is always the same, but a good example of something you would manage via the client-side cache and not React Query would be assets like images and fonts.

As mentioned previously, React Query operates with an in-memory store. That means that when we refresh the page, we will lose the data that we have stored in memory and the network request mechanism is required again. What happens in the use-case that we actually want to persist the data store? There is an experimental feature of React Query that allows us to do just that.

Persisting server-side state with React Query

React Query provides us with a persistQueryClient function that allows us to persist the in-memory data store. This is an experimental feature, so be aware that it is subject to change. This section demonstrates how we can dehydrate and rehydrate the in-memory data store.

Let's update our pages/_app.js to the following:

import { QueryClient, QueryClientProvider } from "react-query";
import { ReactQueryDevtools } from "react-query/devtools";
import { persistQueryClient } from "react-query/persistQueryClient-experimental";
import { createWebStoragePersistor } from "react-query/createWebStoragePersistor-experimental";

const queryClient = new QueryClient({
  defaultOptions: {
    queries: {
      cacheTime: 1000 * 60 * 60 * 24, // 24 hours
    },
  },
});

if (typeof window !== "undefined") {
  const localStoragePersistor = createWebStoragePersistor({
    storage: window.localStorage,
  });

  persistQueryClient({
    queryClient,
    persistor: localStoragePersistor,
  });
}

function MyApp({ Component, pageProps }) {
  return (
    <QueryClientProvider client={queryClient}>
      <Component {...pageProps} />
      <ReactQueryDevtools initialIsOpen={false} />
    </QueryClientProvider>
  );
}

export default MyApp;

The function does most of the heavy lifting for us, so defining this is all we need to do. However, as noted in the docs: for persist to work properly, you need to pass QueryClient a cacheTime value to override the default during hydration (as shown above).

The persistor works as follows:

  1. When your query/mutation cache is updated, it will be dehydrated and stored by the persistor you provided.
  2. When the page is refreshed, the persistor will rehydrate the data store from the dehydrated data. If a cache is found that is older than the maxAge (which by default is 24 hours), it will be discarded. This can be customized as you see fit.
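The maxAge check in step 2 can be sketched like this (my own helper mirroring the rule, not the library's code):

```javascript
// Decide whether a persisted cache (dehydrated to localStorage at `persistedAt`)
// should be restored, or discarded because it is older than maxAge.
const DEFAULT_MAX_AGE = 1000 * 60 * 60 * 24; // 24 hours, matching the default

function shouldRestore(persistedAtMs, nowMs, maxAgeMs = DEFAULT_MAX_AGE) {
  return nowMs - persistedAtMs <= maxAgeMs;
}

console.log(shouldRestore(0, 1000 * 60 * 60)); // true  — one hour old, restore it
console.log(shouldRestore(0, DEFAULT_MAX_AGE + 1)); // false — too old, discard
```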

In our particular example, we will be dehydrating the cache to store it in local storage. Upon the reload of the app, React Query will attempt to hydrate the data from local storage.

If we now reload the app, we can see this all in action. Checking the network tab and refreshing, you will notice that we no longer make the call to the /api/cache-example endpoint.

No network requests made to our /api/cache-example route after hydrating from local storage


That is because as we load the app, our data store is hydrated from local storage. React Query persists the data to local storage by periodically dehydrating it, as described above.

We can confirm this as we now see the data we have stored in local storage:

Persisted data that was dehydrated from in-memory to local storage


A word of caution: an incorrect implementation of the persistor can lead to undesired outcomes. Sometimes you may make changes to your application or data that should immediately invalidate any and all cached data. This can be handled in the persistQueryClient function by passing a buster value for cache busting.
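As a sketch of that option, extending the pages/_app.js setup from earlier (this is a fragment of that configuration, using the same experimental API): bump the buster string whenever previously persisted data must be thrown away.

```javascript
persistQueryClient({
  queryClient,
  persistor: localStoragePersistor,
  buster: "app-v2", // assumed value: caches persisted with a different buster are discarded
});
```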

Although brief, this is a useful concept to understand and add to the tool belt when designing an efficient application. Just be aware of the possible trade-offs when using something such as this.

Summary

Today's post was a big one! We demonstrated the following with our Next.js application:

  • Server-side caching to speed up an "expensive" database query using Redis and how to set up an expiry on that data.
  • Client-side caching to speed up a network request using cache-control headers and how to set up an expiry on that data for our local browser.
  • Server state management on the client-side with React Query and how to prevent unnecessary network requests based on your application needs.
  • An example of how to persist in-memory data to local storage and how to hydrate that data when the page is refreshed.

The remote data we fetched itself was contrived, but it is a good example of how to use these concepts across the board.

Something that we did not go over was using a CDN for caching media assets, but that can be saved for another time!

Resources and further reading

Photo credit: dilucidus


Dennis O'Keeffe
