

Locking Redis Transactions In Node.js

Isolation in database systems determines how transaction integrity is visible to other users and systems (from our good friend, Wikipedia).

For highly contested resources, it is important to ensure that the data stays consistent. Increasing concurrency can lead to a loss of data integrity through undesired side effects such as dirty reads or lost updates.

In this blog post, we will see how dirty reads affect our Redis cache updates locally and how we can resolve issues around those hotly contested key-value pairs with Node Redlock.

This post is only a proof of concept; it demonstrates the idea with a single client running locally.

Source code can be found here

Prerequisites

  1. Basic familiarity with npm.
  2. Basic familiarity with Node.js.
  3. Redis must be installed.

Getting started

```shell
$ mkdir hello-redis-redlock
$ cd hello-redis-redlock
# Create our files to work from
$ touch index.js dirty-read.js
# Initialise npm project with basics
$ npm init -y
$ npm install ioredis@^4.27.9 redlock@^4.2.0
```

At this stage, we are now ready to demonstrate our dirty reads.

Demonstrating our problem

For our example today, we want to create a loop that will iteratively (but asynchronously) fetch a value from the Redis cache and write an update back to that cache.

The first value we want to write to the database will be the following:

```js
{ "data": { "1": 1 } }
```

And we iteratively want to add the value from our loop as a key within the data object with a value of 1 until we reach the key "100". Our final value would be:

```js
{
  "data": {
    "1": 1,
    "2": 1,
    "3": 1,
    // ... continues ...
    "100": 1
  }
}
```

This contrived example will demonstrate what happens for a hotly contested key-value pair that we grab from Redis.

In the dirty-read.js file, add the following:

```js
const Client = require("ioredis");

// Initialise our Redis client
const redis = new Client();

/**
 * A helper function to grab an array of numbers.
 * @example range(1,3) // [1,2,3]
 * @example range(5,8) // [5,6,7,8]
 */
function range(start, end) {
  return Array(end - start + 1)
    .fill()
    .map((_, idx) => start + idx);
}

async function main() {
  console.log("Starting...");

  // Ensure key is clean
  await redis.del("doc");

  // Initialise so that we have a base case to compare against
  let database = {
    data: {
      1: 1,
    },
  };

  await redis.set("doc", JSON.stringify(database));

  // Set an array from 1 - 100, we want an object that contains keys 1 - 100 with the value 1
  const valuesToSet = range(1, 100);

  // Iterate over our values and for each one, RETURN a promise while the function is running asynchronously
  const allPromises = valuesToSet.map(
    (value) =>
      new Promise(async (resolve, reject) => {
        try {
          // Read 'doc' from Redis
          const docState = await redis.get("doc");
          const doc1 = JSON.parse(docState);

          const newDoc = {
            data: {
              ...doc1.data,
              [value]: 1,
            },
          };

          console.log("Setting value", JSON.stringify(newDoc));
          await redis.set("doc", JSON.stringify(newDoc));
          resolve(value);
        } catch (err) {
          console.error("Error when trying to set:", value);
          console.error(err);
          reject(err);
        }
      })
  );

  // Wait for ALL promises to resolve
  await Promise.all(allPromises);

  // Read the final value from Redis
  const endResult = await redis.get("doc");
  console.log("value", endResult);

  // Check how many keys exist in the object - we want 100
  console.log(Object.keys(JSON.parse(endResult).data).length);

  // Do some cleanup
  await redis.del("doc");

  // Close our Redis client
  await redis.disconnect();

  console.log("Completed");
}

main();
```

The comments should indicate what happens, but to recap:

  1. We create a new Redis client.
  2. We invoke main to start our loop.
  3. We first ensure Redis cache is clean for our key doc we will work with.
  4. We set the initial value of our key doc to { data: { 1: 1 } }. This is contrived, but it gives the first redis.get call in the loop a value to read.
  5. We set an array of values from 1 - 100.
  6. We iterate over the array and for each value, we return a promise while the function is running asynchronously.
  7. We wait for all promises to resolve.
  8. We read the final value from Redis.
  9. We check how many keys exist in the object - we want 100.
  10. We do some cleanup.
  11. We close our Redis client.
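Step (4) deserves a note: the seed value exists so that the first redis.get call has something to parse. A minimal sketch of what would happen without it (the 42 key and the safe name are purely illustrative, not part of the script above):

```js
// If `doc` were missing, redis.get would return null, and JSON.parse coerces
// null to the string "null" - so we get null back, and reading `doc1.data`
// on that would throw a TypeError.
const doc1 = JSON.parse(null); // null

// A defensive read would need a fallback object to spread instead:
const safe = { data: { ...(doc1 ? doc1.data : {}), 42: 1 } };
```

Seeding the key up front lets the loop body stay a simple read-modify-write without this branching.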

What happens when we run this code? You can find out by running node dirty-read.js.

```shell
$ node dirty-read.js
Starting...
Setting value {"data":{"1":1}}
Setting value {"data":{"1":1,"2":1}}
Setting value {"data":{"1":1,"3":1}}
# ... Omitting values 4-99 ... but you can already see that the key-value pair is being updated incorrectly.
Setting value {"data":{"1":1,"100":1}}
# Final value
value {"data":{"1":1,"100":1}}
# Length of the object keys (we want 100)
2
Completed
```

Ooft... our final value is {"data":{"1":1,"100":1}}. As you may have noticed, that is not what we want.

Step (6) is where we see the problem. It is the most important step for us here.

As each of our promises runs its callback function, they all read the same value {"data":{"1":1}}. After updating that value with their respective key-value pair, each one writes back {"data":{"1":1,"<ITERATION-KEY>":1}}. The writes constantly overwrite one another, and so our final value written back to Redis ends up as {"data":{"1":1,"100":1}}.
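This interleaving can be reproduced without Redis at all. Below is a minimal in-memory sketch of the same lost-update race, where a store variable stands in for our doc key and a timeout simulates network latency (all names here are illustrative):

```js
// `store` plays the role of the Redis key `doc`.
let store = JSON.stringify({ data: { 1: 1 } });

const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Each writer does a read-modify-write with a delay between read and write,
// so every writer reads the same initial snapshot.
async function writer(key) {
  const snapshot = JSON.parse(store); // all writers read {"data":{"1":1}}
  await delay(10); // simulated latency before writing back
  store = JSON.stringify({ data: { ...snapshot.data, [key]: 1 } });
}

async function demo() {
  await Promise.all([2, 3, 4, 5].map((k) => writer(k)));
  // Only 2 keys survive: the seed key plus whichever writer wrote last.
  return Object.keys(JSON.parse(store).data).length;
}
```

Running demo() yields 2 keys rather than the 5 we wanted: every writer based its update on the same stale snapshot, exactly as in the Redis version.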

Clearly, this is a problem. Our iterating loop is our contrived version of a hotly contested resource, so what happens in the wild when we have contention? This is where we can introduce the Redlock library.

Isolation with Redlock

We need to lock our Redis key to ensure we are the only one updating the value at any given time.

Node Redlock is a library that helps with just that.

As the description for the package says:

This is a node.js implementation of the redlock algorithm for distributed redis locks. It provides strong guarantees in both single-redis and multi-redis environments, and provides fault tolerance through use of multiple independent redis instances or clusters.

In our particular use case, we are not running a high-availability, multi-Redis environment; we are using a single Redis instance.

To see our fix in action, place the following into index.js:

```js
const Client = require("ioredis");
const Redlock = require("redlock");

const redis = new Client();

// Here we pass our client to redlock.
const redlock = new Redlock(
  // You should have one client for each independent node
  // or cluster.
  [redis],
  {
    // The expected clock drift; for more details see:
    // http://redis.io/topics/distlock
    driftFactor: 0.01, // multiplied by lock ttl to determine drift time

    // The max number of times Redlock will attempt to lock a resource
    // before erroring.
    retryCount: 10,

    // the time in ms between attempts
    retryDelay: 100, // time in ms

    // the max time in ms randomly added to retries
    // to improve performance under high contention
    // see https://www.awsarchitectureblog.com/2015/03/backoff.html
    retryJitter: 200, // time in ms

    // The minimum remaining time on a lock before an extension is automatically
    // attempted with the `using` API.
    automaticExtensionThreshold: 500, // time in ms
  }
);

/**
 * A helper function to grab an array of numbers.
 * @example range(1,3) // [1,2,3]
 * @example range(5,8) // [5,6,7,8]
 */
function range(start, end) {
  return Array(end - start + 1)
    .fill()
    .map((_, idx) => start + idx);
}

async function main() {
  try {
    console.log("Starting...");

    await redis.del("doc");

    const valuesToSet = range(1, 100);

    // Iterate through our valuesToSet array and return an array of promises.
    const allPromises = valuesToSet.map(
      (value) =>
        new Promise(async (resolve, reject) => {
          try {
            // Acquire a lock.
            let lock = await redlock.acquire(["a"], 5000);

            // Fetch the value of the `doc` key.
            const docState = await redis.get("doc");

            // If a value exists, then we update that value.
            if (docState) {
              const doc1 = JSON.parse(docState);

              const newDoc = {
                data: {
                  ...doc1.data,
                  [value]: 1,
                },
              };

              // ioredis supports all Redis commands:
              console.log("Setting value", JSON.stringify(newDoc));
              await redis.set("doc", JSON.stringify(newDoc));
            } else {
              // Initialise the first `doc` object value.
              let newDoc = {
                data: {
                  [value]: 1,
                },
              };

              console.log("newDoc", newDoc);
              console.log("Setting value", JSON.stringify(newDoc));
              await redis.set("doc", JSON.stringify(newDoc));
            }

            // Release the lock.
            await lock.unlock();
            resolve(value);
          } catch (err) {
            console.error("Error when trying to set:", value);
            console.error(err);
            reject(err);
          }
        })
    );

    // Wait for all promises to resolve.
    await Promise.all(allPromises);

    // Find the final result
    const endResult = await redis.get("doc");
    console.log("value", endResult);

    // Check how many keys exist in the object - we want 100
    console.log(Object.keys(JSON.parse(endResult).data).length);

    // Cleanup
    await redis.del("doc");
  } catch (err) {
    console.error("There was an error:", err);
    // Here we would handle the error.
  } finally {
    // Disconnect from the Redis client
    await redis.disconnect();
    console.log("Completed");
  }
}

main();
```

Our above code has been modified to use the Redlock library with some sensible defaults to demonstrate our database isolation.

Redlock works by creating an instance whose first argument is an array of Redis clients (we are only using one) and whose second argument is an options object.

In our main function, the main difference is now in the promises that we return.

Now, we run a try/catch block where we first use Redlock to acquire a lock that we named a (the name is arbitrary, but attempting to acquire a lock that is currently in use will cause a retry).

After acquiring the lock, we run the logic required to update the value in Redis and finally release the lock with the unlock method.

Similar to before, we wait for all promises, although we also have a catch block in case any promises are rejected and a finally block to run our disconnect.

What happens now? Run node index.js to find out:

```shell
$ node index.js
Starting...
# Initial value
newDoc { data: { '1': 1 } }
# First iteration
Setting value {"data":{"1":1}}
Setting value {"data":{"1":1,"2":1}}
# Third iteration: note that the asynchronous operations are not always written in order of the array
Setting value {"data":{"1":1,"2":1,"51":1}}
Setting value {"data":{"1":1,"2":1,"51":1,"93":1}}
# ... many more iterations ...
# 100th and final iteration
Setting value {"data":{"1":1,"2":1,"3":1,"4":1,"5":1,"6":1,"7":1,"8":1,"9":1,"10":1,"11":1,"12":1,"13":1,"14":1,"15":1,"16":1,"17":1,"18":1,"19":1,"20":1,"21":1,"22":1,"23":1,"24":1,"25":1,"26":1,"27":1,"28":1,"29":1,"30":1,"31":1,"32":1,"33":1,"34":1,"35":1,"36":1,"37":1,"38":1,"39":1,"40":1,"41":1,"42":1,"43":1,"44":1,"45":1,"46":1,"47":1,"48":1,"49":1,"50":1,"51":1,"52":1,"53":1,"54":1,"55":1,"56":1,"57":1,"58":1,"59":1,"60":1,"61":1,"62":1,"63":1,"64":1,"65":1,"66":1,"67":1,"68":1,"69":1,"70":1,"71":1,"72":1,"73":1,"74":1,"75":1,"76":1,"77":1,"78":1,"79":1,"80":1,"81":1,"82":1,"83":1,"84":1,"85":1,"86":1,"87":1,"88":1,"89":1,"90":1,"91":1,"92":1,"93":1,"94":1,"95":1,"96":1,"97":1,"98":1,"99":1,"100":1}}
# Final value
value {"data":{"1":1,"2":1,"3":1,"4":1,"5":1,"6":1,"7":1,"8":1,"9":1,"10":1,"11":1,"12":1,"13":1,"14":1,"15":1,"16":1,"17":1,"18":1,"19":1,"20":1,"21":1,"22":1,"23":1,"24":1,"25":1,"26":1,"27":1,"28":1,"29":1,"30":1,"31":1,"32":1,"33":1,"34":1,"35":1,"36":1,"37":1,"38":1,"39":1,"40":1,"41":1,"42":1,"43":1,"44":1,"45":1,"46":1,"47":1,"48":1,"49":1,"50":1,"51":1,"52":1,"53":1,"54":1,"55":1,"56":1,"57":1,"58":1,"59":1,"60":1,"61":1,"62":1,"63":1,"64":1,"65":1,"66":1,"67":1,"68":1,"69":1,"70":1,"71":1,"72":1,"73":1,"74":1,"75":1,"76":1,"77":1,"78":1,"79":1,"80":1,"81":1,"82":1,"83":1,"84":1,"85":1,"86":1,"87":1,"88":1,"89":1,"90":1,"91":1,"92":1,"93":1,"94":1,"95":1,"96":1,"97":1,"98":1,"99":1,"100":1}}
# Number of keys in our `data` object is 100 which is what we want!
100
Completed
```

Above is a shortened version of the output, but you will notice that all of our updates now make it into the Redis cache as desired! Woo!

This is a very important point to note, as we are using Redlock to ensure that we are not reading from/writing to the same key at the same time. The lock is what enforces the isolation and correctness.
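To make the mechanics concrete, here is the earlier in-memory race sketch fixed with a tiny promise-chain mutex. This is not Redlock itself (the withLock helper is illustrative and does nothing distributed), but it shows the same principle of serialising the read-modify-write section:

```js
// `store` again stands in for the Redis key `doc`.
let store = JSON.stringify({ data: { 1: 1 } });

// A tiny promise-chain mutex: each critical section waits for the last.
let tail = Promise.resolve();
function withLock(criticalSection) {
  const run = tail.then(criticalSection);
  tail = run.catch(() => {}); // keep the chain alive if a section throws
  return run;
}

const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function writer(key) {
  await withLock(async () => {
    const snapshot = JSON.parse(store); // now reads the latest value
    await delay(5); // simulated latency, safely inside the critical section
    store = JSON.stringify({ data: { ...snapshot.data, [key]: 1 } });
  });
}

async function demo() {
  await Promise.all([2, 3, 4, 5].map((k) => writer(k)));
  // All 5 keys survive: the seed key plus all four writers.
  return Object.keys(JSON.parse(store).data).length;
}
```

Because each writer's read happens only after the previous writer's write, no update is based on a stale snapshot, which is exactly the guarantee the Redis lock gives us across processes.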

Too many retries

So what happens when the retry count is exceeded? Let's adjust our Redlock options to drop retryDelay from 100 to 10 and see what happens:

```js
// Here we pass our client to redlock.
const redlock = new Redlock(
  // You should have one client for each independent node
  // or cluster.
  [redis],
  {
    // The expected clock drift; for more details see:
    // http://redis.io/topics/distlock
    driftFactor: 0.01, // multiplied by lock ttl to determine drift time

    // The max number of times Redlock will attempt to lock a resource
    // before erroring.
    retryCount: 10,

    // the time in ms between attempts
    // UPDATE HERE!
    retryDelay: 10, // time in ms

    // the max time in ms randomly added to retries
    // to improve performance under high contention
    // see https://www.awsarchitectureblog.com/2015/03/backoff.html
    retryJitter: 200, // time in ms

    // The minimum remaining time on a lock before an extension is automatically
    // attempted with the `using` API.
    automaticExtensionThreshold: 500, // time in ms
  }
);
```

Again, run index.js:

```shell
$ node index.js
Starting...
# ... some iterations log ...
There was an error: LockError: Exceeded 10 attempts to lock the resource "a".
    at /Users/dennisokeeffe/code/blog-projects/hello-redis-redlock/node_modules/redlock/redlock.js:431:20
    at tryCatcher (/Users/dennisokeeffe/code/blog-projects/hello-redis-redlock/node_modules/bluebird/js/release/util.js:16:23)
    at Promise.errorAdapter [as _rejectionHandler0] (/Users/dennisokeeffe/code/blog-projects/hello-redis-redlock/node_modules/bluebird/js/release/nodeify.js:35:34)
    at Promise._settlePromise (/Users/dennisokeeffe/code/blog-projects/hello-redis-redlock/node_modules/bluebird/js/release/promise.js:601:21)
    at Promise._settlePromise0 (/Users/dennisokeeffe/code/blog-projects/hello-redis-redlock/node_modules/bluebird/js/release/promise.js:649:10)
    at Promise._settlePromises (/Users/dennisokeeffe/code/blog-projects/hello-redis-redlock/node_modules/bluebird/js/release/promise.js:725:18)
    at _drainQueueStep (/Users/dennisokeeffe/code/blog-projects/hello-redis-redlock/node_modules/bluebird/js/release/async.js:93:12)
    at _drainQueue (/Users/dennisokeeffe/code/blog-projects/hello-redis-redlock/node_modules/bluebird/js/release/async.js:86:9)
    at Async._drainQueues (/Users/dennisokeeffe/code/blog-projects/hello-redis-redlock/node_modules/bluebird/js/release/async.js:102:5)
    at Immediate.Async.drainQueues [as _onImmediate] (/Users/dennisokeeffe/code/blog-projects/hello-redis-redlock/node_modules/bluebird/js/release/async.js:15:14)
    at processImmediate (node:internal/timers:463:21) {
  attempts: 11
}
Completed
```

As you can see, an error was raised from a promise that attempted to lock the resource a more times than our limit of retries (set at 10).

As per usual, you need to handle this error appropriately (which I have not done for this demonstration). What counts as a "sensible default" for this example is not necessarily correct for you; that will depend on your application and its use case.
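Handling the error could mean surfacing it to the caller, falling back, or retrying at the application level. As a hedged sketch (this is not Redlock's actual implementation, though the option names deliberately mirror its config), a retry loop with jitter looks roughly like this:

```js
// A generic retry helper: one initial attempt plus `retryCount` retries,
// waiting retryDelay plus a random jitter between attempts.
async function retryWithBackoff(
  fn,
  { retryCount = 10, retryDelay = 100, retryJitter = 200 } = {}
) {
  let lastError;
  for (let attempt = 0; attempt <= retryCount; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === retryCount) break; // out of attempts
      const wait = retryDelay + Math.floor(Math.random() * retryJitter);
      await new Promise((resolve) => setTimeout(resolve, wait));
    }
  }
  throw lastError;
}

// Usage sketch with a fake "acquire" that is locked for the first two attempts:
async function demo() {
  let calls = 0;
  return retryWithBackoff(
    async () => {
      calls += 1;
      if (calls < 3) throw new Error("resource is locked");
      return "acquired";
    },
    { retryCount: 5, retryDelay: 1, retryJitter: 1 }
  );
}
```

Whether retrying like this, queueing the work, or failing fast is right depends entirely on how contested the resource is and what your callers can tolerate.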

Summary

After exploring the importance of isolation, we demonstrated an example of a Node.js script that incorrectly manages a Redis resource before adjusting our use case to be isolated using the Redlock library.

We finished off by showing what an error case may look like with the Redlock library.

Isolation is an important aspect of any database transaction, and hotly contested cache resources are no different. Hopefully today's discussion has made it clearer how to manage such cache resources in Node.js applications.

Dennis O'Keeffe