Coupon code scaling with Rehman Dakait
Yo yo Yo yo Yoooooooooo 🙋🏼♂️,
So today we are going to explore how to solve a coupon code problem which is not that simple and at the same time not that hard, but it will take some time…
And along the way, we will have one good friend who will help us understand the thing even better.
The legend, the myth, Rehman Dakait aka Akshaye Khanna

Let’s get started……..
Chapter 1: The issue
Let’s understand the problem first, because without an issue, what is the meaning of the solution?
We have to build a coupon code system where a coupon code can be used against a purchase, with a fixed maximum usage count. (Assume we are using MariaDB/MySQL here.)
For example,
let’s say coupon Gold20 can only be used by 20 users to purchase item x. If 30 users come with the same coupon at the same time, we should reject the extra 10 users.
Simple, right?
The first solution anyone can think of can look something like
data = Get Data of coupon Gold20 From DB
if data.used < data.max_allowed:
    data.used++
    Update Data of coupon Gold20 in DB
else:
    Nikal....

(Note that the check is used < max_allowed; with <=, a coupon starting at used = 0 would allow one extra use.)

And boom, we have solved the problem. It was easy, right? Finally, we can say.

But there are many issues with the above code once you think about concurrency. If 30 users come at the same time, then what? Will our system be able to protect our coupon code’s integrity, or will our coupon code logic die the same way Rehman’s son died when he was sent out with noob bodyguards?
Chapter 2: The Deadlock
Let’s just dry run our code with 30 parallel requests. 30 users come at the same time and read the coupon data from the DB, and at that moment the used count is zero for all of them. They all pass the check and update the used counter, which had a value of zero when they read it. So in the end, 30 users get a discount, and your used counter ends up with a more or less random value based on the CPU’s mood at the time. Maybe it will be 1, 2, 3…, you get the point.
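The dry run above can be replayed deterministically. This is a minimal sketch, not real DB code: the names (read, write, snap_a) are made up for illustration, and the “DB” is just a dict, so we can force the exact interleaving where two requests read before either writes.

```python
# Deterministic replay of the lost-update race: two requests, one slot left.
db = {"Gold20": {"used": 19, "max_allowed": 20}}  # only 1 use remaining

def read(code):
    # Step 1: each request reads its own stale snapshot of the row.
    return dict(db[code])

def write(code, snapshot):
    # Step 3: each request writes back its snapshot, clobbering the other.
    db[code] = snapshot

# Both requests read BEFORE either writes -- the bad interleaving.
snap_a = read("Gold20")
snap_b = read("Gold20")

granted = 0
for snap in (snap_a, snap_b):
    if snap["used"] < snap["max_allowed"]:  # both see used == 19, both pass
        snap["used"] += 1
        write("Gold20", snap)
        granted += 1

print(granted)                # 2 discounts handed out...
print(db["Gold20"]["used"])   # ...but the counter only moved to 20
```

Two discounts go out, but the counter records only one: exactly the lost update described above.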
And this is the most common problem we come across when trying to build a system that scales while handling concurrency at the same time.
The first solution is to just put a lock. And that’s it. You lock the row until one user buys the product, and the others wait (or fail with a lock timeout/deadlock error) if they come at the same time.
Our pseudo code can be updated like
data = Get Data of coupon Gold20 From DB and lock the Gold20 row
if data.used < data.max_allowed:
    data.used++
    Update Data of coupon Gold20 in DB and release the Gold20 row
else:
    Nikal....

We can also remove all the runtime logic with a single SQL UPDATE plus a SELECT. You can also add an index on the coupon code column so the update query is as fast as possible, but think before applying it.
Update data if coupon is Gold20 and used < max_allowed

And you are all set. Now, even if 30 users come at the same time, our system will only allow 20 uses of the given coupon. And that’s it. Yeahhhhhhhhhhhhhhhhhh.
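The single-UPDATE idea can be sketched like this, using sqlite3 as a stand-in for MariaDB/MySQL. The table and column names (coupons, code, used, max_allowed) are assumptions for illustration; the point is that the check and the increment happen in one atomic statement, so the database serializes concurrent callers for us.

```python
import sqlite3

# In-memory DB standing in for MariaDB/MySQL; schema names are assumed.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE coupons (code TEXT PRIMARY KEY, used INTEGER, max_allowed INTEGER)"
)
conn.execute("INSERT INTO coupons VALUES ('Gold20', 0, 20)")

def redeem(code):
    # Check + increment in ONE statement: no read-then-write window.
    cur = conn.execute(
        "UPDATE coupons SET used = used + 1 WHERE code = ? AND used < max_allowed",
        (code,),
    )
    conn.commit()
    return cur.rowcount == 1  # 1 row updated -> discount granted

results = [redeem("Gold20") for _ in range(30)]
print(sum(results))  # 20 grants; the other 10 get "Nikal"
```

The `rowcount` check is how the application learns whether it won a slot, without a separate SELECT.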

Now let’s scale this system to 10 VUs (VU = virtual user, i.e., a parallel user), 100 VUs, 1000 VUs… Our end goal is to maintain the coupon’s counter, with the same check, even if all users hit the same coupon.
Now life becomes complex. Here I will give only relative numbers, so do not consider them final.
- Without any coupon code check (100 VUs – 10m0s) –> 1% Error rate
- With one coupon code check (100 VUs – 10m0s) –> 10% Error rate
And one more thing to note here: you should always load test against a database holding a very large amount of data. Load testing an empty table will only surface the basic deadlocks. On localhost, I was able to pass all cases with 100 VUs, LOL, so real data is needed. Then you can optimize.
A 10% error rate is high. Just think how much this system needs to be over-engineered (hehe 😝) to solve this. The main issue is lock contention and deadlocks: many threads try to update the same row at the same time, the lock blocks them from proceeding, and requests fail. Now we have to find a solution to our problem, just like Rehman Dakait had to find a way to take over Lyari.
Chapter 3: The Way Outs
Now let’s try to solve the problem and scale the system. There are many basic solutions to this problem. It’s up to you what you want to select as per your use case.
Personally, if I were building a coupon system for my product, then I would go with the default DB lock system only. Why even give more discounts to users (-_-).
Let’s explore a couple of options and see what makes more sense for you and what makes more sense for me.

Why even need to handle concurrency?
A typical solution: why do you even want to handle the concurrency with a direct DB hit? Just put a queue in between and then process requests one by one or in batches.
The trade-off is that the user will have to wait a bit. I have not tried this for my use case yet, but it should work most of the time.
Same as what Hamza did the first time he saved our myth Rehman Dakait from SP Chaudhary Aslam: instead of facing him directly, he focused on handling other issues.
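The queue idea can be sketched in-process: a single worker drains requests one by one, so the coupon counter only ever has one writer and needs no lock. Names here are illustrative, and a real setup would use a message broker (and reply to users asynchronously), not `queue.Queue`.

```python
import queue
import threading

coupon = {"used": 0, "max_allowed": 20}
requests = queue.Queue()
results = []

def worker():
    while True:
        user_id = requests.get()
        if user_id is None:  # sentinel: stop the worker
            break
        # No lock needed: this thread is the only one touching the counter.
        if coupon["used"] < coupon["max_allowed"]:
            coupon["used"] += 1
            results.append((user_id, True))
        else:
            results.append((user_id, False))

t = threading.Thread(target=worker)
t.start()
for uid in range(30):  # 30 users hit the same coupon
    requests.put(uid)
requests.put(None)
t.join()

print(sum(ok for _, ok in results))  # 20 granted, 10 rejected
```

Concurrency is handled by serialization: the waiting moves from a row lock into the queue.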

Change the way you have stored the data
If you look at the problem, the main issue is that more than one process needs to work with the same row at the same time. So, why not just split that row into multiple rows and decrease the chance of collision?
For example, if 10 requests come for row ‘x’, you split ‘x’ into 5 rows ‘y’, so each ‘y’ handles about 2 requests. In our case, we can split the coupon’s max count across sub-rows with some logic, and just make sure each request picks a row at random to spread the load.
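A minimal sketch of that sharding idea, with made-up names and counts: the 20-use budget is spread across 5 sub-rows of 4 uses each, each request starts at a random shard to reduce collisions, and then falls back to the other shards so a full shard doesn’t reject a valid redemption.

```python
import random

NUM_SHARDS = 5
# One hot row split into 5 sub-rows; 4 + 4 + 4 + 4 + 4 = 20 total uses.
shards = {f"Gold20:{i}": {"used": 0, "max_allowed": 4} for i in range(NUM_SHARDS)}

def redeem():
    # Random starting shard spreads concurrent requests across rows;
    # scanning the rest guarantees we find a free slot if one exists.
    start = random.randrange(NUM_SHARDS)
    for offset in range(NUM_SHARDS):
        key = f"Gold20:{(start + offset) % NUM_SHARDS}"
        row = shards[key]
        if row["used"] < row["max_allowed"]:
            row["used"] += 1  # in a real DB this is the atomic conditional UPDATE
            return True
    return False  # every shard exhausted: Nikal

grants = sum(redeem() for _ in range(30))
print(grants)  # exactly 20: the shard budgets still add up to the coupon max
```

Each shard row sees far fewer concurrent writers than the single hot row did, which is the whole point.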
Note:
This is more like solving the issue with the RDBMS alone. There are a couple more hacks that some SQL DBs provide; you can use those as well, but at your own risk.
For example, in MariaDB you can change the isolation level to READ COMMITTED and handle the commits on your own. Deadlocks will be much rarer, but now you are the “karta dharta” of your code.

Use any 3rd party DB/Service with RDBMS
Now comes the most interesting part of this problem’s solutions. This one is the most powerful, and I would say the most complete, solution we can choose.
In our coupon problem, consistency is key, so we can’t just use any 3rd-party service that solves the issue at the cost of losing consistency. There are many open-source DBs out there that can handle a very heavy load of such queries. You can just Google it; you will find what you need.
There is one more 3rd party DB we have. You can guess. Let me give you some time.
3 2 1…… Redis
Ok, it is not like I have said something no one could guess. If you ask almost any AI about the coupon code issue, or any counter++ problem with concurrency, the answer is Redis. Nothing new.
Redis is an in-memory data store. There is also persistent Redis, which can be used as a DB. Again, latency will be higher because disk I/O now comes into play. Anyway, whenever you want data to be persistent, you have to pay the disk.
Random thought
Everything is disk I/O in the end. No matter how much AI takes over, it can’t exist without disk I/O. Even the AI you trust and have so much faith in needs disk I/O to exist.
Okay, coming back to the issue: Redis is single-threaded. There is one event loop, and it internally uses I/O multiplexing to speed things up. You can read up on it or Google it.
So now, we can update our code like (Assuming coupon data exists in Redis)
Update in Redis if coupon is Gold20 and used < max_allowed

Note: For complex logic, Redis provides Lua scripting, which I used here, since our logic is not just used++; we want to increment only when the condition holds.
Redis is single-threaded but damn fast, damn like damn. Like dammmmm. You can’t even guess how fast, the same as you can’t guess the aura of Rehman Dakait.
I tried a few 3rd-party options, and Redis worked very well: it brought deadlocks down to almost zero, even with 1000 VUs hammering one coupon code.
So, this is it, finally we have met the aura of Rehman Dakait.
Wait bro wait, nah nah nah.
The 3rd-party/Redis solutions are not as easy as you think. With Redis, you also have to handle rollbacks.
- What if the Redis counter is updated and your request fails afterwards?
- Even with Redis, you have to make sure data stays persistent and consistent (just imagine someone running clear-cache :bomb).
- DB-level rollbacks are more reliable than your code (sorry, just kidding), but yes, rollbacks are hard to maintain; you have to be very careful in this case.
And that’s it from my side. I have tried many solutions. Maybe I will also write a part 2 of this where I can give you a fully detailed solution with code. So stay tuned.
You have to release your lock sometimes to become fast in life, but at the same time, make sure consistency is not compromised by random releasing… :)
Till then, bye bye….






