What Is a Loyalty and Rewards System?
A loyalty program tracks user activity and awards points redeemable for discounts or products. Starbucks Rewards, airline frequent-flyer miles, and credit-card points programs are examples. Core challenges: accruing points without race conditions, expiring them on schedule, keeping transactions consistent, and scaling to tens of millions of members.
System Requirements
Functional
- Award points for purchases, referrals, reviews, login streaks
- Redeem points against purchases
- Points expiry after 12 months
- Tier system: Bronze/Silver/Gold based on spend
- Promotional multipliers: 2x/3x points during campaigns
Non-Functional
- Strong consistency — no lost or duplicated points (billing depends on it)
- Idempotency — retries must not double-award
- 100M members, 10K transactions/second peak
Core Data Model
members: id, user_id, tier, total_earned, total_redeemed
point_transactions: id, member_id, type(earn/redeem/expire), amount,
reference_id, reference_type, created_at, expires_at
point_balances: member_id, available_points, tier_points_ytd
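The model above can be sketched as DDL. This is a minimal sketch using SQLite for portability; a production system would likely use Postgres with appropriate column types, and the exact constraints are assumptions beyond the column lists given here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE members (
    id INTEGER PRIMARY KEY,
    user_id INTEGER NOT NULL,
    tier TEXT NOT NULL DEFAULT 'bronze',
    total_earned INTEGER NOT NULL DEFAULT 0,
    total_redeemed INTEGER NOT NULL DEFAULT 0
);
CREATE TABLE point_transactions (
    id INTEGER PRIMARY KEY,
    member_id INTEGER NOT NULL REFERENCES members(id),
    type TEXT NOT NULL CHECK (type IN ('earn', 'redeem', 'expire')),
    amount INTEGER NOT NULL,
    reference_id TEXT NOT NULL,
    reference_type TEXT NOT NULL,
    created_at TEXT NOT NULL DEFAULT (datetime('now')),
    expires_at TEXT,
    UNIQUE (reference_id, reference_type)  -- idempotency key (see accrual section)
);
CREATE TABLE point_balances (
    member_id INTEGER PRIMARY KEY REFERENCES members(id),
    available_points INTEGER NOT NULL DEFAULT 0,
    tier_points_ytd INTEGER NOT NULL DEFAULT 0
);
""")
```

The unique constraint on (reference_id, reference_type) is what makes accrual idempotent, as discussed in the next section.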
Points Accrual — Idempotency
Risk: loyalty service crashes after processing but before committing Kafka offset — event redelivered, double points. Solution: unique constraint on (reference_id, reference_type):
INSERT INTO point_transactions
(member_id, type, amount, reference_id, reference_type)
VALUES (?, 'earn', ?, 'order_123', 'purchase')
ON CONFLICT (reference_id, reference_type) DO NOTHING;
Even if processed twice, only one row is inserted. Safe to replay Kafka events.
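A runnable sketch of this idempotent insert, using SQLite's equivalent `ON CONFLICT` clause (the member id, amount, and order id are made-up example values):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE point_transactions (
    id INTEGER PRIMARY KEY,
    member_id INTEGER, type TEXT, amount INTEGER,
    reference_id TEXT, reference_type TEXT,
    UNIQUE (reference_id, reference_type))""")

def award_points(member_id, amount, reference_id, reference_type):
    # Replaying the same event is a no-op thanks to the unique constraint.
    conn.execute(
        """INSERT INTO point_transactions
           (member_id, type, amount, reference_id, reference_type)
           VALUES (?, 'earn', ?, ?, ?)
           ON CONFLICT (reference_id, reference_type) DO NOTHING""",
        (member_id, amount, reference_id, reference_type))
    conn.commit()

award_points(1, 50, "order_123", "purchase")
award_points(1, 50, "order_123", "purchase")  # duplicate delivery -- ignored
count = conn.execute("SELECT COUNT(*) FROM point_transactions").fetchone()[0]
print(count)  # 1
```

The second call hits the conflict clause and inserts nothing, which is exactly the behavior a Kafka consumer needs to survive redelivery.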
Balance — Ledger + Cached Sum
Update cached balance atomically with each transaction:
BEGIN TRANSACTION;
INSERT INTO point_transactions (...) VALUES (...);
UPDATE point_balances
SET available_points = available_points + ?
WHERE member_id = ?;
COMMIT;
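The same transaction can be sketched in application code; here sqlite3's connection context manager stands in for the explicit BEGIN/COMMIT, and the helper name is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE point_transactions (
    id INTEGER PRIMARY KEY, member_id INTEGER, type TEXT, amount INTEGER);
CREATE TABLE point_balances (
    member_id INTEGER PRIMARY KEY, available_points INTEGER NOT NULL);
INSERT INTO point_balances VALUES (1, 0);
""")

def record_earn(member_id, amount):
    # Ledger row and cached balance commit together or not at all.
    with conn:  # sqlite3 context manager wraps a transaction
        conn.execute(
            "INSERT INTO point_transactions (member_id, type, amount) "
            "VALUES (?, 'earn', ?)",
            (member_id, amount))
        conn.execute(
            "UPDATE point_balances SET available_points = available_points + ? "
            "WHERE member_id = ?",
            (amount, member_id))

record_earn(1, 120)
balance = conn.execute(
    "SELECT available_points FROM point_balances WHERE member_id = 1"
).fetchone()[0]
print(balance)  # 120
```

If either statement fails, the transaction rolls back and the ledger and cached balance stay in agreement.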
Redemption — Preventing Overdraft
User tries to redeem 500 points but only has 300. Pessimistic locking:
BEGIN TRANSACTION;
SELECT available_points FROM point_balances
WHERE member_id = ? FOR UPDATE; -- row lock
IF available_points < 500: ROLLBACK;
INSERT INTO point_transactions (type='redeem', amount=-500, ...);
UPDATE point_balances SET available_points = available_points - 500;
COMMIT;
FOR UPDATE prevents two concurrent redemptions from both succeeding with insufficient balance. Low contention (one user’s balance) justifies pessimistic over optimistic locking here.
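A sketch of the redemption flow. SQLite has no FOR UPDATE (its single-writer transactions already serialize updates), so this shows the balance check and early-exit logic; against Postgres, the SELECT would add FOR UPDATE as shown above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE point_transactions (
    id INTEGER PRIMARY KEY, member_id INTEGER, type TEXT, amount INTEGER);
CREATE TABLE point_balances (
    member_id INTEGER PRIMARY KEY, available_points INTEGER NOT NULL);
INSERT INTO point_balances VALUES (1, 300);
""")

def redeem(member_id, points):
    """Return True if redeemed, False if the balance is insufficient."""
    with conn:  # one transaction; in Postgres the SELECT would use FOR UPDATE
        (available,) = conn.execute(
            "SELECT available_points FROM point_balances WHERE member_id = ?",
            (member_id,)).fetchone()
        if available < points:
            return False  # nothing written yet, so nothing to roll back
        conn.execute(
            "INSERT INTO point_transactions (member_id, type, amount) "
            "VALUES (?, 'redeem', ?)",
            (member_id, -points))
        conn.execute(
            "UPDATE point_balances SET available_points = available_points - ? "
            "WHERE member_id = ?",
            (points, member_id))
        return True

print(redeem(1, 500))  # False -- only 300 available
print(redeem(1, 200))  # True
```

Storing the redemption as a negative amount keeps the ledger convention consistent: summing all transaction amounts for a member reproduces the cached balance.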
Points Expiry
- Store expires_at on each earn transaction
- Nightly batch job: compute unredeemed amount of expiring transactions, insert expiry transaction
- FIFO redemption: consume oldest non-expired points first
- Notify members 30/7/1 days before expiry
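The nightly batch can be sketched as follows. The `redeemed_amount` column is an assumed bookkeeping field (one way to track partial FIFO consumption of each earn transaction); it is not part of the schema listed earlier:

```python
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE point_transactions (
    id INTEGER PRIMARY KEY, member_id INTEGER, type TEXT, amount INTEGER,
    redeemed_amount INTEGER NOT NULL DEFAULT 0,  -- assumed FIFO-consumption tracker
    expires_at TEXT)""")
# Two earn transactions: one expiring today (40 of 100 points already redeemed),
# one far in the future.
conn.execute("INSERT INTO point_transactions VALUES (1, 1, 'earn', 100, 40, ?)",
             (date.today().isoformat(),))
conn.execute("INSERT INTO point_transactions VALUES (2, 1, 'earn', 50, 0, '2099-01-01')")

def expire_points(today):
    # For each earn expiring by `today`, the unredeemed remainder
    # becomes a negative expiry transaction.
    with conn:
        expiring = conn.execute(
            """SELECT id, member_id, amount - redeemed_amount
               FROM point_transactions
               WHERE type = 'earn' AND expires_at <= ? AND amount > redeemed_amount""",
            (today,)).fetchall()
        for txn_id, member_id, remaining in expiring:
            conn.execute(
                "INSERT INTO point_transactions (member_id, type, amount) "
                "VALUES (?, 'expire', ?)",
                (member_id, -remaining))
            conn.execute(
                "UPDATE point_transactions SET redeemed_amount = amount WHERE id = ?",
                (txn_id,))

expire_points(date.today().isoformat())
expired = conn.execute(
    "SELECT amount FROM point_transactions WHERE type = 'expire'").fetchone()[0]
print(expired)  # -60: the expiring 100-point earn had 40 already redeemed
```

A real batch would also decrement `point_balances.available_points` in the same transaction, following the ledger + cached-sum pattern above.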
Tier System and Promotions
Tiers computed by nightly job on rolling-12-month spend. Store tier_points_ytd in point_balances. Promotions: at accrual time, check active promotions and apply highest multiplier. Store multiplier in point_transactions for auditing.
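Applying the highest active multiplier at accrual time can be sketched as below; the promotion record shape is an assumption for illustration:

```python
from datetime import datetime

# Hypothetical promotion records: (multiplier, start, end).
PROMOTIONS = [
    (2, datetime(2024, 11, 1), datetime(2024, 11, 30)),
    (3, datetime(2024, 11, 25), datetime(2024, 11, 29)),  # overlapping flash event
]

def points_for_purchase(base_points, at):
    """Apply the highest multiplier among promotions active at `at`."""
    active = [m for m, start, end in PROMOTIONS if start <= at <= end]
    multiplier = max(active, default=1)
    # Persist `multiplier` on the point_transactions row for auditing.
    return base_points * multiplier, multiplier

print(points_for_purchase(100, datetime(2024, 11, 27)))  # (300, 3)
print(points_for_purchase(100, datetime(2024, 12, 5)))   # (100, 1)
```

Returning the multiplier alongside the points makes it easy to store both on the transaction row, so audits can reconstruct why a given amount was awarded.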
Caching
- Member balance/tier: Redis 30-second TTL, invalidated on each transaction
- Active promotions: Redis 5-minute TTL
- Transaction history: paginated, 60-second TTL
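The cache behavior above (TTL plus explicit invalidation on each transaction) can be illustrated with a small in-process stand-in for Redis; the class and key names are made up for the sketch:

```python
import time

class TtlCache:
    """In-process stand-in for the Redis cache described above."""
    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None or entry[1] < time.monotonic():
            return None  # miss or expired -- caller falls back to the database
        return entry[0]

    def set(self, key, value, ttl):
        self._store[key] = (value, time.monotonic() + ttl)

    def invalidate(self, key):
        self._store.pop(key, None)

cache = TtlCache()
cache.set("balance:1", 300, ttl=30)  # member balance, 30-second TTL
print(cache.get("balance:1"))        # 300
cache.invalidate("balance:1")        # called after each points transaction
print(cache.get("balance:1"))        # None
```

The short TTL bounds staleness even if an invalidation is missed, while invalidate-on-write keeps balances fresh in the common case.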
Interview Tips
- Idempotency via unique constraint on reference_id is the core reliability pattern.
- Ledger + cached balance = double-entry accounting — shows financial systems knowledge.
- FOR UPDATE for redemption: low contention but high cost of overdraft justifies pessimistic locking.
- FIFO expiry and tier computation are product details that differentiate senior candidates.