System Design: Design a Real-Time Leaderboard (Gaming / Competitive Apps)
Real-time leaderboards appear in gaming, coding competitions, e-commerce flash sales, and sports apps. The core challenge is updating rankings efficiently under high write throughput while serving ranked queries with low latency.
Requirements
Functional: update user score, get user’s rank, get top-K users with scores, get players near a given user (+/- N places).
Non-functional: low latency reads (P99 < 50ms), support millions of users, handle thousands of score updates per second, real-time updates (rank changes visible within seconds).
Core Data Structure: Redis Sorted Set (ZSET)
Redis Sorted Sets are purpose-built for leaderboards. They maintain members sorted by score with O(log n) insert/update and O(log n + k) range queries.
# Score operations
redis.zadd("leaderboard:global", {user_id: score}) # insert/update O(log n)
redis.zincrby("leaderboard:global", delta, user_id) # increment score O(log n)
# Rank queries (0-indexed; with ZREVRANK, rank 0 = highest score)
redis.zrevrank("leaderboard:global", user_id) # rank from top O(log n)
redis.zscore("leaderboard:global", user_id) # get score O(1)
# Top-K leaderboard
redis.zrevrange("leaderboard:global", 0, 99, withscores=True) # top 100 O(log n + k)
# Players near user (rank ± 5)
rank = redis.zrevrank("leaderboard:global", user_id)
redis.zrevrange("leaderboard:global", max(0, rank-5), rank+5, withscores=True)
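To make the semantics of the four operations concrete without a running Redis instance, here is a minimal in-memory model. It is illustrative only: it re-sorts on every query (O(n log n)), whereas Redis sorted sets answer these in O(log n) / O(log n + k) via a skiplist.

```python
# In-memory model of the four leaderboard operations.
# Illustrative only -- Redis ZSETs do this in O(log n), not O(n log n).

class Leaderboard:
    def __init__(self):
        self.scores = {}  # user_id -> score

    def zadd(self, user_id, score):
        self.scores[user_id] = score

    def zincrby(self, delta, user_id):
        self.scores[user_id] = self.scores.get(user_id, 0) + delta
        return self.scores[user_id]

    def _ranked(self):
        # Highest score first; ties broken by user_id for determinism
        return sorted(self.scores, key=lambda u: (-self.scores[u], u))

    def zrevrank(self, user_id):
        return self._ranked().index(user_id)  # 0 = top player

    def top_k(self, k):
        return [(u, self.scores[u]) for u in self._ranked()[:k]]

    def around(self, user_id, n=5):
        ranked = self._ranked()
        r = ranked.index(user_id)
        window = ranked[max(0, r - n): r + n + 1]
        return [(u, self.scores[u]) for u in window]
```

The class names and method signatures mirror the Redis commands above but are this sketch's own invention.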
Architecture for Scale
Score Update
│
▼
Message Queue (Kafka)
│ buffered, ordered
▼
Score Processor (consumers)
│ batch or per-event
├──▶ Redis ZADD (primary leaderboard)
└──▶ PostgreSQL (durable score history)
Leaderboard Read
│
▼
API Server
│ cache hit
├──▶ Redis ZREVRANGE (top-K from in-memory sorted set)
│ cache miss / deep pagination
└──▶ PostgreSQL (fallback for archival data)
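The write path above can be sketched as a consumer that dual-writes each buffered event. This is a stand-in sketch: the "Kafka" source is a plain list of tuples, the Redis ZSET is a dict, and the PostgreSQL history is an append-only list.

```python
# Sketch of the write path: consume buffered score events, then
# dual-write to the serving store (Redis stand-in) and the durable
# store (PostgreSQL stand-in). All stores here are in-memory stand-ins.

def process_score_events(events, serving_store, history_log):
    """events: iterable of (user_id, delta) tuples, as if read from Kafka."""
    for user_id, delta in events:
        # Equivalent of ZINCRBY on the primary leaderboard
        serving_store[user_id] = serving_store.get(user_id, 0) + delta
        # Append-only durable history for auditing / rebuilds
        history_log.append((user_id, delta))

serving = {}
history = []
process_score_events([("alice", 50), ("bob", 80), ("alice", 30)], serving, history)
```

In production the two writes have different failure modes; a common choice is to treat the history append as the commit point and let the serving store be rebuilt from it (see Persistence and Durability below).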
Segmented Leaderboards
A global leaderboard with 100M players is less engaging than smaller segments. Maintain separate sorted sets per segment:
- Regional: leaderboard:region:US, leaderboard:region:EU
- Friends: leaderboard:friends:{user_id} (built from the social graph)
- Time-boxed: leaderboard:weekly:2026-W15, leaderboard:daily:2026-04-16
- Game mode: leaderboard:mode:ranked, leaderboard:mode:casual
Weekly/daily leaderboards are created fresh each period; expire old ones with Redis TTL.
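One score update fans out to every segment the player belongs to. A hypothetical key-derivation helper (the function name and parameters are this sketch's own, following the key conventions listed above):

```python
from datetime import date

# Hypothetical helper: derive every segment key a score update touches.
# Key names follow the conventions listed above.

def segment_keys(region, mode, friend_ids, day=None):
    d = day or date.today()
    iso_year, iso_week, _ = d.isocalendar()
    keys = [
        "leaderboard:global",
        f"leaderboard:region:{region}",
        f"leaderboard:mode:{mode}",
        f"leaderboard:weekly:{iso_year}-W{iso_week:02d}",
        f"leaderboard:daily:{d.isoformat()}",
    ]
    # The player's score also lands on each friend's personal leaderboard
    keys += [f"leaderboard:friends:{fid}" for fid in friend_ids]
    return keys
```

The update itself would then be one pipelined ZINCRBY per key.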
Handling Tie-Breaking
When multiple users have equal scores, use a composite score to break ties deterministically:
# Encode score + timestamp as a single float
# Higher score = higher rank; earlier timestamp wins ties
import time

def composite_score(score, timestamp=None):
    t = time.time() if timestamp is None else timestamp  # `or` would mistreat t=0
    # Integer part carries the score; the fraction is larger for earlier
    # timestamps, so ties break toward the earlier submission.
    # Caveat: float64 has ~15-16 significant digits, so very large scores
    # can swallow the tie-breaking fraction.
    return score + (1.0 - (t / 1e12))

# Or encode the tiebreaker into the member name: give all members the same
# ZSET score and let the zero-padded string carry the order lexicographically
member = f"{score:020d}:{user_id}"  # zero-padded score + user_id for lex sort
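The member-encoding trick can be verified without Redis: because the score is zero-padded, plain descending string sort orders by score first. Note that within a tie, descending string order favors the lexicographically *larger* user_id; for earlier-submission-wins semantics you would encode an inverted timestamp instead. A small sketch:

```python
# Demonstrates the member-encoding tiebreaker: zero-padded score first,
# so plain string sort orders by score, then by user_id within ties.

def encode_member(score, user_id):
    return f"{score:020d}:{user_id}"

members = [
    encode_member(300, "carol"),
    encode_member(500, "alice"),
    encode_member(300, "bob"),
]
ranked = sorted(members, reverse=True)  # highest score first
```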
Pagination Beyond Top-K
Redis ZREVRANGE by rank is O(log n + k) at any offset — the skiplist tracks node spans, so fetching rank 1M–1M+100 is nearly as cheap as the top 100. The practical problems with deep pagination are different: deep pages shift constantly under write load, are rarely viewed, and add load to the hot serving node. Solutions:
- For top 1000: serve directly from Redis
- For deep pagination: maintain a secondary index in PostgreSQL with rank materialized periodically
- Approximate ranks: for ranks beyond a threshold, show “Top X%” instead of exact rank
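The "Top X%" fallback is a one-liner once you have the user's rank (ZREVRANK) and the set's size (ZCARD). A sketch, with the threshold and rounding as this example's own assumptions:

```python
import math

# Approximate-rank display: exact rank up to a threshold, "Top X%" beyond.
# rank is 0-indexed (as returned by ZREVRANK); total is ZCARD of the set.

def display_rank(rank, total, exact_threshold=1000):
    if rank < exact_threshold:
        return f"#{rank + 1}"
    # Round the percentile up so the displayed claim never flatters the user
    percentile = math.ceil((rank + 1) / total * 100)
    return f"Top {percentile}%"
```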
Score Update Throughput
At 10,000 updates/second:
- Direct Redis writes handle this easily (Redis ZADD: ~100K ops/sec on a single node)
- For bursts (end-of-match batch updates), use Kafka to buffer and process asynchronously
- Use Redis pipeline or MULTI/EXEC to batch multiple ZADDs in a single round-trip
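Batching can also happen client-side before the pipeline: coalescing multiple increments per user shrinks the batch itself. A stand-in sketch (the dict plays the role of the ZSET; with redis-py the flush would be `pipe = r.pipeline()`, one `pipe.zincrby(...)` per entry, then `pipe.execute()`):

```python
from collections import defaultdict

# Client-side batching sketch: coalesce increments per user, then flush
# the whole batch in one round-trip instead of one round-trip per event.

class BatchedUpdater:
    def __init__(self, store):
        self.store = store             # stand-in for the Redis ZSET
        self.pending = defaultdict(int)
        self.round_trips = 0

    def zincrby(self, delta, user_id):
        self.pending[user_id] += delta  # buffered; no network call yet

    def flush(self):
        for user_id, delta in self.pending.items():
            self.store[user_id] = self.store.get(user_id, 0) + delta
        self.pending.clear()
        self.round_trips += 1           # one round-trip for the whole batch
```

The trade-off is freshness: updates are invisible until the next flush, so flush intervals should stay well under the "visible within seconds" requirement.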
Persistence and Durability
- Redis is in-memory — enable AOF persistence for durability
- Write scores to PostgreSQL for permanent record (scores are valuable data)
- Redis is the serving layer; PostgreSQL is the source of truth for auditing
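With PostgreSQL as the source of truth, a lost or corrupted Redis leaderboard can be rebuilt by replaying the score history. A minimal sketch, assuming the history is a list of (user_id, delta) rows such as `SELECT user_id, delta FROM score_events` would return:

```python
# Recovery sketch: replay the durable score history to reconstruct the
# serving leaderboard. The dict stands in for ZINCRBY against a fresh ZSET.

def rebuild_leaderboard(history_rows):
    leaderboard = {}
    for user_id, delta in history_rows:
        leaderboard[user_id] = leaderboard.get(user_id, 0) + delta
    return leaderboard
```

For large histories you would replay from a periodic snapshot plus the tail of events rather than the full log.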
Interview Checklist
- Lead with Redis Sorted Set — it’s the canonical answer for leaderboard storage
- Cover the 4 operations: update score, get rank, top-K, players near user
- Discuss segmented leaderboards (regional, friends, time-boxed)
- Address tie-breaking with composite scores
- Explain the Redis + PostgreSQL dual-write pattern for durability
- For high write throughput: Kafka buffer → async processors → Redis ZADD
Asked at: DoorDash Interview Guide