Twitter/X Interview Guide 2026: Timeline Algorithms, Real-Time Systems, and Content Moderation at Scale

Twitter/X has undergone massive changes since Elon Musk’s acquisition in 2022, including significant headcount reduction (~80%) and a complete rebuild of many systems. The engineering team is now smaller and more focused on performance and monetization. This guide covers what to expect in SWE interviews at X (formerly Twitter) in 2026.

The X/Twitter Interview Process (2026)

Post-acquisition, X’s interview process is less formalized than at traditional FAANG companies:

  1. Recruiter/sourcing call — often direct outreach from engineering managers
  2. Technical screen (1 hour) — coding + architecture discussion
  3. Onsite (3–4 rounds, compressed schedule):
    • 2× coding (medium-hard, emphasis on performance and scale)
    • 1× system design (timeline, search, or real-time messaging)
    • 1× engineering manager / culture fit

Culture note: X values “hardcore” engineers who can work long hours on hard problems. Interviews reward demonstrated productivity and a shipping mentality over heavyweight process.

Core Algorithms: Timeline and Feed

Twitter’s Open-Source Timeline Algorithm

Twitter open-sourced their recommendation algorithm in 2023. Key components:

from dataclasses import dataclass, field
from typing import List, Dict, Optional
import math

@dataclass
class Tweet:
    id: int
    author_id: int
    text: str
    timestamp: float
    like_count: int = 0
    retweet_count: int = 0
    reply_count: int = 0
    view_count: int = 0
    has_media: bool = False
    language: str = 'en'

@dataclass
class UserContext:
    user_id: int
    following_ids: List[int]
    interests: List[str]
    engagement_history: Dict[int, str]  # author_id -> most recent action

class TimelineRanker:
    """
    Simplified model of Twitter's timeline ranking.

    Real Twitter uses a multi-stage pipeline:
    1. Candidate generation: fetch 1500 candidates from:
       - In-network (following, followers-of-followers)
       - Out-of-network (trending, similar users, topics)
    2. Heavy ranker: ML model scoring each candidate
    3. Heuristic filters: dedupe, balance in/out-network, author diversity

    The ranking model uses ~48M parameter neural network with
    features covering user engagement history, tweet quality signals,
    author trust scores, and topic relevance.
    """

    def __init__(self):
        self.engagement_weights = {
            'like': 1.0,
            'retweet': 2.0,
            'reply': 3.0,
            'quote': 2.5,
            'profile_click': 0.5,
            'link_click': 1.5,
        }

    def score_tweet(
        self,
        tweet: Tweet,
        viewer: UserContext,
        current_time: float
    ) -> float:
        """
        Multi-factor tweet relevance score.
        """
        # 1. Social graph signal (is this from someone you follow?)
        social_score = 1.0 if tweet.author_id in viewer.following_ids else 0.3

        # 2. Engagement rate (quality signal)
        impressions = max(tweet.view_count, 1)
        engagement_rate = (
            tweet.like_count * 1.0 +
            tweet.retweet_count * 2.0 +
            tweet.reply_count * 3.0
        ) / impressions
        engagement_score = min(1.0, engagement_rate * 100)

        # 3. Recency decay (half-life = 2 hours for breaking news feel)
        age_hours = (current_time - tweet.timestamp) / 3600
        half_life = 2.0
        recency_score = math.exp(-age_hours * math.log(2) / half_life)

        # 4. Media boost (tweets with images/video get higher engagement)
        media_boost = 1.3 if tweet.has_media else 1.0

        # 5. User engagement history with this author (simplified: treat the
        #    history keys as author ids; a real system maps tweets to authors)
        author_affinity = 1.5 if tweet.author_id in viewer.engagement_history else 1.0

        return (
            social_score * 0.30 +
            engagement_score * 0.25 +
            recency_score * 0.25 +
            (media_boost - 1) * 0.10 +
            (author_affinity - 1) * 0.10
        )

    def generate_timeline(
        self,
        viewer: UserContext,
        candidate_tweets: List[Tweet],
        current_time: float,
        limit: int = 200
    ) -> List[Tweet]:
        """
        Score and rank all candidate tweets for viewer's timeline.

        Time: O(C * F) where C=candidates, F=features per tweet
        """
        scored = [
            (self.score_tweet(t, viewer, current_time), t)
            for t in candidate_tweets
        ]
        # Sort by score only; Tweet dataclasses are not orderable, so a bare
        # tuple sort would raise TypeError on score ties
        scored.sort(key=lambda pair: pair[0], reverse=True)

        # Author diversity constraint: at most 2 consecutive tweets per author
        result = []

        for score, tweet in scored:
            if len(result) >= limit:
                break
            # Skip this tweet if the last two picks already came from its author
            if (len(result) >= 2
                    and result[-1].author_id == tweet.author_id
                    and result[-2].author_id == tweet.author_id):
                continue
            result.append(tweet)

        return result
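The recency term above is worth sanity-checking in isolation: with a 2-hour half-life, a 2-hour-old tweet should score exactly half of a brand-new one. A standalone version of the same formula:

```python
import math

def recency_score(age_hours: float, half_life: float = 2.0) -> float:
    # Exponential decay: the score halves every `half_life` hours
    return math.exp(-age_hours * math.log(2) / half_life)

print(round(recency_score(0.0), 3))  # 1.0
print(round(recency_score(2.0), 3))  # 0.5
print(round(recency_score(4.0), 3))  # 0.25
```

Tuning the half-life is a product decision: a short half-life keeps the feed feeling like breaking news, a longer one surfaces slower-burning, high-engagement content.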

Distributed Rate Limiting at Twitter Scale

import time
from collections import defaultdict
from threading import Lock
from typing import Dict

class TokenBucketRateLimiter:
    """
    Token bucket algorithm for API rate limiting.
    Twitter API: 1500 requests per 15 minutes per app.

    Token bucket:
    - Bucket capacity = max_requests
    - Tokens refill at rate = max_requests / window_seconds
    - Each request consumes 1 token
    - If bucket empty: reject request

    Advantage over fixed window: allows short bursts up to capacity
    while maintaining average rate constraint.

    Twitter uses Redis for distributed token bucket across API servers.
    """

    def __init__(self, capacity: int, refill_rate: float):
        """
        capacity: max tokens (burst limit)
        refill_rate: tokens per second
        """
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.buckets: Dict[str, Dict] = defaultdict(
            lambda: {'tokens': capacity, 'last_refill': time.time()}
        )
        self.lock = Lock()

    def allow_request(self, key: str, tokens_needed: int = 1) -> bool:
        """
        Check if request is allowed; consume tokens if so.
        Thread-safe via lock (use Redis EVAL script in distributed setting).

        Time: O(1)
        """
        with self.lock:
            bucket = self.buckets[key]
            now = time.time()

            # Refill tokens based on elapsed time
            elapsed = now - bucket['last_refill']
            new_tokens = elapsed * self.refill_rate
            bucket['tokens'] = min(self.capacity, bucket['tokens'] + new_tokens)
            bucket['last_refill'] = now

            if bucket['tokens'] >= tokens_needed:
                bucket['tokens'] -= tokens_needed
                return True
            return False

    def get_reset_time(self, key: str) -> float:
        """Seconds until bucket is full again."""
        with self.lock:
            bucket = self.buckets[key]
            deficit = self.capacity - bucket['tokens']
            return deficit / self.refill_rate if deficit > 0 else 0
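The refill arithmetic inside allow_request can be checked on its own. Using the 1500-requests-per-15-minutes figure from the docstring, the refill rate is 1500 / 900 ≈ 1.67 tokens/second (the helper name here is mine, for illustration):

```python
def tokens_after(tokens: float, elapsed_s: float,
                 refill_rate: float, capacity: float) -> float:
    # Same rule as allow_request: add elapsed * rate, capped at capacity
    return min(capacity, tokens + elapsed_s * refill_rate)

rate = 1500 / 900  # ~1.67 tokens/second

# One minute of refill restores ~100 tokens
print(round(tokens_after(0, 60, rate, 1500), 6))

# A full 15-minute window always refills the bucket to capacity
print(tokens_after(1400, 900, rate, 1500))
```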

System Design: Twitter Search

Common X/Twitter question: “Design Twitter’s real-time search — results must appear within seconds of a tweet being posted.”

"""
Twitter Real-Time Search Architecture:

Tweet Posted
    |
[Ingestion Kafka Topic]
    |
[Real-Time Indexing Service]
  - Tokenizes tweet text
  - Extracts hashtags, mentions, entities
  - Writes to real-time index (Earlybird — Twitter's custom Lucene)
  - Earlybird: in-memory inverted index, TTL-based eviction (7 days)
    |
[Search Query Service]
  - Parse query (boolean operators, hashtag/mention filters)
  - Fan out to all Earlybird shards (by time range)
  - Merge and rank results
  - Apply safety filters (spam, NSFW)

Key design decisions:
1. In-memory real-time index: recent tweets served from RAM with TTL eviction; older tweets fall back to a separate full-archive index
2. Time-partitioned shards: each shard covers a time window
3. Top-K aggregation: each shard returns top-K; merge globally
4. Relevance signals: engagement counts updated asynchronously
5. Personalization: rerank based on user's following graph

Scaling challenge: ~6,000 tweets/second on average, with far higher peaks during major events
Solution: Earlybird replicas, consistent hashing, parallel fan-out
"""

X/Twitter Engineering Culture (2026)

Post-Musk X is radically different from pre-2022 Twitter:

  • High velocity: Features shipped in days, not months; less process
  • Small teams: Most teams are 3–8 engineers owning large surface areas
  • Performance obsession: Engineering blog frequently posts about latency, memory, and cost wins
  • “Hardcore”: Expect long hours during major product launches

Compensation (2025 data)

Level        Base         Total Comp (est.)
SWE          $150–190K    $200–280K
Senior SWE   $190–230K    $280–380K
Staff SWE    $230–270K    $350–500K

X Corp. is now privately held. Equity is illiquid and uncertain. Compensation should be evaluated primarily on cash. Verify current comp data with levels.fyi.

Interview Tips

  • Real-time systems: Twitter is fundamentally a real-time platform; know Kafka, stream processing, pub/sub
  • Performance focus: Expect questions about latency, memory efficiency, and throughput optimization
  • Open source: Twitter open-sourced their timeline algorithm; reading it shows genuine interest
  • Culture awareness: Research recent changes honestly; show you can thrive in a fast-changing environment
  • LeetCode: Medium-hard; graph algorithms and sliding window patterns common

Practice problems: LeetCode 355 (Design Twitter), 295 (Median Data Stream), 362 (Design Hit Counter), 981 (Time Based Key-Value Store).
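LeetCode 355 in particular distills the timeline problem. One common approach (a sketch, not the only valid solution) keeps per-user tweet lists stamped with a global counter and merges the most recent posts at read time:

```python
import heapq
from collections import defaultdict
from itertools import count

class MiniTwitter:
    """Sketch of LeetCode 355: a feed is the 10 most recent tweets
    from the user and everyone they follow."""

    def __init__(self):
        self.clock = count()               # global monotonically increasing timestamp
        self.tweets = defaultdict(list)    # user_id -> [(time, tweet_id)]
        self.follows = defaultdict(set)    # user_id -> set of followee ids

    def postTweet(self, user_id: int, tweet_id: int) -> None:
        self.tweets[user_id].append((next(self.clock), tweet_id))

    def follow(self, follower_id: int, followee_id: int) -> None:
        self.follows[follower_id].add(followee_id)

    def unfollow(self, follower_id: int, followee_id: int) -> None:
        self.follows[follower_id].discard(followee_id)

    def getNewsFeed(self, user_id: int) -> list:
        sources = self.follows[user_id] | {user_id}
        # Only each author's last 10 tweets can make the top 10 overall
        candidates = [t for u in sources for t in self.tweets[u][-10:]]
        return [tid for _, tid in heapq.nlargest(10, candidates)]

feed = MiniTwitter()
feed.postTweet(1, 5)
feed.follow(1, 2)
feed.postTweet(2, 6)
print(feed.getNewsFeed(1))  # [6, 5]
```

Note how this is a toy version of the pull (fan-out-on-read) model from the timeline discussion above; be ready to contrast it with fan-out-on-write for celebrity accounts.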
