Core Entities
- Issue: issue_id, project_id, title, description (rich text), type (BUG, FEATURE, TASK, EPIC), status, priority (P0-P3), assignee_id, reporter_id, parent_issue_id (for subtasks), sprint_id, story_points, labels (array), created_at, updated_at, resolved_at.
- Project: project_id, name, slug, workflow_id, board_config (JSON — column order, WIP limits), default_assignee_policy, created_at.
- WorkflowStatus: status_id, project_id, name (OPEN, IN_PROGRESS, IN_REVIEW, DONE, WONT_FIX), category (TODO, IN_PROGRESS, DONE), transitions (array of allowed next status IDs), is_initial (bool), is_terminal (bool).
- Comment: comment_id, issue_id, author_id, body, created_at, updated_at, is_deleted.
- Attachment: attachment_id, issue_id, filename, storage_key, size_bytes, mime_type, uploaded_by, created_at.
- IssueHistory: history_id, issue_id, changed_by, field_name, old_value, new_value, changed_at.
- Watcher: issue_id, user_id — users who receive notifications for issue changes.
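As a concrete sketch, the Issue entity could be modeled as a dataclass. Field names follow the list above; the enum values and defaults are assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from typing import Optional

class IssueType(Enum):
    BUG = "BUG"
    FEATURE = "FEATURE"
    TASK = "TASK"
    EPIC = "EPIC"

class Priority(Enum):
    P0 = 0  # highest urgency
    P1 = 1
    P2 = 2
    P3 = 3

@dataclass
class Issue:
    issue_id: int
    project_id: int
    title: str
    description: str                        # rich text
    type: IssueType
    status_id: int                          # FK to WorkflowStatus
    priority: Priority
    reporter_id: int
    assignee_id: Optional[int] = None
    parent_issue_id: Optional[int] = None   # set only for subtasks
    sprint_id: Optional[int] = None
    story_points: Optional[int] = None
    labels: list[str] = field(default_factory=list)
    created_at: datetime = field(default_factory=datetime.utcnow)
    updated_at: datetime = field(default_factory=datetime.utcnow)
    resolved_at: Optional[datetime] = None  # set when entering a terminal status
```

In a relational store these map directly to columns, with labels normalized into a join table or kept as a Postgres array.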
Workflow State Machine
class WorkflowEngine:
    def transition(self, issue_id: int, new_status_id: int,
                   actor_id: int) -> Issue:
        with db.transaction():
            # Lock the issue row so concurrent transitions serialize
            issue = db.query(
                "SELECT * FROM issues WHERE issue_id = %s FOR UPDATE",
                issue_id,
            )
            current_status = db.get_status(issue.status_id)
            new_status = db.get_status(new_status_id)

            # Validate the transition is allowed by the workflow graph
            if new_status_id not in current_status.transitions:
                raise InvalidTransition(
                    f"Cannot go from {current_status.name} to {new_status.name}"
                )

            # Run pre-transition hooks (validation, side-effect checks);
            # a hook raising aborts the whole transaction
            self._run_pre_hooks(issue, current_status, new_status, actor_id)

            # Apply the transition
            db.execute(
                "UPDATE issues SET status_id = %s, updated_at = NOW() "
                "WHERE issue_id = %s",
                new_status_id, issue_id,
            )

            # Set resolved_at when entering a terminal status
            if new_status.is_terminal:
                db.execute(
                    "UPDATE issues SET resolved_at = NOW() WHERE issue_id = %s",
                    issue_id,
                )

            # Record the change in the audit history; changed_at must be a
            # real timestamp, not the literal string "NOW()"
            db.execute(
                "INSERT INTO issue_history "
                "(issue_id, changed_by, field_name, old_value, new_value, changed_at) "
                "VALUES (%s, %s, %s, %s, %s, NOW())",
                issue_id, actor_id, "status",
                current_status.name, new_status.name,
            )

            self._run_post_hooks(issue, current_status, new_status, actor_id)
            return db.get_issue(issue_id)
Full-Text Search and Filtering
Requirements: full-text search on title and description; filter by status, assignee, labels, sprint, and priority; sort by created_at, updated_at, or priority.
- PostgreSQL full-text search for moderate scale (< 10M issues): a tsvector column over (title || ' ' || description), indexed with GIN. Query: WHERE the tsvector column @@ to_tsquery('english', $query).
- Elasticsearch for large scale: index issues as documents with all fields. Use a multi-match query on title (boosted) and description. Apply status and assignee filters in filter context, which skips relevance scoring and lets Elasticsearch cache them. Sort by _score descending, then updated_at.
- Sync: on issue create/update, publish to a Kafka topic; a consumer indexes into Elasticsearch with a 1-2 second lag.
- JQL (Jira Query Language)-style queries: parse the query string into a filter AST, then translate it to Elasticsearch DSL or SQL.
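The JQL-to-SQL translation step can be sketched minimally. This toy translator (the field whitelist and `%s` placeholder style are assumptions; a real parser would build an AST supporting OR, IN, and operators beyond `=`) handles only conjunctions of equality terms:

```python
import re

# Whitelist of filterable fields -> SQL columns. Whitelisting is what
# prevents query-language input from naming arbitrary columns.
FIELD_COLUMNS = {
    "status": "status_id",
    "assignee": "assignee_id",
    "priority": "priority",
    "sprint": "sprint_id",
}

def jql_to_sql(query: str) -> tuple[str, list]:
    """Translate a tiny JQL-like conjunction, e.g. 'status = 3 AND
    assignee = 42', into a parameterized WHERE clause plus params.
    Only '=' and AND are supported in this sketch.
    """
    clauses, params = [], []
    for term in re.split(r"\s+AND\s+", query.strip(), flags=re.IGNORECASE):
        m = re.fullmatch(r"(\w+)\s*=\s*(\S+)", term)
        if not m:
            raise ValueError(f"Unparseable term: {term!r}")
        fld, value = m.group(1).lower(), m.group(2)
        if fld not in FIELD_COLUMNS:
            raise ValueError(f"Unknown field: {fld}")
        clauses.append(f"{FIELD_COLUMNS[fld]} = %s")
        params.append(value)  # values stay parameterized, never interpolated
    return " AND ".join(clauses), params
```

The same AST can instead be lowered to an Elasticsearch bool query, with each equality term becoming a `term` filter in filter context.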
Notification System
Notification triggers: issue assigned to you, comment on a watched issue, status transition, mention (@username) in a comment.
Architecture: on each issue mutation, publish an IssueEvent to Kafka. A notification worker consumes events and:
1. Identifies relevant users (assignee, reporter, watchers, mentioned users).
2. Applies user notification preferences (which events they want, on which channels: email, in-app, Slack).
3. Deduplicates: if 10 comments arrive in 5 minutes, batch them into one email digest instead of 10 separate emails.
4. Sends via the appropriate channel: SES for email, WebSocket push for in-app, Slack webhook for Slack.
Notification preferences are stored per user per event type (ISSUE_ASSIGNED, ISSUE_COMMENTED, ISSUE_TRANSITIONED, ISSUE_MENTIONED). Respect “do not disturb” hours (no emails between 10pm and 7am in the user’s local time).
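The dedup/digest step can be sketched as a windowed buffer keyed by (user, issue). This in-memory version is an assumption for illustration; a production worker would persist buffers (e.g. in Redis) and run the flush on a timer:

```python
import time
from collections import defaultdict
from typing import Optional

class DigestBatcher:
    """Buffer notification events per (user, issue) and emit one digest
    per buffer once its window has elapsed."""

    def __init__(self, window_seconds: float = 300.0):
        self.window = window_seconds
        self.buffers = defaultdict(list)   # (user_id, issue_id) -> [events]
        self.first_seen = {}               # (user_id, issue_id) -> first timestamp

    def add(self, user_id: int, issue_id: int, event: dict,
            now: Optional[float] = None) -> None:
        """Record an event; the window starts at the buffer's first event."""
        now = time.time() if now is None else now
        key = (user_id, issue_id)
        self.first_seen.setdefault(key, now)
        self.buffers[key].append(event)

    def flush(self, now: Optional[float] = None) -> list[dict]:
        """Return one digest per buffer older than the window, draining it."""
        now = time.time() if now is None else now
        digests = []
        expired = [k for k, t in self.first_seen.items()
                   if now - t >= self.window]
        for key in expired:
            user_id, issue_id = key
            digests.append({
                "user_id": user_id,
                "issue_id": issue_id,
                "events": self.buffers.pop(key),
            })
            del self.first_seen[key]
        return digests
```

With a 300-second window, ten comments in five minutes yield a single digest instead of ten emails, matching step (3) above.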