Dynamic Programming Patterns: Recognizing and Solving DP Problems (Complete Guide 2025)

Recognizing DP Problems

Dynamic programming applies when a problem has two properties: (1) Optimal substructure: the optimal solution to the problem is built from optimal solutions to its subproblems. (2) Overlapping subproblems: a naive recursive approach solves the same subproblems multiple times. When both hold, you either memoize the recursion (top-down) or build the solution iteratively (bottom-up). Pattern-recognition cues: the problem asks for a count, maximum, minimum, or feasibility ("can you achieve X?"); the input suggests discrete choices (items, steps, characters); and a decision at each step affects future options but not past choices.
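The overlapping-subproblems test is easiest to see on a recursion you already know. A minimal Python sketch (function names are mine) contrasting naive Fibonacci with a memoized version:

```python
from functools import lru_cache

def fib(n: int) -> int:
    # Naive recursion: fib(k) is recomputed exponentially many times.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    # Same recurrence; the cache collapses duplicate calls, giving O(n) time.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

Both functions compute the same values; only the call tree differs, which is exactly what the overlapping-subproblems property describes.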

Pattern 1: Linear DP (1D)

dp[i] depends only on dp[i-1] or a few previous states. Climbing Stairs: dp[i] = dp[i-1] + dp[i-2]. House Robber: dp[i] = max(dp[i-1], dp[i-2] + nums[i]). Maximum Subarray: dp[i] = max(nums[i], dp[i-1] + nums[i]) (Kadane’s algorithm). Decode Ways: dp[i] = (single digit valid ? dp[i-1] : 0) + (two digits valid ? dp[i-2] : 0). The transition is local: dp[i] is computed from a constant number of previous states. Space: O(n) for the array, often reducible to O(1) by keeping only the last k values.
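As a sketch of the O(1)-space reduction, here is House Robber with the dp array collapsed to two rolling variables (the name `rob` follows the LeetCode signature; the implementation details are illustrative):

```python
def rob(nums: list[int]) -> int:
    # dp[i] = max(dp[i-1], dp[i-2] + nums[i]), keeping only the last two values.
    prev2, prev1 = 0, 0  # play the roles of dp[i-2] and dp[i-1]
    for x in nums:
        prev2, prev1 = prev1, max(prev1, prev2 + x)
    return prev1
```

The same rolling-variable trick applies to Climbing Stairs and Kadane's algorithm, since each transition reads a constant number of previous states.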

Pattern 2: 2D DP (Two Sequences)

dp[i][j] represents the optimal solution for the first i elements of sequence A and the first j elements of sequence B (so in 0-indexed code, the comparison uses A[i-1] and B[j-1]). LCS: dp[i][j] = dp[i-1][j-1]+1 if A[i]==B[j] else max(dp[i-1][j], dp[i][j-1]). Edit Distance: dp[i][j] = 1 + min(dp[i-1][j], dp[i][j-1], dp[i-1][j-1]) if A[i]!=B[j] else dp[i-1][j-1]. Unique Paths (a grid rather than two sequences, but the same table shape): dp[i][j] = dp[i-1][j] + dp[i][j-1]. The common structure: define dp[i][j] clearly, identify base cases (row 0, column 0), write the recurrence, fill the table row by row.
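A tabulated LCS following that structure (function name is mine; note the i-1/j-1 shift between table indices and string indices):

```python
def lcs(a: str, b: str) -> int:
    m, n = len(a), len(b)
    # dp[i][j] = LCS length of a[:i] and b[:j]; row 0 and column 0 are the
    # empty-prefix base cases and stay 0.
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```

Edit Distance uses the identical skeleton; only the base cases (dp[i][0] = i, dp[0][j] = j) and the transition change.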

Pattern 3: Knapsack DP

0-1 Knapsack: dp[i][w] = max(dp[i-1][w], dp[i-1][w-weight[i]] + value[i]). Each item used at most once — iterate weights in reverse for the 1D optimization. Unbounded Knapsack (Coin Change, Rod Cutting): items reusable — iterate weights forward. Partition Equal Subset Sum (LC 416): can the array be split into two equal-sum halves? = 0-1 knapsack where target = sum/2. Word Break (LC 139): unbounded knapsack on string positions — dp[i] = True if any word ends at position i and dp[i-len(word)] is True. The key: identify what the “capacity” dimension represents (weight, target sum, position).
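The reverse-iteration trick for the 1D 0-1 knapsack can be sketched on Partition Equal Subset Sum (LC 416); the function name is mine:

```python
def can_partition(nums: list[int]) -> bool:
    total = sum(nums)
    if total % 2:
        return False  # odd total can never split into two equal halves
    target = total // 2
    dp = [False] * (target + 1)  # dp[w] = can some subset sum to w?
    dp[0] = True
    for x in nums:
        # Iterate capacity in reverse so each item is used at most once:
        # dp[w - x] still refers to the previous item's row.
        for w in range(target, x - 1, -1):
            dp[w] = dp[w] or dp[w - x]
    return dp[target]
```

Flipping the inner loop to run left to right would let each item be reused, which is exactly the unbounded variant (Coin Change, Rod Cutting).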

Pattern 4: Interval DP

dp[i][j] = optimal answer for the sub-range [i, j]. Fill by increasing interval length. For each length L and start i, j=i+L-1: try all split points k. dp[i][j] = min/max over k of combine(dp[i][k], dp[k+1][j]) + cost(i,j). Matrix Chain Multiplication: dp[i][j] = min splits of multiplying matrices i through j. Burst Balloons (LC 312): dp[i][j] = max coins from bursting all balloons between i and j, k is the LAST to burst. O(n^3) time, O(n^2) space.
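A Matrix Chain Multiplication sketch of the length-then-split fill order (the function name and the `dims` convention, where matrix i has shape dims[i] x dims[i+1], are assumptions):

```python
def matrix_chain(dims: list[int]) -> int:
    # Minimum scalar multiplications to multiply matrices 0..n-1,
    # where matrix i has shape dims[i] x dims[i+1].
    n = len(dims) - 1
    dp = [[0] * n for _ in range(n)]  # length-1 intervals cost 0 (base case)
    for length in range(2, n + 1):        # fill by increasing interval length
        for i in range(n - length + 1):
            j = i + length - 1
            # Try every split point k: multiply (i..k) and (k+1..j), then combine.
            dp[i][j] = min(
                dp[i][k] + dp[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)
            )
    return dp[0][n - 1]
```

The two nested loops over (length, i) plus the split-point scan over k give the characteristic O(n^3) time on an O(n^2) table.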

Pattern 5: State Machine DP

The state includes a “mode” or “phase” that changes on transitions. Best Time to Buy/Sell Stock with Cooldown (LC 309): states = HOLDING, NOT_HOLDING, COOLDOWN. Transitions: NOT_HOLDING → HOLDING (buy), HOLDING → COOLDOWN (sell), COOLDOWN → NOT_HOLDING (wait). dp[i][state] = max profit on day i in given state. The state dimension captures the machine’s history without storing the full history. Paint House (LC 256): dp[i][color] = min cost to paint houses 0..i with house i painted color. Constraint: adjacent houses can’t have the same color.
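A rolling-variable sketch of the three-state machine for LC 309 (variable names mirror the states above; the function name is mine):

```python
def max_profit_cooldown(prices: list[int]) -> int:
    # dp[i][state] collapsed to three rolling variables:
    #   holding     = max profit while owning stock
    #   cooldown    = max profit having sold today (must wait tomorrow)
    #   not_holding = max profit while free to buy
    holding, cooldown, not_holding = float("-inf"), 0, 0
    for p in prices:
        holding, cooldown, not_holding = (
            max(holding, not_holding - p),  # keep holding, or buy today
            holding + p,                    # sell today
            max(not_holding, cooldown),     # wait; cooldown expires
        )
    return max(cooldown, not_holding)  # never end the last day holding stock
```

The tuple assignment updates all three states from the previous day's values simultaneously, which is the rolling equivalent of reading row i-1 of dp[i][state].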

Approach for Interviews

Step 1: define the DP state clearly — what does dp[i] (or dp[i][j]) represent? Step 2: write the recurrence — how does dp[i] depend on smaller subproblems? Step 3: identify base cases (dp[0], dp[0][0], etc.). Step 4: determine fill order (left-to-right, bottom-to-top). Step 5: extract the answer (often dp[n] or max(dp)). Step 6: optimize space if possible (do we need the entire table or just the last row/value?).
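The six steps can be walked through on Coin Change (LC 322); this sketch annotates each step (the function name follows the LeetCode signature, the rest is illustrative):

```python
def coin_change(coins: list[int], amount: int) -> int:
    # Step 1 (state): dp[a] = fewest coins needed to make amount a.
    # Step 2 (recurrence): dp[a] = 1 + min(dp[a - c]) over coins c <= a.
    # Step 3 (base case): dp[0] = 0; everything else starts unreachable.
    INF = float("inf")
    dp = [0] + [INF] * amount
    # Step 4 (fill order): left to right, since dp[a] reads smaller amounts.
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    # Step 5 (answer): dp[amount], or -1 if it stayed unreachable.
    # Step 6 (space): already O(amount); a 2D coins-by-amount table is unnecessary.
    return dp[amount] if dp[amount] != INF else -1
```

Writing the six steps as comments first, then filling in the code, is a reliable way to structure the solution under interview time pressure.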

{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How do you recognize that a problem requires dynamic programming?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Three indicators: (1) The problem asks for a count, maximum, minimum, or whether something is achievable (not the actual path, but the optimal value). (2) The decision at each step depends on previous decisions, but not on future ones (optimal substructure). (3) A naive recursive solution would re-solve the same subproblems multiple times (overlapping subproblems). Quick test: write the recursive solution; if the recursive call tree has duplicate calls with the same arguments, DP applies. Common phrasings that suggest DP: 'minimum cost to…', 'number of ways to…', 'can you reach…', 'maximum profit if…'. Problems that look like DP but aren't: greedy problems where the locally optimal choice is always globally optimal (activity selection, Huffman coding). Try greedy first for optimization problems; fall back to DP if greedy fails."
      }
    },
    {
      "@type": "Question",
      "name": "What is the difference between top-down (memoization) and bottom-up (tabulation) DP?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Top-down (memoization): write the natural recursive solution. Add a cache (dictionary or array) to store results of subproblems already solved. Before computing, check the cache; if found, return immediately. Pros: only computes subproblems that are actually needed, natural to write, handles sparse dependency graphs. Cons: recursion overhead, potential stack overflow for deep recursion (Python default recursion limit = 1000). Bottom-up (tabulation): determine the order in which subproblems depend on each other. Fill the DP table iteratively from the base cases up. Pros: no recursion, cache-friendly memory access, usually faster in practice. Cons: must compute all subproblems in the defined order, even ones not needed for the final answer. For interviews: start with top-down (easier to reason about), then optimize to bottom-up if needed."
      }
    },
    {
      "@type": "Question",
      "name": "How do you optimize space in a 2D DP solution?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Many 2D DP recurrences only depend on the previous row (or a few previous rows): dp[i][j] depends only on dp[i-1][j], dp[i-1][j-1], dp[i][j-1]. In this case, you only need to maintain 2 rows (current and previous) instead of the full n*m table. Replace dp[i] with curr[] and dp[i-1] with prev[]. After filling curr[], swap prev = curr. For some recurrences (like edit distance), you need only the current and previous row, giving O(min(m,n)) space instead of O(m*n). Further optimization: if only dp[i-1][j-1] and dp[i-1][j] are needed, you can sometimes reduce to a single 1D array by processing columns right-to-left (for 0-1 knapsack) or left-to-right (for unbounded knapsack). Understand the dependency direction before collapsing dimensions."
      }
    },
    {
      "@type": "Question",
      "name": "How do you handle the stock trading DP problems (LC 121, 122, 123, 188, 309)?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "LC 121 (one transaction): max(prices[j] - prices[i]) for j > i. Track min_price_so_far, compute prices[i] - min_price. O(n) time, O(1) space. LC 122 (unlimited transactions): sum all positive differences, prices[i] - prices[i-1] when positive. Greedy. LC 123 (at most 2 transactions): dp[k][i] = max profit with at most k transactions through day i. Or track 4 state variables: buy1, sell1, buy2, sell2. LC 188 (at most k transactions): extend LC 123 to k. If k >= n/2, reduce to LC 122 (unlimited). LC 309 (cooldown): state machine DP with 3 states: HOLDING (own stock), RESTING (don't own, post-cooldown), COOLDOWN (just sold). Each day transition: HOLDING → sell → COOLDOWN, COOLDOWN → wait → RESTING, RESTING → buy → HOLDING or stay RESTING. Answer = max(RESTING, COOLDOWN) on the last day."
      }
    },
    {
      "@type": "Question",
      "name": "What is the classic approach for the Knapsack problem and its variants?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "0-1 Knapsack: dp[i][w] = max value using items 0..i-1 with capacity w. Transition: dp[i][w] = max(dp[i-1][w], dp[i-1][w-weight[i]] + value[i]). Space optimization: iterate weights in reverse (right to left) in the 1D version to ensure each item is used at most once. Unbounded Knapsack (items reusable, like coin change): same 1D array but iterate weights left to right. Fractional Knapsack: NOT DP; greedy (sort by value/weight, take highest-ratio items first). Subset Sum: dp[w] = True/False, can a subset sum to w? Same as 0-1 knapsack with value = weight. Partition Equal Subset Sum (LC 416): subset sum where target = total_sum // 2. Coin Change (LC 322): unbounded knapsack minimizing count. Coin Change 2 (LC 518): count ways; iterate coins in the outer loop and amounts in the inner loop to count combinations (not permutations)."
      }
    }
  ]
}
