What Makes an Algorithm Greedy
A greedy algorithm makes the locally optimal choice at each step, hoping this leads to a global optimum. Greedy works when the problem has the “greedy choice property” (a locally optimal choice can be made without reconsidering it) and “optimal substructure” (an optimal solution contains optimal solutions to subproblems). Proving greedy correctness requires an exchange argument: show that replacing any solution’s choice with the greedy choice cannot make it worse. Greedy does NOT always work — dynamic programming is needed when subproblems overlap and greedy choices affect future subproblems.
Activity Selection / Interval Scheduling
Select the maximum number of non-overlapping activities. Sort by end time. Greedily pick the activity with the earliest end time that does not overlap with the last selected activity. Why sort by end time: picking the activity that ends earliest leaves the most room for future activities — the exchange argument shows no other choice can do better. O(n log n) for sorting, O(n) for the scan.
def activity_selection(intervals):
    intervals.sort(key=lambda x: x[1])  # sort by end time
    count, last_end = 0, float('-inf')
    for start, end in intervals:
        if start >= last_end:  # compatible with the last selected activity
            count += 1
            last_end = end
    return count
LC 435 (Non-overlapping Intervals, minimum removals): answer = total intervals minus the maximum number of non-overlapping intervals. LC 452 (Minimum Number of Arrows to Burst Balloons): essentially activity selection, except touching balloons count as overlapping, so the non-overlap test is strict (start > last_end). LC 646 (Maximum Length of Pair Chain): sort by the second element and greedily pick non-overlapping pairs.
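The LC 435 reduction above can be sketched as a single function (a minimal sketch; the function name and the [start, end] list representation are my own assumptions):

```python
def erase_overlap_intervals(intervals):
    """LC 435: minimum removals = total intervals - max non-overlapping."""
    intervals.sort(key=lambda x: x[1])  # sort by end time
    kept, last_end = 0, float('-inf')
    for start, end in intervals:
        if start >= last_end:  # compatible with the last kept interval
            kept += 1
            last_end = end
    return len(intervals) - kept
```

For example, erase_overlap_intervals([[1, 2], [2, 3], [3, 4], [1, 3]]) returns 1: only [1, 3] needs to be removed.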
Jump Game (LC 55 and 45)
LC 55 (Jump Game): can you reach the last index? Maintain the maximum reachable index. At each position i: if i > max_reach, you are stuck (return False). Otherwise max_reach = max(max_reach, i + nums[i]). If the scan completes, the last index is reachable. O(n). LC 45 (Jump Game II): minimum number of jumps; a greedy, implicit-BFS approach. Track current_end (the end of the current jump's range) and farthest (the maximum index reachable with one more jump). For each position: update farthest. When i reaches current_end: take a jump (jumps += 1) and set current_end = farthest. The loop only needs to run to index n-2, since reaching the last index requires no further jump. O(n).
def jump(nums):
    jumps = current_end = farthest = 0
    for i in range(len(nums) - 1):
        farthest = max(farthest, i + nums[i])
        if i == current_end:  # current jump range exhausted: jump again
            jumps += 1
            current_end = farthest
    return jumps
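LC 55's reachability check from the paragraph above can be sketched in the same style (function name assumed):

```python
def can_jump(nums):
    """LC 55: True if the last index is reachable from index 0."""
    max_reach = 0
    for i, step in enumerate(nums):
        if i > max_reach:  # position i is unreachable: stuck
            return False
        max_reach = max(max_reach, i + step)
    return True  # every index, including the last, was reachable
```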
Task Scheduling with Cooldown (LC 621)
Minimum time to execute all tasks with cooldown n between identical tasks. Greedy insight: always schedule the most frequent remaining task first. Closed form: let max_freq be the highest task frequency. Start with idle_slots = (max_freq - 1) * n, then for every other task subtract min(freq, max_freq - 1); a task tied at max_freq fills only max_freq - 1 idle slots, because its last occurrence sits at the end of the final row. Total = num_tasks + max(0, idle_slots). Intuition: arrange tasks in cycles of length (n + 1), each cycle starting with the most frequent task; unfilled slots in those cycles are idle time.
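The idle-slot formula translates directly to code (a sketch, assuming tasks is a list of labels; the snake_case function name is my own):

```python
from collections import Counter

def least_interval(tasks, n):
    """LC 621: minimum total time with cooldown n between identical tasks."""
    freqs = sorted(Counter(tasks).values(), reverse=True)
    max_freq = freqs[0]
    # one max-frequency task anchors (max_freq - 1) gaps of n idle slots
    idle = (max_freq - 1) * n
    # every other task fills idle slots; a task tied at max_freq can fill
    # only max_freq - 1 of them (its last run sits at the end of a row)
    for f in freqs[1:]:
        idle -= min(f, max_freq - 1)
    return len(tasks) + max(0, idle)
```

For example, least_interval(list("AAABBB"), 2) gives 8, matching the schedule A B idle A B idle A B.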
Gas Station (LC 134)
Can you complete the circuit? If total gas >= total cost, a solution always exists (not obvious; it requires proof). To find the starting station: scan once with a running tank. Whenever the tank goes negative at station i, no station in the current span can be the start, so reset the candidate start to i + 1 and the tank to 0. This works because a solution is known to exist and each reset eliminates only invalid starts. O(n).
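A sketch of that single-pass reset strategy (function and variable names assumed):

```python
def can_complete_circuit(gas, cost):
    """LC 134: return a valid starting index, or -1 if none exists."""
    total = tank = start = 0
    for i in range(len(gas)):
        diff = gas[i] - cost[i]
        total += diff  # overall surplus decides if any start exists
        tank += diff   # running tank from the current candidate start
        if tank < 0:   # no station in [start, i] can be the answer
            start, tank = i + 1, 0
    return start if total >= 0 else -1
```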
Candy (LC 135)
Assign candies so that each child gets at least 1, and any child with a higher rating than an adjacent neighbor gets more candies than that neighbor. Two passes: left→right (if rating[i] > rating[i-1]: candies[i] = candies[i-1] + 1), then right→left (if rating[i] > rating[i+1]: candies[i] = max(candies[i], candies[i+1] + 1)). The max in the second pass preserves the left-pass constraint. Answer: sum of candies. O(n) time, O(n) space. A space-optimized O(1) solution exists using slope counting.
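The two passes translate almost line for line (a sketch; the function name mirrors the problem title):

```python
def candy(ratings):
    """LC 135: minimum total candies satisfying both neighbor constraints."""
    n = len(ratings)
    candies = [1] * n  # everyone starts with one candy
    for i in range(1, n):  # left-to-right: enforce the left-neighbor rule
        if ratings[i] > ratings[i - 1]:
            candies[i] = candies[i - 1] + 1
    for i in range(n - 2, -1, -1):  # right-to-left: enforce the right-neighbor rule
        if ratings[i] > ratings[i + 1]:
            candies[i] = max(candies[i], candies[i + 1] + 1)
    return sum(candies)
```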
Recognizing Greedy Problems
Clues: “minimum number of X”, “maximum X”, “optimal assignment”. Sorting the input by some key often reveals the greedy choice. Greedy vs DP: try greedy first (it is simpler); if you find a counterexample, switch to DP. Common greedy patterns: sort by end time (interval scheduling), sort by value/weight ratio (fractional knapsack), sort by deadline (earliest-deadline-first scheduling), sort edges by weight (Kruskal's MST).
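As an illustration of the sort-by-ratio pattern, a fractional-knapsack sketch (items assumed to be (value, weight) tuples; all names are my own):

```python
def fractional_knapsack(items, capacity):
    """Maximize total value when fractions of items are allowed."""
    # highest value-per-weight first
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)  # whole item, or the fraction that fits
        total += value * (take / weight)
        capacity -= take
    return total
```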
{
"@context": "https://schema.org",
"@type": "FAQPage",
"mainEntity": [
{
"@type": "Question",
"name": "How do you prove a greedy algorithm is correct?",
"acceptedAnswer": {
"@type": "Answer",
"text": "The exchange argument is the standard proof technique. Assume the greedy solution G differs from an optimal solution OPT. Find the first position where they differ. Show that replacing OPT's choice at that position with the greedy choice cannot make the solution worse (it either remains optimal or becomes equal). Therefore the greedy solution is at least as good as OPT, so it is optimal. Example for activity selection (sort by end time): suppose OPT picks activity A as its first choice and greedy picks activity B (earlier end time). Replace A with B in OPT. The remaining activities in OPT are all compatible with A (they start after A ends). Since B ends no later than A, they are also compatible with B. So the new solution is also valid and has the same number of activities, meaning greedy is at least as good."
}
},
{
"@type": "Question",
"name": "How does the greedy approach for Jump Game II achieve O(n) time?",
"acceptedAnswer": {
"@type": "Answer",
"text": "The key insight: you don't need to track exactly where to jump from, only what is reachable at each jump count. Two variables: current_end (furthest index reachable with the current number of jumps) and farthest (furthest index reachable with one more jump from any position up to current_end). Iterate i from 0 to n-2: update farthest = max(farthest, i + nums[i]). When i reaches current_end, you must take another jump (jumps += 1) and set current_end = farthest. The loop never backtracks; it processes each position exactly once. At each jump decision, greedily setting current_end to the maximum reachable position is always optimal: no other jump strategy from the current set of reachable positions can reach further. The algorithm is correct because it always expands to the maximum possible next frontier."
}
},
{
"@type": "Question",
"name": "When does greedy fail and dynamic programming become necessary?",
"acceptedAnswer": {
"@type": "Answer",
"text": "Greedy fails when a locally optimal choice prevents a globally optimal solution: the greedy choice property breaks down because choices affect each other. Classic counterexample: 0-1 Knapsack. Greedy by value/weight ratio picks high-ratio items first but may leave the knapsack partially empty, missing a combination of lower-ratio items that fills it better. Coin change with non-standard denominations (e.g., coins [1, 3, 4] and target 6): greedy picks 4+1+1 = 3 coins, but 3+3 = 2 coins is optimal. Greedy works for canonical coin systems (such as US coins) but not arbitrary denominations. Test for greedy: can you construct a counterexample? For most minimum/maximum problems with overlapping subproblems, use DP. For problems where the optimal choice at each step is provably independent of future choices, use greedy."
}
},
{
"@type": "Question",
"name": "How do you solve the candy distribution problem (LC 135) with two passes?",
"acceptedAnswer": {
"@type": "Answer",
"text": "Each child must get at least 1 candy, and a child with a higher rating than an adjacent neighbor must get more candies than that neighbor. Left-to-right pass: initialize all candies to 1; if rating[i] > rating[i-1], set candies[i] = candies[i-1] + 1. This satisfies the left-neighbor constraint. Right-to-left pass: if rating[i] > rating[i+1], set candies[i] = max(candies[i], candies[i+1] + 1). The max is critical: it avoids violating the left-pass constraint already established. Both passes are O(n). Sum all candies for the answer. Why two passes: a single pass cannot handle both neighbors simultaneously. The left pass ensures the left constraint; the right pass ensures the right constraint without undoing the left. The max operation at each step of the right pass maintains consistency."
}
},
{
"@type": "Question",
"name": "What is the fractional knapsack problem and how does greedy solve it?",
"acceptedAnswer": {
"@type": "Answer",
"text": "Fractional knapsack: given items with weights and values, fill a knapsack of capacity W to maximize total value. Unlike 0-1 knapsack, you can take fractions of items. Greedy strategy: sort items by value/weight ratio in descending order, take the highest-ratio item completely (if it fits), then the next, and so on; if the last item does not fit entirely, take the fraction that fills the remaining capacity. O(n log n) for sorting. Why greedy works here: taking a fraction of an item does not affect other items (no interaction). Proof sketch: if OPT takes less of a high-ratio item and more of a low-ratio item, replacing the low-ratio fraction with the high-ratio fraction improves value without violating the weight constraint. This is the exchange argument. Contrast: in 0-1 knapsack, taking a complete item changes which combinations remain feasible, so greedy does not work."
}
}
]
}