Two Pointers
Two pointers is exactly what it sounds like—two indices walking through your data, usually a sorted array or string. They might start at opposite ends and squeeze toward each other, or both start at the front and move at different speeds. The technique replaces an O(n²) nested loop with an O(n) single sweep.
When to recognise it
- The array is sorted (or you can sort it) and you're looking for a pair/triple with a target sum.
- You're checking a string for palindrome properties.
- You're partitioning an array in place (e.g., moving zeros to the end).
- The brute force is "check every pair" → O(n²). Two pointers gets you to O(n).
Template — opposite ends
```java
// Two Sum on a SORTED array
public int[] twoSum(int[] nums, int target) {
    int l = 0, r = nums.length - 1;
    while (l < r) {
        int sum = nums[l] + nums[r];
        if (sum == target) return new int[]{l, r};
        if (sum < target) l++;
        else r--;
    }
    return new int[]{-1, -1};
}
```
Template — same direction (fast/slow)
```java
// Remove duplicates from a sorted array in place
public int removeDuplicates(int[] nums) {
    if (nums.length == 0) return 0; // guard: no elements, no duplicates
    int slow = 0;
    for (int fast = 1; fast < nums.length; fast++) {
        if (nums[fast] != nums[slow]) {
            slow++;
            nums[slow] = nums[fast];
        }
    }
    return slow + 1;
}
```
Sliding Window
A sliding window is two pointers that maintain a "window" of elements between them, expanding the right edge to include more and shrinking the left edge when the window breaks a constraint. It transforms "look at every contiguous subarray" (O(n²) or worse) into a single linear sweep.
When to recognise it
- Problem mentions a contiguous subarray or substring.
- You're tracking something inside a window — max, sum, distinct chars, frequency.
- "Longest / shortest / count of subarrays with property X."
- Brute force is O(n²) checking every (i, j) pair → window collapses it to O(n).
Template — variable-size window
```java
// Longest substring with at most K distinct characters
public int longestKDistinct(String s, int k) {
    Map<Character, Integer> freq = new HashMap<>();
    int left = 0, best = 0;
    for (int right = 0; right < s.length(); right++) {
        char c = s.charAt(right);
        freq.put(c, freq.getOrDefault(c, 0) + 1);
        // Shrink until the window is valid again
        while (freq.size() > k) {
            char lc = s.charAt(left);
            freq.put(lc, freq.get(lc) - 1);
            if (freq.get(lc) == 0) freq.remove(lc);
            left++;
        }
        best = Math.max(best, right - left + 1);
    }
    return best;
}
```
Template — fixed-size window
```java
// Max sum of any K consecutive elements
public int maxSumK(int[] nums, int k) {
    int sum = 0;
    for (int i = 0; i < k; i++) sum += nums[i];
    int best = sum;
    for (int i = k; i < nums.length; i++) {
        sum += nums[i] - nums[i - k]; // slide
        best = Math.max(best, sum);
    }
    return best;
}
```
Prefix Sum
A prefix-sum array stores running totals: prefix[i] = sum of nums[0..i-1]. Once you have it, the sum of any subarray [l..r] is just prefix[r+1] - prefix[l]—a single subtraction instead of looping. It's the world's most underrated trick.
Template
```java
int[] prefix = new int[nums.length + 1];
for (int i = 0; i < nums.length; i++)
    prefix[i + 1] = prefix[i] + nums[i];

// sum of nums[l..r] inclusive:
int sum = prefix[r + 1] - prefix[l];
```
The hashmap upgrade — "subarray sum equals K"
Combine prefix sums with a hashmap and you can count subarrays with a specific sum in O(n):
```java
public int subarraySum(int[] nums, int k) {
    Map<Integer, Integer> seen = new HashMap<>();
    seen.put(0, 1); // empty prefix
    int sum = 0, count = 0;
    for (int n : nums) {
        sum += n;
        count += seen.getOrDefault(sum - k, 0);
        seen.put(sum, seen.getOrDefault(sum, 0) + 1);
    }
    return count;
}
```
Binary Search
Binary search is the phonebook trick formalised. Given something sorted (or monotonic), you halve the search space at every step, reaching the target in O(log n). The shocker is how often it applies to problems that don't look sorted at first glance—anywhere you can ask "is this value too big or too small?" and the answer is monotonic, binary search applies.
An array of 100 elements takes at most seven probes, since 2⁷ = 128 > 100. Binary search is just twenty-questions running over an array.
The canonical template
```java
public int binarySearch(int[] nums, int target) {
    int lo = 0, hi = nums.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2; // avoids overflow
        if (nums[mid] == target) return mid;
        if (nums[mid] < target) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1;
}
```
Binary search on the answer
The advanced trick: when the answer space is an integer range and you can check a candidate quickly, binary-search the answer itself.
```java
// Koko eating bananas — find the minimum eating speed
public int minEatingSpeed(int[] piles, int h) {
    int lo = 1, hi = 0;
    for (int p : piles) hi = Math.max(hi, p);
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (canFinish(piles, h, mid)) hi = mid;
        else lo = mid + 1;
    }
    return lo;
}

private boolean canFinish(int[] piles, int h, int speed) {
    long hours = 0;
    for (int p : piles) hours += (p + speed - 1) / speed; // ceiling division
    return hours <= h;
}
```
Always write mid = lo + (hi - lo) / 2, never (lo + hi) / 2. With large indices the latter overflows and gives a negative number — a notorious bug that hid in Java's standard library binary search for years.
Sorting
For interview purposes, you rarely implement a sort by hand — Java's built-in Arrays.sort uses Dual-Pivot Quicksort for primitives and Timsort for objects, both excellent. What matters is knowing when to sort, the cost (O(n log n)), and how to customise comparison.
Quick reference
| Algorithm | Avg | Worst | Space | Stable? |
|---|---|---|---|---|
| Bubble / Insertion | O(n²) | O(n²) | O(1) | Yes |
| Merge Sort | O(n log n) | O(n log n) | O(n) | Yes |
| Quick Sort | O(n log n) | O(n²) | O(log n) | No |
| Heap Sort | O(n log n) | O(n log n) | O(1) | No |
| Counting Sort | O(n + k) | O(n + k) | O(k) | Yes |
Sorting in Java
```java
// Primitives — ascending only
int[] nums = {3, 1, 4, 1, 5};
Arrays.sort(nums);

// Objects with custom comparator
Integer[] arr = {3, 1, 4};
Arrays.sort(arr, (a, b) -> b - a); // descending

// Sort a List
List<String> list = new ArrayList<>(List.of("banana", "apple"));
Collections.sort(list);

// Sort an int[][] by second column
int[][] pairs = {{1, 3}, {2, 1}};
Arrays.sort(pairs, (a, b) -> a[1] - b[1]);

// Multi-key sort
Arrays.sort(pairs, (a, b) -> {
    if (a[0] != b[0]) return a[0] - b[0];
    return a[1] - b[1];
});
```
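One caveat on the comparator lambdas above: subtraction like `(a, b) -> b - a` overflows when the difference exceeds the int range (say, comparing Integer.MIN_VALUE against a positive value), and an overflowed comparator silently mis-sorts. `Integer.compare` and `Comparator.comparingInt` avoid this entirely. A sketch (the class name `SafeSort` is mine, for illustration):

```java
import java.util.Arrays;
import java.util.Comparator;

class SafeSort {
    // (a, b) -> b - a can overflow; Integer.compare never does.
    static Integer[] sortDescending(Integer[] arr) {
        Arrays.sort(arr, (a, b) -> Integer.compare(b, a));
        return arr;
    }

    // comparingInt reads cleanly and is overflow-safe for key extraction.
    static int[][] sortBySecondColumn(int[][] pairs) {
        Arrays.sort(pairs, Comparator.comparingInt((int[] a) -> a[1]));
        return pairs;
    }
}
```

For interview-sized inputs the subtraction idiom is fine; reach for `Integer.compare` whenever values can span the full int range.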
Recursion
Recursion is a function that solves a problem by calling itself on a smaller piece of the same problem. The art is in two parts: the base case (when do we stop?) and the recursive case (how does this problem reduce to a smaller version?). Every tree problem, every backtracking problem, every divide-and-conquer algorithm is recursion in disguise.
The three questions to ask
- What's the base case? The smallest input where the answer is obvious without further recursion.
- What's the recurrence? Assuming the function works for inputs smaller than n, how do I build the answer for n?
- Does it terminate? Does every recursive call move toward the base case?
Template
```java
// Factorial — the canonical example
public long factorial(int n) {
    if (n <= 1) return 1;        // base case
    return n * factorial(n - 1); // recurse
}

// Tree height
public int height(TreeNode n) {
    if (n == null) return 0;
    return 1 + Math.max(height(n.left), height(n.right));
}
```
When you call height(n.left), assume it correctly returns the height of the left subtree — the recursive leap of faith. Don't trace it. Build the answer for the current node and let recursion handle the rest.
Backtracking
Backtracking is recursion that tries every option, undoes the bad ones, and remembers the good ones. You make a choice, recurse, then "un-make" the choice so you can try the next one. It's how you generate all permutations, all subsets, solve sudoku, place N queens, navigate mazes. Anytime the problem says "find all..." or "is there a way to...", backtracking is on the table.
The universal template
```
void backtrack(state, choices) {
    if (isGoal(state)) {
        result.add(new ArrayList<>(state)); // snapshot!
        return;
    }
    for (choice : choices) {
        if (!isValid(choice, state)) continue;
        state.add(choice);               // make choice
        backtrack(state, choices);       // explore
        state.remove(state.size() - 1);  // UNDO
    }
}
```
Example — generate all subsets
```java
public List<List<Integer>> subsets(int[] nums) {
    List<List<Integer>> res = new ArrayList<>();
    backtrack(0, nums, new ArrayList<>(), res);
    return res;
}

private void backtrack(int start, int[] nums,
                       List<Integer> cur, List<List<Integer>> res) {
    res.add(new ArrayList<>(cur)); // every state is an answer
    for (int i = start; i < nums.length; i++) {
        cur.add(nums[i]);                  // choose
        backtrack(i + 1, nums, cur, res);  // explore
        cur.remove(cur.size() - 1);        // un-choose
    }
}
```
Note the snapshot: new ArrayList<>(cur). If you added cur directly, every entry in res would be the same reference that keeps getting mutated — you'd end up with a list of empty lists. Always snapshot.
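The same choose/explore/un-choose skeleton generates the permutations mentioned earlier; the only changes are the goal test (the path is full) and the validity check (a `used` array). A sketch, with class and method names of my own choosing:

```java
import java.util.ArrayList;
import java.util.List;

class Permutations {
    // All orderings of nums via the universal template.
    static List<List<Integer>> permute(int[] nums) {
        List<List<Integer>> res = new ArrayList<>();
        backtrack(nums, new boolean[nums.length], new ArrayList<>(), res);
        return res;
    }

    static void backtrack(int[] nums, boolean[] used,
                          List<Integer> cur, List<List<Integer>> res) {
        if (cur.size() == nums.length) {   // goal: every element placed
            res.add(new ArrayList<>(cur)); // snapshot, as above
            return;
        }
        for (int i = 0; i < nums.length; i++) {
            if (used[i]) continue;                        // isValid: skip placed
            used[i] = true; cur.add(nums[i]);             // choose
            backtrack(nums, used, cur, res);              // explore
            cur.remove(cur.size() - 1); used[i] = false;  // un-choose
        }
    }
}
```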
Breadth-First Search
BFS explores a graph or tree level by level, like ripples spreading from a pebble. Because it reaches everything one step away before going two steps, BFS is the natural choice for shortest path in an unweighted graph and for any "minimum number of moves" question.
When to recognise it
- "Shortest path / fewest moves / minimum steps" in an unweighted graph.
- "Level-order" traversal of a tree.
- Grid problems where each cell connects to its 4 neighbours.
- Multi-source spreading (rotting oranges, walls and gates).
Template — grid BFS
```java
int[][] DIRS = {{-1,0}, {1,0}, {0,-1}, {0,1}};

public int shortestPath(int[][] grid, int[] start, int[] end) {
    int R = grid.length, C = grid[0].length;
    boolean[][] seen = new boolean[R][C];
    Queue<int[]> q = new ArrayDeque<>();
    q.offer(new int[]{start[0], start[1]});
    seen[start[0]][start[1]] = true;
    int steps = 0;
    while (!q.isEmpty()) {
        int sz = q.size();
        for (int i = 0; i < sz; i++) {
            int[] cur = q.poll();
            if (cur[0] == end[0] && cur[1] == end[1])
                return steps;
            for (int[] d : DIRS) {
                int r = cur[0] + d[0], c = cur[1] + d[1];
                if (r < 0 || r >= R || c < 0 || c >= C) continue;
                if (seen[r][c] || grid[r][c] == 1) continue;
                seen[r][c] = true;
                q.offer(new int[]{r, c});
            }
        }
        steps++;
    }
    return -1;
}
```
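The multi-source spreading mentioned above (rotting oranges, walls and gates) needs only one change to this template: seed the queue with every source before the loop starts, so all ripples expand together. A hedged sketch (class and method names are mine; cells equal to 1 are treated as sources, and walls could be skipped exactly as in the template):

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

class MultiSourceBfs {
    static final int[][] DIRS = {{-1,0}, {1,0}, {0,-1}, {0,1}};

    // Distance from each cell to its NEAREST source; -1 if unreachable.
    static int[][] distanceFromSources(int[][] grid) {
        int R = grid.length, C = grid[0].length;
        int[][] dist = new int[R][C];
        for (int[] row : dist) Arrays.fill(row, -1);
        Queue<int[]> q = new ArrayDeque<>();
        // Seed the queue with EVERY source: the only change from single-source
        for (int r = 0; r < R; r++)
            for (int c = 0; c < C; c++)
                if (grid[r][c] == 1) { dist[r][c] = 0; q.offer(new int[]{r, c}); }
        while (!q.isEmpty()) {
            int[] cur = q.poll();
            for (int[] d : DIRS) {
                int r = cur[0] + d[0], c = cur[1] + d[1];
                if (r < 0 || r >= R || c < 0 || c >= C || dist[r][c] != -1) continue;
                dist[r][c] = dist[cur[0]][cur[1]] + 1;
                q.offer(new int[]{r, c});
            }
        }
        return dist;
    }
}
```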
Depth-First Search
DFS dives as deep as possible before backing up. If BFS is ripples spreading evenly, DFS is a single explorer plunging into one branch until it hits a wall, then backing up to try the next branch. It's the workhorse of tree problems, connectivity questions, and path-finding when you just need any path, not the shortest.
Recursive template
```java
void dfs(int node, List<List<Integer>> graph, boolean[] seen) {
    if (seen[node]) return;
    seen[node] = true;
    // process node
    for (int nb : graph.get(node)) dfs(nb, graph, seen);
}
```
Iterative template (with a stack)
```java
void dfsIter(int start, List<List<Integer>> graph) {
    Deque<Integer> stack = new ArrayDeque<>();
    boolean[] seen = new boolean[graph.size()];
    stack.push(start);
    while (!stack.isEmpty()) {
        int node = stack.pop();
        if (seen[node]) continue;
        seen[node] = true;
        for (int nb : graph.get(node))
            if (!seen[nb]) stack.push(nb);
    }
}
```
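The connectivity questions mentioned above reduce to the recursive template directly: run DFS from every unseen node, and each fresh start discovers exactly one component. A sketch using the same adjacency-list shape (the class name is illustrative):

```java
import java.util.List;

class Components {
    // Number of connected components in an undirected graph
    // given as adjacency lists.
    static int countComponents(List<List<Integer>> graph) {
        boolean[] seen = new boolean[graph.size()];
        int count = 0;
        for (int i = 0; i < graph.size(); i++) {
            if (!seen[i]) {          // unseen node → a brand-new component
                dfs(i, graph, seen); // mark the whole component
                count++;
            }
        }
        return count;
    }

    static void dfs(int node, List<List<Integer>> graph, boolean[] seen) {
        if (seen[node]) return;
        seen[node] = true;
        for (int nb : graph.get(node)) dfs(nb, graph, seen);
    }
}
```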
Topological Sort
Imagine you're scheduling university courses where some have prerequisites. CS201 requires CS101. CS301 requires CS201. A topological sort gives you a valid order to take everything—an order that never breaks a dependency. It only works on directed acyclic graphs (DAGs); if there's a cycle, no valid order exists.
Kahn's algorithm — BFS with in-degree counts
```java
public int[] topoSort(int n, int[][] prereqs) {
    List<List<Integer>> graph = new ArrayList<>();
    for (int i = 0; i < n; i++) graph.add(new ArrayList<>());
    int[] inDeg = new int[n];
    for (int[] p : prereqs) {
        graph.get(p[1]).add(p[0]); // p[1] → p[0]
        inDeg[p[0]]++;
    }
    Queue<Integer> q = new ArrayDeque<>();
    for (int i = 0; i < n; i++)
        if (inDeg[i] == 0) q.offer(i);
    int[] order = new int[n];
    int idx = 0;
    while (!q.isEmpty()) {
        int node = q.poll();
        order[idx++] = node;
        for (int nb : graph.get(node))
            if (--inDeg[nb] == 0) q.offer(nb);
    }
    return idx == n ? order : new int[0]; // empty = cycle
}
```
Dijkstra — Weighted Shortest Path
BFS finds the shortest path when every edge costs 1. The moment edges have different weights—roads with tolls, flights with fares, networks with latencies—BFS breaks. Dijkstra's algorithm fixes this by always expanding the cheapest-so-far node next, using a min-heap. It's BFS with a priority queue. Caveat: all edge weights must be non-negative.
Java template
```java
public int[] dijkstra(int n, List<int[]>[] graph, int src) {
    // graph[u] contains {v, weight} pairs
    int[] dist = new int[n];
    Arrays.fill(dist, Integer.MAX_VALUE);
    dist[src] = 0;
    PriorityQueue<int[]> pq = new PriorityQueue<>((a, b) -> a[1] - b[1]);
    pq.offer(new int[]{src, 0});
    while (!pq.isEmpty()) {
        int[] cur = pq.poll();
        int u = cur[0], d = cur[1];
        if (d > dist[u]) continue; // stale entry
        for (int[] edge : graph[u]) {
            int v = edge[0], w = edge[1];
            if (d + w < dist[v]) {
                dist[v] = d + w;
                pq.offer(new int[]{v, dist[v]});
            }
        }
    }
    return dist;
}
```
Complexity: O((V + E) log V) with a binary heap—fast enough for almost everything.
Greedy
A greedy algorithm makes the locally best choice at every step and hopes it adds up to the globally best answer. Often it does. Often it doesn't. The art is knowing when. Greedy is fast when it works—usually O(n log n) including the sort—but proving it's correct on a given problem is the actual challenge.
Telltale signs greedy might work
- Sorting unlocks structure — sort by deadline, by start time, by ratio.
- "Pick maximum number of X" or "minimum cost to do Y."
- Interval problems — selecting / merging / scheduling.
- You can argue an exchange: swapping a non-greedy choice for the greedy one never makes things worse.
Classic example — interval scheduling
// Maximum non-overlapping meetings — sort by END time, pick earliest-ending
```java
// Maximum non-overlapping meetings — sort by END time, pick earliest-ending
public int maxMeetings(int[][] meetings) {
    Arrays.sort(meetings, (a, b) -> a[1] - b[1]);
    int count = 0, lastEnd = Integer.MIN_VALUE;
    for (int[] m : meetings) {
        if (m[0] >= lastEnd) {
            count++;
            lastEnd = m[1];
        }
    }
    return count;
}
```
Dynamic Programming
Dynamic programming is recursion that remembers. The same subproblem comes up again and again; instead of resolving it, you cache the answer the first time and reuse it. That's the whole idea. The hard part isn't coding—it's defining the right "state" so that the answer for state S can be built from the answers of smaller states.
The DP method — three steps
- Define the state. What does dp[i] (or dp[i][j]) mean? Write it out as an English sentence: "the maximum sum of a subarray ending at index i."
- Write the recurrence. How does the answer for state S build from smaller states?
- Identify base cases. The smallest states where the answer is direct.
Top-down (memoization)
```java
int[] memo;

public int climbStairs(int n) {
    memo = new int[n + 1];
    return solve(n);
}

private int solve(int n) {
    if (n <= 2) return n;
    if (memo[n] != 0) return memo[n];
    return memo[n] = solve(n - 1) + solve(n - 2);
}
```
Bottom-up (tabulation)
```java
public int climbStairs(int n) {
    if (n <= 2) return n;
    int[] dp = new int[n + 1];
    dp[1] = 1; dp[2] = 2;
    for (int i = 3; i <= n; i++)
        dp[i] = dp[i - 1] + dp[i - 2];
    return dp[n];
}
```
The DP family — patterns you'll see again and again
- 1D DP — Climbing Stairs, House Robber, Max Subarray (Kadane).
- 0/1 Knapsack — Pick or skip each item. Subset Sum, Partition Equal Subset.
- Unbounded Knapsack — Use each item infinite times. Coin Change.
- Longest Increasing Subsequence (LIS) — O(n²) DP or O(n log n) patience-sort.
- Longest Common Subsequence (LCS) — 2D DP on two strings.
- Edit Distance — 2D DP, the canonical string-comparison problem.
- Matrix DP — Unique Paths, Min Path Sum on a grid.
- Interval DP — Burst Balloons, Matrix Chain Multiplication.
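The 1D row's Max Subarray (Kadane) is worth having in muscle memory, since its whole state is one number: the best sum of a subarray ending at the current index. A sketch (the class name is mine):

```java
class Kadane {
    // dp state: "endingHere" = max sum of a subarray ending at index i.
    // Recurrence: either extend the previous run or restart at nums[i].
    static int maxSubArray(int[] nums) {
        int best = nums[0], endingHere = nums[0];
        for (int i = 1; i < nums.length; i++) {
            endingHere = Math.max(nums[i], endingHere + nums[i]);
            best = Math.max(best, endingHere);
        }
        return best;
    }
}
```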
Example — 0/1 Knapsack template
```java
public int knapsack(int[] wt, int[] val, int cap) {
    int n = wt.length;
    int[][] dp = new int[n + 1][cap + 1];
    for (int i = 1; i <= n; i++) {
        for (int w = 0; w <= cap; w++) {
            dp[i][w] = dp[i - 1][w]; // skip item
            if (w >= wt[i - 1])
                dp[i][w] = Math.max(dp[i][w],
                    dp[i - 1][w - wt[i - 1]] + val[i - 1]);
        }
    }
    return dp[n][cap];
}
```
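For contrast, the unbounded-knapsack row's Coin Change collapses to a 1-D table over amounts, because every coin can be reused. A hedged sketch (class and method names are mine):

```java
import java.util.Arrays;

class CoinChange {
    // Minimum number of coins summing to amount, or -1 if impossible.
    static int coinChange(int[] coins, int amount) {
        int[] dp = new int[amount + 1];
        Arrays.fill(dp, amount + 1); // amount + 1 acts as "infinity"
        dp[0] = 0;                   // base case: zero coins make zero
        for (int a = 1; a <= amount; a++)
            for (int c : coins)
                if (c <= a) // reuse allowed: dp[a - c] is the SAME pass
                    dp[a] = Math.min(dp[a], dp[a - c] + 1);
        return dp[amount] > amount ? -1 : dp[amount];
    }
}
```

The telltale difference from 0/1: here a state may be built from the current pass (coins reusable), whereas the 0/1 table only ever reads the previous row.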
Bit Manipulation
Every integer is just a string of 0s and 1s. Bit manipulation works at that level directly—setting, clearing, flipping, or counting individual bits. The payoff: operations that would take a loop in normal code become single instructions. Used cleverly, they replace HashSets with a single integer.
The operations to memorise
```java
// AND &   OR |   XOR ^   NOT ~   Left <<   Right >>
int n = 12;      // binary: 1100
n & 1;           // 0 — is last bit set?
n | (1 << 3);    // set bit 3
n & ~(1 << 2);   // clear bit 2
n ^ (1 << 2);    // flip bit 2
(n >> k) & 1;    // get value of bit k (0 or 1)

// XOR magic: x ^ x == 0 and x ^ 0 == x
// → find the unique number in {a, a, b, b, c, c, d}: XOR them all

// Useful built-ins
Integer.bitCount(n);              // number of 1-bits
Integer.toBinaryString(n);
Integer.numberOfTrailingZeros(n);

// n & (n - 1) clears the LOWEST set bit — count bits in O(set-bits)
int count = 0;
while (n != 0) {
    n &= (n - 1);
    count++;
}
```
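The XOR trick from the comment above, written out: pairs cancel because x ^ x == 0, so XOR-ing everything leaves the unique value. A sketch (the class name is mine):

```java
class SingleNumber {
    // Every element appears exactly twice except one; find that one
    // in O(n) time and O(1) space.
    static int singleNumber(int[] nums) {
        int result = 0;
        for (int n : nums) result ^= n; // duplicates cancel pairwise
        return result;
    }
}
```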
Monotonic Stack
A monotonic stack is a stack where you maintain an invariant: elements inside it stay in increasing (or decreasing) order. Whenever pushing a new element would break the order, you pop the offending elements first. Sounds niche, but it's the secret to "next greater element," "largest rectangle in histogram," and "daily temperatures" problems—all in O(n).
Template — next greater element
```java
public int[] nextGreater(int[] nums) {
    int n = nums.length;
    int[] res = new int[n];
    Arrays.fill(res, -1);
    Deque<Integer> stack = new ArrayDeque<>(); // indices
    for (int i = 0; i < n; i++) {
        while (!stack.isEmpty() && nums[stack.peek()] < nums[i]) {
            res[stack.pop()] = nums[i];
        }
        stack.push(i);
    }
    return res;
}
```
The recognition playbook
When a new problem arrives, walk this checklist. The first match is usually the right pattern.
| If the problem mentions… | Reach for… |
|---|---|
| Sorted array, target sum, pair / triplet | Two Pointers |
| Contiguous subarray / substring, "longest / smallest" | Sliding Window |
| Many range-sum queries on a static array | Prefix Sum |
| Sorted input, or "find / search / minimum X such that…" | Binary Search (sometimes "on the answer") |
| "All permutations / subsets / combinations / ways" | Backtracking |
| "Top K" / "Kth smallest / largest" | Heap (PriorityQueue) |
| Tree problem, anything | Recursion (DFS), sometimes BFS for levels |
| Shortest path in unweighted graph / grid | BFS |
| Shortest path with non-negative weights | Dijkstra |
| Connected components / "are X and Y connected?" | Union-Find or DFS |
| Dependencies / ordering / prerequisites | Topological Sort |
| "Count / minimum / maximum number of ways" | Dynamic Programming |
| Two strings → compare / transform / match | 2D DP (LCS, Edit Distance pattern) |
| Pick items with constraints | Knapsack DP |
| "Next greater / smaller element" | Monotonic Stack |
| Prefix matching / autocomplete | Trie |
| "Find unique number" with bit-pair-up tricks | XOR / Bit Manipulation |
| Locally-best choice each step works out | Greedy (always verify with a test case) |
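The Top-K row has no template earlier in this chapter; the standard approach is a min-heap capped at size k, so the root is always the k-th largest seen so far, for O(n log k) total. A hedged sketch (the class name is mine):

```java
import java.util.PriorityQueue;

class TopK {
    // Kth largest element: keep a min-heap of the k largest values.
    // Beats full sorting when k << n.
    static int kthLargest(int[] nums, int k) {
        PriorityQueue<Integer> heap = new PriorityQueue<>(); // min-heap
        for (int n : nums) {
            heap.offer(n);
            if (heap.size() > k) heap.poll(); // evict the smallest
        }
        return heap.peek(); // root = k-th largest overall
    }
}
```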
Patterns in hand. Now sharpen the language you'll write them in.