Why complexity matters
Imagine you're searching a phone book for "Sharma." You could flip through every page from "A" onward—or you could open to the middle, see whether "S" comes before or after, and keep halving the remaining pages. Both methods find the name. One takes seconds; the other could take an hour. Complexity analysis is the language we use to describe the difference.
When an interviewer says "what's the time complexity of your solution?" they're really asking: how much longer does your code take when the input grows? Not how many seconds—that depends on the machine. They want the shape of the growth.
What Big-O really is
Big-O describes the upper bound on how an algorithm scales with input size n. It ignores constants and lower-order terms—because when n is huge, only the dominant term matters.
The mental model
If your algorithm performs 3n² + 5n + 100 operations, we drop everything except the biggest term and call it O(n²). Why? Because when n = 1,000,000, the 3n² term is 3 × 10¹², while 5n + 100 is barely 5 × 10⁶: a vanishingly small fraction of the total.
The Big-O family
- Big-O (O) — worst case, the ceiling.
- Big-Theta (Θ) — tight bound: upper and lower bounds coincide.
- Big-Omega (Ω) — best case, the floor.
In interviews, "complexity" almost always means Big-O (worst case). Use that as your default.
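To make the three symbols concrete, here's a minimal sketch using linear search (an illustrative example, not from the discussion above): where the target sits determines which bound applies.

```java
// Linear search: Ω(1) best case (target is the first element),
// O(n) worst case (target is last or absent).
// An unsuccessful search is Θ(n): it always touches all n elements,
// so the upper and lower bounds meet.
static int linearSearch(int[] nums, int target) {
    for (int i = 0; i < nums.length; i++) {
        if (nums[i] == target) return i;
    }
    return -1;
}
```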
Reading code for complexity
You don't compute Big-O—you read it. A few rules cover 95% of cases:
Rule 1 — A single loop over n items is O(n)
```java
// O(n) — visit each element once
for (int i = 0; i < nums.length; i++) {
    System.out.println(nums[i]);
}
```
Rule 2 — Nested loops multiply
```java
// O(n²) — for every i, we scan every j
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        System.out.println(i + "," + j);
    }
}
```
Rule 3 — Halving the input is O(log n)
Anything that repeatedly halves what's left—binary search, balanced trees, divide-and-conquer—is logarithmic.
```java
// O(log n) — binary search over a sorted array
static int binarySearch(int[] nums, int target) {
    int lo = 0, hi = nums.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2; // written this way to avoid int overflow
        if (nums[mid] == target) return mid;
        if (nums[mid] < target) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1; // target not present
}
```
Rule 4 — Recursion that branches doubles each step
Naïve recursive Fibonacci spawns two new calls at every level, giving roughly O(2ⁿ)—catastrophic. (We'll fix this with memoization in the Algorithms chapter.)
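Here's a minimal sketch of that naïve version; the two recursive calls per invocation are what make the call tree explode:

```java
// Roughly O(2ⁿ) — each call spawns two more, so the call tree
// approximately doubles at every level of recursion.
static long fib(int n) {
    if (n <= 1) return n;           // base cases: fib(0) = 0, fib(1) = 1
    return fib(n - 1) + fib(n - 2); // two branches per call
}
```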
A practical rule of thumb from problem constraints: if n can reach 10⁵, you probably need O(n) or O(n log n). O(n²) is fine up to about n = 10⁴. Beyond that, you need a smarter approach.
The growth-rate hierarchy
Ordered from fastest runtime to slowest, these are the complexity classes you'll meet 99% of the time:
| Big-O | Name | Example | Verdict |
|---|---|---|---|
| O(1) | Constant | HashMap lookup, array index | Holy grail |
| O(log n) | Logarithmic | Binary search, BST find | Excellent |
| O(n) | Linear | Single pass over an array | Great |
| O(n log n) | Linearithmic | Merge sort, heap sort | Acceptable, often optimal |
| O(n²) | Quadratic | Nested loops, bubble sort | OK for small n |
| O(2ⁿ) | Exponential | Naïve recursion, subset generation | Only for tiny n |
| O(n!) | Factorial | Permutations, brute-force TSP | n > 11 = forget it |
Picture a hallway with n doors. O(1) opens one door. O(log n) opens half, then half of half. O(n) tries every door. O(n²) opens every door and, from each one, walks back through every other door. O(2ⁿ) explores every possible combination of open and closed doors. O(n!) tries every possible order of opening them. The leaps between rows are enormous.
Space complexity
Same idea, different resource: instead of "how many operations?" we ask "how much extra memory does this use as n grows?"
Crucially, space complexity counts auxiliary space—memory you allocate beyond the input. Returning the input array unchanged is still considered O(1) extra space.
Common space patterns
- O(1) — A few variables. Two pointers shifting through an array use constant space (see the sketch after this list).
- O(log n) — The call stack of a balanced recursive algorithm (like merge sort's recursion depth).
- O(n) — Building a hashmap of every element, or a copy of the array.
- O(n²) — A 2D DP table of size n × n.
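To make the O(1) pattern concrete, here's a sketch of an in-place two-pointer reversal (an illustrative example): it reads all n elements, so it's O(n) time, but it allocates only two index variables, so O(1) extra space.

```java
// O(n) time, O(1) auxiliary space: reverse an array in place.
// Only two index variables are allocated, no matter how large nums is.
static void reverse(int[] nums) {
    int lo = 0, hi = nums.length - 1;
    while (lo < hi) {
        int tmp = nums[lo];
        nums[lo++] = nums[hi];
        nums[hi--] = tmp;
    }
}
```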
Amortized & average case
Sometimes an operation is occasionally expensive but, averaged over a long sequence of calls, cheap. That's amortized complexity.
The classic example is ArrayList: most calls to add() just write into a free slot, but when the backing array fills up, every element must be copied into a bigger one, an O(n) step. Spread over all n insertions, that copying averages out, so add() is amortized O(1).
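Here's a simplified sketch of that growth strategy (a toy version, not the real java.util.ArrayList source, which grows by about 1.5× rather than 2×):

```java
// Toy dynamic array illustrating amortized O(1) add().
// Doubling at capacity n costs n copies, but buys n more cheap inserts,
// so the total cost of n adds is O(n): O(1) per add on average.
class DynamicArray {
    private int[] data = new int[2];
    private int size = 0;

    void add(int value) {
        if (size == data.length) {
            // Rare, expensive step: copy everything into a bigger array (O(n)).
            data = java.util.Arrays.copyOf(data, data.length * 2);
        }
        data[size++] = value; // Common, cheap step (O(1)).
    }
}
```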
HashMap lookups are similar: worst case O(n) when every key collides into one bucket, but with a decent hash function the average is O(1). When people say "HashMap is O(1)," they mean the average case (and amortized for inserts, which occasionally trigger a rehash), not the worst case.
Cheat-sheet: data structure operations
Memorize this table. Interviewers expect instant recall.
| Structure | Access | Search | Insert | Delete |
|---|---|---|---|---|
| Array | O(1) | O(n) | O(n) | O(n) |
| ArrayList | O(1) | O(n) | O(1)* at end | O(n) |
| LinkedList | O(n) | O(n) | O(1) | O(1) |
| Stack / Queue | — | — | O(1) | O(1) |
| HashMap / HashSet | — | O(1)* | O(1)* | O(1)* |
| TreeMap / TreeSet | — | O(log n) | O(log n) | O(log n) |
| Heap (PriorityQueue) | O(1) peek | O(n) | O(log n) | O(log n) |
| Binary Search Tree | O(log n)† | O(log n)† | O(log n)† | O(log n)† |
| Trie | — | O(m) | O(m) | O(m) |
* amortized · † average for a balanced tree; a degenerate (unbalanced) BST degrades to O(n) · m = key length for Trie
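To see the table earn its keep, consider checking an array for duplicates (an illustrative example): a naïve nested loop compares every pair in O(n²), while a HashSet with its O(1)* insert and search does it in one O(n) pass.

```java
// O(n) duplicate check: one pass, with O(1)* HashSet inserts and lookups.
// The nested-loop alternative compares all pairs and costs O(n²).
static boolean hasDuplicate(int[] nums) {
    java.util.Set<Integer> seen = new java.util.HashSet<>();
    for (int x : nums) {
        if (!seen.add(x)) return true; // add() returns false if x was already present
    }
    return false;
}
```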
Practice problems
These problems aren't about coding—they're about analyzing. Look at each, name its complexity, and only then write the solution. Then check the discussion to see if a faster approach exists.
With complexity in your bones, you're ready for the next chapter — the data structures themselves.