A free, self-paced field guide · No course required

Learn DSA the way it actually clicks.

Plain-English analogies. Production Java. Hand-picked LeetCode problems after every concept. Built for people who want to understand, not just memorize.

Most DSA resources teach you to recognize code. This one teaches you to build intuition—so that when a problem you've never seen before lands in front of you, your brain reaches for the right tool on its own.

Complexity

Big-O notation without the math degree. Learn to look at code and immediately know if it'll scale—or melt.

Start here

Data Structures

Arrays, hashmaps, trees, graphs, heaps, tries—every container explained with an everyday-life analogy, then coded in Java.

Open the chapter

Algorithms

Every pattern interviewers actually ask: two pointers, sliding window, BFS, DFS, DP, backtracking—with the trigger phrases that tell you which to use.

See the patterns

Java Toolkit

Exactly the Java you need for DSA—Collections, Comparator, generics, common gotchas. Nothing more.

Open the manual
Chapter I · § 1

Why complexity matters

Imagine you're searching a phone book for "Sharma." You could flip through every page from "A" onward—or you could open to the middle, see whether "S" comes before or after, and keep halving the remaining pages. Both methods find the name. One takes seconds; the other could take an hour. Complexity analysis is the language we use to describe the difference.

When an interviewer says "what's the time complexity of your solution?" they're really asking: how much longer does your code take when the input grows? Not how many seconds—that depends on the machine. They want the shape of the growth.

Think of a chef. A recipe that says "chop one onion" takes one onion's worth of time. A recipe that says "chop all the onions, then for each onion check every other onion" takes quadratically longer as the pile grows. Big-O is how we describe these recipes in shorthand.
Chapter I · § 2

What Big-O really is

Big-O describes the upper bound on how an algorithm scales with input size n. It ignores constants and lower-order terms—because when n is huge, only the dominant term matters.

The mental model

If your algorithm performs 3n² + 5n + 100 operations, we drop everything except the biggest term and call it O(n²). Why? Because when n = 1,000,000, that 3n² dwarfs everything else.
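
Concretely: at n = 1,000,000, 3n² is 3,000,000,000,000 operations while 5n + 100 is barely 5,000,100; the quadratic term alone accounts for roughly 99.9998% of the work.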

Picking the dominant term is like describing a city's size. Mumbai has millions of people. Yes, technically it has 12,478,447 plus a few stray pigeons—but "millions" is the useful answer.

The Big-O family

  • Big-O (O) — worst case, the ceiling.
  • Big-Theta (Θ) — tight bound, when upper = lower.
  • Big-Omega (Ω) — best case, the floor.

In interviews, "complexity" almost always means Big-O (worst case). Use that as your default.

Chapter I · § 3

Reading code for complexity

You don't compute Big-O—you read it. A few rules cover 95% of cases:

Rule 1 — A single loop over n items is O(n)

// O(n) — visit each element once
for (int i = 0; i < nums.length; i++) {
    System.out.println(nums[i]);
}

Rule 2 — Nested loops multiply

// O(n²) — for every i, we scan every j
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        System.out.println(i + "," + j);
    }
}

Rule 3 — Halving the input is O(log n)

Anything that repeatedly halves what's left—binary search, balanced trees, divide-and-conquer—is logarithmic.

// O(log n) — binary search
static int binarySearch(int[] nums, int target) {
    int lo = 0, hi = nums.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;  // written this way to avoid int overflow
        if (nums[mid] == target) return mid;
        if (nums[mid] < target) lo = mid + 1;
        else                    hi = mid - 1;
    }
    return -1; // target not present
}

Rule 4 — Recursion that branches doubles the work at every step

Naïve recursive Fibonacci spawns two new calls from every call, giving roughly O(2ⁿ)—catastrophic. (We'll fix this with memoization in the Algorithms chapter.)
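
To make the branching visible, here's a minimal sketch of the naïve version (the method name fib is ours, not from any library):

// O(2ⁿ) — every call above the base cases spawns two more
static long fib(int n) {
    if (n <= 1) return n;            // base cases: fib(0) = 0, fib(1) = 1
    return fib(n - 1) + fib(n - 2);  // two branches per call, ~2ⁿ calls total
}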

Rule of thumb
Count the loops, count the recursive branches, look for halving. If n ≤ 10⁵, you probably need O(n) or O(n log n). O(n²) is fine up to about n = 10⁴. Beyond that, you need a smarter approach.
Chapter I · § 4

The growth-rate hierarchy

From fastest to slowest, the complexity classes you'll meet 99% of the time:

Big-O      | Name         | Example                       | Verdict
O(1)       | Constant     | HashMap lookup, array index   | Holy grail
O(log n)   | Logarithmic  | Binary search, BST find      | Excellent
O(n)       | Linear       | Single pass over an array     | Great
O(n log n) | Linearithmic | Merge sort, heap sort         | Acceptable, often optimal
O(n²)      | Quadratic    | Nested loops, bubble sort     | OK for small n
O(2ⁿ)      | Exponential  | Naïve recursion, subset gen   | Only for tiny n
O(n!)      | Factorial    | Permutations, brute-force TSP | n > 11 = forget it

Picture a hallway with n doors. O(1) opens one door. O(log n) rules out half the doors, then half of what's left. O(n) tries every door. O(n²) walks back through every other door for each door it opens. O(2ⁿ) explores every possible combination of open and closed doors. O(n!) tries every possible order of opening them. The leaps between rows are enormous.
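
If you'd rather see the gap than take it on faith, a quick throwaway program of our own (not part of the guide's toolkit) prints raw operation counts:

// Prints how many "operations" each class implies for small n
public class GrowthDemo {
    public static void main(String[] args) {
        for (int n : new int[] {10, 20, 30}) {
            System.out.printf("n=%-3d  n²=%-5d  2ⁿ=%-12.0f  n!=%.3e%n",
                    n, n * n, Math.pow(2, n), factorial(n));
        }
    }

    static double factorial(int n) {
        double result = 1;
        for (int i = 2; i <= n; i++) result *= i;
        return result;  // double avoids long overflow; 30! ≈ 2.65e32
    }
}

Even at n = 30 the quadratic column reads 900, while 2ⁿ has already passed a billion.
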
Chapter I · § 5

Space complexity

Same idea, different resource: instead of "how many operations?" we ask "how much extra memory does this use as n grows?"

Crucially, space complexity counts auxiliary space—memory you allocate beyond the input. Returning the input array unchanged is still considered O(1) extra space.

Common space patterns

  • O(1) — A few variables. A two-pointer sweep through an array uses constant space (see the sketch after this list).
  • O(log n) — The call stack of a balanced recursive algorithm (like merge sort's recursion depth).
  • O(n) — Building a hashmap of every element, or a copy of the array.
  • O(n²) — A 2D DP table of size n × n.
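
Here's that O(1) pattern in miniature: an in-place reversal (our own illustrative snippet) that touches every element yet allocates only two indices and one temporary:

// O(n) time, O(1) extra space — reverse an array in place
static void reverse(int[] a) {
    int left = 0, right = a.length - 1;
    while (left < right) {
        int tmp = a[left];    // the only extra memory: two indices and one temp,
        a[left] = a[right];   // no matter how long the array is
        a[right] = tmp;
        left++;
        right--;
    }
}
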
Interview tip
Always state both time and space complexity when finishing a solution. Saying "O(n) time, O(1) space" out loud signals senior thinking even on easy problems.
Chapter I · § 6

Amortized & average case

Sometimes a single operation is occasionally expensive, but on average over many operations, it's cheap. That's amortized complexity.

Think of an ArrayList. Adding an element is normally O(1). But when it's full, it doubles in size and copies everything—an O(n) hit. That copy is rare though. If you average the cost over many adds, it works out to O(1) per add. We say add() is amortized O(1).
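
A stripped-down sketch of that doubling strategy (a toy class of ours, not the real java.util.ArrayList source) makes the rare-but-expensive copy visible:

import java.util.Arrays;

// Toy dynamic array: add() is O(1) except when full, then O(n) to regrow
class GrowableIntArray {
    private int[] data = new int[4];
    private int size = 0;

    void add(int value) {
        if (size == data.length) {
            // the rare O(n) step: copy everything into a doubled buffer
            data = Arrays.copyOf(data, data.length * 2);
        }
        data[size++] = value;  // the common O(1) step
    }
}

Doubling (rather than growing by a fixed amount) is what makes the math work: across n adds, the copies move roughly 4 + 8 + 16 + … ≈ 2n elements in total, so each add averages out to O(1).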

HashMap lookups are similar: worst case O(n) when every key collides into one bucket, but with a decent hash function the average is O(1). When people say "HashMap is O(1)," they mean the average case with a good hash function—not the worst case.
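
If you're curious what "every key collides" looks like, here's a contrived key class (purely illustrative; never ship something like this):

// Every instance hashes to the same bucket, so the map degenerates
// into a linear scan: lookups drift toward O(n) instead of O(1)
class BadKey {
    final int id;
    BadKey(int id) { this.id = id; }

    @Override public int hashCode() { return 42; }  // all keys collide
    @Override public boolean equals(Object o) {
        return o instanceof BadKey && ((BadKey) o).id == id;
    }
}

Recent JDKs soften the blow by converting long collision chains into balanced trees, but the lesson stands: hash quality decides whether you actually get the O(1) you were promised.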

Chapter I · § 7

Cheat-sheet: data structure operations

Memorize this table. Interviewers expect instant recall.

Structure            | Access    | Search   | Insert   | Delete
Array                | O(1)      | O(n)     | O(n)     | O(n)
ArrayList (end)      | O(1)      | O(n)     | O(1)*    | O(n)
LinkedList           | O(n)      | O(n)     | O(1)     | O(1)
Stack / Queue        | —         | —        | O(1)     | O(1)
HashMap / HashSet    | —         | O(1)*    | O(1)*    | O(1)*
TreeMap / TreeSet    | —         | O(log n) | O(log n) | O(log n)
Heap (PriorityQueue) | O(1) peek | O(n)     | O(log n) | O(log n)
Binary Search Tree   | O(log n)  | O(log n) | O(log n) | O(log n)
Trie                 | —         | O(m)     | O(m)     | O(m)

* amortized · m = key length for Trie · — = not applicable

Chapter I · § 8

Practice problems

These problems aren't about coding—they're about analyzing. Look at each, name its complexity, and only then write the solution. Then check the discussion to see if a faster approach exists.

With complexity in your bones, you're ready for the next chapter — the data structures themselves.

Continue to Data Structures