Glossary

What Is Time Complexity? Big O Notation Explained

Time complexity describes how an algorithm's runtime grows as its input size grows. We express this using Big O notation, which characterises the worst-case growth rate, ignoring constants and lower-order terms. Understanding time complexity helps you choose the right algorithm and data structure for the scale of data you're working with.
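A minimal sketch of how growth rate drives algorithm choice: the same question ("do any two numbers sum to a target?") answered in O(n²) and in O(n). Function names are illustrative, not from any particular library.

```python
def contains_pair_sum_quadratic(nums, target):
    """O(n^2) time: nested loops examine every pair."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return True
    return False

def contains_pair_sum_linear(nums, target):
    """O(n) time: one pass, trading O(n) space for a hash set."""
    seen = set()
    for x in nums:
        if target - x in seen:
            return True
        seen.add(x)
    return False
```

Both return the same answers, but on a million-element input the quadratic version does roughly a trillion comparisons while the linear one does a million.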

The Big O Hierarchy

- O(1) constant: array index access, hash map lookup.
- O(log n) logarithmic: binary search, balanced BST operations; halves the problem each step.
- O(n) linear: iterating an array once.
- O(n log n) linearithmic: efficient sorting (merge sort, quicksort on average).
- O(n²) quadratic: nested loops; bubble sort, naive string matching.
- O(2ⁿ) exponential: recursive Fibonacci, subset generation; impractical beyond ~30 elements.
- O(n!) factorial: permutation generation; impractical beyond ~12 elements.
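As a concrete instance of O(log n) from the hierarchy above, here is a sketch of binary search: each iteration halves the remaining range, so a million-element array needs at most ~20 comparisons.

```python
def binary_search(sorted_items, target):
    """Return the index of target in a sorted list, or -1 if absent.

    O(log n) time: the search range [lo, hi] halves every iteration.
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1   # discard the lower half
        else:
            hi = mid - 1   # discard the upper half
    return -1
```

Note the precondition: the input must already be sorted, which is why binary search pairs naturally with an O(n log n) sort.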

Common Data Structure Operations

- Array: access O(1); search O(n); insert/delete at end O(1); insert/delete in middle O(n).
- Linked list: access O(n); insert/delete at head O(1).
- Hash map: average O(1) for get/set/delete; O(n) worst case (all keys collide into one bucket).
- Binary search tree (balanced): O(log n) for all operations.
- Heap: insert O(log n); get min/max O(1); delete min O(log n).
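The heap costs listed above can be seen with Python's standard `heapq` module, which implements a binary min-heap on top of a plain list (a sketch; the input values are arbitrary):

```python
import heapq

heap = []
for value in [5, 1, 4, 2]:
    heapq.heappush(heap, value)   # insert: O(log n) per push

smallest = heap[0]                # peek at the minimum: O(1)
popped = heapq.heappop(heap)      # delete-min: O(log n)
```

After the pushes, `heap[0]` is the minimum (1); popping removes it in logarithmic time and restores the heap property, so the next `heap[0]` is 2.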

Space Complexity

Space complexity measures memory usage growth. An algorithm can trade time for space: a hash map gives O(1) lookup but uses O(n) space. Recursive algorithms use O(depth) stack space — recursive DFS on a tree uses O(height). Iterative algorithms with constant extra variables use O(1) space. When designing algorithms, consider both time and space: sometimes a 2× slowdown that halves memory usage is the right trade.
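The stack-space point above can be sketched with two versions of the same computation: the recursive one holds one stack frame per element (O(n) space), while the iterative one needs only a single accumulator (O(1) extra space). Function names are illustrative.

```python
def sum_recursive(nums, i=0):
    """O(n) stack space: one call frame per remaining element."""
    if i == len(nums):
        return 0
    return nums[i] + sum_recursive(nums, i + 1)

def sum_iterative(nums):
    """O(1) extra space: a single accumulator variable."""
    total = 0
    for x in nums:
        total += x
    return total
```

Both are O(n) time; only their space profiles differ. In Python the recursive version will in fact hit the default recursion limit on long inputs, which makes the space cost very concrete.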