Big O Notation
The idea of Big O notation is to summarize how much slower a piece of code gets as its input grows larger. The point of such a summary is to make the dominant trend in a piece of code quickly visible: it moves the focus away from implementation details and onto how the execution time scales with the number of inputs the code takes. For example, code that performs 3n^2 + 5n operations on n inputs is summarized as O(n^2): the dominant term wins, and constant factors are dropped.
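More formally, writing f(n) = O(g(n)) says that past some input size n_0, f is bounded above by a constant multiple of g:

f(n) ≤ c * g(n) for all n ≥ n_0, for some fixed c > 0

which is exactly why constant factors and small inputs get ignored throughout these notes.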
With the definition in mind, let’s dive into examples of different Big O speed classifications:
O(1) - Constant Time
#include <stdio.h>
int main(int argc, char *argv[]) {
    printf("str\n");
    return 0;
}
This just prints str once; nothing about it depends on the input size argc. The function can be given any number of arguments, but it never uses them and always prints str exactly once, so it always runs in constant time: this is an example of O(1).
For future reference, we’ll refer to argc as n.
#include <stdio.h>
int main(int argc, char *argv[]) {
    printf("str\n");
    printf("more stuff\n");
    return 0;
}
Adding more print statements keeps the code at O(1). Even though it does more work now, that work still does not depend on the number of inputs, so the execution time is still constant.
O(log(n)) - Logarithmic Time
The code below runs in log_2(n) time. There is now a clear dependence on the input size, but the loop races through it: i doubles every iteration, so it covers the range from 1 to n in very few steps.
#include <stdio.h>
int main(int argc, char *argv[]) {
    int n = argc;
    for (int i = 1; i < n; i = i * 2) { // i doubles every iteration
        printf("%d\n", i);
    }
    return 0;
}
For n=1000 the program still finishes nearly instantly:
root@debian-test:~/# ./program $(seq 1000)
1
2
4
8
16
32
64
128
256
512
Why is this O(log(n))?
-> starting from i = 1, after k iterations i has become 2^k
-> the loop stops once 2^k ≥ n
-> solving for k gives k = log_2(n)
Even for a large input, the code only performs a few iterations: for n ≈ 1000, log_2(1000) ≈ 10, which matches the ten lines printed above.
O(n) - Linear Time
The classic for loop.
#include <stdio.h>
int main(int argc, char *argv[]) {
    int n = argc;
    for (int i = 1; i < n; i++) {
        printf("%d\n", i);
    }
    return 0;
}
Running it prints a line for every input:
root@debian-test:~/# ./program $(seq 1000)
1
2
3
...
998
999
1000
Linear time means the runtime grows at the same rate as the input size: if the program is fed twice as many arguments, it performs about twice as many operations, which takes about twice as much time.
Changing the step size of the increment doesn’t change the classification: i = i + 2 instead of i++ outputs fewer lines for the same n, but it is still classified as O(n). The number of operations performed still grows proportionally with the number of inputs n; a constant factor of one half does not change the overall trend. See the variant below.
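For instance, this variant of the loop above (only the increment is changed) does about n/2 iterations, which is still linear:
#include <stdio.h>
int main(int argc, char *argv[]) {
    int n = argc;
    for (int i = 1; i < n; i = i + 2) { // half the iterations, same O(n) trend
        printf("%d\n", i);
    }
    return 0;
}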
O(sqrt(n)) - Square Root Time
Some algorithms don’t need to examine all n inputs. For example, an algorithm that finds the divisors of a number n can skip most candidates: divisors come in pairs, since if i divides n, then so does n / i. Finding the smaller divisor of a pair automatically gives us the bigger one.
Once the algorithm has checked everything up to sqrt(n), it can stop, because any divisor bigger than sqrt(n) is just the other half of a pair it has already found.
#include <stdio.h>
#include <math.h> // link with -lm
int main(int argc, char *argv[]) {
    int n = argc;
    int i;
    int flag = 0;
    // check every candidate divisor up to sqrt(n)
    for (i = 1; i <= sqrt(n); i++) {
        if (n % i == 0) {
            printf("%d\n", i);
        }
        if (i == n / i) {
            flag = 1; // n is a perfect square
        }
    }
    // step back below sqrt(n); skip the root itself if n is a perfect
    // square, so it isn't printed twice
    i -= flag ? 2 : 1;
    // walk back down, printing the larger partner n/i of each small divisor i
    for (; i >= 1; i--) {
        if (n % i == 0) {
            printf("%d\n", n / i);
        }
    }
    return 0;
}
Running this code gives:
root@debian-test:~/# ./program $(seq 999)
1
2
4
5
8
10
20
25
40
50
100
125
200
250
500
1000
Noticeably more work than O(log(n)), but significantly faster than O(n). The fact that there are two loops does not matter for Big O: each loop does about sqrt(n) iterations, and 2 * sqrt(n) is still proportional to sqrt(n). For n = 1000 that is roughly 63 divisor checks instead of 1000.
O(n log(n)) - Logarithmic Time with a Linear Factor
Nested loops. When loops are nested, we multiply the individual time complexities of the loops together to arrive at the overall time complexity.
#include <stdio.h>
int main(int argc, char *argv[]) {
    int n = argc;
    for (int i = 1; i < n; i++) { // linear outer loop
        for (int j = 1; j < n; j = j * 2) { // logarithmic inner loop
            printf("%d\n", i);
        }
    }
    return 0;
}
Each value of i is printed about log_2(n) times, so for n ≈ 1000 the program outputs roughly 1000 * 10 = 10,000 lines. Much slower than plain O(log(n)), but still far better than quadratic.
O(n^2) - Quadratic Time
Similar to the previous example, but even slower. Two standard for loops with a simple increment:
#include <stdio.h>
int main(int argc, char *argv[]) {
    int n = argc;
    for (int i = 1; i < n; i++) {
        for (int j = 1; j < n; j++) {
            printf("%d\n", i);
        }
    }
    return 0;
}
The innermost statement now executes n * n times in total. Running it gives:
root@debian-test:~/# ./program $(seq 1000) | awk 'END{print NR}'
1000000
This is the territory where things really start to slow down a lot, relative to any of the previous examples.
O(n^3) - Cubic Time
Adding yet another for loop will make things significantly slower.
#include <stdio.h>
int main(int argc, char *argv[]) {
    int n = argc;
    for (int i = 1; i < n; i++) {
        for (int j = 1; j < n; j++) {
            for (int k = 1; k < n; k++) {
                printf("%d\n", i);
            }
        }
    }
    return 0;
}
Running it with even only n=1000 already takes a fair bit of time, compared to the instant feel of the previous examples:
root@debian-test:~/# ./program $(seq 1000) | awk 'END{print NR}'
1000000000
The result is a huge number, even for this small input. A triply nested standard for loop will quickly start to be felt performance-wise.
O(n!) - Factorial Time
Extremely slow for anything more than a handful of inputs. Tends to appear in problems where permutations are needed, i.e. where every possible ordering of a list of inputs must be examined.
#include <stdio.h>
#include <stdlib.h>

void swap(int *x, int *y) {
    int temp = *x;
    *x = *y;
    *y = temp;
}

// recursively generate every ordering of arr[l..r]
void generate_permutations(int *arr, int l, int r) {
    if (l == r) {
        for (int i = 0; i <= r; i++) {
            printf("%d ", arr[i]);
        }
        printf("\n");
    } else {
        for (int i = l; i <= r; i++) {
            swap(arr + l, arr + i);               // fix one element in place
            generate_permutations(arr, l + 1, r); // permute the rest
            swap(arr + l, arr + i);               // backtrack
        }
    }
}

int main(int argc, char *argv[]) {
    int n = argc - 1; // argv[0] is the program name, so we skip it this time
    int *arr = malloc(n * sizeof(int));
    if (arr == NULL) {
        fprintf(stderr, "Memory allocation failed!\n");
        exit(1);
    }
    for (int i = 0; i < n; i++) {
        arr[i] = atoi(argv[i + 1]);
    }
    printf("Generating all permutations for an array of size %d:\n", n);
    generate_permutations(arr, 0, n - 1);
    free(arr);
    return 0;
}
root@debian-test:~/# ./prog 1 2 3 4 5 6 7 8 9
...
9 1 2 3 4 5 8 6 7
9 1 2 3 4 5 6 8 7
9 1 2 3 4 5 6 7 8
Even just a handful of inputs takes a good while to process: the nine arguments above already produce 9! = 362,880 lines.
Calculating Big O…
So how practical is it to sit down and calculate Big O for some random snippet of code?
First, we need to remember that basic operations are all done in O(1): arithmetic, logical, and comparison operators (+, -, &&, ||, <=).
Accessing an array element, assigning/copying a variable, and basic printing are also all O(1).
Detailing why is beyond the scope of these notes, but the short reasoning is that each of these operations compiles down to a small, fixed number of instructions.
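A quick illustration (the values here are arbitrary; the point is that none of these operations depend on any input size n):
#include <stdio.h>
int main(void) {
    int a = 5, b = 7;                 // assignment: O(1)
    int arr[4] = {1, 2, 3, 4};        // fixed-size initialization: O(1)
    int sum = a + b;                  // arithmetic: O(1)
    int ok = (a <= b) && (sum > 0);   // comparison and logical operators: O(1)
    int x = arr[2];                   // array access: O(1)
    printf("%d %d %d\n", sum, ok, x); // basic printing: O(1)
    return 0;
}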
So breaking down some example function with loops:
#include <stdio.h>
int main(int argc, char *argv[]) {
    int n = argc - 1;      // O(1)
    int i = 1;             // O(1)
    int j;                 // O(1)
    int sum = 0;           // O(1)
    while (i <= n) {       // O(n)
        j = 1;             // O(1)
        while (j <= n) {   // O(n)
            sum = sum + i; // O(1)
            j = j + 1;     // O(1)
        }
        i = i + 1;         // O(1)
    }
    printf("%d\n", sum);   // O(1)
    return 0;
}
Remembering from earlier that nothing in O(1) depends on the input size n, all of those steps together contribute only a constant, which Big O drops.
Now, we see that O(n) appears twice, with one loop nested inside the other, giving us O(1) * O(n) * O(n).
This equates to O(n^2).
In practice, in real code bases, calculating time complexity can be significantly harder, because a loop might contain a call like asshole_sort(), which could be anything from a simple loop to a thousand lines of code, including
-> huge functions with recursions, internal sorting, or heavy control flow
-> calls to disk I/O, network, databases, or the filesystem
-> calls to external libraries whose complexity is difficult to estimate
-> calls to code written in another language that the reader is not familiar with
Relevance for MariaDB?
Different queries and execution plans boil down to algorithms of wildly varying time complexities. Better time complexity classification -> faster query, happier devs.
Simple SELECT
-> without index:
       for each row in t1 (n)
           scan all rows in t2 (m)
   -> O(n * m)
   Slow.
-> with index:
       for each row in t1 (n)
           search in BTREE (log m)
   -> O(n * log(m))
   Significantly faster. Often minutes -> seconds.
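To see where the log(m) factor comes from, here is a minimal binary search over a sorted array, standing in for the B-tree lookup an index performs (an illustration of the complexity class, not MariaDB's actual B-tree code):
#include <stdio.h>

// returns the position of key in the sorted array, or -1 if absent;
// each step halves the remaining search range, so this is O(log m)
int binary_search(const int *sorted, int m, int key) {
    int lo = 0, hi = m - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (sorted[mid] == key)
            return mid;
        else if (sorted[mid] < key)
            lo = mid + 1; // discard the lower half
        else
            hi = mid - 1; // discard the upper half
    }
    return -1;
}

int main(void) {
    int index[] = {2, 3, 5, 7, 11, 13, 17, 19};
    printf("%d\n", binary_search(index, 8, 13)); // prints 5
    return 0;
}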
Range SELECT
If the optimizer can do an index range scan:
       search in BTREE for the start of the range (log n)
       simple loop (linear time) to reach the end of the range (m rows)
   -> O(log(n) + m)
ORDER BY
Sorting happens via well-defined algorithms that go beyond the scope of these simple notes, but the comparison sorts involved land at -> O(n * log(n))
No big deal for a couple of million rows, but it becomes expensive for really big datasets.
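For intuition, here is a minimal merge sort sketch, one classic O(n * log(n)) comparison sort; this only illustrates the complexity class, it is not MariaDB's actual filesort implementation:
#include <stdio.h>
#include <string.h>

// sort arr[l..r] by splitting in half (log n levels of recursion),
// then merging the halves (O(n) work per level) -> O(n * log(n)) overall
void merge_sort(int *arr, int l, int r) {
    if (l >= r)
        return;
    int mid = l + (r - l) / 2;
    merge_sort(arr, l, mid);
    merge_sort(arr, mid + 1, r);

    int tmp[r - l + 1];
    int i = l, j = mid + 1, k = 0;
    while (i <= mid && j <= r) // merge the two sorted halves
        tmp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
    while (i <= mid)
        tmp[k++] = arr[i++];
    while (j <= r)
        tmp[k++] = arr[j++];
    memcpy(arr + l, tmp, sizeof(tmp));
}

int main(void) {
    int rows[] = {42, 7, 19, 3, 88, 1};
    merge_sort(rows, 0, 5);
    for (int i = 0; i < 6; i++)
        printf("%d ", rows[i]); // 1 3 7 19 42 88
    printf("\n");
    return 0;
}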
JOIN
The speed and complexity of JOIN is all over the place, depending on indexing, join type, and join order.
(Block) Nested Loop joins stand out as being very slow, as the algorithm is basically given by:
Nested Loop Join:
    for each row in table t1:
        for each row in table t2:
            check join condition

Block Nested Loop Join:
    buffer a subset of t1
    for each row in t2:
        compare it with every buffered row of t1
    repeat with the next subset of t1
Both boil down to O(n * m), effectively quadratic, with BNL benefiting from reduced I/O: t2 is scanned once per buffered block of t1 rows instead of once per row. Very slow.
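As a toy model of the nested loop join above (plain arrays standing in for tables, equality on an id column as the join condition; purely illustrative):
#include <stdio.h>

int main(void) {
    int t1[] = {1, 2, 3, 4}; // "rows" of table t1 (n = 4)
    int t2[] = {2, 4, 6, 8}; // "rows" of table t2 (m = 4)
    int n = 4, m = 4;

    for (int i = 0; i < n; i++) {     // for each row in t1
        for (int j = 0; j < m; j++) { // scan all rows in t2
            if (t1[i] == t2[j]) {     // check join condition
                printf("match: %d\n", t1[i]);
            }
        }
    }                                 // n * m condition checks -> O(n * m)
    return 0;
}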
Conclusion
Big O gives us a way to summarize the time complexity of database operations for other technical users, such as developers, and helps translate an EXPLAIN into a mental model that doesn't require interpreting the EXPLAIN output directly. It also gives some insight into why some queries scale really well, while others do not.