# Big O

Big O is the language and metric that we use to describe the efficiency of algorithms. Let's look at the following scenario: if we want to send a file to someone as fast as possible, how should we send it?

There are two options:

• Electronic transfer
• Physical delivery

The concept of asymptotic runtime, or big O time, describes time complexity. Using it, we could describe the runtimes of the two data transfer options as:

• Electronic transfer: O(s), where s is the size of the file. This means that the time to transfer the file increases linearly with the size of the file.
• Physical delivery: O(1) with respect to the size of the file - as the size increases, it will not take any longer to deliver.

No matter how large the constant is and how slow the linear growth is, the linear runtime will surpass the constant one at some point.
There are many other runtimes, such as O(N), O(N^2), and O(2^N). A runtime can also depend on multiple variables: the time to paint a fence that is w meters wide and h meters high could be described as O(wh).

We can describe the runtime of an algorithm in three different ways. Let's look at this from the perspective of quicksort. It picks a random element as a 'pivot' and swaps values in the array so that elements smaller than the pivot appear before elements greater than it.
• Best case: all elements are equal, and a single pass over the array is enough. O(N)
• Worst case: if we are unlucky, the pivot is repeatedly the biggest element in the array. O(N^2)
• Expected case: sometimes the pivot will be very high or very low, but it will not happen over and over again. O(N log N)
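The partitioning behaviour described above can be sketched as follows. This is a minimal sketch: for simplicity it uses the last element as the pivot rather than a random one, and the function name is illustrative.

```swift
// Quicksort with a Lomuto-style partition: values smaller than the
// pivot are swapped to the front, then both halves are sorted recursively.
func quickSort(_ array: inout [Int], _ low: Int, _ high: Int) {
    guard low < high else { return }
    let pivot = array[high]          // last element as pivot (a random pick is also common)
    var boundary = low               // next position for a value < pivot
    for i in low..<high {
        if array[i] < pivot {
            array.swapAt(i, boundary)
            boundary += 1
        }
    }
    array.swapAt(boundary, high)     // place the pivot between the two halves
    quickSort(&array, low, boundary - 1)   // expected O(N log N) overall
    quickSort(&array, boundary + 1, high)  // degrades to O(N^2) with repeatedly extreme pivots
}

var values = [5, 2, 9, 1, 5]
quickSort(&values, 0, values.count - 1)
// values is now [1, 2, 5, 5, 9]
```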

### Space complexity

Time is not the only thing that matters in an algorithm; we might also care about the amount of memory, or space, required by the algorithm. It is a parallel concept to time complexity. If we need to create an array of size n, O(n) space will be required. Stack space in recursive calls also counts.

```swift
func sum(n: Int) -> Int {
    if n <= 0 {
        return 0
    }
    return n + sum(n: n - 1)
}
```

Each call adds a level to the stack and takes up actual memory:

```
sum(3)
  → sum(2)
    → sum(1)
      → sum(0)
```

On the other hand, there are cases where n calls do not take O(n) space.

```swift
func pairSumSequence(n: Int) -> Int {
    var sum = 0
    for i in 0..<n {    // n calls to pairSum, one at a time
        sum += pairSum(a: i, b: i + 1)
    }
    return sum
}

func pairSum(a: Int, b: Int) -> Int {
    return a + b
}
```

Because the n calls to the pairSum function do not exist simultaneously on the call stack, we only need O(1) space.

### Drop the Constants and Non-Dominant Terms

It is very possible that O(N) code is faster than O(1) code for specific inputs; big O just describes the rate of increase. For this reason we drop constants, which means that O(2N) is actually O(N).
Just as we drop constants, we can also drop non-dominant terms:
• O(N^2 + N) → O(N^2)
• O(N + log N) → O(N)
• O(2*2^N + 1000N^100) → O(2^N)
Sometimes the runtime is a sum of terms in different variables; for example, O(B^2 + A) cannot be simplified without some knowledge of A or B.
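A minimal illustration of why constants are dropped (the function names are illustrative): both versions below scan an array to find its minimum and maximum, one with two passes ("2N" work) and one with a single pass ("N" work), yet both are O(N).

```swift
// Two passes over the array: roughly 2N comparisons, still O(N).
func minAndMaxTwoPasses(_ values: [Int]) -> (Int, Int) {
    var minV = Int.max, maxV = Int.min
    for v in values { if v < minV { minV = v } }   // first pass: minimum
    for v in values { if v > maxV { maxV = v } }   // second pass: maximum
    return (minV, maxV)
}

// One pass over the array: roughly N iterations, also O(N).
func minAndMaxOnePass(_ values: [Int]) -> (Int, Int) {
    var minV = Int.max, maxV = Int.min
    for v in values {
        if v < minV { minV = v }
        if v > maxV { maxV = v }
    }
    return (minV, maxV)
}
```

Big O does not distinguish the two: the constant factor says nothing about the rate of increase as the input grows.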

The graph below depicts the rate of increase for some common big O times.

As you can see, O(x^2) is much worse than O(x). There are many runtimes that are worse than O(x!), such as O(x^x) or O(2^x * x!).

• Adding runtimes: if an algorithm is in the form "do this, then when you're all done, do that"
• Multiplying runtimes: if an algorithm is in the form "do this for each time you do that"

Add the runtimes: O(A + B)

```swift
for a in arrayA {
    print(a)
}
for b in arrayB {
    print(b)
}
```

Multiply the runtimes: O(A * B)

```swift
for a in arrayA {
    for b in arrayB {
        print("\(a) + \(b)")
    }
}
```

### Amortized Time

With an ArrayList, or dynamically resizing array, you won't run out of space, since its capacity grows as you add elements. When the array hits capacity, the ArrayList creates a new array with double the capacity and copies all the elements over to it. So if the array contains N elements, adding a new element will take O(N) time. We know that this does not happen very often, and the majority of the time adding will be O(1); amortized time takes both cases into account.
As we add elements, we double the capacity when the size of the array is a power of 2, so the capacity becomes 1, 2, 4, 8, 16, ..., X. If we look at the sum of these capacities from right to left, it starts with X and halves until it gets to 1:
X + X/2 + X/4 + X/8 + ... + 1, which is roughly 2X, and after dropping the constant it becomes O(X). Therefore, X adds take O(X) time, and the amortized time for each add is O(1).
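The doubling behaviour can be sketched with a toy structure. The type name and its `copies` counter are illustrative additions for measuring the resize cost; Swift's own Array already grows this way internally.

```swift
// A toy dynamically resizing array that doubles its capacity when full.
struct DynamicArray {
    private var storage: [Int?]
    private(set) var count = 0
    private(set) var copies = 0   // total elements copied during resizes

    init() { storage = [Int?](repeating: nil, count: 1) }

    mutating func append(_ value: Int) {
        if count == storage.count {   // full: resize, an O(count) operation
            var bigger = [Int?](repeating: nil, count: storage.count * 2)
            for i in 0..<count { bigger[i] = storage[i] }
            copies += count
            storage = bigger
        }
        storage[count] = value        // the common O(1) case
        count += 1
    }
}

var list = DynamicArray()
for i in 0..<16 { list.append(i) }
// 16 appends trigger copies of 1 + 2 + 4 + 8 = 15 elements in total,
// i.e. fewer than 2X copies for X appends - amortized O(1) per append.
```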

#### Log N Runtimes

Let's look at binary search as an example, in which we are looking for x in an N-element sorted array. First we compare x to the midpoint; if they are equal, we return. If x < middle, we search on the left; if x > middle, we search on the right.

```
search 9 within [1, 5, 8, 9, 11, 13, 15, 19, 21]
  compare 9 to 11 → smaller
search 9 within [1, 5, 8, 9]
  compare 9 to 8 → bigger
search 9 within [9]
  compare 9 to 9 → return
```
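The search above can be implemented as a short loop. This is a sketch with an illustrative name, returning the index of the target or nil:

```swift
// Binary search over a sorted array: each iteration halves the
// remaining range, giving O(log N) time.
func binarySearch(_ array: [Int], _ target: Int) -> Int? {
    var low = 0
    var high = array.count - 1
    while low <= high {
        let mid = (low + high) / 2
        if array[mid] == target {
            return mid                 // found: return the index
        } else if target < array[mid] {
            high = mid - 1             // search the left half
        } else {
            low = mid + 1              // search the right half
        }
    }
    return nil                         // not present
}

binarySearch([1, 5, 8, 9, 11, 13, 15, 19, 21], 9)   // index 3
```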

We start by searching an N-element array; after a single step we are down to N/2 elements, and one more step brings us down to N/4 elements. So the total time is a matter of how many steps we can take until N becomes 1.

```
N = 16
N = 8   /* divide by 2 */
N = 4   /* divide by 2 */
N = 2   /* divide by 2 */
N = 1   /* divide by 2 */
```

Equivalently, how many times can we multiply 1 by 2 to get N? 2^k = N → log2(N) = k.

Thus, when the number of elements in the problem space gets halved each time, it will likely be an O(log N) runtime.

#### Recursive Runtimes

What is the runtime of this code?

```swift
func f(n: Int) -> Int {
    if n <= 1 {
        return 1
    }
    return f(n: n - 1) + f(n: n - 1)
}
```

Rather than guessing, let us walk through the code. Suppose we call f(4): it calls f(3) twice, each of those calls of f(3) calls f(2), and so on until we get down to f(1).

The tree will have depth N, and each node has two children, so each level has twice as many calls as the one above. More generically, the total number of calls can be expressed as 2^0 + 2^1 + 2^2 + ... + 2^(N-1), which is 2^N - 1. Thus, when you have a recursive function that makes multiple calls, the runtime will often look like O(branches^depth), in our case O(2^N).
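A quick way to check this claim is to count the calls directly; the `calls` counter below is added purely for illustration.

```swift
// Count how many times f is invoked for n = 4.
var calls = 0

func f(n: Int) -> Int {
    calls += 1
    if n <= 1 {
        return 1
    }
    return f(n: n - 1) + f(n: n - 1)
}

_ = f(n: 4)
print(calls)   // prints 15, i.e. 2^4 - 1 calls
```

Each extra level of n doubles the count, matching the O(2^N) growth.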