In math, a function takes an input and returns an output. For example, $f(x)=x^2$ returns the square of its input. Another example, common in computer science, is $\text{sort(list)}$, which returns a list with the same elements as the input list, but in increasing order.

For how many distinct inputs is $\text{sort(list)}=[1,2,3,4]?$ That is, how many lists, when sorted, give $[1,2,3,4]?$
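One way to explore this question is by brute force. Any list that sorts to $[1,2,3,4]$ must be a rearrangement of those four values, so (assuming we restrict attention to such rearrangements) it suffices to enumerate the permutations:

```python
from itertools import permutations

target = [1, 2, 3, 4]
# Every list that sorts to [1, 2, 3, 4] is some rearrangement of those
# four values, so we only need to check the permutations.
preimages = [list(p) for p in permutations(target) if sorted(p) == target]
print(len(preimages))
```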

There are a lot of things we can say about functions in math, but we don't usually concern ourselves with how "difficult" a function is to evaluate; it just has an input and an output.

However, how "difficult" a function is to evaluate is a central question we ask when working with algorithms. It determines which computations are feasible in a reasonable amount of time with a computer, and which are not.

Computers know how to directly evaluate some "native" functions (like comparisons, addition, multiplication, etc.), and can compute more complex functions only by combining these native operations.
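As a small illustration, here is a sketch of how a more complex function, finding the maximum of a list, can be built entirely from one native operation (comparing two numbers); the function name `maximum` is just an illustrative choice:

```python
def maximum(values):
    # Built entirely from one native operation: comparing two numbers.
    biggest = values[0]
    for v in values[1:]:
        if v > biggest:  # one native comparison per remaining element
            biggest = v
    return biggest

print(maximum([3, 1, 4, 2]))  # makes 3 comparisons on a four-element list
```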

Thus, a central question is how many native operations it takes to evaluate a more complex function on different inputs. Let's take a look at a concrete example of this in the next question.

Let's take an ElementList object that has two native operations:

- You can compare any two values in the list to see which one is bigger.
- You can swap any two values in the list.

Below is a routine that uses some number of these native operations to sort the list. **Across all four-element input lists, what is the maximum number of element comparisons made?**
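The original interactive routine isn't reproduced here. As a stand-in, here is one plausible sketch (assuming a bubble-sort strategy; the names `ElementList`, `bigger`, and `sort_list` are illustrative, not from the original) that counts how many native operations it performs:

```python
class ElementList:
    """A list exposing only the two native operations: compare and swap."""

    def __init__(self, values):
        self._values = list(values)
        self.comparisons = 0
        self.swaps = 0

    def bigger(self, i, j):
        # Native operation 1: compare the values at positions i and j.
        self.comparisons += 1
        return self._values[i] > self._values[j]

    def swap(self, i, j):
        # Native operation 2: exchange the values at positions i and j.
        self.swaps += 1
        self._values[i], self._values[j] = self._values[j], self._values[i]


def sort_list(elems):
    # Bubble sort: repeatedly compare neighbors, swapping any out-of-order pair.
    n = len(elems._values)
    for end in range(n - 1, 0, -1):
        for i in range(end):
            if elems.bigger(i, i + 1):
                elems.swap(i, i + 1)


lst = ElementList([4, 3, 2, 1])  # try other four-element lists here
sort_list(lst)
print(lst._values, lst.comparisons, lst.swaps)
```

With this particular routine, a four-element list always costs $3+2+1=6$ comparisons, while the number of swaps depends on the input.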

Feel free to modify the input list and re-run the code!

Across all four-element input lists, **what is the maximum number of swaps made**?
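We can also brute-force the worst case, under the assumption that the routine only ever swaps adjacent elements: a neighbor-swapping sort performs exactly one swap per inversion (a pair of values in the wrong relative order), so the worst-case swap count is the maximum number of inversions:

```python
from itertools import permutations

def inversions(lst):
    # Count pairs (i, j) with i < j whose values are in the wrong order.
    # A neighbor-swapping sort performs exactly one swap per inversion.
    return sum(1 for i in range(len(lst)) for j in range(i + 1, len(lst))
               if lst[i] > lst[j])

worst = max(inversions(p) for p in permutations([1, 2, 3, 4]))
print(worst)
```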

We have seen the worst-case number of swaps and comparisons for one particular way of sorting the list. But what if we want to make a statement about *any* method of sorting that uses these two native operations (comparisons and swaps)?

To start, we will explicitly define a comparison as an operation that takes in two numbers and has one of two results depending on their values. Using this definition, one way to get a lower bound on the number of comparisons needed is the pigeonhole principle. For instance, we use it below to show that a sorting algorithm needs more than two comparisons to sort a list of four elements.

Suppose that a sorting algorithm uses at most two comparisons on any four-element list. Each comparison has two possible outcomes, so across both comparisons there are only $2^2=4$ possible outcome sequences. Since the algorithm's behavior can depend only on these outcomes, two comparisons let it reorder the list in at most four different ways.

However, there are $4!=24$ possible initial orderings of a four-element list, so to sort every possible input, the algorithm must be able to produce 24 different reorderings. In other words, in order to sort every list with two comparisons, our four potential outcomes would have to cover 24 different actions.
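The counting at the heart of this argument is small enough to check directly:

```python
from math import factorial

comparisons = 2
outcomes = 2 ** comparisons   # distinct outcome sequences from two comparisons
orderings = factorial(4)      # distinct initial orderings of four elements

# Sorting every input requires at least one outcome per ordering.
print(outcomes, orderings, outcomes >= orderings)
```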

Unfortunately, the pigeonhole principle (and common sense) tells us that this is impossible, meaning that we can't sort a four-element list with just two comparisons.

Using this argument, what is the largest $N$ for which we can definitively say that any algorithm that makes at most $N$ comparisons cannot sort a four-element list?

In this quiz, we've seen how we can measure algorithms by the number of times they call a native operation. By studying algorithms, we can learn efficient ways to solve common problems and gain a rich toolkit for reasoning about how well a given algorithm performs.