If there has been one recurring theme so far, it’s that more organized data can be valuable for certain types of computations, like searching for an element in a list. Specifically, we’ve noted that having a sorted list is generally more useful than having an unsorted list.
So, how can we sort a list? As with searching, we’ll start with a naive, intuitive approach (called insertion sort). In a later quiz, we’ll explore more efficient methods of sorting.
Suppose you had the following poker hand dealt to you, and decided to use insertion sort to sort it from smallest to largest:
Note that \(A>K>Q>J>2.\) How many individual comparisons between cards would need to be made for the sort?
Reminder: First, the \(K\) and \(2\) are compared. Then, the \(A\) and \(K\) are compared, then the \(Q\) and \(A,\) and so on.
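The comparison process described above can be sketched in Python. This is a minimal illustration, not the only way to write insertion sort; the function name, the comparison-counting convention (one count per element-to-element comparison), and the sample hand below are our own assumptions:

```python
def insertion_sort_count(cards):
    """Sort a list with insertion sort, counting element comparisons.

    Returns the sorted list and the number of comparisons made.
    """
    cards = list(cards)  # work on a copy
    comparisons = 0
    for i in range(1, len(cards)):
        key = cards[i]
        j = i - 1
        # Walk left, shifting larger elements right until key's slot is found.
        while j >= 0:
            comparisons += 1
            if cards[j] > key:
                cards[j + 1] = cards[j]
                j -= 1
            else:
                break
        cards[j + 1] = key
    return cards, comparisons

# Illustrative hand (not necessarily the one dealt in the quiz),
# with A > K > Q > J > 2 encoded as 14 > 13 > 12 > 11 > 2:
hand = [13, 2, 14, 12, 11]
print(insertion_sort_count(hand))
```

Each pass takes the next unsorted card and compares it leftward against the already-sorted cards until its place is found, which is exactly the sequence of comparisons sketched in the reminder above.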
Ari wants to prove that insertion sort is slow. What order should he put the cards in so that sorting them in ascending order is maximally inefficient?
In the worst case for a list with 10 distinct elements, how many comparisons are made?
In the best case for a list with 10 distinct elements, how many comparisons are made?
The worst case increases like \(n^2\) for large \(n,\) while the best case increases linearly (like \(n\)). Unfortunately for insertion sort, as we'll explore in the next quiz, the worst case is much more likely, and the average case also increases like \(n^2.\)
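These growth rates can be checked against the closed-form comparison counts. In the worst case (a reverse-sorted list), element \(i\) is compared with all \(i\) elements before it, giving \(1 + 2 + \cdots + (n-1) = n(n-1)/2\) comparisons; in the best case (an already-sorted list), each of the \(n-1\) insertions needs just one comparison. A quick sketch (the helper names are ours):

```python
def worst_case_comparisons(n):
    # Reverse-sorted input: element i is compared with all i elements
    # before it, so the total is 1 + 2 + ... + (n - 1) = n(n - 1)/2.
    return n * (n - 1) // 2

def best_case_comparisons(n):
    # Already-sorted input: each of the n - 1 insertions needs one comparison.
    return n - 1

for n in [10, 100, 1000]:
    print(n, worst_case_comparisons(n), best_case_comparisons(n))
```

For \(n = 10\) this gives 45 versus 9 comparisons, and the gap widens quadratically: multiplying \(n\) by 10 multiplies the worst-case count by roughly 100 but the best-case count by only 10.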
Thankfully, we can do better. In later quizzes, we'll dive into more complex, more efficient sorting algorithms; their worst-case performance will increase like \(n \log n\) for large \(n.\)
Before we get there, we need to take a detour to start formalizing this language of computational complexity. How do we actually express how many computations an algorithm will use, and how do we meaningfully compare algorithms? We’ll explore this in the next quiz, on big O notation.