## Sunday, July 17, 2011

### Quicksort partitioning

During my years at school and university I was only ever exposed to one quicksort partitioning algorithm. Let's take a look at some of the ways a list can be partitioned for sorting.

Quicksort itself works by taking a list, picking an element from it (referred to as the "pivot") and splitting the list into two sub-lists: the first containing all the elements smaller than the pivot and the second containing all the elements greater than it. Once you have these two sub-lists you can sort each one independently of the other, since no element in one list will need to move into the other for the sort to complete.

This splitting into two lists is called "partitioning". Partitioning once will not sort the list, but it will allow you to either use a different sorting algorithm on each sub-list (partition) or to recursively partition the two partitions until you end up with partitions of 1 or 0 elements, which are necessarily sorted.

For example, partitioning the list [7,3,6,4,1,7,3] using 4 as a pivot gives a first partition of [3,1,3] and a second partition of [7,6,7]. The pivot itself, along with any duplicates of it, may or may not go into one of the partitions, depending on how the partitioning is done. If it does not, the sort will place the pivot between the two partitions after they have been sorted.

#### Partitioning by filtering

The most intuitive way to partition is to create 2 new lists and, going through the unsorted list, copy each element into one of them. This is memory-expensive, however, as you end up needing twice as much space as the unsorted list occupies. The partitioning algorithms that follow are "in-place" and hence do not need any new lists.
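As a rough sketch in Python (the function names here are my own), filtering can be written as a few passes over the list; a separate pass collects elements equal to the pivot so that duplicates are not lost:

```python
def partition_by_filtering(lst, pivot):
    """Partition lst into new lists of elements smaller than, equal to,
    and larger than the pivot (needs extra memory for the new lists)."""
    smaller = [x for x in lst if x < pivot]
    equal = [x for x in lst if x == pivot]
    larger = [x for x in lst if x > pivot]
    return smaller, equal, larger

def quicksort_by_filtering(lst):
    """Sort by recursively partitioning until the partitions have
    1 or 0 elements, which are necessarily sorted."""
    if len(lst) <= 1:
        return lst
    smaller, equal, larger = partition_by_filtering(lst, lst[0])
    return quicksort_by_filtering(smaller) + equal + quicksort_by_filtering(larger)

# The example from above: pivot 4 splits [7, 3, 6, 4, 1, 7, 3]
# into [3, 1, 3] and [7, 6, 7], with the pivot between them.
print(partition_by_filtering([7, 3, 6, 4, 1, 7, 3], 4))
```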

#### Partitioning by moving the pivot

This is the partitioning algorithm I was familiar with from school. It's quite intuitive but slow when compared to the next algorithm. It works by putting the pivot into its sorted place, that is, the place where it will end up after the whole list has been sorted: all the elements smaller than the pivot will be on its left and all the elements larger than the pivot will be on its right. This in itself creates the two partitions, the left side of the pivot and the right side.

The algorithm uses a pivot pointer, which keeps track of where the pivot is, and an index pointer, which is used to compare other elements to the pivot. The pivot pointer starts at the right end of the list (you can choose a pivot and swap it with the last element if you don't want to stick with whichever element happens to be there) and the index pointer starts at the left end. The index pointer moves towards the pivot pointer until it encounters an element which is not on the correct side of the pivot, upon which the two elements are swapped and the index pointer and pivot pointer swap locations. Once the index pointer and pivot pointer meet, the pivot is in its sorted location and the elements on its left and right form the two partitions.

Pseudo code:

```
function partition(arr, left, right)
    pivotPtr = right
    indexPtr = left
    while pivotPtr != indexPtr
        if indexPtr < pivotPtr  // index pointer is to the left of the pivot
            while arr[indexPtr] <= arr[pivotPtr] and indexPtr < pivotPtr
                indexPtr++  // move index pointer towards the pivot
            if indexPtr < pivotPtr
                swap(arr[indexPtr], arr[pivotPtr])
                swap(indexPtr, pivotPtr)  // the pivot moved, so the pointers exchange roles
        else  // index pointer is to the right of the pivot
            while arr[indexPtr] >= arr[pivotPtr] and indexPtr > pivotPtr
                indexPtr--  // move index pointer towards the pivot
            if indexPtr > pivotPtr
                swap(arr[pivotPtr], arr[indexPtr])
                swap(pivotPtr, indexPtr)  // the pivot moved, so the pointers exchange roles
    return pivotPtr
```
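The pseudo code translates fairly directly into Python; here's a sketch (the function name is my own) that partitions the list in place and returns the pivot's sorted position:

```python
def partition_moving_pivot(arr, left, right):
    """In-place partition of arr[left..right] that moves the pivot
    (initially the element at index 'right') into its sorted position
    and returns that position."""
    pivot_ptr = right
    index_ptr = left
    while pivot_ptr != index_ptr:
        if index_ptr < pivot_ptr:  # index pointer is to the left of the pivot
            # skip over elements already on the correct side
            while arr[index_ptr] <= arr[pivot_ptr] and index_ptr < pivot_ptr:
                index_ptr += 1
            if index_ptr < pivot_ptr:
                arr[index_ptr], arr[pivot_ptr] = arr[pivot_ptr], arr[index_ptr]
                # the pivot moved, so the two pointers exchange roles
                index_ptr, pivot_ptr = pivot_ptr, index_ptr
        else:  # index pointer is to the right of the pivot
            while arr[index_ptr] >= arr[pivot_ptr] and index_ptr > pivot_ptr:
                index_ptr -= 1
            if index_ptr > pivot_ptr:
                arr[index_ptr], arr[pivot_ptr] = arr[pivot_ptr], arr[index_ptr]
                index_ptr, pivot_ptr = pivot_ptr, index_ptr
    return pivot_ptr
```

After the call, everything left of the returned index is less than or equal to the pivot and everything right of it is greater than or equal to it.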

#### Partitioning by dividing

In the previous partitioning algorithm, we had to repeatedly swap the pivot in order to eventually put it in its place. This is unnecessary, however, as partitioning does not require the pivot to be in its sorted place, only that we end up with two partitions, even if the pivot itself is inside one of them (it doesn't matter which one, as the pivot can eventually be moved to its sorted place from either partition).

This time we will not care where the pivot is, as long as we know its value. We will need two pointers, a high and a low pointer, which move towards each other. The low pointer expects to encounter only elements which are smaller than the pivot and the high pointer expects to encounter only elements which are larger than the pivot. When both pointers have encountered a wrong element, the elements are swapped and the pointers continue moving towards each other. When they eventually meet, all the elements to the left of the meeting point will be smaller than or equal to the pivot and all the elements to the right of it will be greater than or equal to the pivot.

Since both pointers move towards each other before swapping, this algorithm does fewer swaps than the previous one and is therefore faster. In fact, a simple experiment shows that it does about half the number of swaps.

Pseudo code:

```
function partition(arr, left, right, pivot)
    lo = left
    hi = right
    while lo < hi
        while arr[lo] <= pivot and lo < hi
            lo++  // move low pointer towards the high pointer
        while arr[hi] >= pivot and hi > lo
            hi--  // move high pointer towards the low pointer
        if lo < hi
            swap(arr[lo], arr[hi])
    /*
        Since the high pointer moves last, the meeting point should be on an
        element that is greater than the pivot, that is, the meeting point
        marks the start of the second partition. However, if the pivot happens
        to be the maximum element, the meeting point will simply be the last
        element and hence will not have any significant meaning. Therefore we
        need to make sure that the returned meeting point is the starting
        point of the second partition, including when the second partition
        is empty.
    */
    if arr[lo] < pivot
        return lo+1
    else
        return lo
```
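Again as a Python sketch of the pseudo code above (the function name is my own), partitioning in place around a pivot value and returning the index at which the second partition starts:

```python
def partition_dividing(arr, left, right, pivot):
    """In-place partition of arr[left..right] around a pivot value.
    Returns the index where the second partition (>= pivot) starts."""
    lo = left
    hi = right
    while lo < hi:
        # low pointer skips elements already on the correct side
        while arr[lo] <= pivot and lo < hi:
            lo += 1
        # high pointer skips elements already on the correct side
        while arr[hi] >= pivot and hi > lo:
            hi -= 1
        if lo < hi:
            arr[lo], arr[hi] = arr[hi], arr[lo]
    # if the pivot was the maximum element, the meeting point itself still
    # belongs to the first partition, so the second partition starts after it
    if arr[lo] < pivot:
        return lo + 1
    return lo

# The example from above: pivot 4 partitions [7, 3, 6, 4, 1, 7, 3]
# into elements <= 4 followed by elements >= 4.
```

Counting the calls to the swap in both implementations over a few random lists is an easy way to check the claim about the number of swaps for yourself.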