10.1 Sorting

Putting an unsorted list of data elements into order, known as sorting, is a very common and useful operation. Entire books have been written about various sorting algorithms and algorithms for searching a sorted list to find a particular element. The goal is to come up with better, more efficient sorts. Because sorting a large number of elements can be extremely time-consuming, a good sorting algorithm is very desirable. This is one area in which programmers are sometimes encouraged to sacrifice clarity in favor of speed of execution.

How do we describe efficiency? We pick an operation central to most sorting algorithms: the operation that compares two values to see which is smaller. In our study of sorting algorithms, we relate the number of comparisons to the number of elements in the list (N) as a rough measure of the efficiency of each algorithm. The number of swaps made is another measure of sorting efficiency. In the exercises, we ask you to analyze the sorting algorithms developed in this chapter in terms of data movements.

Another efficiency consideration is the amount of memory space required. In general, memory space is not a very important factor in choosing a sorting algorithm. We look at only one sort in which space would be a serious consideration. The usual time versus space tradeoff applies to sorts: more space often means less time, and vice versa.

Because processing time is the factor that applies most often to sorting algorithms, we consider it in detail here. Of course, as in any application, the programmer must determine goals and requirements before selecting an algorithm and starting to write code.

We review the straight selection sort, the bubble sort, and the insertion sort, simple sorts that students often write in their first programming course. Then we review a more complex sorting algorithm that we examined in Chapter 7 (the quick sort algorithm) and introduce two additional complex sorts: merge sort and heap sort. We assume that the actual data type that replaces the template parameter ItemType in the rest of the chapter is either a simple built-in type or a class that overloads the relational operators. For simplicity, we assume that the keys are unique.

As we pointed out in Chapter 7, at the logical level, sorting algorithms take an unsorted list object and convert it into a sorted list object. At the implementation level, sorting algorithms take an array and reorganize the values in the array so that they are in order by key. The number of values to be sorted and the array in which they are stored are parameters to our sorting algorithms. Note that we are not sorting an object of type UnsortedType but rather the values stored in an array. We call the array values, and its elements are of type ItemType.

Straight Selection Sort

If you were handed a list of names and asked to put them in alphabetical order, you might use this general approach:

  1. Find the name that comes first in the alphabet, and write it on a second sheet of paper.

  2. Cross the name out on the original list.

  3. Continue this cycle until all names on the original list have been crossed out and written onto the second list, at which point the second list is sorted.

This algorithm is simple to translate into a computer program, but it has one drawback: It requires space in memory to store two complete lists. Although we have not talked a great deal about memory space considerations, this duplication is clearly wasteful. A slight adjustment to this manual approach does away with the need to duplicate space, however. As you cross a name off the original list, a free space opens up. Instead of writing the minimum value on a second list, you can exchange it with the value currently in the position where the crossed-off item should go. Our "by-hand list" is represented in an array.

Let's look at an example: sorting the five-element array shown in Figure 10.1(a). Because of this algorithm's simplicity, it is usually the first sorting method that students learn. Therefore, we go straight to the algorithm, which we develop in the following paragraphs.

Figure 10.1: Example of straight selection sort (sorted elements are shaded)

Although you could immediately begin writing the code, we will use this algorithm to practice designing correct loops.

We use a variable, current, to mark the beginning of the unsorted part of the array. We start out by setting current to the index of the first position (index 0). The unsorted part of the array then goes from the index current to numValues - 1.

The main sort processing occurs in a loop. In each iteration of the loop body, the smallest value in the unsorted part of the array is swapped with the value in the current location. After the swap, the element at index current is in the sorted part of the array, so we shrink the size of the unsorted part by incrementing current. The loop body is now complete.

At the top of the loop body, the unsorted part of the array goes from the (now incremented) current index to numValues - 1. We know that every value in the unsorted part is greater than (or equal to, if duplicates are permitted) any value in the sorted part of the array.

How do we know when "there are more elements in the unsorted part"? As long as current <= numValues - 1, the unsorted part of the array (values[current]..values[numValues - 1]) contains values. In each iteration of the loop body, current is incremented, shrinking the unsorted part of the array. When current = numValues - 1, the "unsorted" part contains only one element, and we know that this value must be greater than (or equal to) any value in the sorted part. Thus the value in values[numValues - 1] is in its correct place, and we are done. The condition for the while loop is current < numValues - 1. Figure 10.2 gives a snapshot of the selection sort algorithm.

Figure 10.2: A snapshot of the selection sort algorithm

Now all we have to do is to locate the smallest value in the unsorted part of the array. Let's write a function to perform this task. The function MinIndex receives the array elements and the first and last indexes of the unsorted part, and returns the index of the smallest value in this part of the array.

Now that we know where the smallest unsorted element is located, we swap it with the element at index current. Because swapping data values between two array locations is common in many sorting algorithms, let's write a little function, Swap, to accomplish this task.

template<class ItemType>
inline void Swap(ItemType& item1, ItemType& item2)
// Post: Contents of item1 and item2 have been swapped.
{
  ItemType tempItem;

  tempItem = item1;
  item1 = item2;
  item2 = tempItem;
}

The word inline before the function heading is called a specifier. inline suggests that the compiler insert the code for the function body every time a call is issued, rather than actually making a function call. We "suggest" rather than "tell" because compilers are not obliged to honor the inline specifier.

Here are the rest of the function templates for this sorting algorithm:

template<class ItemType>
int MinIndex(ItemType values[], int startIndex, int endIndex)
// Post: Returns the index of the smallest value in
//       values[startIndex]..values[endIndex].
{
  int indexOfMin = startIndex;
  for (int index = startIndex + 1; index <= endIndex; index++)
    if (values[index] < values[indexOfMin])
      indexOfMin = index;
  return indexOfMin;
}

template<class ItemType>
void SelectionSort(ItemType values[], int numValues)
// Post: The elements in the array values are sorted by key.
{
  int endIndex = numValues-1;
  for (int current = 0; current < endIndex; current++)
    Swap(values[current],
         values[MinIndex(values, current, endIndex)]);
}

Analyzing the Selection Sort

Now let's try measuring the amount of "work" required by this algorithm. We describe the number of comparisons as a function of the number of items in the array. To be concise, in this discussion we refer to numValues as N.

The comparison operation occurs in the function MinIndex. We know from the loop condition in the SelectionSort function that MinIndex is called N - 1 times. Within MinIndex, the number of comparisons varies, depending on the values of startIndex and endIndex:

for (int index = startIndex + 1; index <= endIndex; index++)
  if (values[index] < values[indexOfMin])
    indexOfMin = index;

In the first call to MinIndex, startIndex is 0 and endIndex is numValues - 1, so N - 1 comparisons occur; in the next call, N - 2 comparisons occur; and so on. In the last call, only one comparison takes place. The total number of comparisons is

(N - 1) + (N - 2) + (N - 3) + ··· + 1 = N(N - 1)/2

To accomplish our goal of sorting an array of N elements, the straight selection sort requires N(N - 1)/2 comparisons. Note that the particular arrangement of values in the array does not affect the amount of work done. Even if the array is in sorted order before the call to SelectionSort, the function still makes N(N - 1)/2 comparisons.

Table 10.1 shows the number of comparisons required for arrays of various sizes. Note that doubling the array size roughly quadruples the number of comparisons.

Table 10.1: Number of Comparisons Required to Sort Arrays of Different Sizes Using the Selection Sort

  Number of Items    Number of Comparisons
  10                 45
  20                 190
  100                4,950
  1,000              499,500
  10,000             49,995,000

How do we describe this algorithm in terms of Big-O notation? If we express N(N - 1)/2 as N²/2 - N/2, it is easy to see. In Big-O notation, we consider only the term N²/2, because it increases fastest relative to N. (Remember the elephants and goldfish?) Further, we ignore the constant factor 1/2, making this algorithm O(N²). Thus, for large values of N, the computation time is approximately proportional to N². Looking again at Table 10.1, we see that multiplying the number of elements by 10 increases the number of comparisons by a factor of more than 100; that is, the number of comparisons is multiplied by approximately the square of the increase in the number of elements. Looking at this table makes us appreciate why sorting algorithms are the subject of so much attention: Using SelectionSort to sort an array of 1,000 elements requires almost one-half million comparisons!

The identifying feature of a selection sort is that, on each pass through the loop, one element is put into its proper place. In the straight selection sort, each iteration finds the smallest unsorted element and puts it into its correct place. If we had made the function find the largest value, instead of the smallest, the algorithm would have sorted in descending order. We could also have made the loop go down from numValues - 1 to 1, putting the elements into the bottom of the array first. All these algorithms are variations on the straight selection sort. The variations do not change the basic way that the minimum (or maximum) element is found.
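
For instance, the descending-order variant needs only a MaxIndex function in place of MinIndex. A minimal sketch, written in the style of the MinIndex template above:

template<class ItemType>
int MaxIndex(ItemType values[], int startIndex, int endIndex)
// Post: Returns the index of the largest value in
//       values[startIndex]..values[endIndex].
{
  int indexOfMax = startIndex;
  for (int index = startIndex + 1; index <= endIndex; index++)
    if (values[indexOfMax] < values[index])
      indexOfMax = index;
  return indexOfMax;
}

Calling this function from SelectionSort in place of MinIndex produces a descending sort with no other changes.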

Bubble Sort

The bubble sort is a selection sort that uses a different scheme for finding the minimum (or maximum) value. Each iteration puts the smallest unsorted element into its correct place, but it also changes the locations of the other elements in the array. The first iteration puts the smallest element in the array into the first array position. Starting with the last array element, we compare successive pairs of elements, swapping them whenever the bottom element of the pair is smaller than the one above it. In this way, the smallest element "bubbles up" to the top of the array. The next iteration puts the smallest element in the unsorted part of the array into the second array position, using the same technique. As you look at the example in Figure 10.3, note that in addition to putting one element into its proper place, each iteration produces some intermediate changes in the array.

Figure 10.3: Example of bubble sort (sorted elements are shaded)

The basic algorithm for the bubble sort follows the same selection framework: on each pass, put the smallest element in the unsorted part into the current position.

The structure of the loop is much like that in the SelectionSort function. The unsorted part of the array is the area from values [current] to values [numValues - 1]. The value of current begins at 0, and we loop until current reaches numValues - 1, with current incremented in each iteration. On entrance to each iteration of the loop body, the first current values are already sorted, and all elements in the unsorted part of the array are greater than or equal to the sorted elements.

The inside of the loop body is different, however. Each iteration of the loop "bubbles up" the smallest value in the unsorted part of the array to the current position. The bubbling task compares adjacent pairs from the bottom of the array upward, swapping any pair that is out of order.

Figure 10.4 gives a snapshot of this algorithm. Using the Swap function coded earlier, the code for the function BubbleSort follows.

Figure 10.4: Snapshot of bubble sort

template<class ItemType>
void BubbleUp(ItemType values[], int startIndex, int endIndex)
// Post: Adjacent pairs that are out of order have been switched
//       between values[startIndex]..values[endIndex] beginning at
//       values[endIndex].
{
  for (int index = endIndex; index > startIndex; index--)
    if (values[index] < values[index-1])
      Swap(values[index], values[index-1]);
}

template<class ItemType>
void BubbleSort(ItemType values[], int numValues)
// Post: The elements in the array values are sorted by key.
{
  int current = 0;

  while (current < numValues - 1)
  {
    BubbleUp(values, current, numValues-1);
    current++;
  }
}

Analyzing the Bubble Sort

It is easy to analyze the work required by BubbleSort: it is the same as that for the straight selection sort algorithm. The comparisons occur in BubbleUp, which is called N - 1 times. There are N - 1 comparisons the first time, N - 2 comparisons the second time, and so on. Therefore BubbleSort and SelectionSort require the same amount of work, in terms of the number of comparisons. BubbleSort does more than just make comparisons, however: while SelectionSort performs only one data swap per iteration, BubbleSort may do many additional data swaps.

What purpose do these intermediate data swaps serve? By reversing out-of-order pairs of data as they are noticed, the function might get the array in order before N - 1 calls to BubbleUp. However, this version of the bubble sort makes no provision for stopping when the array is completely sorted. Even if the array is already in sorted order when BubbleSort is called, this function continues to call BubbleUp (which changes nothing) N - 1 times.

We could quit before the maximum number of iterations if BubbleUp returns a Boolean flag, sorted, to tell us when the array is sorted. Within BubbleUp, we initially set sorted to true; then in the loop, if any swaps occur, we reset sorted to false. If no elements have been swapped, we know that the array is already in order. Now the bubble sort needs to make only one extra call to BubbleUp when the array is in order. This version of the bubble sort follows:

template<class ItemType>
void BubbleUp2(ItemType values[], int startIndex, int endIndex,
     bool& sorted)
// Post: Adjacent pairs that are out of order have been switched
//       between values[startIndex]..values[endIndex] beginning at
//       values[endIndex].
//       sorted is false if a swap was made and true otherwise.
{
  sorted = true;
  for (int index = endIndex; index > startIndex; index--)
    if (values[index] < values[index-1])
    {
      Swap(values[index], values[index-1]);
      sorted = false;
    }
}

template<class ItemType>
void ShortBubble(ItemType values[], int numValues)
// Post: The elements in the array values are sorted by key.
//       The process stops as soon as values is sorted.
{
  int current = 0;
  bool sorted = false;
  while (current < numValues - 1 && !sorted)
  {
    BubbleUp2(values, current, numValues-1, sorted);
    current++;
  }
}

The analysis of ShortBubble is more difficult. Clearly, if the array is initially sorted in order, one call to BubbleUp2 tells us so. In this best-case scenario, ShortBubble is O(N); only N - 1 comparisons are required for the sort. What if the original array was actually sorted in descending order before the call to ShortBubble? This is the worst possible case: ShortBubble requires as many comparisons as BubbleSort and SelectionSort, not to mention the "overhead" consisting of the many extra swaps and setting and resetting of the sorted flag. Can we calculate an average case? In the first call to BubbleUp2, when current is 0, numValues - 1 comparisons occur; on the second call, when current is 1, numValues - 2 comparisons occur. The number of comparisons in any call to BubbleUp2 is numValues - current - 1. If we let N indicate numValues and K indicate the number of calls to BubbleUp2 executed before ShortBubble finishes its work, the total number of comparisons required is

(N - 1) + (N - 2) + ··· + (N - K)

A little algebra[1] changes this to

(2KN - K² - K)/2

In Big-O notation, the term that is increasing the fastest relative to N is 2KN. We know that K is between 1 and N - 1. On average, over all possible input orders, K is proportional to N. Therefore, 2KN is proportional to N²; that is, the ShortBubble algorithm is also O(N²).

Why do we even bother to mention the bubble sort algorithm if it is O(N²) and requires extra data movements? Because ShortBubble is the only sorting algorithm we have seen that recognizes when the array is already sorted and stops. If the original array is already sorted when the program calls ShortBubble, only one pass is made through the array. If you plan to sort a file that you know is almost in order, ShortBubble is a good choice.

Insertion Sort

In Chapter 3, we created a sorted list by inserting each new element into its appropriate place in an array. We can use a similar approach for sorting an array. The principle of the insertion sort is quite simple: Each successive element in the array to be sorted is inserted into its proper place with respect to the other, already-sorted elements. As with the previous sorts, we divide our array into a sorted part and an unsorted part. Initially, the sorted portion contains only one element: the first element in the array. Now we take the second element in the array and put it into its correct place in the sorted part; that is, values[0] and values[1] are in order with respect to each other. Now the value in values[2] is put into its proper place, so values[0]..values[2] are in order with respect to each other. This process continues until all the elements have been sorted. Figure 10.5 illustrates this process, which we describe below, and Figure 10.6 shows a snapshot of the algorithm.

Figure 10.5: Example of the insertion sort algorithm
Figure 10.6: A snapshot of the insertion sort algorithm

In Chapter 3, our strategy was to search for the insertion point from the beginning of the array and shift the elements from the insertion point down one slot to make room for the new element. We can combine the searching and shifting by beginning at the end of the sorted part of the array. We compare the item at values[current] to the one before it. If it is less, we swap the two items. We then compare the item at values[current - 1] to the one before it, and swap them if necessary. The process stops when the comparison shows that the values are in order or we have swapped into the first place in the array.

Here are the coded versions of InsertItem and InsertionSort:

template<class ItemType>
void InsertItem(ItemType values[], int startIndex, int endIndex)
// Post: values[0]..values[endIndex] are now sorted.
{
  bool finished = false;
  int current = endIndex;
  bool moreToSearch = (current != startIndex);

  while (moreToSearch && !finished)
  {
    if (values[current] < values[current - 1])
    {
      Swap(values[current], values[current - 1]);
      current--;
      moreToSearch = (current != startIndex);
    }
    else
      finished = true;
  }
}

template<class ItemType>
void InsertionSort(ItemType values[], int numValues)
// Post: The elements in the array values are sorted by key.
{
  for (int count = 0; count < numValues; count++)
    InsertItem(values, 0, count);
}

Analyzing the Insertion Sort

The general case for this algorithm mirrors that for SelectionSort and BubbleSort, so the general case is O(N²). Like ShortBubble, InsertionSort also has a best case: the data are already sorted in ascending order. When the data are in ascending order, InsertItem is called N times, but only one comparison is made each time and no swaps occur. The maximum number of comparisons is made only when the elements in the array are in reverse order.

If we know nothing about the original order in the data to be sorted, SelectionSort, ShortBubble, and InsertionSort are all O(N²) sorts and are too time-consuming for sorting large arrays. Thus we need sorting methods that work better when N is large.

O(N log₂N) Sorts

Considering how rapidly N² grows as the size of the array increases, can't we do better? We note that N² is a lot larger than (N/2)² + (N/2)² = N²/2. If we could cut the array into two pieces, sort each segment, and then merge the two pieces again, we should end up sorting the entire array with a lot less work. Figure 10.7 shows an example of this approach.

Figure 10.7: Rationale for divide-and-conquer sorts

The idea of "divide and conquer" has been applied to the sorting problem in different ways, resulting in a number of algorithms that can do the job much more efficiently than O(N²). In fact, an entire category of sorting algorithms is O(N log₂N). We looked at one of these in Chapter 7: QuickSort. Here we examine two other sorting algorithms: MergeSort and HeapSort. As you might guess, the efficiency of these algorithms is achieved at the expense of the simplicity seen in the straight selection, bubble, and insertion sorts.

Merge Sort

The merge sort algorithm is taken directly from the idea in the previous section:

Cut the array in half.
Sort the left half.
Sort the right half.
Merge the two sorted halves into one sorted array.

Merging the two halves together is an O(N) task: We merely go through the sorted halves, comparing successive pairs of values (one in each half) and putting the smaller value into the next slot in the final solution. Even if the sorting algorithm used for each half is O(N²), we should see some improvement over sorting the whole array at once.

Actually, because MergeSort is itself a sorting algorithm, we might as well use it to sort the two halves. That's right: we can make MergeSort a recursive function and let it call itself to sort each of the two subarrays.

This is the general case, of course. What is the base case, which does not involve any recursive calls to MergeSort? If the "half" to be sorted doesn't have more than one element, we can consider it already sorted and just return.

Let's summarize MergeSort in the format we used for other recursive algorithms. The initial function call would be MergeSort(values, 0, numValues - 1).

Cutting the array in half is simply a matter of finding the midpoint between the first and last indexes:

middle = (first + last) / 2;

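One caution: for very large arrays, the sum first + last can overflow a signed int. A common defensive variant, shown here as a sketch rather than as the book's code, computes the same midpoint without forming the sum:

middle = first + (last - first) / 2;
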
Then, in the smaller-caller tradition, we can make the recursive calls to MergeSort:

MergeSort(values, first, middle);
MergeSort(values, middle+1, last);

So far this is simple enough. Now we merely have to merge the two halves and we're done.

Merging the Sorted Halves

Obviously, the serious work occurs in the merge step. Let's look first at the general algorithm for merging two sorted arrays, and then at the specific problem of our subarrays.

To merge two sorted arrays, we compare successive pairs of elements, one from each array, moving the smaller of each pair to the "final" array. We can stop when the shorter array runs out of elements, and then move all remaining elements (if any) from the other array to the final array. Figure 10.8 illustrates the general algorithm. We use a similar approach in our specific problem, in which the two "arrays" to be merged are actually subarrays of the original array (Figure 10.9). Just as in Figure 10.8, where we merged array1 and array2 into a third array, we need to merge our two subarrays into some auxiliary data structure. We need this data structure, another array, only temporarily. After the merge step, we can copy the now-sorted elements back into the original array. Figure 10.10 shows the entire process.

Figure 10.8: Strategy for merging two sorted arrays
Figure 10.9: Two subarrays
Figure 10.10: Merging sorted halves

Let's specify a function, Merge, to perform this task: it receives the array plus the bounding indexes of the two sorted halves and leaves values[leftFirst]..values[rightLast] sorted. Its algorithm follows the general merging strategy just described; the coded version appears below.

In the coding of the function Merge, we use leftFirst and rightFirst to indicate the "current" position in the left and right halves, respectively. Because they are not reference parameters, copies of these parameters are passed to Merge. The copies are changed in the function, but the changed values are not passed out of Merge. Note that both of the "copy any remaining items ..." loops are included. During the execution of this function, one of these loops never executes. Can you explain why?

template<class ItemType>
void Merge(ItemType values[], int leftFirst, int leftLast,
     int rightFirst, int rightLast)
// Post: values [leftFirst]..values[leftLast] and
//       values[rightFirst]..values[rightLast] have been merged.
//       values[leftFirst]..values[rightLast] are now sorted.
{
  ItemType tempArray[MAX_ITEMS];  // assumes MAX_ITEMS >= the size of values
  int index = leftFirst;
  int saveFirst = leftFirst;

  while ((leftFirst <= leftLast) && (rightFirst <= rightLast))
  {
    if (values[leftFirst] < values[rightFirst])
    {
      tempArray[index] = values[leftFirst];
      leftFirst++;
    }
    else
    {
      tempArray[index] = values[rightFirst];
      rightFirst++;
    }
    index++;
  }

  while (leftFirst <= leftLast)
  // Copy remaining items from left half.
  {
    tempArray[index] = values[leftFirst];
    leftFirst++;
    index++;
  }

  while (rightFirst <= rightLast)
  // Copy remaining items from right half.
  {
    tempArray[index] = values[rightFirst];
    rightFirst++;
    index++;
  }

  for (index = saveFirst; index <= rightLast; index++)
    values[index] = tempArray[index];
}

The MergeSort Function

As we said, most of the work takes place in the merge task. The actual MergeSort function is short and simple:

template<class ItemType>
void MergeSort(ItemType values[], int first, int last)
// Post: The elements in values are sorted by key.
{
  if (first < last)
  {
    int middle = (first + last) / 2;
    MergeSort(values, first, middle);
    MergeSort(values, middle + 1, last);
    Merge(values, first, middle, middle + 1, last);
  }
}

Analyzing the Merge Sort

The MergeSort function splits the original array into two halves. First it sorts the first half of the array, using the divide-and-conquer approach; then it sorts the second half of the array using the same approach; finally it merges the two halves. To sort the first half of the array it follows the same approach, splitting and merging. During the sorting process, the splitting and merging operations are intermingled. To simplify the analysis, however, we imagine that all of the splitting occurs first. We can view the process in this way without affecting the correctness of the algorithm.

We view the MergeSort algorithm as continually dividing the original array (of size N) in two, until it has created N one-element subarrays. Figure 10.11 shows this point of view for an array with an original size of 16. The total work needed to divide the array in half, over and over again until we reach subarrays of size 1, is O(N). After all, we end up with N subarrays of size 1.

Figure 10.11: Analysis of the function MergeSort with N = 16

Each subarray of size 1 is obviously a sorted subarray. The real work of the algorithm comes in merging the smaller sorted subarrays back into the larger sorted subarrays. To merge two sorted subarrays of size X and size Y into a single sorted subarray using the Merge operation requires O(X + Y) steps. We can see this fact because each time through the while loops of the Merge function we advance either the leftFirst index or the rightFirst index by 1. Because we stop processing when these indexes become greater than their "last" counterparts, we know that we take a total of (leftLast - leftFirst + 1) + (rightLast - rightFirst + 1) steps. This expression represents the sum of the lengths of the two subarrays being processed.

How many times is the Merge function called? And what are the sizes of the subarrays involved? Let's work from the bottom up. The original array of size N is eventually split into N subarrays of size 1. Merging two of those subarrays into a subarray of size 2 requires O(1 + 1) = O(2) steps, based on the analysis given in the preceding paragraph. We must perform this merge operation a total of N/2 times (we have N one-element subarrays and we are merging them two at a time). Thus the total number of steps to create all of the sorted two-element subarrays is O(N).

We repeat this process to create four-element subarrays. It takes four steps to merge two two-element subarrays. We must perform this merge operation a total of N/4 times (we have N/2 two-element subarrays and we are merging them two at a time). The total number of steps to create all of the sorted four-element subarrays is O(N) (4 × N/4 = N).

The same reasoning leads us to conclude that each of the other levels of merging also requires O(N) steps. At each level, the sizes of the subarrays double, but the number of subarrays is cut in half, balancing out.

We now know that it takes a total of O(N) steps to perform merging at each "level" of merging. How many levels are there? The number of levels of merging is equal to the number of times we can split the original array in half. If the original array is size N, we have log₂N levels (this is just like the analysis of the binary search algorithm). For example, in Figure 10.11 the size of the original array is 16 and the number of levels of merging is 4. Because we have log₂N levels, and we require N steps at each level, the total cost of the merge operation is O(N log₂N). Because the splitting phase was only O(N), we conclude that MergeSort has O(N log₂N) complexity. Table 10.2 illustrates that, for large values of N, O(N log₂N) represents a big improvement over O(N²).

Table 10.2: Comparing N² and N log₂N

  N       log₂N    N²            N log₂N
  32      5        1,024         160
  64      6        4,096         384
  128     7        16,384        896
  256     8        65,536        2,048
  512     9        262,144       4,608
  1024    10       1,048,576     10,240
  2048    11       4,194,304     22,528
  4096    12       16,777,216    49,152

The disadvantage of MergeSort derives from the fact that it requires an auxiliary array that is as large as the original array to be sorted. If the array is large and space is a critical factor, this sort may not be an appropriate choice. Next, we discuss two sorts that move elements within the original array and do not need an auxiliary array.

Quick Sort

We used Quicksort as the Case Study for Chapter 7. Like MergeSort, Quicksort is a "divide-and-conquer" algorithm, which is inherently recursive. We postponed the analysis of Quicksort until this chapter.

The analysis of Quicksort is very similar to that of MergeSort. On the first call, every element in the array is compared to the dividing value (the "split value"), so the work done is O(N). The array is divided into two parts (not necessarily halves), which are then examined.

Each of these pieces is then divided in two, and so on. If each piece is split approximately in half, O(log₂N) splits occur. At each split, we make O(N) comparisons. Thus Quicksort is also an O(N log₂N) algorithm, which is quicker than the O(N²) sorts we discussed earlier in this chapter.

But Quicksort isn't always quicker. Note that log₂N splits take place if each split divides the segment of the array approximately in half. As we've seen, Quicksort is sensitive to the order of the data.

What happens if the array is already sorted when we call our first version of Quicksort? The splits are very lopsided, and the subsequent recursive calls to QuickSort break into a segment of one element and a segment containing all the rest of the array. This situation produces a sort that is not at all quick. In fact, N - 1 splits occur; in this case, Quicksort has O(N²) complexity.

Such a situation is very unlikely to arise by chance. By way of analogy, consider the odds of shuffling a deck of cards and coming up with a sorted deck. On the other hand, in some applications you may know that the original array is likely to be sorted or nearly sorted. In such cases, you would want to use either a different splitting algorithm or a different sort-maybe even ShortBubble!
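
One such splitting strategy is "median of three": rather than using the first element as the split value, use the median of the first, middle, and last elements, which is unlikely to be an extreme value even in nearly sorted data. The sketch below shows one way to select that value; it relies on the Swap function defined earlier and stands in for, rather than reproduces, the Split function of Chapter 7.

template<class ItemType>
int MedianOfThree(ItemType values[], int first, int last)
// Post: values[first], values[middle], and values[last] are in order;
//       returns middle, whose element is the median of the three.
{
  int middle = first + (last - first) / 2;
  if (values[middle] < values[first])
    Swap(values[first], values[middle]);
  if (values[last] < values[first])
    Swap(values[first], values[last]);
  if (values[last] < values[middle])
    Swap(values[middle], values[last]);
  return middle;
}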

What about space requirements? Quicksort does not require an extra array, as MergeSort does. Does it have any extra space requirements, besides the few local variables? Yes: remember that Quicksort uses a recursive approach. Many levels of recursion can be "saved" on the system stack at any time. On average, the algorithm requires O(log₂N) extra space to hold this information; in the worst case, it requires O(N) extra space, the same as MergeSort.

Heap Sort

In each iteration of the selection sort, we searched the array for the next-smallest element and put it into its correct place in the array. Another way to write a selection sort is to find the maximum value in the array and swap it with the last array element, then find the next-to-largest element and put it into its place, and so on. Most of the work in this sorting algorithm comes from searching the remaining part of the array in each iteration, looking for the maximum value.

In Chapter 9, we discussed the heap, a data structure with a very special feature: We always know where to find its greatest element. Because of the order property of heaps, the maximum value of a heap is located in the root node. We can take advantage of this situation by using a heap to help us sort. The general approach of the heap sort is as follows:

  1. Take the root (maximum) element off the heap, and put it into its place.

  2. Reheap the remaining elements. (This action puts the next-largest element into the root position.)

  3. Repeat until no more elements are left.

The first part of this algorithm sounds a lot like the straight selection sort. What makes the heap sort fast is the second step: finding the next-largest element. Because the shape property of heaps guarantees a binary tree of minimum height, we make only O(log₂N) comparisons in each iteration, as compared with O(N) comparisons in each iteration of the selection sort.

Building a Heap

By now you are probably protesting that we are dealing with an unsorted array of elements, not a heap. Where does the original heap come from? Before we go on, we must convert the unsorted array, values, into a heap.

Let's look at how the heap relates to our array of unsorted elements. In Chapter 9, we saw how heaps can be represented in an array with implicit links. Because of the shape property, we know that the heap elements take up consecutive positions in the array. In fact, the unsorted array of data elements already satisfies the shape property of heaps. Figure 10.12 shows an unsorted array and its equivalent tree.

Figure 10.12: An unsorted array and its tree

We also need to make the unsorted array elements satisfy the order property of heaps. First, let's see whether any part of the tree already satisfies the order property. All of the leaf nodes (subtrees with only a single node) are heaps. In Figure 10.13(a), the subtrees whose roots contain the values 19, 7, 3, 100, and 1 are heaps because each consists of a single leaf node.

Next, let's look at the first nonleaf node, the one containing the value 2 (Figure 10.13b). The subtree rooted at this node is not a heap, but it is almost a heap-all of the nodes except the root node of this subtree satisfy the order property. We know how to fix this problem. In Chapter 9, we developed a heap utility function, ReheapDown, that we can use to correct this exact situation. Given a tree whose elements satisfy the order property of heaps except (perhaps) at the root node, ReheapDown rearranges the nodes, leaving the (sub)tree as a heap.

We apply this function to all subtrees on this level, then move up a level in the tree and continue reheaping until we reach the root node. After ReheapDown has been called for the root node, the entire tree should satisfy the order property of heaps. Figure 10.13 illustrates this heap-building process; Figure 10.14 shows the changing contents of the array.

Figure 10.13: The heap-building process
Figure 10.14: Changing contents of the array

(In Chapter 9, we defined ReheapDown to be a member function of HeapType, a struct type. There, the function had two parameters: the indexes of the root node and the bottom node. Here, we assume a slight variation: ReheapDown is a global function that takes a third parameter-an array that is treated as a heap.)

The algorithm for building a heap is summarized here:

We know where the root node is stored in our array representation of heaps-in values[0]. Where is the first nonleaf node? Because half the nodes of a complete binary tree are leaves (prove this yourself), the first nonleaf node may be found at position numValues/2 - 1.

Sorting Using the Heap

Now that we are satisfied that we can turn the unsorted array of elements into a heap, let's take another look at the sorting algorithm.

We can easily access the largest element from the original heap: it is in the root node. In our array representation of heaps, that location is values[0]. This value belongs in the last-used array position values[numValues - 1], so we can just swap the values in these two positions. Because values[numValues - 1] contains the largest value in the array (its correct sorted value), we want to leave this position alone. Now we are dealing with a set of elements, from values[0] through values[numValues - 2], that is almost a heap. We know that all of these elements satisfy the order property of heaps, except (perhaps) the root node. To correct this condition, we call our heap utility, ReheapDown.

At this point we know that the next-largest element in the array is located in the root node of the heap. To put this element in its correct position, we swap it with the element in values[numValues - 2]. Now the two largest elements are in their final correct positions, and the elements in values[0] through values[numValues - 3] are almost a heap. We call ReheapDown again, and now the third-largest element is placed in the root of the heap.

We repeat this process until all of the elements are in their correct positions; that is, until the heap contains only a single element, which must be the smallest item in the array, values[0]. This location is its correct position, so the array is now completely sorted from the smallest to the largest element. Notice that at each iteration the size of the unsorted portion (represented as a heap) becomes smaller and the size of the sorted portion becomes larger. At the end of the algorithm, the size of the sorted portion matches the size of the original array.

The heap sort algorithm, as we have described it, sounds like a recursive process. Each time through, we swap and reheap a smaller portion of the total array. Because it uses tail recursion, we can code the repetition just as clearly using a simple for loop: on each iteration, swap the root element with the last element of the unsorted part, and then reheap the remaining elements.

The function HeapSort first builds the heap and then sorts the nodes, using the algorithms just discussed. Notice that HeapSort does not actually use the struct HeapType. Rather, it uses the ReheapDown function with the array of values passed as a parameter.

template<class ItemType>
void HeapSort(ItemType values[], int numValues)
// Assumption: Function ReheapDown is available.
// Post: The elements in the array values are sorted by key.
{
  int index;

  // Convert the array of values into a heap.
  for (index = numValues/2 - 1; index >= 0; index--)
    ReheapDown(values, index, numValues-1);

  // Sort the array.
  for (index = numValues-1; index >=1; index--)
  {
    Swap(values[0], values[index]);
    ReheapDown(values, 0, index-1);
  }
}

Figure 10.15 shows how each iteration of the sorting loop (the second for loop) would change the heap created in Figure 10.14. Each line represents the array after one operation. The sorted elements are shaded.

Figure 10.15: Effect of HeapSort on the array

We entered the HeapSort routine with a simple array of unsorted values and returned to the caller with an array of the same values sorted in ascending order. Where did the heap go? The heap in HeapSort is just a temporary structure, internal to the sorting algorithm. It is created at the beginning of the function, to aid in the sorting process, and then is methodically diminished element by element as the sorted part of the array grows. At the end of the function, the sorted part fills the array and the heap has completely disappeared. When we used heaps to implement priority queues in Chapter 9, the heap structure stayed around for the duration of the use of the queue. The heap in HeapSort, in contrast, is not a retained data structure. It exists only for a while inside the HeapSort function.

Analyzing the Heap Sort

The code for HeapSort is very short: only a few lines of new code plus the utility function ReheapDown, which we developed in Chapter 9. These few lines of code, however, do quite a bit. All of the elements in the original array are rearranged to satisfy the order property of heaps, moving the largest element up to the top of the array, only to put it immediately into its place at the bottom. It's difficult to believe from a small example such as the one in Figure 10.15 that HeapSort is really very efficient.

In fact, for small arrays, HeapSort is not very efficient because of its "overhead." For large arrays, however, HeapSort is very efficient. Consider the sorting loop. We loop through N - 1 times, swapping elements and reheaping. The comparisons occur in ReheapDown. A complete binary tree with N nodes has O(log₂(N + 1)) levels. In the worst case, then, if the root element has to be bumped down to a leaf position, the ReheapDown function would make O(log₂N) comparisons; the function ReheapDown is O(log₂N). Multiplying this activity by the N - 1 iterations, O(N), shows that the sorting loop is O(N log₂N).

Combining the original heap build operation, which is O(N), and the sorting loop, we can see that HeapSort requires O(N log₂N) comparisons. Note that, unlike QuickSort, HeapSort's efficiency is not affected by the initial order of the elements. HeapSort is just as efficient in terms of space; it uses only one array to store the data.

Testing

We have examined several sorting algorithms: selection sort, two versions of bubble sort, insertion sort, merge sort, two versions of quick sort, and heap sort. Before we examine other issues involved in sorting, we need to verify that our algorithms are correct. We can use the same pattern that we have used for earlier ADTs. We can take as input the name of a sorting algorithm, apply that algorithm to an array of values, and print the results. We need to initialize an array with random numbers, refresh the array after it has been sorted, and generate an array of new values.

On the Web, the file SortDr.cpp contains the test driver. The file Sorts.in contains a test plan that generates an array, applies each of the algorithms to it, generates a second array, and applies one of the algorithms. Sorts.out holds the output from the driver, and Sorts.screen holds what was written on the screen. Examine these files carefully. Be sure you understand them. You are asked to design a more involved test plan in the exercises.
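
As a stand-in for that driver, here is a minimal sketch of the same testing idea: fill an array with pseudorandom values, sort it, and verify the result. The names here are illustrative and are not the contents of SortDr.cpp; the sketch assumes the sort templates above (SelectionSort in this case) are in scope.

#include <cstdlib>
#include <iostream>

template<class ItemType>
bool IsSorted(ItemType values[], int numValues)
// Post: Returns true if values[0]..values[numValues - 1] are in
//       ascending order.
{
  for (int index = 1; index < numValues; index++)
    if (values[index] < values[index - 1])
      return false;
  return true;
}

int main()
{
  const int NUM_VALUES = 100;
  int values[NUM_VALUES];

  for (int index = 0; index < NUM_VALUES; index++)
    values[index] = std::rand() % 1000;   // generate an unsorted array

  SelectionSort(values, NUM_VALUES);      // or any sort from this section

  std::cout << (IsSorted(values, NUM_VALUES) ? "sorted" : "NOT sorted")
            << std::endl;
  return 0;
}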

Other Efficiency Considerations

When N Is Small

Throughout this chapter, we have based our analysis of efficiency on the number of comparisons made by a sorting algorithm. This number gives us a rough estimate of the computation time involved. The other activities that accompany the comparison (swapping, keeping track of Boolean flags, and so forth) contribute to the "constant of proportionality" of the algorithm.

In comparing Big-O evaluations, we ignored constants and smaller-order terms, because we wanted to know how the algorithm would perform for large values of N. In general, an O(N²) sort requires few extra activities in addition to the comparisons, so its constant of proportionality is fairly small. On the other hand, an O(N log₂N) sort may be more complex, with more overhead and thus a larger constant of proportionality. This situation may cause anomalies in the relative performances of the algorithms when the value of N is small. In such a case, N² is not much greater than N log₂N, and the constants may dominate instead, causing an O(N²) sort to run faster than an O(N log₂N) sort.

We have discussed sorting algorithms that have a complexity of either O(N²) or O(N log₂N). The obvious question is: Are some algorithms better than O(N log₂N)? No, it has been proven theoretically that we cannot do better than O(N log₂N) for sorting algorithms that are based on comparing keys.

Eliminating Calls to Functions

At the beginning of this chapter, we mentioned that it may be desirable, for efficiency reasons, to streamline the code as much as possible, even at the expense of readability. For instance, we have consistently written

Swap(item1, item2)

instead of the corresponding inline expansion:

tempItem = item1;
item1 = item2;
item2 = tempItem;

Similarly, in SelectionSort, we coded the operation to find the minimum element as a function, MinIndex; in BubbleSort, we coded a function BubbleUp. Coding such operations as functions made the code simpler to write and to understand, avoiding a more complicated nested loop structure.

Although the function calls are clearer, in the actual coding it may be better to use the inline expansion. Function calls require extra overhead that you may prefer to avoid in a sort, where these routines are called within a loop.

The recursive sorting functions, MergeSort and QuickSort, lead to a similar situation: They require the extra overhead involved in executing recursive calls. You may want to avoid this overhead by writing nonrecursive versions of these functions.
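
As an illustration of what removing the recursion involves, here is a sketch of a nonrecursive QuickSort that keeps the pending subarray bounds on an explicit stack. The partition step shown is a simple Lomuto-style split using the last element as the split value; it stands in for, and is not identical to, the Split function of Chapter 7. The Swap function from earlier in the chapter is assumed.

#include <stack>
#include <utility>

template<class ItemType>
void QuickSortNoRecursion(ItemType values[], int first, int last)
// Post: values[first]..values[last] are sorted by key.
{
  std::stack<std::pair<int, int> > bounds;
  bounds.push(std::make_pair(first, last));

  while (!bounds.empty())
  {
    int low = bounds.top().first;
    int high = bounds.top().second;
    bounds.pop();
    if (low >= high)
      continue;                    // zero or one element: already sorted

    // Lomuto-style split: move everything less than splitVal
    // to the left of splitPoint.
    ItemType splitVal = values[high];
    int splitPoint = low;
    for (int index = low; index < high; index++)
      if (values[index] < splitVal)
      {
        Swap(values[index], values[splitPoint]);
        splitPoint++;
      }
    Swap(values[splitPoint], values[high]);

    // Instead of making recursive calls, push the bounds of the
    // two remaining subarrays onto the stack.
    bounds.push(std::make_pair(low, splitPoint - 1));
    bounds.push(std::make_pair(splitPoint + 1, high));
  }
}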

Programmer Time

If the recursive calls are less efficient, why would anyone ever decide to use a recursive version of a sort? The decision involves a choice between types of efficiency. Until now, we have been concerned only with minimizing computer time. While computers are becoming faster and cheaper, however, it is not clear that computer programmers are following that trend. In some situations, programmer time may be an important consideration in choosing a sort algorithm and its implementation. In this respect, the recursive version of QuickSort is more desirable than its nonrecursive counterpart, which requires the programmer to simulate the recursion explicitly.

Space Considerations

Another efficiency consideration relates to the amount of memory space required. In general, memory space is not a very important factor in choosing a sorting algorithm. We looked at only one sort, MergeSort, in which space would be a major factor. The usual time versus space tradeoff applies to sorts: more space often means less time, and vice versa.

Because processing time is the factor that applies most often to sorting algorithms, we have considered it in detail here. Of course, as in any application, the programmer must determine his or her goals and requirements before selecting an algorithm and starting to code.

More About Sorting in General

Keys

In our descriptions of the various sorts, we showed examples of sorting arrays using unique keys. A record may also contain secondary keys, which may or may not be unique. For instance, a student record may contain data members for the student number, the name, and the address, as the declarations below show.

If the data elements are only single integers, it doesn't matter whether you maintain the original order of duplicate values. However, preserving the original order of records with identical key values may be desirable. If a sort preserves this order, it is said to be stable.

Stable sort: A sorting algorithm that preserves the order of duplicates

Suppose the items in our array are student records with the following declarations:

struct AddressType
{
  .
  .
  .
  StrType city;
  long zip;
};

struct NameType
{
  StrType firstName;
  StrType lastName;
};
struct PersonType
{
  long studentNumber;
  NameType name;
  AddressType address;
};

The list may normally be sorted by the unique key studentNumber. For some purposes, we might want to see a listing in order by name. In this case, the sort key would consist of the name data member. To sort by ZIP code, we would sort on the address.zip data member.

If the sort is stable, we can get a listing by ZIP code, with the names in alphabetical order within each ZIP code, by sorting twice: the first time by name and the second time by ZIP code. A stable sort preserves the order of the records when a match on the key is found. The second sort, by ZIP code, produces many such matches, but preserves the alphabetical order imposed by the first sort.

To get a listing by city, with the ZIP codes in order within each city and the names alphabetically sorted within each ZIP code, we would sort three times, on the following keys:

name
address.zip
address.city

The file would first be put into alphabetical order by name. The output from the first sort would serve as input to a sort on ZIP code. The output from this sort would serve as input to a sort on city name. If the sorting algorithms used were stable, the final sort would give us the desired result.
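
With the standard library, this two-pass idea can be sketched directly, because std::stable_sort guarantees stability. The sketch below assumes the PersonType declarations above and that StrType supports operator< (if it does not, substitute its own comparison operation); sorting the least significant key first produces the listing just described.

#include <algorithm>
#include <vector>

void SortByCityZipName(std::vector<PersonType>& people)
{
  // Least significant key first: name ...
  std::stable_sort(people.begin(), people.end(),
    [](const PersonType& a, const PersonType& b)
    { return a.name.lastName < b.name.lastName; });

  // ... then ZIP code; stability keeps names ordered within each ZIP ...
  std::stable_sort(people.begin(), people.end(),
    [](const PersonType& a, const PersonType& b)
    { return a.address.zip < b.address.zip; });

  // ... and finally city; stability keeps ZIPs ordered within each city.
  std::stable_sort(people.begin(), people.end(),
    [](const PersonType& a, const PersonType& b)
    { return a.address.city < b.address.city; });
}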

Of the sorts that we have discussed in this book, only HeapSort is inherently unstable. The stability of the other sorts depends on how the code manages duplicate values. In the exercises you are asked to examine the code for the other sorts as we have coded them and determine whether they are stable.

Sorting with Pointers

Sorting large records using some kind of sort that swaps the contents of two places may require a lot of computer time just to move sections of memory from one place to another every time we make a swap. We can reduce this move time by setting up an array of pointers to the records and then rearranging the pointers instead of the actual records. Figure 10.16 illustrates this scheme. After the sort, the records are still in the same physical arrangement, but they may be accessed in order through the rearranged array of pointers.

Click To expand
Figure 10.16: Sorting arrays with pointers

We may extend this scheme to keep a large array of data sorted on more than one key. The data can be physically stored according to the primary key, and auxiliary arrays can contain pointers to the same data but be sorted on secondary keys.
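
A sketch of the pointer scheme, again assuming the PersonType declarations above; the helper name is hypothetical. The records in data never move; only the pointers are rearranged.

#include <algorithm>

void SortPointersByStudentNumber(PersonType data[],
                                 PersonType* ptrs[], int numValues)
{
  // Point each slot of ptrs at the corresponding record.
  for (int index = 0; index < numValues; index++)
    ptrs[index] = &data[index];

  // Order the pointers by key, leaving the records in place.
  std::sort(ptrs, ptrs + numValues,
    [](const PersonType* a, const PersonType* b)
    { return a->studentNumber < b->studentNumber; });
  // ptrs[0]..ptrs[numValues - 1] now visit the records in sorted order.
}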

[1]For those of you who want to see the algebra:

(N - 1) + (N - 2) + ··· + (N - K)
= KN - (sum of 1 through K)
= KN - [K(K + 1)/2]
= KN - (K² + K)/2
= (2KN - K² - K)/2


