We have not attempted in this chapter to describe every known sorting algorithm. Instead, we have presented a few of the most popular sorts, of which many variations exist. It should be clear from this discussion that no single sort works best for all applications. The simpler, generally O(N²) sorts work as well, and sometimes better, for fairly small values of N. Because they are simple, they require relatively little programmer time to write and maintain. As you add features to improve sorts, you also add to the complexity of the algorithms, increasing both the work required by the routines and the programmer time needed to maintain them.
Another consideration in choosing a sort algorithm is the order of the original data. If the data are already sorted (or almost sorted), ShortBubble is O(N), whereas some versions of QuickSort are O(N²).
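The early-exit behavior that makes ShortBubble O(N) on already-sorted data can be sketched as follows. The function name and the pass-counting return value are illustrative, not the book's exact interface:

```python
def short_bubble(values):
    """Bubble sort that quits early when a pass makes no swaps.

    Returns the number of passes performed, so the O(N) behavior
    on sorted input can be observed directly.
    """
    n = len(values)
    passes = 0
    swapped = True
    while swapped and passes < n - 1:
        swapped = False
        for i in range(n - 1 - passes):
            if values[i] > values[i + 1]:
                values[i], values[i + 1] = values[i + 1], values[i]
                swapped = True
        passes += 1          # one more full pass over the unsorted part
    return passes
```

On a list that is already sorted, the first pass makes no swaps, so the loop exits after a single O(N) pass; on reverse-ordered data the sort degrades to the usual O(N²) behavior.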
As always, the first step in choosing an algorithm is to determine the goals of the particular application. This step usually narrows down the options considerably. After that, knowledge of the strong and weak points of the various algorithms assists you in making a choice.
Table 10.3 compares the sorts discussed in this chapter, in terms of Big-O.
Table 10.3 Order of Magnitude

| Sort | Best Case | Average Case | Worst Case |
| --- | --- | --- | --- |
| SelectionSort | O(N²) | O(N²) | O(N²) |
| BubbleSort | O(N²) | O(N²) | O(N²) |
| ShortBubble | O(N)[*] | O(N²) | O(N²) |
| InsertionSort | O(N)[*] | O(N²) | O(N²) |
| MergeSort | O(N log₂N) | O(N log₂N) | O(N log₂N) |
| QuickSort | O(N log₂N) | O(N log₂N) | O(N²) (depends on split) |
| HeapSort | O(N log₂N) | O(N log₂N) | O(N log₂N) |

[*] When the data are already sorted or almost sorted.
The radix sort is not shown in Table 10.3 because it is not based on key comparisons. This sort algorithm uses the values in different key positions to successively divide the list into sublists, then collects the sublists back together. After this process is repeated as many times as there are positions in the key, the list is sorted.
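This divide-and-collect process can be sketched for non-negative decimal integer keys. This is a least-significant-digit variant; the function name and the `digits` parameter are illustrative:

```python
def radix_sort(keys, digits):
    """Sort non-negative integers with at most `digits` decimal digits.

    On each pass, the list is divided into ten sublists (one per digit
    value in the current position), then the sublists are collected
    back together in order.
    """
    for pos in range(digits):                # one pass per key position
        buckets = [[] for _ in range(10)]    # one sublist per digit 0-9
        for key in keys:
            digit = (key // 10 ** pos) % 10  # digit in the current position
            buckets[digit].append(key)
        # Collect the sublists back into a single list.
        keys = [k for bucket in buckets for k in bucket]
    return keys
```

Because each pass only inspects one digit and appending to a bucket preserves the order established by earlier passes, the list is fully sorted after as many passes as there are positions in the key.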
Searching, like sorting, is a topic that is closely tied to the goal of efficiency. We speak of a sequential search as an O(N) search, because it may require as many as N comparisons to locate an element. (N refers to the number of elements in the list.) A binary search is O(log₂N) but is appropriate for arrays only if they are sorted. A binary search tree may be used to allow binary searches on a linked structure. The goal of hashing is to produce a search that approaches O(1). Because of collisions involving hash locations, some searching or rehashing is usually necessary. A good hash function minimizes collisions and distributes the elements randomly throughout the table.
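Both ideas can be sketched briefly: binary search on a sorted array, and hash insertion that resolves collisions by probing for the next open slot (linear probing is one common scheme; the function names and table size below are illustrative assumptions):

```python
def binary_search(sorted_list, target):
    """Return the index of target in sorted_list, or -1 if absent.

    Each comparison halves the search range, giving O(log2 N) work.
    """
    low, high = 0, len(sorted_list) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            low = mid + 1                # target is in the upper half
        else:
            high = mid - 1               # target is in the lower half
    return -1


def hash_insert(table, key):
    """Insert an integer key into an open-addressing hash table.

    The home location is key mod table size; on a collision, linear
    probing examines the next slot until an empty one is found.
    """
    i = key % len(table)                 # hash function: key mod table size
    while table[i] is not None:          # collision: probe the next slot
        i = (i + 1) % len(table)
    table[i] = key
    return i
```

Note that `hash_insert` assumes the table is not full; a production version would also need deletion handling and resizing, which the chapter's discussion of collisions only touches on.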
To solve a problem, most programmers would rather create a new algorithm than review someone else's solution. Why, then, have we devoted an entire chapter to a discussion of well-known sorting and searching algorithms? First, it is important to be familiar with the basic sorting and searching techniques. You will use these tools over and over again in a programming environment, and you need to know which ones are appropriate solutions to different problems. Second, a review of sorting and searching techniques gives us another opportunity to examine a measuring tool, the Big-O approximation, that helps us determine how much work is required by a particular algorithm. Both building and measuring tools are needed to construct sound program solutions.