

5.2 Implementing a Queue as a Linked Structure

The major weakness of the array-based implementation of a FIFO queue is identical to that of a stack: the need to create an array large enough for a structure of the maximum expected size. This size is set once and cannot change. If a much smaller number of elements is actually needed, we have wasted a lot of space. If a larger number of elements is needed unexpectedly, we are in trouble, because we cannot extend the size of the array. To solve this problem, we would have to allocate a larger array, copy the elements into it, and deallocate the smaller array.

From our discussion of stacks, we know that we can get around this problem by using dynamic storage allocation to get space for each queue element only as needed. This implementation relies on the idea of linking the elements one to the next to form a chain.

The Function Enqueue

In the array-based implementation of a queue, we decided to keep track of two indexes that pointed to the front and rear boundaries of the data in the queue. In a linked representation, we can use two pointers, front and rear, to mark the front and the rear of the queue. (See Figure 5.15. By now you realize that dynamically allocated nodes in linked structures exist "somewhere on the free store," rather than in adjacent locations like array slots, but we will show the nodes arranged linearly for clarity.)

Figure 5.15: A linked queue representation

We can Dequeue elements from the queue using an algorithm similar to our stack Pop algorithm, with front pointing to the first node in the queue. Because we add new elements to the queue by inserting after the last node, however, we need a new Enqueue algorithm (see Figure 5.16).

Figure 5.16: The Enqueue operation

The first of these tasks is familiar from the stack Push operation. We get the space using C++'s new operator and then store the new item into the node's info member. The new node is inserted at the rear end of the queue, so we also need to set the node's next member to NULL.

The second part of the Enqueue algorithm involves updating the next member of Node(rear) to make it point to the new node. This task is simple:
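
In C++ (using the same member names as the implementation later in this section), this step is a single assignment; here is a minimal sketch, assuming newNode points to the node we have just allocated and filled:

rear->next = newNode;       // Link the old rear node to the new node.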

What happens if the queue is empty when we Enqueue the first element? In this case, no Node(rear) exists; we must set front to point to the new node. We modify the algorithm to take this condition into account:
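
A sketch of the insertion step with this special case handled, consistent with the Enqueue function shown later in this section:

if (rear == NULL)           // Queue was empty: the new node is also the front node.
  front = newNode;
else                        // Otherwise, link the old rear node to the new node.
  rear->next = newNode;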

The last task in the Enqueue algorithm, updating the rear pointer, simply involves the assignment rear = newNode. Does this approach work if this node is the first one in the queue? Yes; we want rear to point to the rear node following a call to Enqueue, regardless of how many items are in the queue.

Note the relative positions of front and rear. Had they been reversed (as in Figure 5.17), we could have used our stack Push algorithm for the Enqueue operation. But how could we Dequeue? To delete the last node of the linked queue, we need to reset front to point to the node preceding the deleted node. Because our pointers all go forward, we can't get back to the preceding node. To accomplish this task, we would have to either traverse the whole list (an O(N) solution, which is very inefficient, especially if the queue is long) or keep a list with pointers in both directions. This kind of doubly linked structure is not necessary if we set up our queue pointers correctly in the first place.

Figure 5.17: A bad queue design

The Function Dequeue

In writing the Enqueue algorithm, we noticed that inserting into an empty queue is a special case because we need to make front point to the new node. Similarly, in our Dequeue algorithm, we need to allow for the case of deleting the last node in the queue, leaving the queue empty. If front is NULL after we delete the front node, we know that the queue is now empty. In this case, we need to set rear to NULL as well. Figure 5.18 illustrates the algorithm for removing the front element from a linked queue. This algorithm assumes that the test for an empty queue was performed as specified, so we know that the queue contains at least one node. (We can make this assumption because an exception is thrown if it isn't true.) As with Pop, we need to keep a local pointer to the node being removed, allowing us to access it for the delete operation after the front pointer change.

Figure 5.18: The Dequeue operation
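
In C++ terms, the heart of this algorithm is a handful of pointer operations; here is a sketch (the complete Dequeue function appears in the implementation below):

tempPtr = front;            // Save access to the node being removed.
item = front->info;         // Copy the user's data out of the node.
front = front->next;        // Advance front past the removed node.
if (front == NULL)          // If the queue is now empty,
  rear = NULL;              //   rear must be NULL as well.
delete tempPtr;             // Free the removed node's space.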

How do we know when the queue is empty? Both front and rear should then be NULL pointers. This fact makes the class constructor and IsEmpty extremely simple. What about the function IsFull? We can use the same IsFull we wrote for the Stack ADT.

In the array-based implementation, the operation MakeEmpty merely changed the front and rear indexes to make the queue appear to be empty. The data slots in the array were left unchanged; they became logical garbage, inaccessible through the queue operations. In the linked implementation, MakeEmpty must result in an empty queue, but this operation involves more than just setting front and rear to NULL. We must also free the dynamically allocated space in which the queue elements reside, just as we did for the stack items. In fact, the algorithm for destroying the queue is exactly the same as the algorithm for destroying the stack.

As in the case of changing the stack implementation to a linked structure, we change only the declarations and the insides of the queue operations. For a queue-using program, the interfaces to the operations remain the same. Let's look at the declarations first. In our design, we referred to the two queue pointers as front and rear. These pointers become the data members in our class QueType. Each points to a node in the linked queue (or is a NULL pointer if the queue is empty). Each queue node has two members, info (containing the user's data) and next (containing the pointer to the next node or, in the case of the last node, NULL). We can implement the FIFO Queue ADT with the following code. Let's make this implementation a class template.

// Header file for Queue ADT
class FullQueue
{};

class EmptyQueue
{};

template <class ItemType>
struct NodeType;

template <class ItemType>
class QueType
{
public:
  QueType();
  ~QueType();
  void MakeEmpty();
  void Enqueue(ItemType);
  void Dequeue(ItemType&);
  bool IsEmpty() const;
  bool IsFull() const;
private:
  NodeType<ItemType>* front;
  NodeType<ItemType>* rear;
};

#include <cstddef>                // For NULL.
#include <new>                    // For bad_alloc.

template <class ItemType>
struct NodeType
{
  ItemType info;
  NodeType* next;
};

template <class ItemType>
QueType<ItemType>::QueType()         // Class constructor.
// Post: front and rear are set to NULL.
{
  front = NULL;
  rear = NULL;
}

template <class ItemType>
void QueType<ItemType>::MakeEmpty()
// Post: Queue is empty; all elements have been deallocated.
{
  NodeType<ItemType>* tempPtr;

  while (front != NULL)
  {
    tempPtr = front;
    front = front->next;
    delete tempPtr;
  }
  rear = NULL;
}

template <class ItemType>          // Class destructor.
QueType<ItemType>::~QueType()
{
  MakeEmpty();
}

template <class ItemType>
bool QueType<ItemType>::IsFull() const
// Returns true if there is no room for another NodeType object
//  on the free store and false otherwise.
{
  NodeType<ItemType>* location;
  try
  {
    location = new NodeType<ItemType>;
    delete location;
    return false;
  }
  catch(std::bad_alloc& exception)
  {
    return true;
  }
}

template <class ItemType>
bool QueType<ItemType>::IsEmpty() const
// Returns true if there are no elements on the queue and false otherwise.
{
  return (front == NULL);
}

template <class ItemType>
void QueType<ItemType>::Enqueue(ItemType newItem)
// Adds newItem to the rear of the queue.
// Pre:  Queue has been initialized.
// Post: If (queue is not full), newItem is at the rear of the queue;
//       otherwise, a FullQueue exception is thrown.

{
  if (IsFull())
    throw FullQueue();
  else
  {
    NodeType<ItemType>* newNode;

    newNode = new NodeType<ItemType>;
    newNode->info = newItem;
    newNode->next = NULL;
    if (rear == NULL)
      front = newNode;
    else
      rear->next = newNode;
    rear = newNode;
  }
}

template <class ItemType>
void QueType<ItemType>::Dequeue(ItemType& item)
// Removes front item from the queue and returns it in item.
// Pre:  Queue has been initialized.
// Post: If (queue is not empty), the front of the queue has been
//       removed and a copy returned in item;
//       otherwise, an EmptyQueue exception is thrown.
{
  if (IsEmpty())
    throw EmptyQueue();
  else
  {
    NodeType<ItemType>* tempPtr;

    tempPtr = front;
    item = front->info;
    front = front->next;
    if (front == NULL)
      rear = NULL;
    delete tempPtr;
  }
}

A Circular Linked Queue Design

Our QueType class contains two pointers, one to each end of the queue. This design is based on the linear structure of the linked queue. Given only a pointer to the front of the queue, we could follow the pointers to reach the rear, but this tactic turns accessing the rear (to Enqueue an item) into an O(N) operation. With a pointer to the rear of the queue only, we could not access the front because the pointers go only from front to rear.

If we made the queue circularly linked, we could access both ends of the queue from a single pointer. That is, the next member of the rear node would point to the front node of the queue (see Figure 5.19). Now QueType has only one data member, rather than two. One interesting thing about this queue implementation is that it differs from the logical picture of a queue as a linear structure with two ends. This queue is a circular structure with no ends. What makes it a queue is its support of FIFO access.

Figure 5.19: A circular linked queue

To Enqueue an element, we access the "rear" node directly through the pointer rear. To Dequeue an element, we access the "front" node of the queue. We don't have a pointer to this node, but we do have a pointer to the node preceding it: rear. The pointer to the "front" node of the queue is in Next(rear). An empty queue is represented by rear = NULL. Designing and coding the queue operations using a circular linked implementation is left as a programming assignment; a sketch of the key idea follows.
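
As a hint for that assignment, here is a minimal sketch of how a single rear pointer gives access to both ends of a nonempty circular queue. It uses the same NodeType as before; frontNode is a hypothetical local name, and newNode is assumed to have been allocated and filled just as in the linear version:

NodeType<ItemType>* frontNode = rear->next;  // The node after rear is the front node.

newNode->next = rear->next;    // New rear node points around to the front node.
rear->next = newNode;          // Old rear node points to the new node.
rear = newNode;                // rear now marks the new rear node.

The empty-queue case (rear == NULL) and the Dequeue operation are left for you to work out.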

We can test both linked implementations of the Queue ADT by using the same test plan that we wrote for the array-based version.

Comparing Queue Implementations

We have now looked at several different implementations of the Queue ADT. How do they compare? As when we compared the stack implementations, we look at two different factors: the amount of memory required to store the structure and the amount of "work" required by the solution, as expressed in Big-O notation. Let's compare the two implementations that we have coded completely: the array-based implementation and the dynamically linked implementation.

An array variable of the maximum queue size takes the same amount of memory, no matter how many array slots are actually used; we need to reserve space for the maximum possible number of elements. The linked implementation using dynamically allocated storage space requires space only for the number of elements actually in the queue at run time. Note, however, that the node elements are larger, because we must store the link (the next member) as well as the user's data.

Let's see how these implementations would compare if the queue contains strings (each requiring, say, 80 bytes). If the maximum number of queue elements is 100 strings, maxQue must be 101 to account for the reserved space before front. On our example system, an array index (type int) takes 2 bytes and a pointer takes 4 bytes. The storage requirements of the array-based implementation are

(80 bytes * 101 array slots) + (2 bytes * 2 indexes) = 8,084 bytes

no matter how many elements are in the queue at any time. The linked queue implementation requires

80 bytes (the string) + 4 bytes (the "next" pointer) = 84 bytes

per queue node, plus 8 bytes for the two external queue pointers. The storage requirements of these queue implementations are graphed in Figure 5.20(a). Note that the linked implementation does not always take less space than the array; when the number of elements in the queue exceeds 96, the linked queue requires more memory, due to the need to store the pointers.
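
To see where the crossover occurs, compare the two totals directly:

84 bytes * 96 elements + 8 bytes = 8,072 bytes (still less than 8,084 bytes)
84 bytes * 97 elements + 8 bytes = 8,156 bytes (more than 8,084 bytes)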

If the queue item type were small, such as a character or an integer, the pointer member could be larger than the user's data member. In this case, the space used by the linked representation exceeds that of the array-based representation much more quickly. Consider a queue that may contain a maximum of 100 integer elements (2 bytes each). The storage requirements for the array-based queue are

(2 bytes (per element) * 101 array slots) + (2 bytes * 2 indexes) = 206 bytes

no matter how many elements are in the queue at any time. The linked queue implementation requires

2 bytes (the info member) + 4 bytes (the next member) = 6 bytes

per queue node, plus 8 bytes for the two external queue pointers. The storage requirements for this queue are graphed in Figure 5.20(b). When the number of elements in this queue exceeds 33, the linked queue requires more memory, due to the need to store pointers that are twice as big as the ItemType.

Figure 5.20: Comparison of storage requirements

We can also compare the relative "efficiency" of the implementations, in terms of Big-O notation. The class constructors, IsFull, and IsEmpty operations are clearly O(1); they always take the same amount of work regardless of how many items are on the queue. What about Enqueue and Dequeue? Does the number of elements in the queue affect the amount of work done by these operations? No, it does not; in both implementations, we can directly access the front and rear of the queue. The amount of work done by these operations is independent of the queue size, so these operations also have O(1) complexity.

Only the MakeEmpty operation differs from one implementation to the other. The static array-based implementation merely sets the front and rear indexes, so it is clearly an O(1) operation. The dynamic array-based implementation is only one statement, so it also is O(1). The linked implementation must process every node in the queue to free the node space. This operation, therefore, has O(N) complexity, where N is the number of nodes in the queue. The class destructor was not needed in the statically allocated array-based structure but was required in the dynamically allocated array-based structure. The class destructor in the array-based implementation in dynamic storage has only one statement, so it has O(1) complexity. The class destructor in the dynamically allocated linked structure contains a loop that executes as many times as there are items on the queue. Thus the dynamically linked version has O(N) complexity. As with the array-based and linked implementations of stacks, these two queue implementations are roughly equivalent in terms of the amount of work they do, differing only in one of the six operations and in the class destructor. Table 5.3 summarizes the Big-O comparison of the queue operations.

Table 5.3: Big-O Comparison of Queue Operations

                     Static Array        Dynamic Array       Linked
                     Implementation      Implementation      Implementation
Class constructor    O(1)                O(1)                O(1)
MakeEmpty            O(1)                O(1)                O(N)
IsFull               O(1)                O(1)                O(1)
IsEmpty              O(1)                O(1)                O(1)
Enqueue              O(1)                O(1)                O(1)
Dequeue              O(1)                O(1)                O(1)
Destructor           NA                  O(1)                O(N)

