

1.3 Verification of Software Correctness

At the beginning of this chapter, we discussed some characteristics of good programs. The first of these was that a good program works: it accomplishes its intended function. How do you know when your program meets that goal? The simple answer is, test it.

Let's look at testing as it relates to the rest of the software development process. As programmers, we first make sure that we understand the requirements. We then come up with a general solution. Next, we design the solution in terms of a computer program, using good design principles. Finally, we implement the solution, using good structured coding, with classes, functions, self-documenting code, and so on.

Testing The process of executing a program with data sets designed to discover errors

Debugging The process of removing known errors

Acceptance test The process of testing the system in its real environment with real data

Once we have the program coded, we compile it repeatedly until no syntax errors appear. Then we run the program, using carefully selected test data. If the program doesn't work, we say that it has a "bug" in it. We try to pinpoint the error and fix it, a process called debugging. Notice the distinction between testing and debugging. Testing is running the program with data sets designed to discover any errors; debugging is removing errors once they are discovered.

When the debugging is completed, the software is put into use. Before final delivery, software is sometimes installed on one or more customer sites so that it can be tested in a real environment with real data. After passing this acceptance test phase, the software can be installed at all customer sites. Is the verification process now finished? Hardly! More than half of the total life-cycle costs and effort generally occur after the program becomes operational, in the maintenance phase. Some changes correct errors in the original program; other changes add new capabilities to the software system. In either case, testing must occur after any program modification. This phase is called regression testing.

Testing is useful in revealing the presence of bugs in a program, but it doesn't prove their absence. We can only say for sure that the program worked correctly for the cases we tested. This approach seems somewhat haphazard. How do we know which tests or how many of them to run? Debugging a whole program at once isn't easy. Also, fixing the errors found during such testing can sometimes be a messy task. Too bad we couldn't have detected the errors earlier, while we were designing the program, for instance. They would have been much easier to fix then.

Regression testing Reexecution of program tests after modifications have been made to ensure that the program still works correctly

Program verification The process of determining the degree to which a software product fulfills its specifications

Program validation The process of determining the degree to which software fulfills its intended purpose

We know how program design can be improved by using a good design methodology. Can we use something similar to improve our program verification activities? Yes, we can. Program verification activities don't need to start when the program is completely coded; they can be incorporated into the entire software development process, from the requirements phase on. Program verification is more than just testing.

In addition to program verification, which involves fulfilling the requirement specifications, the software engineer has another important task: making sure the specified requirements actually solve the underlying problem. Countless times a programmer has finished a large project and delivered the verified software, only to be told, "Well, that's what I asked for, but it's not what I need."

The process of determining that software accomplishes its intended task is called program validation. Program verification asks, "Are we doing the job right?"; program validation asks, "Are we doing the right job?"[4]

Can we really "debug" a program before it has ever been run, or even before it has been written? In this section we review a number of topics related to satisfying the criterion "quality software works." The topics include the origin of bugs, designing for correctness, program testing, planning for debugging, and integration testing.

Origin of Bugs

When Sherlock Holmes goes off to solve a case, he doesn't start from scratch every time; he knows from experience all kinds of things that help him find solutions. Suppose Holmes finds a victim in a muddy field. He immediately looks for footprints in the mud, for he can tell from a footprint what kind of shoe made it. The first print he finds matches the shoes of the victim, so he keeps looking. Now he finds another print, and from his vast knowledge of footprints he can tell that it was made by a certain type of boot. He deduces that such a boot would be worn by a particular type of laborer, and from the size and depth of the print he guesses the suspect's height and weight. Now, knowing something about the habits of laborers in this town, he guesses that at 6:30 P.M. the suspect might be found in Clancy's Pub.


In software verification we are often expected to play detective. Given certain clues, we have to find the bugs in programs. If we know what kinds of situations produce program errors, we are more likely to be able to detect and correct problems. We may even be able to step in and prevent many errors entirely, just as Sherlock Holmes sometimes intervenes in time to prevent a crime from taking place.

Let's look at some types of software errors that show up at various points in program development and testing and see how they might be avoided.

Specifications and Design Errors What would happen if, shortly before you were supposed to turn in a major class assignment, you discovered that some details in the professor's program description were incorrect? To make matters worse, you also found out that the corrections were discussed at the beginning of class on the day you got there late, and somehow you never knew about the problem until your tests of the class data set came up with the wrong answers. What do you do now?

Writing a program to the wrong specifications is probably the worst kind of software error. How bad can it be? Let's look at a true story. Some time ago, a computer company contracted to replace a government agency's obsolete system with new hardware and software. A large and complicated program was written, based on specifications and algorithms provided by the customer. The new system was checked out at every point in its development to ensure that its functions matched the requirements in the specifications document. When the system was complete and the new software was executed, users discovered that the results of its calculations did not match those of the old system. A careful comparison of the two systems showed that the specifications of the new software were erroneous because they were based on algorithms taken from the old system's inaccurate documentation. The new program was "correct" in that it accomplished its specified functions, but the program was useless to the customer because it didn't accomplish its intended functions; it didn't work. The cost of correcting the errors was measured in millions of dollars.

How could correcting the error be so expensive? First, much of the conceptual and design effort, as well as the coding, was wasted. It took a great deal of time to pinpoint which parts of the specification were in error and then to correct this document before the program could be redesigned. Then much of the software development activity (design, coding, and testing) had to be repeated. This case is an extreme one, but it illustrates how critical specifications are to the software process. In general, programmers are more expert in software development techniques than in the "application" areas of their programs, such as banking, city planning, satellite control, or medical research. Thus correct program specifications are crucial to the success of program development.

Most studies indicate that it costs 100 times as much to correct an error discovered after software delivery as it does when the problem is discovered early in the software life cycle. Figure 1.4 shows how fast the costs rise in subsequent phases of software development. The vertical axis represents the relative cost of fixing an error; this cost might be measured in units of hours, hundreds of dollars, or "programmer months" (the amount of work one programmer can do in one month). The horizontal axis represents the stages in the development of a software product. As you can see, an error that would have taken one unit to fix when you first started designing might take 100 units to correct when the product is actually in operation!

Figure 1.4: This graph demonstrates the importance of early detection of software errors.

Good communication between the programmers (you) and the party who originated the problem (the professor, manager, or customer) can prevent many specification errors. In general, it pays to ask questions when you don't understand something in the program specifications. And the earlier you ask, the better.

A number of questions should come to mind as you first read a programming assignment. What error checking is necessary? What algorithm or data structure should be used in the solution? What assumptions are reasonable? If you obtain answers to these questions when you first begin working on an assignment, you can incorporate them into your design and implementation of the program. Later in the program's development, unexpected answers to these questions can cost you time and effort. In short, to write a program that is correct, you must understand precisely what your program is supposed to do.

Sometimes specifications change during the design or implementation of a program. In such cases, a good design helps you to pinpoint which sections of the program must be redone. For instance, if a program defines and uses type StringType to implement strings, changing the implementation of StringType does not require rewriting the entire program. We should be able to see from the design, whether functional or object-oriented, that the offending code is restricted to the module where StringType is defined. The parts of the program that require changes can usually be located more easily from the design than from the code itself.

Compile-Time Errors In the process of learning your first programming language, you probably made a number of syntax errors. These mistakes resulted in error messages (for example, "TYPE MISMATCH," "ILLEGAL ASSIGNMENT," "SEMICOLON EXPECTED," and so on) when you tried to compile the program. Now that you are more familiar with the programming language, you can save your debugging skills for tracking down really important logical errors. Try to get the syntax right the first time. Having your program compile cleanly on the first attempt is not an unreasonable goal. A syntax error wastes computing time and money, as well as programmer time, and it is preventable. Some programmers argue that looking for syntax errors is a waste of their time, that it is faster to let the compiler catch all the typos and syntax errors. Don't believe them! Sometimes a coding error turns out to be a legal statement, syntactically correct but semantically wrong. This situation may cause very obscure, hard-to-locate errors.

As you progress in your college career or move into a professional computing job, learning a new programming language is often the easiest part of a new software assignment. This does not mean, however, that the language is the least important part. In this book we discuss abstract data types and algorithms that we believe are language independent. That is, they can be implemented in almost any general-purpose programming language. In reality, the success of the implementation depends on a thorough understanding of the features of the programming language. What is considered acceptable programming practice in one language may be inadequate in another, and similar syntactic constructs may be just different enough to cause serious trouble.

For this reason, it is worthwhile to develop an expert knowledge of both the control and data structures and the syntax of the language in which you are programming. In general, if you have a good knowledge of your programming language-and are careful-you can avoid syntax errors. The ones you might miss are relatively easy to locate and correct. Most are flagged by the compiler with an error message. Once you have a "clean" compilation, you can execute your program.

Run-Time Errors Errors that occur during the execution of a program are usually more difficult to detect than syntax errors. Some run-time errors stop execution of the program. When this situation happens, we say that the program "crashed" or "terminated abnormally."

Run-time errors often occur when the programmer makes too many assumptions. For instance,

result = dividend / divisor;

is a legitimate assignment statement, if we can assume that divisor is never zero. If divisor is zero, however, a run-time error results.
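A defensive version checks the assumption explicitly instead of relying on it. The following is a minimal sketch; the variable values and the choice of error action are ours, not part of the text's example:

#include <iostream>

int main()
{
  using namespace std;
  int dividend = 8;
  int divisor = 0;    // try nonzero values as well
  int result;

  if (divisor != 0)   // verify the assumption before dividing
  {
    result = dividend / divisor;
    cout << "Result: " << result << endl;
  }
  else
    cout << "Error: divisor is zero." << endl;
  return 0;
}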

Sometimes run-time errors occur because the programmer does not fully understand the programming language. For example, in C++ the assignment operator is =, and the equality test operator is ==. Because they look so much alike, they often are miskeyed one for the other. You might think that this would be a syntax error that the compiler would catch, but it is actually a logic error. Technically, an assignment in C++ consists of an expression with two parts: The expression on the right of the assignment operator (=) is evaluated and the result is returned and stored in the place named on the left. The key word here is returned; the result of evaluating the right-hand side is the result of the expression. Therefore, if the assignment operator is miskeyed for the equality test operator, or vice versa, the code executes with surprising results.

Let's look at an example. Consider the following two statements:

count == count + 1;
if (count = 10)
.
.
.

The first statement returns false; count can never be equal to count + 1. The semicolon ends the statement, so nothing happens to the value returned; count has not changed. In the next statement, the expression (count = 10) is evaluated, and 10 is returned and stored in count. Because a nonzero value (10) is returned, the if expression always evaluates to true.
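Presumably the programmer intended the opposite operators in each statement:

count = count + 1;   // assignment: count is incremented
if (count == 10)     // equality test: true only when count is 10
.
.
.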

Run-time errors also occur because of unanticipated user errors. For instance, if newValue is declared to be of type int, the statement

cin >> newValue;

causes a stream failure if the user inputs a nonnumeric character. An invalid filename can cause a stream failure. In some languages, the system reports a run-time error and halts. In C++, the program doesn't halt; the program simply continues with erroneous data. Well-written programs should not stop unexpectedly (crash) or continue with bad data. They should catch such errors and stay in control until the user is ready to quit.

C++: Stream Input and Output

In C++, input and output are considered streams of characters. The keyboard input stream is cin; the screen output stream is cout. Important declarations relating to these streams are supplied in the library file <iostream>. If you plan to use the standard input and output streams, you must include this file in your program. You must also provide for access to the namespace with the using directive:

#include <iostream>
int main()
{
  using namespace std;

  int intValue;
  float realValue;

  cout  << "Enter an integer number followed by return."
        << end1;
  cin   >> intValue;
  cout  << "Enter a real number followed by return."
        << end1;
  cin   >> realValue;
  cout  << "You entered "  << intValue  << " and "
        << realValue << end1;
  return 0;
}

<< is called the insertion operator: The expressions on the right describe what is inserted into the output stream. >> is called the extraction operator: Values are extracted from the input stream and stored in the places named on the right. endl is a special language feature called a manipulator; it terminates the current output line.

If you are reading or writing to a file, you include <fstream>. You then have access to the data types ifstream (for input) and ofstream (for output). Declare variables of these types, use the open function to associate each with the external file name, and use the variable names in place of cin and cout, respectively.

#include <fstream>
int main()
{
  using namespace std;

  int intValue;
  float realValue;
  ifstream inData;
  ofstream outData;

  inData.open("input.dat");
  outData.open("output.dat");

  inData  >> intValue;
  inData  >> realValue;
  outData << "The input values are "
          << intValue  << " and "
          << realValue   << end1;
  return 0 ;
}

On input, whether from the keyboard or from a file, the >> operator skips leading whitespace characters (blank, tab, line feed, form feed, carriage return) before extracting the input value. To avoid skipping whitespace characters, you can use the get function. You invoke it by giving the name of the input stream, a dot, and then the function name and parameter list:

cin.get(inputChar);

The get function inputs the next character waiting in the input stream, even if it is a whitespace character.

Stream Failure

The key to reading data in correctly (from either the keyboard or a file) is to ensure that the order and the form in which the data are keyed are consistent with the order and type of the identifiers on the input statement. If an error occurs while accessing an I/O stream, the stream enters the fail state. For example, if you misspell the name of the file that is the parameter to the function open (In.dat instead of Data.In, for example), the file input stream enters the fail state. Alternatively, if you try to input a value when the stream is at the end of the file, the stream enters the fail state. Your program may continue to execute while the stream remains in the fail state, but all further references to the stream will be ignored.

C++ gives you a way to test the state of a stream: The stream name used in an expression returns a value that is converted to true if the state is good and to false if the stream is in the fail state. For example, the following code segment prints an error message and halts execution if the proper input file is not found:

#include <fstream>
#include <iostream>

int main()
{
  using namespace std;
  ifstream inData;

  inData.open("myData.dat");
  if (!inData)
  {
    cout  << "File myData.dat was not found."  << end1;
    return 1 ;
  }
  .
  .
  .
  return 0;
}

By convention, the main function returns an exit status of 0 if execution completed normally, whereas it returns a nonzero value (above, we used 1) otherwise.


Robustness The ability of a program to recover following an error; the ability of a program to continue to operate within its environment

The ability of a program to recover when an error occurs is called robustness. If a commercial program is not robust, people do not buy it. Who wants a word processor that crashes if the user says "SAVE" when there is no disk in the drive? We want the program to tell us, "Put your disk in the drive, and press Enter." For some types of software, robustness is a critical requirement. An airplane's automatic pilot system or an intensive care unit's patient-monitoring program cannot afford to just crash. In such situations, a defensive posture produces good results.

In general, you should actively check for error-creating conditions rather than let them abort your program. For instance, it is generally unwise to make too many assumptions about the correctness of input, especially "interactive" input from a keyboard. A better approach is to check explicitly for the correct type and bounds of such input. The programmer can then decide how to handle an error (request new input, print a message, or go on to the next data) rather than leave the decision to the system. Even the decision to quit should be made by a program that controls its own execution. If worse comes to worst, let your program die gracefully.
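For keyboard input, that defensive posture might look like the following sketch, which re-prompts until the extraction succeeds. The prompts and the choice of an integer input are our assumptions:

#include <iostream>
#include <limits>

int main()
{
  using namespace std;
  int newValue;

  cout << "Enter an integer: ";
  cin >> newValue;
  while (!cin)        // extraction failed: the stream is in the fail state
  {
    cin.clear();      // reset the stream's error state
    cin.ignore(numeric_limits<streamsize>::max(), '\n');  // discard the bad input
    cout << "Invalid entry. Enter an integer: ";
    cin >> newValue;
  }
  cout << "You entered " << newValue << endl;
  return 0;
}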

Of course, not everything that the program inputs must be checked for errors. Sometimes inputs are known to be correct-for instance, input from a file that has been verified. The decision to include error checking must be based upon the requirements of the program.

Some run-time errors do not stop execution but do produce the wrong results. You may have incorrectly implemented an algorithm or used a variable before it was assigned a value. You may have inadvertently swapped two parameters of the same type on a function call or forgotten to designate a function's output data as a reference parameter. (See the Parameter Passing sidebar, page 74.) These "logical" errors are often the hardest to prevent and locate. Later we will talk about debugging techniques to help pinpoint run-time errors. We will also discuss structured testing methods that isolate the part of the program being tested. But knowing that the earlier we find an error, the easier it is to fix, we turn now to ways of catching run-time errors before run time.

Designing for Correctness

It would be nice if there were some tool that would locate the errors in our design or code without our even having to run the program. That sounds unlikely, but consider an analogy from geometry. We wouldn't try to prove the Pythagorean Theorem by proving that it worked on every triangle; that result would merely demonstrate that the theorem works for every triangle we tried. We prove theorems in geometry mathematically. Why can't we do the same for computer programs?

The verification of program correctness, independent of data testing, is an important area of theoretical computer science research. Such research seeks to establish a method for proving programs that is analogous to the method for proving theorems in geometry. The necessary techniques exist, but the proofs are often more complicated than the programs themselves. Therefore a major focus of verification research is the attempt to build automated program provers-verifiable programs that verify other programs. In the meantime, the formal verification techniques can be carried out by hand.[5]

Assertions An assertion is a logical proposition that can be true or false. We can make assertions about the state of the program. For instance, with the assignment statement

Assertion A logical proposition that can be true or false

sum = part + 1;     // sum and part are integers.

we might assert the following: "The value of sum is greater than the value of part." That assertion might not be very useful or interesting by itself, but let's see what we can do with it. We can demonstrate that the assertion is true by making a logical argument: No matter what value part has (negative, zero, or positive), when it is increased by 1, the result is a larger value. Now note what we didn't do. We didn't have to run a program containing this assignment statement to verify that the assertion was correct.

The general concept behind formal program verification is that we can make assertions about what the program is intended to do, based on its specifications, and then prove through a logical argument (rather than through execution of the program) that a design or implementation satisfies the assertions. Thus the process can be broken down into two steps:

  1. Correctly assert the intended function of the part of the program to be verified.

  2. Prove that the actual design or implementation does what is asserted.

The first step, making assertions, sounds as if it might be useful to us in the process of designing correct programs. After all, we already know that we cannot write correct programs unless we know what they are supposed to do.

Preconditions and Postconditions Let's take the idea of making assertions down a level in the design process. Suppose we want to design a module (a logical chunk of the program) to perform a specific operation. To ensure that this module fits into the program as a whole, we must clarify what happens at its boundaries: what must be true when we enter the module and what must be true when we exit.

To make the task more concrete, picture the design module as it is eventually coded, as a function that is called within a program. To call the function, we must know its exact interface: the name and the parameter list, which indicates its inputs and outputs. But this information isn't enough: We must also know any assumptions that must be true for the operation to function correctly. We call the assertions that must be true on entry into the function preconditions. The preconditions act like a product disclaimer: the operation is guaranteed to work only if they are satisfied.

Preconditions Assertions that must be true on entry into an operation or function for the postconditions to be guaranteed


For instance, when we said that following the execution of

sum = part + 1;

we can assert that sum is greater than part, we made an assumption (a precondition) that part is not INT_MAX. If this precondition were violated, our assertion would not be true.
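C++ can also check an assertion like this one at run time. The assert macro from <cassert> goes beyond the text's pencil-and-paper argument, but it turns a precondition into executable code. A minimal sketch, with values of our choosing:

#include <cassert>
#include <climits>
#include <iostream>

int main()
{
  int part = 41;
  int sum;

  assert(part != INT_MAX);   // precondition: part + 1 must not overflow
  sum = part + 1;
  // The postcondition holds here: sum > part.
  std::cout << sum << std::endl;
  return 0;
}

If the precondition is violated, assert halts the program with a diagnostic message rather than letting execution continue with a false assertion.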

Postconditions Assertions that state what results are expected at the exit of an operation or function, assuming that the preconditions are true

We must also know what conditions are true when the operation is complete. The postconditions are assertions that describe the results of the operation. The postconditions do not tell us how these results are accomplished; rather, they merely tell us what the results should be.

Let's consider the preconditions and postconditions for a simple operation, one that deletes the last element from a list and returns its value as an output. (We are using "list" in an intuitive sense here; we formally define it in Chapter 3.) The precondition for this operation is that the list is not empty; the postconditions are that the value of the last element in the list is returned and that this element is deleted from the list.

What do these preconditions and postconditions have to do with program verification? By making explicit assertions about what is expected at the interfaces between modules, we can avoid making logical errors based on misunderstandings. For instance, from the precondition we know that we must check outside of this operation for the empty condition; this module assumes that at least one element is present in the list. The postcondition tells us that when the value of the last list element is retrieved, that element is deleted from the list. This fact is an important one for the list user to know. If we just want to take a peek at the last value without affecting the list, we cannot use this operation.
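Coded with its assertions as comments, the operation might look like the sketch below. The name DeleteLast is hypothetical, and using a vector to stand in for the list is our assumption; the text's list type is not defined until Chapter 3:

#include <cassert>
#include <iostream>
#include <vector>

// Hypothetical operation illustrating preconditions and postconditions.
// Precondition:   list is not empty.
// Postconditions: the value of the last element is returned;
//                 that element has been deleted from the list.
int DeleteLast(std::vector<int>& list)
{
  assert(!list.empty());          // the caller must rule out the empty case
  int lastValue = list.back();
  list.pop_back();
  return lastValue;
}

int main()
{
  std::vector<int> list = {3, 5, 8};
  std::cout << DeleteLast(list) << std::endl;   // prints 8; list now holds 3 and 5
  return 0;
}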

Experienced software developers know that misunderstandings about interfaces to someone else's modules are one of the main sources of program problems. We use preconditions and postconditions at the module or function level in this book, because the information they provide helps us to design programs in a truly modular fashion. We can then use the modules we've designed in our programs, confident that we are not introducing errors by making mistakes about assumptions and about what the modules actually do.

Design Review Activities When an individual programmer is designing and implementing a program, he or she can find many software errors with pencil and paper. Deskchecking the design solution is a very common method of manually verifying a program. The programmer writes down essential data (variables, input values, parameters of subprograms, and so on) and walks through the design, marking changes in the data on the paper. Known trouble spots in the design or code should be double-checked. A checklist of typical errors (such as loops that do not terminate, variables that are used before they are initialized, and incorrect order of parameters on function calls) can be used to make the deskcheck more effective. A sample checklist for deskchecking a C++ program appears in Figure 1.5.


The Design

  1. Does each module in the design have a clear function or purpose?

  2. Can large modules be broken down into smaller pieces? (A common rule of thumb is that a C++ function should fit on one page.)

  3. Are all the assumptions valid? Are they well documented?

  4. Are the preconditions and postconditions accurate assertions about what should be happening in the module they specify?

  5. Is the design correct and complete as measured against the program specification? Are there any missing cases? Is there faulty logic?

  6. Is the program designed well for understandability and maintainability?

The Code

  1. Has the design been clearly and correctly implemented in the programming language? Are features of the programming language used appropriately?

  2. Are all output parameters of functions assigned values?

  3. Are parameters that return values marked as reference parameters (have & to the right of the type if the parameter is not an array)?

  4. Are functions coded to be consistent with the interfaces shown in the design?

  5. Are the actual parameters on function calls consistent with the parameters declared in the function prototype and definition?

  6. Is each data object to be initialized set correctly at the proper time? Is each data object set before its value is used?

  7. Do all loops terminate?

  8. Is the design free of "magic" numbers? (A "magic" number is one whose meaning is not immediately evident to the reader.)

  9. Does each constant, type, variable, and function have a meaningful name? Are comments included with the declarations to clarify the use of the data objects?


Figure 1.5: Checklist for deskchecking a C++ program

Deskchecking Tracing an execution of a design or program on paper

Walk-through A verification method in which a team performs a manual simulation of the program or design

Inspection A verification method in which one member of a team reads the program or design line by line and the other members point out errors

Have you ever been really stuck trying to debug a program and showed it to a classmate or colleague who detected the bug right away? It is generally acknowledged that someone else can detect errors in a program better than the original author can. In an extension of deskchecking, two programmers can trade code listings and check each other's programs. Universities, however, frequently discourage students from examining each other's programs for fear that this exchange will lead to cheating. Thus many students become experienced in writing programs but don't have much opportunity to practice reading them.

Teams of programmers develop most sizable computer programs. Two extensions of deskchecking that are effectively used by programming teams are design or code walk-throughs and inspections. The intention of these formal team activities is to move the responsibility for uncovering bugs from the individual programmer to the group. Because testing is time consuming and errors cost more the later they are discovered, the goal is to identify errors before testing begins.

In a walk-through, the team performs a manual simulation of the design or program with sample test inputs, keeping track of the program's data by hand on paper or on a blackboard. Unlike thorough program testing, the walk-through is not intended to simulate all possible test cases. Instead, its purpose is to stimulate discussion about the way the programmer chose to design or implement the program's requirements.

At an inspection, a reader (not the program's author) goes through the design or code line by line. Inspection participants point out errors, which are recorded on an inspection report. Some errors are uncovered just by the process of reading aloud. Others may have been noted by team members during their preinspection preparation. As with the walk-through, the chief benefit of the team meeting is the discussion that takes place among team members. This interaction among programmers, testers, and other team members can uncover many program errors long before the testing stage begins.

At the high-level design stage, the design should be compared to the program requirements to make sure that all required functions have been included and that this program or module correctly "interfaces" with other software in the system. At the low-level design stage, when the design has been filled out with more details, it should be reinspected before it is implemented. When the coding has been completed, the compiled listings should be inspected again. This inspection (or walk-through) ensures that the implementation is consistent with both the requirements and the design. Successful completion of this inspection means that testing of the program can begin.

For the last 20 years, the Software Engineering Institute at Carnegie Mellon University has played a major role in supporting research into formalizing the inspection process in large software projects, including sponsoring workshops and conferences. A paper presented at the SEI Software Engineering Process Group (SEPG) Conference reported on a project that was able to reduce the number of product defects by 86.6% by using a two-tiered inspection process of group walk-throughs and formal inspections. The process was applied to packets of requirements, design, or code at every stage of the life cycle. Table 1.2 shows the defects per 1,000 source lines of code (KSLOC) that were found in the various phases of the software life cycle in a maintenance project. This project added 40,000 lines of source code to a software program of half a million lines of code. The formal inspection process was used in all of the phases except testing activities.

Table 1.2: Defects Found in Different Phases[*]

Stage                     Defects per KSLOC
System Design                     2
Software Requirements             8
Design                           12
Code Inspection                  34
Testing Activities                3

[*]Dennis Beeson, Manager, Naval Air Warfare Center, Weapons Division, F-18 Software Development Team.

Looking back at Figure 1.4, you can see that the cost of fixing an error is relatively cheap until you reach the coding phase. After that stage, the cost of fixing an error increases dramatically. Using the formal inspection process clearly benefited this project.

These design-review activities should be carried out in as nonthreatening a manner as possible. The goal is not to criticize the design or the designer, but rather to remove defects in the product. Sometimes it is difficult to eliminate the natural human emotion of pride from this process, but the best teams adopt a policy of egoless programming.

Exceptions At the design stage, you should plan how to handle exceptions in your program. Exceptions are just what the name implies: exceptional situations. When these situations occur, the flow of control of the program must be altered, usually resulting in a premature end to program execution. Working with exceptions begins at the design phase: What are the unusual situations that the program should recognize? Where in the program can the situations be detected? How should the situations be handled if they arise?

Exception An unusual, generally unpredictable event, detectable by software or hardware, that requires special processing; the event may or may not be erroneous

Where an exception is detected (indeed, whether it is detected at all) depends on the language, the software package design, the design of the libraries being used, and the platform (that is, the operating system and hardware). Where an exception should be detected depends on the type of exception, the software package design, and the platform. Where an exception is detected should be well documented in the relevant code segments.

An exception may be handled in any place in the software hierarchy-from the place in the program module where the exception is first detected through the top level of the program. In C++, as in most programming languages, unhandled built-in exceptions carry the penalty of program termination. Where in an application an exception should be handled is a design decision; however, exceptions should be handled at a level that knows what they mean.

An exception need not be fatal. In nonfatal exceptions, the thread of execution may continue. Although the thread of execution may be picked up at any point in the program, the execution should continue from the lowest level that can recover from the exception. When an error occurs, the program may fail unexpectedly. Some of the failure conditions may possibly be anticipated; some may not. All such errors must be detected and managed.

Exceptions can be managed in any language. Some languages (such as C++ and Java) provide built-in mechanisms for doing so. All exception mechanisms have three parts:

  • Defining the exception

  • Generating (raising) the exception

  • Handling the exception

C++ gives you a clean way of implementing these three phases: the try-catch and throw statements. We cover these statements at the end of Chapter 2 after we have introduced some additional C++ constructs.
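As a preview of those statements, here is a minimal sketch of the three phases; the choice of the standard runtime_error type and the message text are ours:

#include <iostream>
#include <stdexcept>

int main()
{
  using namespace std;
  int dividend = 8;
  int divisor = 0;
  try
  {
    if (divisor == 0)                            // detect the exceptional situation
      throw runtime_error("divisor is zero");    // generate (raise) the exception
    cout << dividend / divisor << endl;
  }
  catch (const runtime_error& except)            // handle the exception
  {
    cout << "Caught: " << except.what() << endl;
  }
  return 0;
}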

Program Testing

Eventually, after all the design verification, deskchecking, and inspections have been completed, it is time to execute the code. At last, we are ready to start testing with the intention of finding any errors that may still remain.

The testing process is made up of a set of test cases that, taken together, allow us to assert that a program works correctly. We say "assert" rather than "prove" because testing does not generally provide a proof of program correctness.

The goal of each test case is to verify a particular program feature. For instance, we may design several test cases to demonstrate that the program correctly handles various classes of input errors. Alternatively, we may design cases to check the processing when a data structure (such as an array) is empty, or when it contains the maximum number of elements.

Within each test case, we perform a series of component tasks:

  • We determine inputs that demonstrate the goal of the test case.

  • We determine the expected behavior of the program for the given input. (This task is often the most difficult one. For a math function, we might use a chart of values or a calculator to figure out the expected result. For a function with complex processing, we might use a deskcheck type of simulation or an alternative solution to the same problem.)

  • We run the program and observe the resulting behavior.

  • We compare the expected behavior and the actual behavior of the program. If they match, the test case is successful. If not, an error exists. In the latter case, we begin debugging.

For now we are talking about test cases at a module, or function, level. It's much easier to test and debug modules of a program one at a time, rather than trying to get the whole program solution to work all at once. Testing at this level is called unit testing.

Unit testing Testing a module or function by itself

Functional domain The set of valid input data for a program or function

How do we know what kinds of unit test cases are appropriate, and how many are needed? Determining the set of test cases that is sufficient to validate a unit of a program is in itself a difficult task. Two approaches to specifying test cases exist: cases based on testing possible data inputs and cases based on testing aspects of the code itself.

Data Coverage In those limited cases where the set of valid inputs, or the functional domain, is extremely small, we can verify a subprogram by testing it against every possible input element. This approach, known as "exhaustive" testing, can prove conclusively that the software meets its specifications. For instance, the functional domain of the following function consists of the values true and false:

void PrintBoolean(bool error)
// Prints the Boolean value on the screen.
{
  if (error)
    cout  << "true";
  else
    cout  << "false";
  cout  << endl;
}

It makes sense to apply exhaustive testing to this function, because there are only two possible input values. In most cases, however, the functional domain is very large, so exhaustive testing is almost always impractical or impossible. What is the functional domain of the following function?

void PrintInteger(int intValue)
// Prints the integer value intValue on the screen.
{
  cout  << intValue;
}

It is not practical to test this function by running it with every possible data input; the number of elements in the set of int values is clearly too large. In such cases we do not attempt exhaustive testing. Instead, we pick some other measurement as a testing goal.

You can attempt program testing in a haphazard way, entering data randomly until you cause the program to fail. Guessing doesn't hurt (except possibly by wasting time), but it may not help much either. This approach is likely to uncover some bugs in a program, but it is very unlikely to find all of them. Fortunately, strategies for detecting errors in a systematic way have been developed.

One goal-oriented approach is to cover general classes of data. You should test at least one example of each category of inputs, as well as boundaries and other special cases. For instance, in the function PrintInteger there are three basic classes of int data: negative values, zero, and positive values. You should plan three test cases, one for each class. You could try more than three, of course. For example, you might want to try INT_MAX and INT_MIN; because the program simply prints the value of its input, however, the additional test cases don't accomplish much.
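A driver for those test cases might look like the following sketch. We add endl so that each value appears on its own line, which the original function does not do:

#include <climits>
#include <iostream>
using namespace std;

void PrintInteger(int intValue)
// Prints the integer value intValue on the screen.
{
  cout << intValue << endl;
}

int main()
{
  PrintInteger(-5);        // negative class
  PrintInteger(0);         // zero
  PrintInteger(5);         // positive class
  PrintInteger(INT_MIN);   // boundary cases; they add little for this function
  PrintInteger(INT_MAX);
  return 0;
}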

Other data coverage approaches exist as well. For example, if the input consists of commands, you must test each command. If the input is a fixed-size array containing a variable number of values, you should test the maximum number of values, that is, the boundary condition. It is also a good idea to try an array in which no values have been stored or one that contains a single element. Testing based on data coverage is called black-box testing. The tester must know the external interface to the module (its inputs and expected outputs) but does not need to consider what is happening inside the module (the inside of the black box). (See Figure 1.6.)

Figure 1.6: Testing approaches

Black-box testing Testing a program or function based on the possible input values, treating the code as a "black box"

Clear- (white-) box testing Testing a program or function based on covering all the statements, branches, or paths of the code

Statement coverage Every statement in the program is executed at least once

Code Coverage A number of testing strategies are based on the concept of code coverage, the execution of statements or groups of statements in the program. This testing approach is called clear- (or white-) box testing. The tester must look inside the module (through the clear box) to see the code that is being tested.

One approach, called statement coverage, requires that every statement in the program be executed at least once. Another approach requires that the test cases cause every branch, or code section, in the program to be executed. A single test case can achieve statement coverage of an if-then statement, but it takes two test cases to test both branches of the statement.
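The following fragment illustrates the distinction; the function is our own example, not from the text:

#include <iostream>

int Magnitude(int x)
{
  if (x < 0)
    x = -x;       // executed only when the branch is taken
  return x;
}

int main()
{
  std::cout << Magnitude(-1) << std::endl;  // this one call executes every statement
  std::cout << Magnitude(3) << std::endl;   // a second call is needed to cover the false branch
  return 0;
}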

A similar type of code-coverage goal is to test program paths. A path is a combination of branches that might be traveled when the program is executed. In path testing, we try to execute all possible program paths in different test cases.

Branch A code segment that is not always executed; for example, a switch statement has as many branches as there are case labels

Path A combination of branches that might be traversed when a program or function is executed

Path testing A testing technique whereby the tester tries to execute all possible paths in a program or function

The code-coverage approaches are analogous to the ways forest rangers might check out the trails through the woods before the hiking season opens. If the rangers wanted to make sure that all trails were clearly marked and not blocked by fallen trees, they would check each branch of the trails (see Figure 1.7a). Alternatively, if they wanted to classify each of the various trails (which may be interwoven) according to its length and difficulty from start to finish, they would use path testing (see Figure 1.7b).

Figure 1.7a: Checking out all the branches
Figure 1.7b: Checking out all the trails

To create test cases based on code-coverage goals, we select inputs that drive the execution into the various program paths. How can we tell whether a branch or a path is executed? One way to trace execution is to put debugging output statements at the beginning of every branch, indicating that this particular branch was entered. Software projects often use tools that help programmers track program execution automatically.

These strategies lend themselves to measurements of the testing process. We can count the number of paths in a program, for example, and keep track of how many paths have been covered in our test cases. The numbers provide statistics about the current status of testing; for instance, we could say that 75% of the branches of a program have been executed or that 50% of the paths have been tested. When a single programmer is writing a single program, such numbers may be superfluous. In a software development environment with many programmers, however, such statistics are very useful for tracking the progress of testing.

These measurements can also indicate when a certain level of testing has been completed. Achieving 100% path coverage is often not a feasible goal. A software project might have a lower standard (say, 80% branch coverage) that the programmer who writes the module is required to reach before turning the module over to the project's testing team. Testing in which goals are based on certain measurable factors is called metric-based testing.

Metric-based testing Testing based on measurable factors

Test plan A document showing the test cases planned for a program or module, their purposes, inputs, expected outputs, and criteria for success

Implementing a test plan Running the program with the test cases listed in the test plan

Test Plans Deciding on the goal of the test approach (data coverage, code coverage, or, most often, a mixture of the two) precedes the development of a test plan. Some test plans are very informal: the goal and a list of test cases, written by hand on a piece of paper. Even this type of test plan may be more than you have ever been required to write for a class programming project. Other test plans (particularly those submitted to management or to a customer for approval) are very formal, containing the details of each test case in a standardized format.

Implementing a test plan involves running the program with the input values listed in the plan and observing the results. If the answers are incorrect, the program is debugged and rerun until the observed output always matches the expected output. The process is complete when all test cases listed in the plan give the desired output.

Let's develop a test plan for a function called Divide. Its specification states that error is set to indicate whether divisor is zero and that, if there is no error, result is set to dividend / divisor.

Should we use code coverage or data coverage for this test plan? Because the code is so short and straightforward, let's begin with code coverage. A code-coverage test plan is based on an examination of the code itself. Here is the code to be tested:

void Divide(int dividend, int divisor, bool& error, float& result)
// Set error to indicate if divisor is zero.
// If no error, set result to dividend / divisor.
{
  if (divisor = 0)
    error = true;
  else
    result = float(dividend) / float(divisor);
}

The code consists of one if statement with two branches; therefore, we can do complete path testing. There is a case where divisor is zero and the true branch is taken and a case where divisor is nonzero and the else branch is taken.

Reason for Test Case            Input Values        Expected Output

divisor is zero                 divisor is 0        error is true
(dividend can be anything)      dividend is 8       result is undefined

divisor is nonzero              divisor is 2        error is false
(dividend can be anything)      dividend is 8       result is 4.0
To implement this test plan, we run the program with the listed input values and compare the results with the expected output. The function is called from a test driver, a program that sets up the parameter values and calls the functions to be tested. A simple test driver is listed below. It is designed to execute both test cases: It assigns the parameter values for Test 1, calls Divide, and prints the results; then it repeats the process with new test inputs for Test 2. We run the test and compare the values output from the test driver with the expected values.

Test driver A program that sets up the testing environment by declaring and assigning initial values to variables, then calls the subprogram to be tested

#include <iostream>

void Divide(int, int, bool&, float&);
// Function to be tested.

void Print(int, int, bool, float);
// Prints results of test case.

int main()
{
  using namespace std;

  bool error;
  float result;
  int dividend = 8;                                   // Test 1
  int divisor = 0;

  Divide(dividend, divisor, error, result);
  cout  << "Test 1: "  << endl;
  Print(dividend, divisor, error, result);
  divisor = 2;                                        // Test 2
  Divide(dividend, divisor, error, result);
  cout  << "Test 2: " << endl;
  Print(dividend, divisor, error, result);
  return 0;
}
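The driver declares Print but its definition is not shown in the text. A version consistent with the calls above might look like this; the exact output format is our assumption:

void Print(int dividend, int divisor, bool error, float result)
// Prints the results of one test case.
{
  using namespace std;
  cout << "  Dividend: " << dividend << ", Divisor: " << divisor << endl;
  if (error)
    cout << "  Error is true";
  else
    cout << "  Error is false";
  cout << ", Result: " << result << endl;
}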

For Test 1, the expected value for error is true, and the expected value for result is undefined, but the division is carried out anyway! How can that be when divisor is zero? If the result of an if statement is not what you expect, the first thing to check is the relational operator: Did we use a single = rather than ==? Yes, we did. After fixing this mistake, we run the program again.

For Test 2, the expected value for error is false, yet the value printed is true! Our testing has uncovered another error, so we begin debugging. We discover that the value of error, set to true in Test 1, was never reset to false in Test 2. We leave development of the final correct version of this function as an exercise.

Now let's design a data-coverage test plan for the same function. In a data-coverage plan, we know nothing about the internal working of the function; we know only the interface that is represented in the documentation of the function heading.

void Divide(int dividend, int divisor, bool& error, float& result)
// Set error to indicate if divisor is zero.
// If no error, set result to dividend / divisor.

There are two input parameters, both of type int. A complete data-coverage plan would require that we call the function with all possible values of type int for each parameter, which is clearly overkill. The interface tells us that one thing happens if divisor is zero and another thing happens if divisor is nonzero. Clearly, we must have at least two test cases: one where divisor is zero and one where divisor is nonzero. When divisor is zero, error is set to true and nothing else happens, so one test case should verify this result. When divisor is nonzero, a division takes place. How many test cases does it take to verify that the division is correct? What are the end cases? There are five possibilities:

  • divisor and dividend are both positive

  • divisor and dividend are both negative

  • divisor is positive and dividend is negative

  • divisor is negative and dividend is positive

  • dividend is zero

The complete test plan is shown below.

Reason for Test Case            Input Values        Expected Output

divisor is zero                 divisor is 0        error is true
(dividend can be anything)      dividend is 8       result is undefined

divisor is nonzero              divisor is 2        error is false
(dividend can be anything),     dividend is 8       result is 4.0
combined with
divisor is positive,
dividend is positive

divisor is nonzero,             divisor is -2       error is false
divisor is negative,            dividend is -8      result is 4.0
dividend is negative

divisor is nonzero,             divisor is 2        error is false
divisor is positive,            dividend is -8      result is -4.0
dividend is negative

divisor is nonzero,             divisor is -2       error is false
divisor is negative,            dividend is 8       result is -4.0
dividend is positive

dividend is zero                divisor is 2        error is false
(divisor can be anything)       dividend is 0       result is 0.0
In this case the data-coverage test plan is more complex than the code-coverage plan: There are seven cases (two of which are combined) rather than just two. One case covers a zero divisor, and the other six cases check whether the division is working correctly with a nonzero divisor and alternating signs. If we knew that the function uses the built-in division operator, we would not need to check these cases, but we don't. With a data-coverage plan, we cannot see the body of the function.

For program testing to be effective, it must be planned. You must design your testing in an organized way, and you must put your design in writing. You should determine the required or desired level of testing, and plan your general strategy and test cases before testing begins. In fact, you should start planning for testing before writing a single line of code.

Planning for Debugging In the previous section we discussed checking the output from our test and debugging when errors were detected. We can debug "on the fly" by adding output statements in suspected trouble spots when problems arise. But in an effort to predict and prevent problems as early as possible, can we also plan our debugging before we ever run the program?

By now you should know that the answer will be yes. When you write your design, you should identify potential trouble spots. You can then insert temporary debugging output statements into your code in places where errors are likely to occur. For example, to trace the program's execution through a complicated sequence of function calls, you might add output statements that indicate when you are entering and leaving each function. The debugging output is even more useful if it also indicates the values of key variables, especially parameters of the function. The following example shows a series of debugging statements that execute at the beginning and end of the function Divide:

void Divide(int dividend, int divisor, bool& error, float& result)
// Set error to indicate if divisor is zero.
// If no error, set result to dividend / divisor.
{
  using namespace std;
  // For debugging
  cout  << "Function Divide entered."  << end1;
  cout  << "Dividend = "  << dividend << end1;
  cout  << "Divisor = "  << divisor << end1;
  //*************************
  // Rest of code goes here.
  //*************************
  // For debugging
  if (error)
    cout  << "Error = true ";
  else
    cout  << "Error = false ";
  cout  << "and Result = " << result  << end1;
  cout  << "Function Divide terminated."  << end1;
}

If hand testing doesn't reveal all the bugs before you run the program, well-placed debugging lines can at least help you locate the rest of the bugs during execution. Note that this output is intended only for debugging; these output lines are meant to be seen only by the tester, not by the user of the program. Of course, it's annoying for debugging output to show up mixed with your application's real output, and it's difficult to debug when the debugging output isn't collected in one place. One way to separate the debugging output from the "real" program output is to declare a separate file to receive these debugging lines, as shown in the following example:

#include <fstream>

std::ofstream debugFile;
debugFile.open("debug.out");   // the debug file name is the programmer's choice

debugFile  << "This is the debug output from Test 1."  << std::endl;

Usually the debugging output statements are removed from the program, or "commented out," before the program is delivered to the customer or turned in to the professor. (To "comment out" means to turn the statements into comments by preceding them with // or enclosing them between /* and */.) An advantage of turning the debugging statements into comments is that you can easily and selectively turn them back on for later tests. A disadvantage of this technique is that editing is required throughout the program to change from the testing mode (with debugging) to the operational mode (without debugging).

Another popular technique is to make the debugging output statements dependent on a Boolean flag, which can be turned on or off as desired. For instance, a section of code known to be error-prone may be flagged in various spots for trace output by using the Boolean value debugFlag:

// Set debugFlag to control debugging mode.
const bool debugFlag = true;
.
.
.
if (debugFlag)
  debugFile << "Function Divide entered." << endl;

This flag may be turned on or off by assignment, depending on the programmer's needs. Changing to an operational mode (without debugging output) involves merely redefining debugFlag as false and then recompiling the program. If a flag is used, the debugging statements can be left in the program; only the if checks are executed in an operational run of the program. The disadvantage of this technique is that the code for the debugging is always there, making the compiled program larger. If a lot of debugging statements are present, they may waste needed space in a large program. The debugging statements can also clutter up the program, making it more difficult to read. (This situation illustrates another tradeoff we face in developing software.)
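A related technique, which the text's examples do not use but which addresses the space cost directly, is conditional compilation: if the debugging statements are guarded by the preprocessor rather than by a run-time flag, they disappear from the compiled program entirely. A minimal sketch, parallel to the debugFlag fragment above:

#define DEBUGGING    // Remove or comment out this line for an
                     // operational (no-debugging) build.

#ifdef DEBUGGING
  debugFile << "Function Divide entered." << endl;
#endif

The tradeoff reverses: the operational program carries no debugging code at all, but switching modes requires recompiling every file that uses the guard.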

Some systems have online debugging programs that provide trace outputs, making the debugging process much simpler. If the system at your school or workplace has a run-time debugger, use it! Any tool that makes the task easier should be welcome, but remember that no tool replaces thinking.

A warning about debugging: Beware the quick fix! Program bugs often travel in swarms, so when you find a bug, don't be too quick to fix it and run your program again. As often as not, fixing one bug generates another. A superficial guess about the cause of a program error usually does not produce a complete solution. In general, time devoted to considering all the ramifications of the changes you are making is time well spent.

If you find yourself constantly debugging, there are flaws in your design process. Time devoted to considering the ramifications of your design before you code it is time spent best of all.

Integration Testing In the last two sections we discussed unit testing and planned debugging. In this section we explore many concepts and tools that can help you put your test cases for individual units together for structured testing of your whole program. The goal of this type of testing is to integrate the separately tested pieces, so it is called integration testing.

Integration testing Testing performed to integrate program modules that have already been independently unit tested

You can test a large, complicated program in a structured way by using a method very similar to the top-down approach to program design. The central idea is one of divide and conquer: test pieces of the program independently and then use the parts that have been verified as the basis for the next test. The testing can use either a top-down or a bottom-up approach, or a combination of the two.

With a top-down approach, we begin testing at the top levels. The purpose of the test is to ensure that the overall logical design works and that the interfaces between modules are correct. At each level of testing, the top-down approach is based on the assumption that the lower levels work correctly. We implement this assumption by replacing the lower-level subprograms with "placeholder" modules called stubs. A stub may consist of a single trace output statement, indicating that we have reached the function, or a group of debug output statements, showing the current values of the parameters. It may also assign values to output parameters if values are needed by the calling function (the one being tested).

Stub A special function that can be used in top-down testing to stand in for a lower-level function
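For example, a stub standing in for a hypothetical lower-level function GetEntry (the name is ours, for illustration) might contain only trace output and an assignment to its output parameter:

void GetEntry(int& value)
// Stub: stands in for the real GetEntry during top-down testing.
{
  using namespace std;
  cout << "GetEntry entered." << endl;
  value = 1;   // Supply a fixed value so the calling function can proceed.
  cout << "GetEntry exiting; value = " << value << endl;
}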

An alternative testing approach is to test from the bottom up. With this approach, we unit test the lowest-level subprograms first. A bottom-up approach can be useful in testing and debugging a critical module, one in which an error would have significant effects on other modules. "Utility" subprograms, such as mathematical functions, can also be tested with test drivers, independently of the programs that eventually call them. In addition, a bottom-up integration testing approach can prove effective in a group-programming environment, where each programmer writes and tests separate modules. The smaller, tested pieces of the program are later verified together in tests of the whole program.
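For example, a minimal test driver for a hypothetical utility function Cube (our example, not the text's) needs only to call the function with selected data and report the results for visual inspection:

#include <iostream>

int Cube(int n)
// Hypothetical utility function to be unit tested.
{
  return n * n * n;
}

int main()
// Minimal test driver: exercise Cube on a few cases and
// print the results for visual inspection.
{
  using namespace std;
  int testValues[3] = {-2, 0, 3};
  for (int i = 0; i < 3; i++)
    cout << "Cube(" << testValues[i] << ") = "
         << Cube(testValues[i]) << endl;
  return 0;
}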

Testing C++ Data Structures

The major topic of this textbook is data structures: what they are, how we use them, and how we implement them using C++. This chapter has provided an overview of software engineering; in Chapter 2 we begin to focus on data and ways to structure it. It seems appropriate to end this section about verification with a look at how we test the data structures we implement in C++.

Throughout this book we implement data structures using C++ classes, so that many different application programs can use the resulting structures. When we first create a class that models a data structure, we do not necessarily have any application programs ready to use it. We need to test the class by itself first, before creating the applications. For this reason, we use a bottom-up testing approach utilizing test drivers.

Every data structure that we implement supports a set of operations. For each structure, we would like to create a test driver that allows us to test the operations in a variety of sequences. How can we write a single test driver that allows us to test numerous operation sequences? The solution is to separate the specific set of operations that we want to test from the test driver program itself. We list the operations, and the necessary parameters, in a text file. The test driver program reads the operations from the text file one line at a time, performs the specified operation by invoking the member function of the data structure being tested, and reports the results to an output file. The test program also reports its general results on the screen.
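For instance, an input file for testing a hypothetical list class might contain lines such as these (the command names are ours, for illustration); each line names a member function, followed by any parameters it needs:

PutItem 5
PutItem 10
GetLength
DeleteItem 5
GetLength
Quit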

The testing approach described here allows us to change our test cases easily: we just change the contents of the input file. Testing would be even easier if we could change the name of the input file dynamically whenever we run the program; we could then run another test case, or rerun a previous test case, whenever we needed to. Therefore, we construct our test driver to read the name of the input file from the console; we do the same for the output file. Figure 1.8 shows a model of our test architecture.

Figure 1.8: Model of test architecture

Our test drivers all follow the same basic algorithm. First, we prompt for and read the file names and prepare the files for input and output. Next, the name of the function to be executed is read from the input file. Because the name of the function drives the flow of control, let's call it command. As long as command is not "quit," we execute the function with that name, print the results, and read the next function name. We then close the files and quit. Did we forget anything? The output file should have some sort of a label. Let's prompt the user to enter a label for the output file. We should also let the user know what is going on by keeping track of the number of commands and printing a closing message. Here, then, is the algorithm for our test driver program:

  Prompt for, read, and open the input and output files
  Prompt for and read the label for the output file
  Write the label on the output file
  Read command from the input file
  Set numCommands to 0
  While command is not "quit"
    Execute the member function with the same name as command
    Print the results to the output file
    Increment numCommands
    Print "Command number" numCommands "completed" to the screen
    Read command from the input file
  Close the input and output files
  Print "Testing completed" to the screen

This algorithm provides us with maximum flexibility for minimum extra work when we are testing our data structures. Once we implement the algorithm by creating a test driver for a specific data structure, we can easily create a test driver for a different data structure by changing only the first two steps in the loop. Here is the code for the test driver, with the data-structure-specific code left to be filled in; we demonstrate how this code can be written in the case study. The statements that must be filled in are indicated by comments.

// Test driver
#include <iostream>
#include <fstream>
#include <string>
// #include file containing class to be tested
int main()
{
  using namespace std;
  ifstream inFile;       // File containing operations
  ofstream outFile;      // File containing output
  string inFileName;     // Input file external name
  string outFileName;    // Output file external name
  string outputLabel;
  string command;        // Operation to be executed
  int numCommands;

  // Declare a variable of the type being tested
  // Prompt for file names, read file names, and prepare files
  cout << "Enter name of input file; press return." << end1;
  cin  >> inFileName;
  inFile.open(inFileName.c_str());

  cout << "Enter name of output file; press return." << end1;
  cin  >> outFileName;
  outFile.open(outFileName.c_str());

  cout << "Enter name of test run; press return." << end1;
  cin  >> outputLabel;
  outFile << outputLabel << end1;

  inFile >> command;
  numCommands = 0;
  while (command != "Quit")
  {

  // The following should be specific to the structure being tested
  // Execute the command by invoking the member function of the
  // same name
  // Print the results to the output file

    numCommands++;
    cout << "Command number " << numCommands << " completed."
         << end1:
    inFile >> command;
  }

  cout << "Testing completed."  << end1;
  inFile.close();
  outFile.close();
  return 0;
}
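To make the placeholder concrete, here is one way the structure-specific section of the loop might be filled in for a hypothetical IntList class supporting PutItem and GetLength operations (the names are ours, for illustration; the case study develops the book's own version). It assumes the declaration section includes a variable of the type being tested, such as IntList list:

    // Structure-specific section (hypothetical IntList example)
    if (command == "PutItem")
    {
      int item;
      inFile >> item;            // Read the operation's parameter
      list.PutItem(item);
      outFile << item << " inserted." << endl;
    }
    else if (command == "GetLength")
      outFile << "Length is " << list.GetLength() << endl;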

Note that the test driver gets the test data and calls the member functions to be tested. It also provides written output about the effects of the member function calls, so that the tester can visually check the results. Sometimes test drivers are used to test hundreds or thousands of test cases. In such situations, the test driver should automatically verify whether the test cases were handled successfully. We leave the expansion of this test driver to include automatic test case verification as a programming assignment.

This test driver does not do any error checking to confirm that the inputs are valid. For instance, it doesn't verify that the input command code is really a legal command. Remember that the goal of the test driver is to act as a skeleton of the real program, not to be the real program. Therefore, the test driver does not need to be as robust as the program it simulates.

By now you are probably protesting that these testing approaches are a lot of trouble and that you barely have time to write your programs, let alone "throwaway code" like stubs and drivers. Structured testing methods do require extra work. Test drivers and stubs are software items; they must be written and debugged themselves, even though they are seldom turned in to a professor or delivered to a customer. These programs are part of a class of software development tools that take time to create but are invaluable in simplifying the testing effort.

Such programs are analogous to the scaffolding that a contractor erects around a building. It takes time and money to build the scaffolding, which is not part of the final product; without it, however, the building could not be constructed. In a large program, where verification plays a major role in the software development process, creating these extra tools may be the only way to test the program.

C++: Reading in File Names

The following code segment causes a compile-time error:

ifstream inFile;
string fileName;

cout << "Enter the name of the input file" << end1;
cin  >> fileName;
inFile.open(fileName);

Why does the error arise? Because C++ recognizes two types of strings. One is a variable of the string data type; the other is a limited form of string inherited from the C language. The open function expects its argument to be a so-called C string. The code segment shown above passes a string variable. Thus it generates a type conflict. To solve this problem, the string data type provides a value-returning function named c_str that can be applied to a string variable to convert it to a C string. Here is the corrected code segment:

ifstream inFile;
string fileName;

cout << "Enter the name of the input file" << end1;
cin  >> fileName;
inFile.open(fileName.c_str());

Practical Considerations

It is obvious from this chapter that program verification techniques are time consuming and, in a job environment, expensive. It would take a long time to do all of the things discussed in this chapter, and a programmer has only so much time to work on any particular program. Certainly not every program is worthy of such cost and effort. How can you tell how much and what kind of verification effort is necessary?

A program's requirements may provide an indication of the level of verification needed. In the classroom, your professor may specify the verification requirements as part of a programming assignment. For instance, you may be required to turn in a written, implemented test plan. Part of your grade may be determined by the completeness of your plan. In the work environment, the verification requirements are often specified by a customer in the contract for a particular programming job. For instance, a contract with a military customer may specify that formal reviews or inspections of the software product be held at various times during the development process.

A higher level of verification effort may be indicated for sections of a program that are particularly complicated or error-prone. In these cases, it is wise to start the verification process in the early stages of program development so as to avoid costly errors in the design.

A program whose correct execution is critical to human life is obviously a candidate for a high level of verification. For instance, a program that controls the return of astronauts from a space mission would require a higher level of verification than would a program that generates a grocery list. As a more down-to-earth example, consider the potential for disaster if a hospital's patient database system had a bug that caused it to lose information about patients' allergies to medications. A similar error in a database program that manages a Christmas card mailing list, however, would have much less severe consequences.



