This is part one of a two-part series outlining code verification methods.
We begin with a question: At what stage of software development should I verify my code?
The answer is simple: right after your code compiles, while it is still fresh in your mind. At that stage, once you are shown potential errors, reviewing and fixing them can be almost trivial. Fixing errors never gets easier later in the workflow.
If you follow this approach, you are most likely verifying a unit of code. The unit will probably be integrated into an existing code base later in the development cycle. Polyspace® products allow you to approach your verification in two distinct ways: through robustness verification, in which you verify your unit in isolation, or through contextual verification, in which you verify your unit in the context of the remaining code base. In the following two posts, we provide some guidelines that will help you decide which approach to take.
But first, what difference does each approach make? Let us begin with robustness verification.
In robustness verification, Polyspace products verify that your unit is robust (i.e., that it is safe against certain errors for all inputs, all undefined functions, and all function call sequences). If you follow this approach, no new errors should appear during code integration and testing. If you review and fix all errors flagged by Polyspace products, you do not have to review that unit (file or function) again.
Let us illustrate this behavior with a few simple examples.
Robustness Against All Inputs
In this example, num1 and num2 are inputs to the function dividebydifference. Other than their data type, Polyspace cannot determine anything else about num1 and num2 from this code. Therefore, it considers all possible values in the range of an int variable. Two warnings follow:
- The results of the operations num1 - num2 and num/den can exceed the range allowed for an int variable. Therefore, Polyspace warns about a potential overflow on both operations.
- num1 can be equal to num2, making the denominator zero. Therefore, Polyspace warns about a potential division by zero error on the division.
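Since the original code listing is not reproduced here, the following is a hypothetical reconstruction of such a function; the exact body (in particular the choice of numerator) is an assumption made for illustration.

```c
/* Hypothetical reconstruction: num1 and num2 are unconstrained inputs,
 * so a robustness verification assumes each can take any int value. */
int dividebydifference(int num1, int num2)
{
    int num = num1;        /* assumed numerator */
    int den = num1 - num2; /* flagged: the subtraction can overflow an int */
    return num / den;      /* flagged: den can be zero, and INT_MIN / -1
                            * exceeds the int range */
}
```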
However, once Polyspace products point out such vulnerabilities in your code, you can make your function more robust by taking appropriate action. For instance, in the above example, you can place checks to safeguard against overflow and division by zero.
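A hedged sketch of such checks, assuming the function shape described above (the name, the extra result parameter, and the status convention are all illustrative choices, not the original code):

```c
#include <limits.h>

/* Guarded variant: reject inputs for which the subtraction or the
 * division would fail, and report success (1) or failure (0). */
int dividebydifference_checked(int num1, int num2, int *result)
{
    /* Would num1 - num2 leave the int range? */
    if ((num2 > 0 && num1 < INT_MIN + num2) ||
        (num2 < 0 && num1 > INT_MAX + num2))
        return 0; /* failure: subtraction would overflow */

    int den = num1 - num2;
    if (den == 0)
        return 0; /* failure: division by zero */
    if (num1 == INT_MIN && den == -1)
        return 0; /* failure: INT_MIN / -1 overflows */

    *result = num1 / den;
    return 1; /* success */
}
```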
Robustness Against All Undefined Functions
Suppose your unit of code calls functions external to the unit and/or you do not have the function definitions. How can you assure yourself that a new run-time error will not occur in your code once the function is defined? You can use the robustness approach.
In this example, den is assigned a value between 1 and 3 at declaration. However, its address is passed to checkDenominator, so den can potentially change inside checkDenominator. Since the definition of checkDenominator is not provided, Polyspace products consider the possibility that den can be changed to any value in the range of an int variable. Therefore, they warn that the division might cause a division by zero error.
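As before, the original listing is not shown, so the following sketch is a reconstruction in which the names and the initializer are assumptions:

```c
/* den starts between 1 and 3, but its address escapes to
 * checkDenominator, whose definition the verifier does not see,
 * so den is assumed to take any int value after the call. */
void checkDenominator(int *den);

int divide(int num)
{
    int den = 2;            /* assumed initializer in the 1..3 range */
    checkDenominator(&den); /* den may have been rewritten to anything */
    return num / den;       /* flagged: den might be zero */
}

/* Stub included only so this sketch links and runs; in the scenario
 * described above, this definition is NOT available to Polyspace. */
void checkDenominator(int *den) { (void)den; }
```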
However, once Polyspace products point out this error, you can protect the division, for instance, by passing den to checkDenominator by value instead of through a pointer. Now the division operation is safe, whatever the definition of checkDenominator might be.
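A hedged sketch of that fix (again with assumed names and a status-style return value): because den is passed by value, no definition of checkDenominator can modify the caller's copy.

```c
/* checkDenominator now receives a copy and returns a status instead. */
int checkDenominator(int den);

int divide(int num)
{
    int den = 2;                 /* stays in the 1..3 range */
    (void)checkDenominator(den); /* cannot change den in this scope */
    return num / den;            /* denominator is provably nonzero */
}

/* Stub so the sketch links; the safety argument above does not
 * depend on what this body does. */
int checkDenominator(int den) { return den != 0; }
```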
Robustness verification is also appealing from the user’s perspective. Setting up verification using this approach requires less effort. After you have provided your source files and target settings, you can go ahead and run the verification. The default settings are designed to verify your code against all inputs, function definitions, and so on.
So, robustness verification is the way to go, isn’t it? Well… in practice, there is a trade-off: the time you save setting up the verification can be offset by the time spent reviewing a significantly higher number of potential errors.
In order to spot all errors during robustness verification, Polyspace products make broad assumptions about unknowns such as input variables. Most operations involving those unknowns can potentially fail. Therefore, you can have many more potential errors to review than your time allows.
The next post illustrates an alternative verification approach (called contextual code verification), which addresses this issue.