The Second Holy Grail of Test Design by Hans Buwalda

By Hans Buwalda, Chief Technology Officer, LogiGear Corporation

Introduction

In the article "Key Principles of Test Design" I presented three key principles (the "Holy Grails of Test Design"):

  1. Effective breakdown of the tests
  2. Right approach per test module
  3. Right level of test specification

This article discusses the "second Holy Grail": finding the right approach per test module. In the first Holy Grail article ("The First Holy Grail of Test Design") we saw that an important first step is breaking the tests down into test modules, a step that can make or break your test design (and the subsequent test automation).

Right Approach per Test Module

The next step, or "second grail", is developing the individual modules. When a good job has been done on the module breakdown, each test module should now have a clear scope. This can then lead to two sets of items for each test module:

  1. Test requirements
  2. Test cases, related to the test requirements

The test requirements are a set of statements describing as comprehensively as possible what should be tested. The best way I have found to write and read them is to think of the words "test if" in front of them. Examples:

  • Coming directly from a system requirement: (test if) "the password must be a minimum of 6 characters"
  • More aimed at the test, only indirectly coming from system requirements: (test if) "a transfer can be made from Mexican to Chinese currencies"
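To make the first example concrete, here is a minimal sketch of how such a test requirement could translate into an automated check. The function validate_password is an invented stand-in for the system under test, not a real API; it only illustrates the "test if" reading of a requirement.

    # Hypothetical stand-in for the system's password rule
    # ("the password must be a minimum of 6 characters").
    def validate_password(password: str) -> bool:
        return len(password) >= 6

    # Test case derived from the test requirement: check just below
    # and just at the boundary of the minimum length.
    def test_password_minimum_length():
        assert not validate_password("abcde")   # 5 characters: rejected
        assert validate_password("abcdef")      # 6 characters: accepted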

Making test requirements is part "science" and part "art". It is the "analytical" phase of test development, in which you should actually analyze and understand the system requirements, not just copy and paste them. The test requirements should show what you are going to test. We have a more extensive guideline for test requirements, but here are some things to look for (a short example follows the list):

  • Make cause and effect clear, and mention cause first ("clicking 'Submit' empties all fields")
  • Make condition and effect clear, and mention condition first ("if all fields are populated, ok is enabled")
  • Split complex sentences into small statements
    • It is OK to combine two or more functionalities if doing so does not add complexity (like "ok becomes enabled if both first name and last name are specified")
  • Keep test requirements short. Leave out as many words as you can without losing the essential meaning
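As a small illustration of these guidelines, the sketch below takes one verbose statement and splits it into short, cause-first test requirements. The wording is invented for this example, not taken from a real project.

    # Example only: one verbose statement...
    verbose = (
        "When the user has filled in both the first name and the last name, "
        "the OK button, which is initially disabled, becomes enabled, and "
        "clicking Submit will clear all of the fields on the form."
    )

    # ...split into short statements, each readable with "test if" in front,
    # with cause or condition mentioned first.
    test_requirements = [
        "OK is initially disabled",
        "if first name and last name are specified, OK is enabled",
        "clicking 'Submit' empties all fields",
    ]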

After the "analytical" phase of devising test requirements, the next step is the "design" phase of creating the actual test cases. Once the test cases are developed they can be related to the test requirements. Sometimes this is a one-to-one relation, but in the majority of cases it will be many-to-many: one test requirement might be tested in more than one test case, and one test case can verify multiple test requirements.
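One lightweight way to record such a many-to-many relation, sketched here with invented identifiers purely for illustration, is a simple mapping from test cases to the test requirements they verify:

    # Hypothetical traceability sketch: which test requirements (TR-*)
    # each test case (TC-*) verifies. All IDs and names are invented.
    coverage = {
        "TC-01 transfer MXN to CNY": ["TR-03", "TR-07"],
        "TC-02 transfer with insufficient balance": ["TR-03", "TR-11"],
        "TC-03 password length boundaries": ["TR-01"],
    }

    # Invert the mapping to see, per test requirement, which test cases
    # cover it, and which requirements have no covering test case yet.
    requirements_to_cases = {}
    for case, requirements in coverage.items():
        for req in requirements:
            requirements_to_cases.setdefault(req, []).append(case)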

As in the earlier phases of test development (test module breakdown and test requirements), the creation of test cases should show added value from the tester. We train both our onshore and offshore testers to "use their head before using their hands", meaning: think about the test cases while developing them. Try to make them smart and aggressive:

  • To get maximum effect from a limited set of test cases
  • To make them aggressive in finding system faults
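One common way to get more out of a limited set of test cases is to drive a single test case with several deliberately aggressive data combinations. The sketch below uses pytest parameterization; make_transfer and the currency pairs are invented for this illustration and are not part of any real system.

    import pytest

    # Invented stand-in for the system under test.
    def make_transfer(amount, from_currency, to_currency):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return {"amount": amount, "from": from_currency, "to": to_currency}

    # One test case, several "aggressive" combinations: boundary amounts,
    # a reverse direction, and a same-currency transfer.
    @pytest.mark.parametrize("amount,src,dst", [
        (0.01, "MXN", "CNY"),        # smallest amount
        (1_000_000, "CNY", "MXN"),   # large amount, reverse direction
        (5, "MXN", "MXN"),           # same-currency transfer
    ])
    def test_transfer_combinations(amount, src, dst):
        result = make_transfer(amount, src, dst)
        assert result["amount"] == amount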

There are a substantial number of testing techniques available, many of which have been published over the years in books like Testing Computer Software (Cem Kaner, Jack Falk, and Hung Nguyen). The value of these techniques depends on the situation, in our terminology: on the scope of your test module. Please make a good study of them, and keep using your own intelligence and creativity. Test development should most of all be an intelligent and creative activity (you have to find issues that developers, who are also intelligent, have overlooked), not just a mechanical one.

From my own experience I have come up with a test design technique that is specifically meant to steer away from too much mechanical testing. I have called it "Soap Opera Testing", since I used the popular format of television "soap operas" as an inspiration. This technique can come in handy if: (1) the business processes in the system under test are complex, and (2) end-users are involved, or can be involved if needed. The idea is to write test cases as if they were an "episode" in a "series", as a way to make them creative and aggressive. For more information please see my article "Soap Opera Testing" which was published in Better Software magazine in February 2004 and is also available on the LogiGear web site in the downloads section.

Conclusion

Regardless of the specific technique, I feel that a combination of "analytical" test requirements that focus on completeness and "creative" test cases that focus on aggressiveness can lead to an optimal result:

  • Completeness in testing functionalities and combinations of functionalities
  • Aggressiveness in finding hard-to-find bugs
  • Lean design that leads to efficient and maintainable automation

For the automation, the use of appropriate "actions" is important too. This is a topic for the next article, on the "third grail" of test design.

Most of all, make sure that the scope of each test module is clear and that all test requirements and test cases adhere to that scope. Avoid "sneaky checks", like testing the caption of an OK button in a test module that focuses on a business aspect such as an insurance policy premium calculation. Such checks should really go into another test module.