Knowledge Library


At LogiGear, we pride ourselves not only on providing world-class service to our clients, but on contributing to the development of the software testing industry as a whole.

Over the years, we've generated and collected a great deal of valuable information on Agile. The Agile and Testing Resource Center has been created to provide you with the information that can help you in your understanding and application of Agile. Below you will find links to articles, book reviews, interviews and videos from some of the industry's foremost thought-leaders and expert testers such as Scott Ambler, Michael Hackett, Jonathan Rasmusson and Guido Schoonheim.

We're actively seeking additional resources to add, so if you've written about Agile and think your piece would be valuable for your peers, please feel free to submit an article by email.

Agile test automation




Action Based Testing

The Action Based Testing™ method represents the continued evolution of the keyword-based testing approach and is the foundation of LogiGear's test automation toolset, TestArchitect™, which uses keywords to create and automate the majority of tests without scripting of any kind.

Action-Based Testing (ABT) provides a powerful framework for organizing test design, automation and execution around keywords. In ABT, keywords are called actions — to make the concept absolutely clear. Actions are the tasks that are executed during a test. Rather than automating an entire test as one long script, tests are assembled using individual actions. Non-technical test engineers and business analysts can then define their tests as a series of these automated actions.

Unlike traditional test design, which begins with a written narrative that must be interpreted by each tester or automation engineer, ABT test design takes place in a spreadsheet format called a test module. Actions, test data and any necessary GUI interface information are stored separately and referenced by the main test module.


Mobile Testing

One of the hottest trends in the software testing world is mobile. As mobile devices come to resemble computers more and more, the complexity of mobile applications has followed suit. Numerous operating systems and device sizes make testing these applications increasingly difficult.

We’ve published a few magazines on mobile testing and thought it would be helpful to put everything in one place for easy reference. Here, you can find articles, book reviews, videos and interviews from some of the industry’s thought-leaders such as Julian Harty, Edward Hill, Gal Tunik and Robert V. Binder. We hope that this resource center will help you save time and money in your mobile testing efforts!


Agile Test Automation

Agile Test Automation

by Michael Hackett and Hans Buwalda

What is Agile Test Automation? How can automated functional testing fit into agile projects? These are questions we encounter from customers all the time. Agile methods are relatively new to the software world, and hold great promise, with many early success stories. With this in mind, we've created the eBook "Agile Test Automation" for you.


Testing Computer Software

Testing Computer Software, 2nd Edition

by Cem Kaner, Jack Falk and Hung Q. Nguyen

Software testing is a race against time: testers must make sure that highly complicated programs will work reliably for the consumer, often with insufficient resources and on unrealistic schedules. Testers will appreciate the authors' advice on effective bug analysis, reporting, and tracking; black box testing; printer compatibility tests; and software product liability.


Testing Applications on the Web

Testing Applications on the Web, 2nd Edition

by Hung Q. Nguyen, Bob Johnson, Michael Hackett and Robert Johnson

With Internet applications spreading like wildfire, the field of software testing is increasingly challenged by the brave new networked world of e-business. Software engineers have developed sophisticated test methodologies over the years, but they just don't do the job for web-based software. Distributed applications have different performance goals from those of desktop applications, and require networking know-how on the part of the tester.


Integrated Test Design and Automation

Integrated Test Design and Automation

by Hans Buwalda, Dennis Janssen, Iris Pinkster and Paul Watters

Zero-defect software is the holy grail of all development projects, and sophisticated techniques have emerged for automating the testing process so that high-quality software can be delivered on time and on budget. This practical guide enables readers to understand and apply the TestFrame method - an open method developed by the authors and their colleagues that is rapidly becoming a standard in the testing industry.


Global Software Test Automation:
A Discussion of Software Testing for Executives

Global Software Test Automation

by Hung Q. Nguyen, Michael Hackett and Brent K. Whitlock

This is the first book to offer software testing strategies and tactics for executives. Written by executives and endorsed by executives, it is also the first to offer a practical business case for effective test automation as part of an innovative new approach to software testing. Global test automation, as demonstrated here, is a proven solution, backed by case studies that leverage both test automation and offshoring to meet organizations' quality goals.



How a SaaS provider made microservices deployment safely chaotic

Remind was blindsided by performance issues with its microservices-based SaaS, until the company decided to sabotage its own product -- at scheduled times, in staging -- with a different kind of test.


Managing Distributed Resources – Offshore & Outsourcing Attitudes

By Michael Hackett, Co-Founder of LogiGear Corporation

Staffing is hard. Getting the right mix of skill sets, hiring good employees with great communication skills, creating positive team chemistry, and finding people of technical competence – all of these factors are crucial to product and project success. In the US, this task has become increasingly difficult. A tightening job market in all the big tech hubs, increasing salaries, and changing technologies have all led companies to look at increasing their distributed workforce. Whether it is placing emphasis on an easier domestic staffing location or offshore work distribution, each of these staffing solutions brings challenges.


5 Trends in Software Testing to Watch for in 2018

1. Blockchain

Blockchain technology has the potential to be disruptive, and skills in testing it will be in high demand as the technology becomes more popular.

Testing Blockchain applications requires, to start, a solid understanding of core concepts such as decentralized applications, public/private ledgers, smart contracts, and proof of work/stake. One would also need domain knowledge of how critical issues such as security, regulation, and compliance are dealt with in enterprise applications. Companies will have to build a new set of technical skills, like testing smart contracts at the API level or building a 'testnet' to securely and safely test decentralized applications.


Continuous Testing, Continuous Variation

With the arrival of continuous integration/continuous delivery (CI/CD) the notion of continuous testing (CT) is taking center stage. Knowing that comprehensive tests are running smoothly can be of great benefit for the CI/CD pipeline. But running tests can be both time and resource consuming, not to mention that tests can become boring and rigid. Using the repetitive character of CI/CD for testing can be a way to address this.

Reducing the Number of Test Runs

In a well-run CI/CD process, rebuild times will depend on how many components are affected by changes and need to be rebuilt, including execution of their unit tests. However, the amount of functional testing needed when there is a rebuild is less straightforward. Functional tests, in particular business level tests, can have a wider range than a single component and may need to run in multiple configurations and/or environments.
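The selection step described above can be sketched as a simple mapping from changed components to the test modules that cover them. The component and module names below are hypothetical, purely for illustration; a real pipeline would derive the coverage map from build metadata:

```python
# Sketch: select only the functional test modules affected by a change set.
# Component and module names are hypothetical examples.

# Map each test module to the components it exercises.
COVERAGE = {
    "login_tests": {"auth", "ui"},
    "payment_tests": {"billing", "auth"},
    "report_tests": {"reporting"},
}

def modules_to_run(changed_components):
    """Return the test modules whose covered components intersect the change set."""
    changed = set(changed_components)
    return sorted(m for m, deps in COVERAGE.items() if deps & changed)

if __name__ == "__main__":
    print(modules_to_run(["auth"]))       # login and payment tests are affected
    print(modules_to_run(["reporting"]))  # only report tests
```

A change touching only "reporting" then triggers a single test module instead of the full functional suite, which is where the time and resource savings come from.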


Software testing considerations for voice-first applications

How voice-first designed apps for devices like the Echo and Google Home can be tested for security and application flaws.

Alexa and Echo are household names, and someday soon most people will have these devices in their homes, ordering takeout, picking out a song, or answering trivia questions. Welcome to the voice-first applications era. Amazon and Google have sold millions of Amazon Echo and Google Home devices, and 24.5 million voice-first devices are expected to ship before the end of 2017. Gartner is making the call that worldwide spending on virtual personal assistant (VPA)-enabled wireless speakers will be over $2 billion in the next three years.


An insider's guide to the AI and IoT testing process

Testing the internet of things is one thing, but AI takes it to the next level. A LogiGear executive shares what the company learned from its first serious foray into this world.


Anki test director talks IoT and AI testing challenges

A complicated new game required a complicated testing strategy and help from the outside. Here's what game maker Anki learned from working with LogiGear on AI testing.


Management's Lack of Understanding Hinders Automation

According to LogiGear's State of Software Testing Survey, almost one-third of the respondents are experiencing classic test automation issues.

One problem commonly cited among respondents was that management didn’t fully understand what it takes to have a successful automation program. This included everything from process/team frustration, to tool choice.


Test Automation Trends & Adoption Rates

By Michael Hackett, Co-Founder of LogiGear Corporation

There is a great deal of content around the topic of software test automation, the topic of the third survey in LogiGear's State of Software Testing Survey series. The sectors are numerous: tool choice, jumpstart platforms, cross-platform, services, and cloud. With all of this great test automation innovation comes great change.


DevOps Growing Pains Still Result in Forward Momentum

Test teams feel the need to adopt DevOps, but that migration is not always seamless, according to a new survey by LogiGear. Experiences with Ops are mixed: 25 percent of respondents said their Ops/IT team is always helpful to the test team and its needs; 37 percent said Ops teams regularly help bring about good test environments; and 27 percent said Ops can be "slow or difficult."


Is Free The Right Price For Your Software?

Why a 23-year-old software company transitioned to a freemium model

In the product business, capturing user adoption and market share is key to achieving success. If a product is well designed and solves a specific problem effectively, then it’s simply a marketing and sales game. You can invest in PR, promotion, digital marketing, lead-gen programs, trade shows, and your sales force. What if you want an alternative strategy? A freemium model might be the answer.


What it takes to be a Full Stack Automation Engineer

Setting aside the philosophical debate of why, this guide unfolds the noble, yet laborious journey of arming yourself with the knowledge to successfully transition from manual testing to a full-stack automation engineer. It’s a bold endeavor, which requires time, good navigation, and practice. The good news is that you don’t have to become an effective automation engineer overnight. The journey should be understood as a continuous spectrum instead of flipping a switch.


How to Successfully Transition from Manual Testing to Full-Stack Automation

Transitioning from manual testing to full-stack automation is a noble, yet laborious journey. Below is a guide to aid in a successful transition from manual testing to full-stack automation.

What Does DevOps Mean to Test Teams?

It wasn’t long ago that the Dev and test teams would work late hours, focused and rushed to meet a deadline: rapid fixing, reprioritizing and deferring bugs to close out the bug list, move everything to the staging server, do one last run of the regression and pass it over to Ops/IT to move to production. What happened after?


How Testing can be the Golden Egg

Automated testing has never been more important, and is gradually developing from a nice-to-have into a must-have, particularly with the influence of DevOps. In essence, DevOps means that the deployment process is engineered just like the system being deployed. This allows rebuilding and redeploying, in one form or another, to happen whenever there are changes in the code. It is similar to the "make" process known in the UNIX world, where parts of a build process can be repeated when code files change. A major aid in that process is when tests can be created and automatically executed, confirming that the new deployment works correctly.


What Makes Test Automation Successful?

Fresh off his speaking engagement at Great Wide Open, Hans Buwalda has written an exclusive article. "What Makes Test Automation Successful?" explores the key factors that contribute to the success of a testing project in an open-source environment.


Understand the Mobile Ecosystem Before you Test

Recently featured in the SD Times buyer’s guide, and written by LogiGear Co-founder, Michael Hackett, the article “Understand the mobile ecosystem before you test” explores the complexity of the mobile testing environment when it comes to choosing the best device mix for a robust testing strategy, and lays out the best plan to ensure that you test your mobile app effectively.


Scalability in Automated Testing

To scale automated functional testing for large, complex systems, you need to look at the design of the tests, how to organize the process, how the various players cooperate, the software's testability and stability—and, importantly, management's commitment. Hans Buwalda shares some testing tips.


Automation: Testing or Checking?

Interactive exploratory testing and organized automated testing seem to be on opposing ends of a spectrum, but much of that depends on how you apply them. Automated tests don't have to be shallow and boring. You can still explore, learn, and create good tests. Read on for more from Hans Buwalda.


Using Keywords to Support Behavior-Driven Development

Behavior-driven development tests can be efficiently automated with keywords, avoiding the need for a programming language and minimizing the involvement of developers. Hans Buwalda details how to support BDD scenarios with actions and keywords and switch between formats depending on your needs.


Scalability of Tests—A Matrix

Hans Buwalda highlights the scalability of unit, functional, and exploratory tests—the three kinds of tests used to verify functionality. Since many automation tools and strategies traditionally focus on functional testing, Hans provides some strategies to make functional testing more manageable.


Test Design for Automation: Anti-Patterns

Just like with design patterns, anti-patterns can benefit from a short and catchy name to make them easy to remember and talk about. Hans Buwalda shares a list of typical situations seen in tests that can harm automation and names for them.


Designing Data-Driven Tests with Keywords for Automation Success

When automated tests are well-organized and written with the necessary detail, they can be very efficient and maintainable. But designing automated tests that deal with data can be particularly challenging. Tests need certain base data to be available and in a predictable state when they run.


Improving Test Automation—What About Existing Tests?

A good test design is important because it improves the quality of the tests, helping to add breadth and depth, and it facilitates efficiency, in particular for automation. These points are obvious when starting a project from scratch, but what do you do when tackling a project with existing tests?


Improving Application Testability

Automating functional testing is almost never easy. As testers, how we organize and design tests has a big impact on outcomes, but developers can—and should—have a role in making automation easier. This ease or lack of ease is part of what is known as "testability."


Five Misconceptions about Test Automation

Hans Buwalda describes five of what he refers to as misconceptions about test automation.


Multi-Station Testing with Actions—The Lead Deputy Model

Hans Buwalda shares a model used for multi-station testing with actions, the lead deputy model, and shows how actions can make a relatively complex task like multi-station testing available at a business level where even non-technical users can easily understand the thought process.


Automation Friendly Test Design—An Example

A major contributor to success in test automation is test design. If tests have many unnecessary detailed steps and checks, even a skilled automation engineer will not be able to make the automation efficient and maintainable. Hans Buwalda shares an example of a test design that is automation friendly.


What to Do When Bugs Are Found—Based on When They Are Found

When executing test modules, an interesting question to ask is “What needs to happen with issues that are found?” Hans Buwalda suggests making a distinction between issues found during a sprint and after the team has declared the functionality under test "done"—and describes how to proceed from there.


The Test Automation Design Paradox

Testing and automation have various paradoxes that are interesting to look at for insight into the challenges and limitations of our profession. Hans Buwalda describes these paradoxes and offers methods to bring about cooperation in teams, helping them achieve great automation results together.


Virtualization—A Tester's Dream?

For testers, virtual machines can be a game changer. To what degree the game really changes depends largely on how an organization decides to work with virtual machines and how active the testers themselves are in recognizing and leveraging virtual machines’ possibilities.


Reasons to Consider Software Tests as Products

Software tests have to meet quality and robustness criteria that are similar to the application under test, but tests seldom get the attention and investments that the applications get. Hans Buwalda outlines why you should consider tests as products.


How Software Testers Can Stay Relevant

Testers have an important responsibility to protect and further their craft. Many people who want to be considered testers should engage in career development more than they might have in the past. Hans Buwalda highlights four areas that testers need to understand to stay relevant.


Testing in Agile and DevOps: Where Are We Going?

When looking at what the software market is currently talking about, the top item is DevOps and Continuous Integration/Deployment, which seems to be taking over some of the spotlight from agile and is now a widely accepted new normal. Hans Buwalda looks at where the future of software testing is going.


The Cloud Is Metered

The cloud is metered—you pay by the hour, by the gigabyte, or by some other metric. The numbers might not necessarily be high, but they draw attention from managers. As testers we should look at these numbers as well. Hans Buwalda looks at how cloud-induced metering can impact testing.


Test Automation: Key Challenge in Software Testing

In this guest blog for DevOps Digest, Michael Hackett discusses the surprising results of LogiGear's Testing Essentials Survey.


6 Best Practices for Building a Long-Lasting Business

Hung Nguyen, CEO of LogiGear Corporation discusses six key factors for building a long-lasting enterprise, with strategies for businesses to continuously evolve, create and sustain growth for the long-term.


Action Based Testing, by Hans Buwalda in Better Software Magazine, March/April 2011

To address the challenges and fears of implementing automation in agile projects, LogiGear CTO Hans Buwalda presents Action Based Testing as the answer.

Hans Buwalda, CTO, LogiGear

How can automated functional testing fit into agile projects? That is a question we encounter a lot nowadays. Agile has become more common, but functional testing often remains a manual process because during agile iterations/sprints, there is simply not enough time to automate it. This is unlike unit testing, which is routinely automated effectively. The short answer is:

  1. A well-planned and organized test design and automation architecture
  2. Organization of test design and automation into separate life cycles

In this article I will show how the Action Based Testing method can help you to do just that. Let me first introduce Action Based Testing, followed by discussing how it can make both test design and test automation fit the demands of agile projects.

Action Based Testing

There are various sources where you can read more about Action Based Testing. Let me summarize the key principles here that are at the core of the method:

1. Not one, but three life cycles

It is common to have testing and automation activities positioned as part of a system development life cycle, regardless of whether that is a waterfall or an agile approach. ABT, however, distinguishes three life cycles. Even though they have dependencies on each other, in an ABT project they are planned and managed as separate entities:

  1. System Development: follows any SDLC, traditional or agile model
  2. Test Development: includes test design, test execution, test result follow up, and test maintenance
  3. Automation: focuses solely on the action keywords, interpreting actions, matching user or non-user interfaces, researching technology challenges, etc.

2. Test Design

The most important property is the position of test design. It is seen as the single most enabling factor for automation success, much more than the actual automation technology. In ABT, it is considered crucial to have a good "high level test design" in which so-called "test modules" are defined. Each test module should have a clear scope that is distinct from the others and is developed as a separate "mini project."

A test module consists of test objectives and action lines. The test objectives outline the scope of the test module as individual verbal statements defining what needs to be tested in the module.

The tests in the test module (which looks like a spreadsheet) are defined by a series of "action lines," often further organized in one or more test cases. Every action line defines an "action" and consists of an "action word" defining the action, and arguments defining the data for the action, including input values and expected results.
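As a sketch, an action line can be modeled as an action word followed by its arguments, dispatched to a matching implementation. The action names and the in-memory dispatcher below are hypothetical, purely to illustrate the structure; a tool such as TestArchitect reads these lines from the test module spreadsheet instead:

```python
# Sketch of action-line dispatch: each row is an action word plus arguments.
# Action names and implementations here are hypothetical illustrations.

results = []

def enter(field, value):
    results.append(f"enter {value!r} into {field}")

def check(field, expected):
    results.append(f"check that {field} equals {expected!r}")

ACTIONS = {"enter": enter, "check": check}

# A tiny "test module": each line starts with an action word,
# followed by arguments (input values or expected results).
test_module = [
    ["enter", "first name", "John"],
    ["enter", "last name", "Doe"],
    ["check", "full name", "John Doe"],
]

for action_word, *args in test_module:
    ACTIONS[action_word](*args)

print("\n".join(results))
```

The point of the structure is that test authors only write the rows; the implementations behind the action words live elsewhere and can be reused across test modules.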

Note that in ABT the test case does not figure as centrally as in some other methods. We feel the test case is too small and too isolated a unit to give good direction to test development. Rather than having a predefined list of test cases to be developed, we like to make a list of test modules, and let the test cases in them be the result of test design, not the input to it.

This lets the test cases vary and grow during the creative process. Also, each test case can leave behind the preconditions for the next, resulting in a good flow of the test execution.

3. Automation

In ABT the automation activity is separated from the test development. Test design and automation require very different skill sets and interests. There might be people who are interested in doing both, which is fine, but in my experience that is not very common. Separating the two also assigns clear ownership for "getting the test to work."

In ABT the automation engineers will concentrate on automation of actions and making "interface definitions" to manage the interaction with the interfaces (user or non-user) of the system under test. This type of automation activity requires advanced skills and experience.

Agile Test Development

In using ABT with its separate life cycles for test development and test automation, there are in fact two topics addressing how to fit automated testing in agile projects:

  1. Test design and development
  2. Automation


With that in mind, and assuming a Scrum project with sprints, testing activities in an agile project fall into three timelines:

  1. Testing in regular system development sprints
  2. Test development prior to development sprints
  3. Testing after development has finished

1. Testing in regular sprints

The most common practice is, and will remain, to develop and execute tests as part of sprints. In a sprint, functionality is progressively understood from user stories and conversations until it becomes clear enough for testers to test it. This can be done with developed tests such as ABT test modules, as well as with exploratory and interactive testing. It can also be good practice to capture at least some of the "interesting" interactive tests in test modules for future use.

Unit tests are an invaluable asset, but in the ABT approach one would like to consider options to reuse them and extend their reach beyond addressing single functions.

By defining test modules for unit tests and assigning them to actions, they can be strung together more easily to test with a wider variety of values and include other parts of the system under test, either during a sprint or later on.

2. Test development prior to development sprints

In the ABT method the use of actions, in particular high business-level actions, allows for the development of tests with a non-technical focus on business functionality, often simply called "high level tests." Such tests stay away from details in the UI and focus on business transactions, like requesting a home loan or renting a car.

Higher level tests can be developed early in a project. These tests don't have to wait for a system development sprint, in which there would be limited time to carefully understand business functionalities and create appropriate tests for them.

Whether, and how many, business level tests can be made depends on the individual situation. In general, I would recommend the following:

  • Have as many business level tests as possible, as they add great value to overall depth and quality, as well as being resilient against system changes that do not pertain to them.
  • Use the high level test design step in ABT (where the test modules are identified) to determine what can be done early on in business level tests, and what needs to be completed in detail tests as part of development sprints.

3. Testing after sprints

Once sprints for individual system parts have finished and these parts come together, normally more testing will be needed to ensure quality and compliance of the entire system. Also, tests may be needed to retest parts of systems that were not touched by system changes and confirm the new system parts integrate well with the old ones. This could for example happen in regression or "hardening" sprints.

In my view, this "after-testing" is a key area where it can pay off most to have, in advance, well developed test modules and fully automated actions resulting in valuable time savings, particularly if a release date is getting close. The test development and automation planning should address this use in final testing as a main objective, and identify and plan test module development accordingly.

Agile Test Automation

"Just in time automation" is the term often used for test automation in agile projects, and it best describes what is needed. When ABT is applied, the term changes to "just in time test development." Independent of that, a high level of automation can make an invaluable contribution to improving productivity and speed in sprints.

To get the automation in place quickly and on time, a number of rules should be applied:

  • Build the base early
  • Make automation resilient
  • Address testability of the system under test
  • Test the automation

1. Build the base early

A successful automation architecture should start with a solid base on which further actions can be developed. This includes items like the ability to perform all necessary operations on all UI control classes, access to APIs, the ability to query databases, compiling and parsing messages in a message protocol, etc.

Although much technical functionality is available in LogiGear’s TestArchitect tool, most of our projects will start with R&D efforts to address customer specific technical challenges, e.g. emulating devices in a point of sale system, working with moving 3D graphics for oil exploration, testing mobile devices, accessing embedded software in diagnostic equipment, etc.

This technical base is something to establish as soon as possible and as comprehensively as possible. Identify all technical challenges and resolve them. This typically results in implementations for low level actions, which in turn can be used for higher level actions, for example in development sprints. Addressing the technical base early also limits risks.

2. Make automation resilient

The essence of agile projects is that many details of the system under test only become clear when they are being implemented, as part of iterations like the sprints in Scrum. This holds in particular for areas that automation tends to rely heavily on, like the UI. Those details can change quite easily as long as the creative process moves along. The automation should in such cases not be the bottleneck. Flexibility is essential.

The action model by nature can give such flexibility as it allows details to be hidden in individual actions, which can then be quickly adjusted if necessary. However, there are some additional items to take care of as well. The most common in our projects has turned out to be "timing." Often automation has to wait for a system under test to respond to an operation and get ready for the next one.

What we found is that the automation engineer should make sure to use "active timing" as much as possible. In active timing you try to find a criterion in the system under test to wait for, and wait for that up to a preset, generous, maximum. If the criterion is met, the automation should move on without further delay. Paying attention to these and similar measures will make the automation solid and flexible.
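A minimal sketch of active timing: poll a readiness criterion at short intervals and move on as soon as it is met, failing only after a generous maximum. The criterion function here is a hypothetical stand-in for whatever signal the system under test provides:

```python
import time

def wait_until(criterion, timeout=30.0, interval=0.2):
    """Actively wait: poll `criterion` until it returns True, up to `timeout` seconds.

    Returns as soon as the criterion is met (no fixed sleep), raising only
    when the generous maximum is exceeded.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if criterion():
            return
        time.sleep(interval)
    raise TimeoutError("criterion not met within %.1f seconds" % timeout)

if __name__ == "__main__":
    # Hypothetical example: wait until a (simulated) dialog reports ready.
    state = {"polls": 0}

    def dialog_is_ready():
        state["polls"] += 1
        return state["polls"] >= 3   # becomes ready on the third poll

    wait_until(dialog_is_ready, timeout=5.0, interval=0.01)
    print("ready after", state["polls"], "polls")
```

Compared with a fixed sleep, this wastes no time when the system responds quickly and still tolerates slow responses up to the maximum.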

3. Address testability of the system under test

When preparing automation, system developers should identify items that the system under test should provide to facilitate easy access by automation. When these items are identified early on and formulated as requirements, the teams can easily incorporate them in the sprints.

A good example is the provision of values for certain identifying properties that are available in various platforms for screen controls or HTML elements, properties that are not visible to a user, but can be seen by automation tools. Providing such values will allow automation to address the controls or elements easily, and in a way that is usually not sensitive to changes in the design.

In fact if such values are defined early on in a project, a tool like TestArchitect allows for the creation of "interface definitions" to take advantage of them before the system under test is even built.

Examples of such useful properties are the "id" attribute in HTML elements, the "name" in Java/Swing, and the "accessibility name" in .Net and WPF. All of these do not influence the user experience, and can be seen by the tools. Using them also solves issues of localization: an OK button can be found even if its caption is in another language.
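A sketch of why stable identifiers help: looking up a control by a hidden "id" property keeps working when the visible caption changes or is localized. The control records and lookup function below are hypothetical, standing in for what an automation tool does internally:

```python
# Sketch: locate controls by a stable, invisible "id" property rather than
# by the visible caption. The control records here are hypothetical.

controls = [
    {"id": "btn-ok", "caption": "OK"},
    {"id": "btn-cancel", "caption": "Cancel"},
]

def find_by_id(control_id):
    """Locate a control by its stable id, independent of its caption."""
    for control in controls:
        if control["id"] == control_id:
            return control
    raise LookupError(f"no control with id {control_id!r}")

# The lookup still works after the UI is localized to German:
controls[0]["caption"] = "Annehmen"
assert find_by_id("btn-ok")["caption"] == "Annehmen"
print("found OK button regardless of caption")
```

An interface definition that records these ids once can then be shared by every test module that touches the same screen.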

4. Test the automation

Automation itself should be tested. In ABT this means that actions and interface definitions must be tested. They are like a product that automation engineers deliver to testers, a product for which high quality is required. We require each testing project to have at least one folder (in the TestArchitect test tree) with test modules that test the actions and interface definitions, not necessarily the system under test.

Just like the test development, the automation activities must be well planned and organized, with a number of experienced people involved. If that is the case, the combination of careful test development planning and automation planning should be able to meet the demands of agile projects quite easily.


The First Holy Grail of Test Design by Hans Buwalda

This article focuses on the first principle, the effective break down of the tests. I also like to refer to it as the "high level test design". In this step you divide the tests that have to be created into manageable sets like chapters in a book, which I call "test modules".

By Hans Buwalda, Chief Technology Officer, LogiGear Corporation


In my previous article "Key Principles of Test Design" I discussed a vision for test design, built around three key principles (which I call the "Holy Grails of Test Design"):

  1. Effective break down of the tests
  2. Right approach per test module
  3. Right level of test specification

This article focuses on the first principle, the effective break down of the tests. I also like to refer to it as the "high level test design". In this step you divide the tests that have to be created into manageable sets, like chapters in a book, which I call "test modules". Each test module should typically contain between a few and a few dozen test cases. The next steps in test development deal with designing the individual test modules ("holy grails" 2 and 3) and with effective automation.

Effective Break Down of the Tests

Although making a good high level test design is as much art as it is science, there are some guiding criteria for it that I like to use. They are organized as "primary" and "additional" criteria. The primary criteria are the more obvious ones that should be applied first. The additional criteria can help to further refine the line-up of test modules.

Primary Criteria

  • Functionality and other requirements. The basis for an IT system is the required functionality, usually organized into groups and/or categories. Tests can be organized along similar lines.
  • Architecture of the system under test. Just about every IT system is built up in layers, modules, protocols, databases, etc. All of these pieces have to be tested individually and in combinations. The line-up of test modules should reflect that.
  • Kind of test. Many kinds of tests, such as functionality, UI, performance, screen layout, security, and more, can be done to even one small part of a system under test. Generally each test module should not do more than one kind of test.
  • Ambition level. I tend to categorize tests in levels of ambition. A low level is a smoke test, just to see if a system can start and do basic functions. The most common tests are of medium ambition level, testing individual functions without combinations. High ambition level tests are "aggressive" tests that are designed to "break" a system under test. Organizing the tests of different ambition levels in different modules makes it easier to develop the tests and, most of all, easier to run them (for example, run the smoke tests first; if successful, run the functional tests; last come the aggressive tests).
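The run order suggested above (smoke first, then functional, then aggressive) can be sketched as a simple gate between ambition levels. The level and module names below are made up for illustration:

```python
def run_by_ambition(levels, run_module):
    """Run test modules level by level; stop at the first level with a failure."""
    executed = []
    for level, modules in levels:
        results = []
        for module in modules:
            executed.append(module)
            results.append(run_module(module))
        if not all(results):
            break  # do not run higher-ambition modules after failures
    return executed

# Hypothetical module line-up, lowest ambition level first:
levels = [
    ("smoke",      ["smoke_basic"]),
    ("functional", ["func_orders", "func_customers"]),
    ("aggressive", ["break_limits"]),
]

# If the smoke test fails, nothing beyond it is executed:
assert run_by_ambition(levels, lambda m: m != "smoke_basic") == ["smoke_basic"]
```

Keeping ambition levels in separate modules is what makes this kind of gating trivial to set up.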

Additional Criteria

  • Stakeholders. These are departments or individuals with a particular interest in some of the tests. One good line-up of tests is along the lines of stakeholders, so that each test module has only one stakeholder to be involved (for input and/or assessment).
  • Complexity of the test. Put particularly complicated tests in separate test modules, so that the other tests can run unaffected.
  • Technical aspects of execution. Some tests might need a complex environment or specific hardware to run, while others can run more easily. Make sure the module line-up reflects this.
  • Planning and control. Overall project planning and progress can impact whether or not enough information is available to develop certain test cases. Keeping such test cases separate from ones that can be developed earlier in the life cycle can allow you to obtain a more smooth progression of test development.
  • Risks involved. A risk analysis can provide great input for test design. When there are high risk areas in a system under test it can make sense to devote specific test modules to them. A good example is a premium calculation in an insurance system. Any bug in a core function like that is not acceptable, so it is worthwhile to plan for a test module for each single aspect of such a calculation.

The way to apply these criteria is to start with the straightforward ones first, one at a time, then review the results using all of the criteria, including the additional ones. Repeat this process a couple of times, preferably with a number of knowledgeable people involved. If you want to use outside consultants, this step is a good candidate; it does not take much time, which helps keep outside consulting costs down.

When the modules are identified, they can be the basis for a Test Delivery Plan, in which the modules selected to be developed are listed with tentative dates for the delivery of their first version (for example, to a stakeholder who will review them).

Here are some examples of what typically can go into separate modules:

  • UI oriented tests, like "does a function key work" or "does listbox xyz contain the right values"
  • Do the individual functions (like transactions in a financial system) work
  • Tests of alternate paths in use cases, like does the system roll back after canceling a transaction
  • Higher level business level end-to-end tests, like: create a new customer, let him do a couple of transactions and see if his end balance is correct
  • Odd tests that are more difficult to execute, for example because they need multiple workstations (e.g., a test that exceeds a limit to see whether a supervisor on another workstation is brought in to approve)
  • Tests of qualities of a system other than functionality, like a load/performance test
  • Tests that involve non-UI actions, like testing individual methods of classes used in the system under test, or messages in a TCP/IP or SS7 protocol
  • Tests with different "ambition levels", like:
    • A simple low ambition smoke test to see if a new build of the system under test works well before running any other modules
    • An aggressive test, designed to break a system under test, typically to be executed after other modules were successful already


However you do it, try to end up with a list of test modules that are well-differentiated from each other and each have a single well-defined scope. The scope is the anchor point for the subsequent development of tests within the test modules.


The Second Holy Grail of Test Design by Hans Buwalda

This article discusses the "second Holy Grail", namely finding the right approach per test module. This step focuses on developing the individual modules. When a good job is done on the module breakdown, each test module should now have a clear scope.

By Hans Buwalda, Chief Technology Officer, LogiGear Corporation


In the article "Key Principles of Test Design" I presented three key principles (the "Holy Grails of Test Design"):

  1. Effective break down of the tests
  2. Right approach per test module
  3. Right level of test specification

This article discusses the "second Holy Grail", namely finding the right approach per test module. In the text of the first Holy Grail article ("The First Holy Grail of Test Design") we saw that a first important step is the breakdown of tests into test modules, a step that can make or break your test design (and subsequent test automation).

Right Approach per Test Module

The next step or "second grail" is developing the individual modules. When a good job is done on the module breakdown, each test module should now have a clear scope. This can then lead to two sets of items for the test modules:

  1. Test requirements
  2. Test cases, related to the test requirements

The test requirements are a set of statements describing as comprehensively as possible what should be tested. The best way I have found to write and read them is to think of the words "test if" in front of them. Examples:

  • Coming directly from a system requirement: (test if) "the password must be a minimum of 6 characters"
  • More aimed at the test, only indirectly coming from system requirements: (test if) "a transfer can be made from Mexican to Chinese currencies"

Making test requirements is part "science" and part "art". It is the "analytical phase" of test development, in which you should actually analyze and understand system requirements, not just copy and paste them. The test requirements should show what you are going to test. We have a more extensive guideline for test requirements, but here are some things to look for:

  • Make cause and effect clear, and mention cause first ("clicking 'Submit' empties all fields")
  • Make condition and effect clear, and mention condition first ("if all fields are populated, ok is enabled")
  • Split complex sentences into small statements
    • It is ok to combine two or more functionalities if this is not adding to complexity (like "ok becomes enabled if both first name and last name are specified")
  • Keep test requirements short. Leave out as many words as you can without losing the essential meaning

After the "analytical" phase of devising test requirements, the next step is the "design" phase of creating the actual test cases. Once the test cases are developed they can be related to the test requirements. Sometimes this relation is one-to-one, but in the majority of cases it will be many-to-many: one test requirement might be tested in more than one test case, and one test case can verify multiple test requirements.
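A hypothetical traceability table makes the many-to-many relation concrete, and also shows a useful side effect: it reveals requirements that no test case covers yet. All identifiers below are invented for the example:

```python
# Hypothetical traceability data: which test cases cover which test requirements.
coverage = {
    "TC-01": ["REQ-password-length", "REQ-login-ok"],
    "TC-02": ["REQ-login-ok"],
    "TC-03": [],  # an exploratory case not tied to any single requirement
}
requirements = ["REQ-password-length", "REQ-login-ok", "REQ-login-lockout"]

# Requirements touched by at least one test case:
covered = {req for reqs in coverage.values() for req in reqs}
# Requirements with no test case yet -- candidates for further test design:
uncovered = [req for req in requirements if req not in covered]

assert uncovered == ["REQ-login-lockout"]
```

Note that "REQ-login-ok" is verified by two test cases while "TC-01" verifies two requirements: the relation runs both ways.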

As in the earlier phases of test development (test module break down and test requirements), the creation of test cases should show added value from the tester. We train both our on-shore and off-shore testers to "use their head before using their hands", meaning they should think about the test cases while developing them. Try to make them smart and aggressive:

  • To get maximum effect from a limited set of test cases
  • To make them aggressive in finding system faults

There are a substantial number of testing techniques available, many of which have been published over the years in books like Testing Computer Software (Cem Kaner, Jack Falk, and Hung Nguyen). The value of these techniques depends on the situation, in our terminology: the scope of your test module. Please make a good study of them, and keep using your own intelligence and creativity. Test development should most of all be an intelligent and creative activity (you have to find issues that developers, who are also intelligent, have overlooked), not just a mechanical one.

From my own experience I have come up with a test design technique that is specifically meant to steer away from too much mechanical testing. I have called it "Soap Opera Testing", since I used the popular format of television "soap operas" as an inspiration. This technique can come in handy if: (1) the business processes in the system under test are complex, and (2) end-users are involved, or can be involved if needed. The idea is to write test cases as if they were an "episode" in a "series", as a way to make them creative and aggressive. For more information please see my article "Soap Opera Testing" which was published in Better Software magazine in February 2004 and is also available on the LogiGear web site in the downloads section.


Regardless of a specific technique, I feel that a combination of "analytical" test requirements that focus on completeness and "creative" test cases that focus on aggressiveness can lead to an optimal result:

  • Completeness in testing functionalities and combinations of functionalities
  • Aggressiveness in finding hard to find bugs
  • Lean design that leads to efficient and maintainable automation

For the automation the use of appropriate "actions" is significant too. This is a topic for the next article on the "third grail" of test design.

Most of all make sure that the scope of the test module is clear and that all test requirements and test cases adhere to the scope. Avoid "sneaky checks", like testing the caption of an OK button in a test module that focuses on a business aspect like an insurance policy premium calculation. Such checks should really go into another test module.


The Third Holy Grail of Test Design by Hans Buwalda

This is the last in a series of articles that outline how to do effective and efficient test design. This last crucial step is to write down the test cases as clearly and efficiently as possible.

By Hans Buwalda, Chief Technology Officer, LogiGear Corporation


This is the last in a series of four articles that started with "Key Principles of Test Design". In these four articles I present what I view to be three key principles to make test design successful (the "Holy Grails of Test Design"):

  1. Effective break down of the tests
  2. Right approach per test module
  3. Right level of test specification

If you followed the instructions of the previous articles you should now have a list of well-defined and differentiated "test modules". For each test module you should have "test requirements", and you should know what the test cases are going to be. Now a last crucial step is to write down the test cases as clearly and efficiently as possible.

Writing Test Cases at the Right Level of Abstraction

The challenge at this point, the "Third Holy Grail of Test Design", is to write the test cases at the right level of abstraction:

  • Detailed enough to clearly show the intention and logic of the test case: what is the input, what is verified, etc
  • At the same time hiding as many details as possible that are not relevant for the test

This principle is most clearly visible when you use Action Based Testing™ (ABT) or a similar keyword-driven approach. In ABT the tests are written as a sequence of actions with arguments. The actions are the basis of the automation. This allows you to "hide" those steps that are not significant for a test in the implementation of the action.

However, even for manual tests it can make sense to "hide" detailed steps that are not relevant, especially when such details are repeated many times. A common example is logging into the system. Let us say that the manual instruction is:

Enter a user name in the field "User Name", and a password in the field "Password". Then click on the button called "Login".

It is not uncommon to find an instruction like this repeated many times in a set of manual test instructions. Some disadvantages are:

  • Instructions are repeated over and over again, which can be a lot of work.
  • The test cases are hard to read: because of the needless detail it is difficult to see the forest for the trees.
  • If there are changes in the logon screen of the system under test all the test cases have to be updated (or become outdated).
  • In this example the values that actually are interesting are the user name and password. However, they are not specified, only mentioned implicitly. This means that during test execution the tester has to come up with the values over and over again.

In ABT the same logon can be written as a single test line:

           user    password
  logon    hans    logigear

The values are now explicitly specified, while the actual steps needed to log on are not visible. They are "hidden" in the interpretation of the action "logon". Technically this is a simple step, similar to defining subroutines in a programming language. The important point though is the test design objective of:

  • Showing those details that are relevant for a test, like input values
  • Hiding anything else as much as possible
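The hiding of the logon steps can be pictured as an ordinary subroutine. The field and function names below are illustrative, not TestArchitect's actual interface:

```python
# Low-level keywords, recorded here in a log so the effect is visible:
log = []

def enter(field, value):
    log.append(("enter", field, value))

def click(button):
    log.append(("click", button))

def logon(user, password):
    """High-level action: the test line only supplies user and password;
    the dialog steps are hidden inside this implementation."""
    enter("User Name", user)
    enter("Password", password)
    click("Login")

# The test line "logon  hans  logigear" maps to one call:
logon("hans", "logigear")
assert log == [
    ("enter", "User Name", "hans"),
    ("enter", "Password", "logigear"),
    ("click", "Login"),
]
```

If the logon screen changes, only the body of `logon` needs updating; the test lines that use it stay untouched.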

As another example consider these lines. They click a node in a tree, and check whether an item called "parabola" appears in a list:

                            window    tree        tree item path
  click tree item           main      pictures    My Projects/Main/Picture 1
  wait for window           main

                            window    list                item
  check list item exists    main      picture elements    parabola

These fragments are rich in details, explicitly telling us which item in a tree to click, to wait for a window to respond, and to check for an element in a list. Whether or not this is appropriate depends on the scope of the test:

  • Was the goal to verify the workings of the "pictures" tree and the "picture elements" list? In that case it is good to show the details of this interaction.
  • If the goal was just to see if "parabola" appears as an element in the "picture 1" picture then the details should be avoided.

In the project where this fragment comes from, the goal was just to verify the contents of "picture 1", and therefore the fragment was too detailed. This was more so because the similar fragments (with other values) appeared in many dozens of places throughout the test set. In such a case it is much better to write something like this:

                           project    picture      element
  check picture element    Main       Picture 1    parabola

With this notation the purpose of the check is clearer, the number of lines is reduced, and there will be less maintenance when the system under test undergoes changes.

Another category where it is easy to save on details is action arguments. In many cases arguments like a "zip code" or "phone number" are not relevant for a test. They are just there to complete underlying dialogs. If that is the case, leave the arguments out and make sure the action implementations use suitable default values for them.
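The idea of defaults for irrelevant arguments maps directly onto default parameters. The action, field names, and values below are hypothetical:

```python
def enter_customer(name, zip_code="00000", phone="000-000-0000"):
    """Action implementation supplying defaults for arguments the test omits.

    The defaults exist only to complete the underlying dialog; tests that do
    not care about them simply leave them out of the test line.
    """
    return {"name": name, "zip": zip_code, "phone": phone}

# A test that only cares about the name leaves the other arguments out:
record = enter_customer("J. Smith")
assert record["zip"] == "00000"

# A test that targets zip-code handling supplies it explicitly:
assert enter_customer("J. Smith", zip_code="94065")["zip"] == "94065"
```

The test lines stay short, and only the tests that actually exercise a field need to mention it.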


Exactly what to show and to hide is not always an easy decision, which is why I have named this the third "holy grail". A crucial note here is that the decision is a test design decision, not an engineering one! The purpose of hiding or showing details is not necessarily to make the test as short as possible, like you would when writing a program with subroutines. What you need to do most of all is write clear test cases that:

  • Show explicitly what is relevant for the test, allowing the reader to understand the test based solely on the test lines, without having to look into the details of an action
  • Hide those steps and arguments that are not relevant in the scope of the test, to avoid unneeded maintenance, and to make the test easier to read

These types of test design decisions need to come from the testers, not from the automation engineers. Automation engineers can, however, play a useful role by pointing out to testers what is possible, but ultimately the test design decisions belong to the testers.


Key Success Factors for Keyword Driven Testing by Hans Buwalda

Keyword driven testing is a software testing technique that separates much of the programming work of test automation from the actual test design. This allows tests to be developed earlier and makes the tests easier to maintain. Some key concepts in keyword driven testing include:

  • Keywords, which are typically base level and describe generalized UI operations such as "click", "enter", "select"
  • Business templates which are typically high level such as "login", "enter transaction"
  • Action Words, or "actions" for short, which can be both base level and high level, and in their most general form allow earlier defined keywords to be used to define higher level action words

Keyword driven testing is a very powerful tool helping organizations to do more automated testing earlier in the testing process and making it easier to maintain tests over time. As with any complex undertaking, there are "success factors" that can determine whether or not a testing effort will be successful. This paper will outline key success factors for keyword driven testing including base requirements, the vision for automation, success factors for automation, and how to measure success.

Base Requirements

There are numerous requirements that I consider to be "base requirements" for success with keyword driven testing. These include:

  • Test development and automation must be fully separated - It is very important to separate test development from test automation. The two disciplines require very different skills. Fundamentally, testers are not and should not be programmers. Testers must be adept at defining test cases independent of the underlying technology to implement them. Individuals who are skilled technically, the "automation people" (automation engineers), will implement the action words and then test them.
  • Test cases must have a clear and differentiated scope - It is important that test cases have a clearly differentiated scope and that they not deviate from that scope.
  • The tests must be written at the right level of abstraction - Tests must be written at the right level of abstraction such as the higher business level, lower user interface level, or both. It is also important that test tools provide this level of flexibility.

Vision for Automation

It is also important to have a clear vision for automation. Such a "vision" should include things such as:

  • Having a good methodology - It is important to have a good integrated methodology for testing and automation that places testers in the driver's seat. It is also important to employ the best technology that supports the methodology, maximizes flexibility, minimizes technical efforts, and maximizes maintainability.
  • Have the right tool - Any tool that is employed should be specifically designed for keyword based testing. It should be flexible enough to allow for the right mix of high and low level testing. It should allow the testers to quickly build keyword tests, without difficulty. It should also not be overly complicated for automation engineers to use when implementing the automation.
  • Succeed in the three "success factors for automation" - There are three critical success factors for automation that the vision should account for. They are:
    • Test design
    • Automation solution
    • Organization

Success Factors for Automation

Test Design

Test design is more important than the automation technology. Design is the most underestimated part of testing. It is my belief that test design, not automation or a tool, is the single most important factor for automation success. To understand more about test design, see the previous articles in this series.

Comprehensive Automation Architecture

An automation architecture should emphasize methodology over technology, as well as manageability and maintainability. The methodology should control and drive the technology, so that the technology supports the methodology and serves manageability and maintainability.

Organization and management

Organization and management are also very important. Success is highly dependent on how well you organize the process including:

  • Management of the test process
  • Management of the tests
  • Efficient and effective involvement of stakeholders, users, auditors

A plan of approach should be written for test development and automation. In it should be items such as:

  • Scope, assumptions, risks
  • Methods, best practices, tools, technologies, architecture
  • Stakeholders, including roles and processes for input and approvals, and more

The "right" team must also be assembled. This team should include:

  • Test management, which is responsible for managing the test process.
  • Test development, which is responsible for the production of tests. Test development should include test leads, test developers, end users, subject matter experts, and business analysts.
  • Automation engineering, which is responsible for creating the automation scheme for automatic execution. Members of this team include a lead engineer as well as one or more automation support engineers.
  • Support functions, providing methods, techniques, know how, training, tools, and environments.

For the team there should be a clear division of tasks and responsibilities, as well as well-defined processes for decision making and communication.

Some Tips to Get Stable Automation

  • Make the system under test automation friendly. While developers are not always motivated to do that, it pays off. In particular ask development to add specific property values to the GUI interface controls for automated identification like "accessible name" in .Net and Java, or "id" in Web controls
  • Pay attention to timing matters. In particular use "active timing", based on the system under test, not fixed amounts of "sleep".
  • Test your automation. Develop a separate test set to verify that the actions work. Make separate people responsible for the automation.
  • Use automation to identify differences between versions of the system under test

How to Measure Success

With any major undertaking, it is important to define and measure "success". There are two important areas of measurement for success - progress and quality.


You should measure test development against the test development plan. If goals are not reached, act quickly to find the problems. Is the subject matter clear? Are stakeholders providing enough input? Is it clear what to test (overall, per module)? Is the team right (enough people, the right mix of skills)?

You should measure automation and look at things such as implemented keywords (actions) and interface definitions (defined interface dialogs, pages, etc).

You should measure test execution, looking at things such as how many modules are executed and how many execute correctly (without errors).


Some of the key quality metrics include:

  • Coverage of system and requirements
  • Assessments by peers, test leads, and by stake holders (recommended)
  • Effectiveness
    • Are you finding bugs?
    • Are you missing bugs?
    • Can you find known bugs (or seeded bugs)?
    • After the system is released, what bugs still come up? You should consider calculating the "Defect Detection Percentage" (Dorothy Graham, Mark Fewster)
  • Mine your bug base for additional insights
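The Defect Detection Percentage mentioned above has a simple form: the share of all known defects that testing caught before release. A small sketch, with invented numbers:

```python
def defect_detection_percentage(found_in_test, found_after_release):
    """DDP as described by Graham and Fewster: defects found by testing
    as a percentage of all known defects (found before plus after release)."""
    total = found_in_test + found_after_release
    return 100.0 * found_in_test / total if total else 0.0

# Example: 90 bugs found in testing, 10 escaped to production -> DDP of 90%.
assert defect_detection_percentage(90, 10) == 90.0
```

The metric only becomes meaningful some time after release, once escaped defects have had a chance to surface.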


It is important to understand that keywords are not magic, but they can serve you well. What is more important is to take the effort seriously and "do it right". Doing it right means that test design is essential, both global test design and the design of individual test cases. Automation should be done but it should not dominate the process. Automation should flow from the overall strategy, methodology, and architecture. It is also very important to pay attention to organization - the process, team, and project environment.

Following the success factors outlined in this paper can lead to a successful implementation of keyword driven testing.


Key Principles of Test Design by Hans Buwalda

Test design is the single biggest contributor to success in software testing. Not only can good test design result in good coverage, it is also a major contributor to efficiency. The principle of test design should be "lean and mean." The tests should be of a manageable size, and at the same time complete and aggressive enough to find bugs before a system or system update is released.

Test design is also a major factor for success in test automation. This is not that intuitive. Like many others, I initially also thought that successful automation is an issue of good programming or even "buying the right tool". That test design turns out to be a main driver for automation success is something that I had to learn over the years, often the hard way.

What I have found is that there are three main goals that need to be achieved in test design. I like to characterize them as the "Three Holy Grails of Test Design", a metaphor based on the stories of King Arthur and the Round Table. Each of the three goals is hard to reach, just like it was hard for the knights of King Arthur to find the Holy Grail. This article will introduce the three "grails" to look for in test design. In subsequent articles in this article series I go into more detail about each of the goals.

The terminology in this article and the three follow-up articles is based on Action Based Testing (ABT), LogiGear's method for testing and test automation. You can read more about the ABT methodology on the LogiGear web site. In ABT test cases are organized into spreadsheets which are called "test modules". Within the test modules the tests are described as sequences of "test lines", each starting in the A column with an "action", while the other columns contain arguments. The automation in ABT does not focus on automating test cases, but on automating individual actions, which can be re-used as often as necessary.
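The test-line format described above (an action in the first column, arguments in the rest) can be pictured as a small dispatch loop. The action names, rows, and values below are illustrative only, not TestArchitect's actual format:

```python
# Record of what each action implementation received, so the effect is visible:
results = []

def check_premium(policy, expected):
    """Stand-in implementation of a high-level 'check premium' action."""
    results.append((policy, expected))

# Action names map to their (re-usable) implementations:
actions = {"check premium": check_premium}

# Each row is one test line: action first, then its arguments.
test_lines = [
    ["check premium", "POL-1001", "125.00"],
    ["check premium", "POL-1002", "89.50"],
]

for action, *args in test_lines:
    actions[action](*args)

assert results == [("POL-1001", "125.00"), ("POL-1002", "89.50")]
```

The point of the model is visible even at this scale: automating the one action makes every test line that uses it automated, no matter how many there are.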

The Three Goals for Test Design

The three most important goals for test design are:

  1. Effective breakdown of the tests

    The first step is to break down the tests into manageable pieces, which in ABT we call "test modules". At this point in the process we are not yet describing test cases; we simply identify the "chapters" into which test cases will fall. A breakdown is good if each of the resulting test modules has a clearly defined and well-focused scope, which is differentiated from the other modules. The scope of a test module subsequently determines what its test cases should look like.

  2. Right approach per test module

    Once the break down is done each individual test module becomes a mini-project. Based on the scope of a test module we need to determine what approach to take to develop the test module. By approach I mean the choice of testing techniques used to build the test cases (like boundary analysis, decision tables, etc), and who should get involved to create and/or assess the tests. For example, a test module aimed at testing the premium calculation of insurance policies might need the involvement of an actuarial department.

  3. Right level of test specification

    This third goal is where you can win or lose most of the maintainability of automated tests. When creating a test case try to specify those, and only those, high-level details that are relevant for the test. For example, from the end-user perspective "login" or "change customer phone number" is one action; it is not necessary to specify any low-level details such as clicks and inputs. These low-level details should be "hidden" at this time in separate, reusable automation functions common to all tests. This makes a test more concise and readable, but most of all it helps maintain the test since low-level details left out will not have to be changed one-by-one in every single test if the underlying system undergoes changes. The low-level details can then be re-specified (or have their automation revised) only once and reused many times in all tests.
    In ABT this third principle is visible in the "level" of the actions to be used in a test module. For example, in an insurance company database, we would write tests using only "high-level" actions like "create policy" and "check premium", while in a test of a dialog you could use a "low level" action like "click" to see if you can click the OK button.


Regardless of the method you choose, simply spending some time thinking about good test design before writing the first test case will have a very high payback down the line, both in the quality and the efficiency of the tests.


Capitalizing Testware as an Asset by Hans Buwalda

Companies generally consider the software they own, whether it is created in-house or acquired, as an asset (something that could appear on the balance sheet). The production of software impacts the profit and loss accounts for the year it is produced: The resources used to produce the software result in costs; methods, tools or practices that reduce those costs are considered profitable.

Software testing is generally regarded as an activity, not a product: the test team tests the products of the development team. In that sense testing is seen in terms of costs and savings: The activity costs money; finding bugs early saves money. Test automation can reduce the cost of the testing itself.

Managing the testing effort in financial terms of profit and loss (costs and savings) is a good thing, particularly if it leads managers to make conscious decisions about the amount of testing that should be performed: More testing costs more, and less testing increases risks, which are potential (often much higher) costs down the line.

Very few companies think of software tests as products, or in financial terms, company assets. Test teams are not seen as "producing" anything. This is unfortunate, since it underestimates, particularly in financial terms, the value of good "testware".

The underlying reasons for not treating testware as a long-term asset are hardly surprising:

  • In manual testing, the bulk of the hours are spent executing tests against the system, even if test cases are documented in a test plan.
  • In most test automation projects, the test scripts are not well architected and too sensitive to system changes.

If an organization begins to consider its tests as assets, then it can significantly enhance the way that it approaches testing. Consider the following:

  • Test cases for your application have a definite value, and just like any other capital asset, can depreciate over time as the underlying application changes.
  • Well-written test cases, along with thoroughly documented requirements and specifications, are one of the few ways to consolidate the 'intellectual capital' of your team members. With today's global teams, and the increasing challenge of retaining engineers, especially overseas, being able to retain knowledge as people come and go is critical to the success of your testing (and the entire product development) effort.
  • Well-automated tests can be re-used over and over again, thus forming assets which produce profits for the company.

So how can you apply this idea at your company?

Creating automated tests is the best way I've found to maximize the output of your investment in software testing. Not only does test automation reduce your costs (a positive impact to your P&L), but well-designed test automation is also a valuable asset (a positive impact on the balance sheet of the company) that can be used across many different versions of your product, even as you switch between platforms!

  • As much as possible, define your tests at the 'business process' level, leaving out unneeded details of the application under test, like its UI build-up or screen flow. Business processes change less frequently than the systems that are supporting them, so your tests will require less maintenance (i.e. depreciate less quickly).
  • The tests should be executable either automatically or manually, so that they still provide value even when the system has changed and some updates to the automation are required. Keyword-driven testing is a great example of how tests can be defined in a format that can be executed either way.
  • Remember that test automation tools are not silver bullets. To maximize the output of your investment in test automation, you must combine good methodology and technology. A poorly planned test automation effort can quickly become a burden on your organization that provides little value.
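The second bullet's idea can be sketched minimally: each test row is a keyword plus arguments, and the runner either dispatches to an automation function or emits an instruction for a human to execute. All keywords, data and function names here are made-up examples, not an actual keyword tool's format:

```python
# Sketch of a keyword-driven test that can run automatically or manually.
test_module = [
    ("create policy", {"customer": "C-1001", "product": "auto"}),
    ("check premium", {"expected": "120.00"}),
]

# Keywords with automation available (illustrative stub implementations):
automated = {
    "create policy": lambda args: f"created for {args['customer']}",
}

def run(rows, manual=False):
    results = []
    for action, args in rows:
        if not manual and action in automated:
            results.append(automated[action](args))   # automated execution
        else:
            results.append(f"MANUAL: {action} {args}")  # human executes step
    return results

print(run(test_module))
```

Because the rows themselves are execution-neutral, a keyword whose automation broke after a system change can temporarily fall back to manual execution without rewriting the test.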


Bonus Bugs by Hans Buwalda

Hans Buwalda discusses “bonus bugs,” bugs caused by fixes or code changes and how to avoid them from the point of view of the developer, tester and manager.

Bonus bugs are the major rationale for regression testing in general and test automation in particular, since test automation is the best way to quickly retest an entire application after each round of code changes.

Since this is probably a 'bonus' you want to avoid, how do we prevent the bonus bugs from occurring, and how do we detect them when they have been introduced? I will give some notes here from the perspective of the developer, the tester and the manager respectively.

Let's first talk about the developer. A developer can do quite a lot to reduce the chances of bonus bugs. Today's systems are becoming more and more complex, and this complexity only increases over time as changes to the system are made. Any change can easily trigger a problem somewhere else, thus producing a bonus bug.

There is a lot written about commenting and documenting code, which I will not go into here, but whatever standard you adhere to (or are told to adhere to), make sure that somebody can easily "inherit" your code. It should take minimal energy for somebody to "decipher" and maintain the code you have written. Code should be written in small blocks, each of which starts with a meaningful comment. For example, if there is something you want the next person to know about the code (e.g. some technical pitfall that you had to work around), state it explicitly in the comments.

Another good policy is to have code changes reviewed and approved by either a peer programmer or, even better, a supervising "architect" who understands how the system is built up and what the consequences of system changes could be.

From the point of view of the tester, there are two main items to worry about: test design and level of automation.

Test design is one of the most underestimated topics in IT. Most tests that I encounter in companies and other organizations are "lame"; they simply follow the system requirements one by one and don't even attempt to combine different parts of the system's functionality with each other in creative ways that could reveal unexpected problems, like bonus bugs. Even though requirement-based tests are useful, they have a low "ambition level", and it can pay off to allocate time and resources to make more aggressive tests.
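One simple way to move beyond requirement-by-requirement tests is to combine parameters across features. A brute-force enumeration (which pairwise tools would then reduce to a smaller covering set) can be sketched as follows; the parameter names and values are invented for illustration:

```python
# Sketch of "aggressive" test design: combine parameters across features
# instead of testing each requirement in isolation.
from itertools import product

policy_types = ["auto", "home"]
payment = ["monthly", "annual"]
channel = ["web", "agent"]

# Every combination of the three parameters becomes a candidate test case:
combinations = list(product(policy_types, payment, channel))
print(len(combinations))  # 2 * 2 * 2 = 8 combined test cases
```

Even this tiny example triples the coverage of feature interactions compared to testing each parameter on its own, which is where bonus bugs tend to hide.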

A high level of test automation will greatly enhance your capability to catch the bonus bugs before they reach the release. To get to such a high level, simply buying a test tool will not be enough. A well thought-out method of test automation, such as keyword-driven testing, is essential, combined with training and coaching by experienced test automation experts.

Finally, a few words from the perspective of the manager. Here the recommendation is quite simple: determine what bonus bugs can cost, and what it is worth to prevent them. This is a business estimate and decision: having bonus bugs can cost money; efforts to prevent them cost money too. Effects of bonus bugs (or any other kind of bugs) are typically loss of time before or after system release, and/or decreased appreciation of you and your company by end-users. Preventing bonus bugs takes extra time and money to follow policies and procedures for development and testing, which can include code reviews and setting up a high level of test automation.

By understanding how and why bonus bugs get introduced into applications, we can both prevent them from being introduced, and find them when they are. This takes a combined effort of the developers, testers, and managers, and it's a very important step in ensuring that your end-product satisfies your customers and other stakeholders.


Business Test Policies by Hans Buwalda

In a previous newsletter I discussed Test Governance, the topic of organizing and managing testing activities in an organization. In this article, I want to discuss something called "business test policies." These are statements that serve as basis for the Test Governance, and describe how testing is positioned in the overall company strategy, environment and culture.

Business test policies give the corporate perspective on testing (and test automation), using explicit policy statements. An example of a business test policy is "Performance testing is a responsibility of the system development groups", or "tests and their automation are regarded as company assets and need to be managed accordingly".

These policies are not something that many companies or institutions will have developed, but it makes sense to spend some time here, since software quality is critical for business and testing is hard to organize and manage. Considering that in a typical IT organization testing makes up about 30% of costs, it deserves the attention.

Let me give you the bad news first: to be effective, business test policies need to come from upper level management. Directions on testing, like how much it may cost, are business decisions: too much testing means too much expenditure, while too little testing introduces risks that threaten the company's revenue. This means test managers need to engage in deep discussions with upper level management about the testing objectives, which can be intimidating. Due to their experience with these sorts of discussions, external consultants can be quite helpful in these situations.

The good news: Developing good policies shouldn't take too long. Most companies I have been in already have some sense of the position of testing. For example, in a recent discussion with a major technology company about a consumer product, they told me the product must not contain any bugs that are visible to the user; another company dealing with specialized geological data was generally tolerant of bugs in dialogs and controls, as long as the underlying data was flawless.

Policy statements need to be meaningful, not just "lip service". A statement along the lines of "testing is good, bugs are bad" is not enough. The best way to think about it is in terms of money: every statement has cost consequences, so is it important enough to justify the cost?

Thinking about testing in business terms is challenging. Testers should not try to, or be expected to, set their own goals "in a bubble" (i.e. "we should test, because it is good to test"). Unfortunately this is the case more often than not, and it leads to a lack of commitment from the rest of the organization to the testing effort. Testing costs time and money, so there should be a business reason, coming from a business manager, to test. A business manager is responsible for costs and profits: he or she is accountable for money spent on testing, but also responsible if the company loses money because a system was released without enough testing. For a tester/test manager, life is much more comfortable if there is a clear assignment from the business, i.e. what to test, and what the budget is.

This leads to the hardest part in getting business test policies: If you as a test manager want to establish them effectively, it is best to think in business terms and address business issues, without even considering testing at first. I call this a "U-turn": you step out of your testing world, engage in a business discussion, and then translate the business considerations back to testing and test automation considerations.

Here are some of the concerns that an organization can formulate business test policies for:

  • What is the significance of testing and how much can be spent on it?
  • How does testing connect to critical success factors of the company?
  • Do we have problems, and if yes, what are their causes?
  • When should testing be done in the system development life cycle?
  • Who should be involved in testing (test development, assessment, reporting) and who is responsible for testing?
  • What testing expertise is needed, and how will it be provided?
  • Is testing centralized or decentralized in the organization?
  • Are there methods and tools that should be used?
  • What degree of test automation should be used?
  • What (if any) degree of outsourcing of (1) testing, (2) development of tests and/or (3) automation should be used?

Most importantly, keep any exercise on business test policies practical. That way, the policies can contribute to an effective and efficient test process.


Test Governance

Hans Buwalda, LogiGear, 12/29/2005

Software testing is commonly perceived as a chore: products that are made by other developers have to be verified. Chores are something that you don't want to spend too much attention or money on.

With our Action Based Testing method we have shifted the focus from "testing" to "test development" (with automated execution). This is successful because creating tests becomes a more systematic activity that is easier to plan and control, resulting in tangible and valuable products.

Another shift that I would like you to think about is in focus: instead of regarding software testing as a derivative of software development, give it a separate, central focus and manage it as a key asset of the company. To summarize this thought I use the term "Test Governance".

There is a good case to be made to do so:

  • Testing is a large part of the efforts in IT, typically 30% of all efforts
  • Good testing is very hard to do. It needs skilled staff and there is a lot to learn
  • Testing is often on the critical path of system development and maintenance
  • Test automation is a potential solution, but in itself is very hard to do successfully
  • Testing needs to be organized well, including who is responsible for which tests and how to report results

When discussing Test Governance, a word of warning is in order. It is important to be practical about testing. Thinking about Test Governance should not lead to introducing all kinds of bureaucracy that nobody cares about. Be careful with impractical standards and heavy life cycles that miss their purpose.

In my view three elements should be part of Test Governance:

  • What are the “business test policies” around testing
  • How should testing be organized in projects
  • How should testing be organized across projects

Business test policies are statements that describe, in broad terms, an organization's point of view on testing. I will discuss these in a future newsletter article. For now, it is sufficient to know that they should describe the importance of testing to the business of the organization and how it should be organized.

Testing activities are usually part of system development projects. Sometimes they are also organized in separate projects, usually to introduce test automation. Most books and articles on testing are about the activities within projects, and rightly so. Consider creating a standard plan of approach for testing projects that deals with questions like responsibilities, communication structures, resources and skills, and obviously the planning, budget and risks.

However, projects tend to have a very strong "solution focus": what is it that we need to achieve, and how do we get there within given budgets and timelines? Projects are not a good environment to learn and improve best practices. Therefore I recommend considering additional structures that have an "improvement focus". This could be something traditional like a central test support department, or a "lighter" solution, like one or more coordinating committees with members from various departments.

In addition to formalized structures, consider "soft" ones, where staff members meet and discuss matters of know-how and experience. For example, one could introduce "Special Interest Groups" (SIGs) that have regular informal meetings, typically in the off-hours. Members of a SIG share a common interest, for example "test design", "test automation" or "test management", and an evening is typically structured around a presentation and discussion. SIGs can also run sites on the intranet. All of these activities provide an inexpensive and light way to improve competence, and they also help people "find" each other for advice and discussion of matters in projects.




Today’s Regression Automation Challenge for Continuous Delivery [STP Exclusive]


This webinar explains how to define goals for automated regression for CD, and how to apply continuous delivery goals to your testing plan. We discuss the right way to get started with automation, the methods and tools to help you focus your automation, and how to optimize your existing pipeline.


6 Steps to Performance Testing Like a Pro


Performance testing can help you identify your website or application’s bottlenecks. Follow these steps to ensure that it performs well under pressure.


Not your Mother’s Game Testing: Games Testing Strategy for the 21st century


In this webinar, long-time games-world tech expert Stephen Cobb describes the new reality of game testing, the skills involved and where game testing is going.


Application Performance Across The Software Development Lifecycle


Two trends are driving major changes in the way performance testing is done in software development teams today: an increase in the pace of development and the requirement for multi-screen applications. With these trends in mind, teams can no longer push off performance testing tasks until the last minute before a release.


Agile Support On-Demand - A Cloud-Like Approach to Testing Services


A team might be done with work items in a sprint, but it’s often the case that development and automation of functional tests aren’t finished. To get automated testing “done” in agile sprints, handing over excess workload for test development and automation to a service group-on-demand is an efficient, viable option. This webinar outlines how you can implement a process to relieve teams and keep automated testing in sync with development by employing “Outsourcing 2.0”.


Automating Testing in Real-Time Environments


Agile application delivery requires test automation, and also the ready applications and infrastructure to efficiently execute large-scale testing. Learn how using on-demand virtual environments enables you to rapidly scale testing and remove the constraints that commonly hold back testing cycles, resulting in both faster testing and increased test coverage.

According to Forrester Research, nearly 50% of Agile teams can't automate more than 29% of their tests. Furthermore, recent voke research indicates that 63% of organizations experience development delays and 68% of organizations experience QA delays due to waiting for an environment.

Presenters from LogiGear and Skytap here outline how organizations can automate more tests, shorten market release cycles, and lower the cost of development per release by combining test automation with on-demand production environments.


Scalability of Tests - A Matrix


LogiGear Chief Technology Officer Hans Buwalda authored this article in TechWell, discussing the scalability of unit, functional, and exploratory tests. Since many automation tools focus on functional testing, Hans proposes options to make this type of testing easier to manage.


Test Automation: Garbage-in = Garbage-out


In this video Hans Buwalda outlines how to design and organize tests for efficient automation, and how the leading test methods, Action Based Testing (ABT) and behavior-driven development (BDD), enable good test design.


Successful Testing by Design - Hans Buwalda


In this webcast Hans Buwalda examines the importance of test design for maintainable automation and how Action Based Testing (ABT) facilitates successful test design.


Automate Testing within the Same Sprint


Automating tests in the same dev sprint can be a game changer. This webcast outlines how it can be done following the same processes the TestArchitect software development team uses.


Halliburton's Last Mile to Continuous Integration


Cheronda Bright of Halliburton shares how she leveraged LogiGear's expertise to integrate TFS, MTM and TestArchitect to allow testing to keep up with rapid development cycles.


Automated Testing with Keywords


Like Agile, there can be a lot of variation in how keyword testing is applied. In this webcast, Hans Buwalda, the pioneer of the keyword method, presents how to make automation with keywords effective.


Michael Hackett discusses how to avoid technical debt


Cut a little testing here and a little there and before you know it, you have a big pile of technical debt. In this webcast, Michael Hackett offers some tips on how to avoid a nightmare testing situation.




How does TestArchitect fit into Continuous Delivery?

In this webinar, Deliveron's John Weland and Mike Douglas discuss how to leverage TestArchitect for building an effective software delivery pipeline.


TestArchitect Oracle E-Business Suite Demo

TestArchitect, powered by the Action-Based Testing method, is a functional test automation tool for Oracle applications such as Oracle E-Business Suite, JD Edwards and Oracle Database.


Test Design Essentials for Great Test Automation

In this joint webinar, Hans Buwalda of LogiGear and Titus Fortner of Sauce Labs discuss best practices for creating well-organized, easy-to-read tests that can be automated in an efficient and maintainable way.


What is DevOps?

Part 1 of the DevOps and Continuous Testing Series. This video gives an overview of DevOps, with regard to the roles and responsibilities of Development, Operations and IT teams.


Understanding the DevOps Practices

Part 2 of the DevOps and Continuous Testing Series. This part defines key terms that test teams must know for DevOps and Continuous Testing, such as Continuous Integration, Continuous Monitoring and Continuous Delivery.


The Ops Side of DevOps

Part 3 of the DevOps and Continuous Testing Series. In this video, Michael Hackett discusses the role of Operations and IT as it relates to test teams and Continuous Testing.


Testing Strategy in Continuous Testing

Part 4 of the DevOps and Continuous Testing Series. In this video, Michael Hackett discusses test strategy for Continuous Testing in DevOps as well as in Agile, and how it relates to Continuous Delivery.


Test Automation in DevOps

Part 5 of the DevOps and Continuous Testing Series. In this video, Michael Hackett discusses modern test automation and the role of traditional quality assurance as it relates to DevOps.


An Interview with Anki's Jane Fraser

LogiGear has helped Anki test successfully, and in this interview Jane Fraser discusses how she uses LogiGear for her testing service needs, tracing the history of her work with LogiGear from her tenure at Electronic Arts to her current position as Test Director at Anki.


Action Based Testing in Agile with Hans Buwalda

Hans shares how using action-based testing practices like modularization and keywords can make your tests easier to create, automate and maintain, as well as simpler to understand.


Not your Mother’s Game Testing. Games Testing Strategy for the 21st century

In this webinar, long-time games-world tech expert Stephen Cobb describes the new reality of game testing, the skills involved and where game testing is going.


Hung Nguyen Interview by NetViet TV

LogiGear's CEO Hung Nguyen was recently interviewed by NETVIET TV. He shares how he established software testing as a critical part of software development, and how he grew his business.


How to get Automated Testing “Done”

Hans discusses how to apply better test design to drive better automation, a number of technical strategies, what developers and product owners can do to help, and how to handle the testing and automation work that is left after a sprint has finished.


Application Performance Across The Software Development Lifecycle

Two trends are driving major changes in the way performance testing is done in software development teams today: an increase in the pace of development and the requirement for multi-screen applications. With these trends in mind, teams can no longer push off performance testing tasks until the last minute before a release.


Paul Holland on Rapid Software Testing, Part 1

In this video interview, testing consultant Paul Holland discusses rapid software testing with Hung Nguyen.


Harry Robinson Talks Training


At VISTACON 2011, Harry sat down with LogiGear Sr. VP Michael Hackett to discuss various training methodologies.


Michael Hackett: Agile Automation


Agile Automation
Michael Hackett, Senior Vice President, LogiGear Corporation


Views from Around the World


Michael Hackett, LogiGear Senior VP, asks conference participants, "What is the most important issue to resolve in global software engineering?"


VISTACON 2010 Keynote - The Future of Testing by BJ Rollison


BJ Rollison, Test Architect at Microsoft - VISTACON 2010 Keynote


New Roles for Traditional Testers in Agile – Part 1/4


MICHAEL HACKETT - Certified ScrumMaster
Michael shares his thoughts on "A Primer - New Roles for Traditional Testers in Agile"


New Roles for Traditional Testers in Agile – Part 1/4 (cont.)


MICHAEL HACKETT - Certified ScrumMaster
Michael shares his thoughts on "A Primer - New Roles for Traditional Testers in Agile"


New Roles for Traditional Testers in Agile – Part 2/4


MICHAEL HACKETT - Certified ScrumMaster
Michael shares his thoughts on "The Common Problems and Misconception with Extreme Programming"


New Roles for Traditional Testers in Agile – Part 2/4 (cont.)


MICHAEL HACKETT - Certified ScrumMaster
Michael shares his thoughts on "The Common Problems and Misconception with Extreme Programming"


New Roles for Traditional Testers in Agile – Part 3/4


MICHAEL HACKETT - Certified ScrumMaster
Michael shares his thoughts on "The Common Problems and Misconception with Extreme Programming"


New Roles for Traditional Testers in Agile – Part 4/4

MICHAEL HACKETT - Certified ScrumMaster
Michael shares his thoughts on "The Common Problems and Misconception with Extreme Programming"

