WHO WE ARE
Founded in 1994 by top thought leaders in the software testing industry, LogiGear has completed software testing and development projects for prominent companies across a broad range of industries and technologies.
LogiGear provides leading-edge software testing technologies and expertise, along with software development services that enable our customers to accelerate business growth while having confidence in the software they deliver.
LogiGear is headquartered in the heart of Silicon Valley, with the majority of the software testing and software development staff located in Ho Chi Minh City and Da Nang, Vietnam. We are among the largest employers of software testing and development professionals in Vietnam, and our close partnerships with universities throughout the country allow us to attract and recruit top software engineering talent.
LogiGear continues to grow as companies realize the benefits of outsourcing their software testing and development. We have been listed among the fastest-growing privately held companies on the Inc. 500|5000 list in 2009, 2012, 2013, and 2014.
The senior executive team has co-authored several top-selling books on software testing and test automation, including:
- Testing Computer Software, by Cem Kaner, Jack Falk and Hung Q. Nguyen
- Testing Applications on the Web, by Hung Q. Nguyen, Michael Hackett and Robert Johnston
- Integrated Test Design and Automation, by Hans Buwalda, Dennis Janssen, Iris Pinkster, and Paul Watters
- Global Software Test Automation, by Hung Q. Nguyen, Michael Hackett, and Brent K. Whitlock (foreword by Apple Computer co-founder Steve Wozniak)
Test design driven automation
Automated testing has never been more important; it is steadily developing from a nice-to-have into a must-have, particularly under the influence of DevOps.
In essence, DevOps means that the deployment process is engineered just as carefully as the system being deployed. This allows rebuilding and redeploying, in one form or another, whenever the code changes. It is similar to the “make” process known in the UNIX world, where parts of a build can be repeated when source files change. A major aid in that process is having tests that are created and executed automatically, confirming that a new deployment works as intended.
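As a minimal sketch of that idea in Python, here is a test-gated pipeline; the build, test, and deploy commands are placeholders, not taken from any particular CI tool:

```python
import subprocess
import sys

def run(cmd):
    """Run one pipeline step; abort the whole pipeline if it fails."""
    if subprocess.run(cmd).returncode != 0:
        sys.exit("step failed: " + " ".join(cmd))

# Hypothetical commands; real projects substitute their own
# build, test, and deploy steps here.
run(["make", "build"])           # rebuild what changed
run(["pytest", "tests/"])        # run the automated tests
run(["./deploy.sh", "staging"])  # redeploy only if the tests passed
```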
For the lowest level of testing, the unit tests, automation is not arduous. Unit tests are essentially functions that one by one test the other functions (methods) in a system. One step higher, component tests can verify the methods exposed by a component, usually without having to worry about the UI of the system under test. Similarly, REST services can easily be accessed by tests. In all these cases the automation of the tests is intrinsic, and such tests are usually not very sensitive to changes in the target system.
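For example, a unit test and a REST-level test can both be written as plain code, with no UI involved; the function and endpoint below are hypothetical:

```python
import requests  # third-party HTTP client, used for the service-level check

def add(a: int, b: int) -> int:
    """Stand-in for a real application function under test."""
    return a + b

def test_add() -> None:
    # Unit test: exercises one function directly, no UI involved.
    assert add(2, 3) == 5

def test_get_user() -> None:
    # Service-level test: calls a (hypothetical) REST endpoint.
    response = requests.get("https://api.example.com/users/42")
    assert response.status_code == 200
    assert response.json()["id"] == 42
```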
However, higher level tests like functional and integration tests can be more cumbersome, particularly if they have to work through the UI. It is this category that this article will address.
What to Do When Bugs Are Found—Based on When They Are Found
Action Based Testing (ABT) is built on the principle that good test design drives automation success. It uses a modular, keyword-driven approach: tests are organized in “test modules” and built from sequences of “actions”, each consisting of an action name (keyword) and zero or more arguments. In our TestArchitect tool we define these in a spreadsheet-like format that is easy to work with. Test modules can contain multiple test cases, which need to fit the scope of that particular module. The test cases can form a narrative in which each test case sets up the preconditions for the next one.
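TestArchitect itself defines actions in spreadsheet cells, but the underlying mechanism can be sketched in a few lines of Python: each test line pairs an action name with arguments, and a dispatcher maps the name onto an implementation. The actions below are invented for illustration:

```python
# A minimal keyword-driven interpreter. Each test line is an action
# name plus arguments; a dispatcher looks up the implementation.

def enter(field: str, value: str) -> None:
    print(f"typing {value!r} into {field!r}")

def click(control: str) -> None:
    print(f"clicking {control!r}")

def check(field: str, expected: str) -> None:
    print(f"checking that {field!r} shows {expected!r}")

ACTIONS = {"enter": enter, "click": click, "check": check}

# A "test module" as rows of (action name, arguments), mirroring the
# spreadsheet-like format described above.
login_module = [
    ("enter", ["user name", "jdoe"]),
    ("enter", ["password", "secret"]),
    ("click", ["log in"]),
    ("check", ["welcome message", "Welcome, jdoe"]),
]

for action, args in login_module:
    ACTIONS[action](*args)
```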
The development and automation of test modules fit well in sprints. Typically a sprint starts with higher-level test modules at a similar level to the user stories and acceptance criteria. Once detailed UI work starts in the sprint, the lower-level “interaction test” modules can be created as well.
When executing test modules, an interesting question to ask is “What needs to happen with issues that are found?” I like to make a distinction between issues found during a sprint and issues found after the team has declared the functionality under test “done”.
For issues found while a sprint is still ongoing, consider skipping a heavyweight bug tracking process. Share any failing test module, as is, with the rest of the team, regardless of whether it has one failure or many. If the scope of the test module is well defined, the discussion can focus on that scope, such as “the log in process has issues”. A developer who is still working on the code can look at what the test module reveals, and will appreciate that the module arrived without delay. If you follow an acceptance test driven development process, something like this may already be routine. Do not enter such bugs in a bug tracking system or ALM; avoid the overhead and delays of reproducing, bug-crawling, prioritizing, assigning, etc.
For significant issues that are found after a sprint has closed, I prefer to follow a more formalized process, which is also visible outside the team. The further down the road a problem is found, the more important tracking bugs becomes.
For problems that come up after a sprint, I ask three questions:
- Is it a bug? A percentage of reported problems are caused by defects, but others stem from other factors, such as a misunderstanding by users. Such lack of clarity may also need attention, but not necessarily from developers.
- What is the root cause? Make sure the real problem is addressed, not just a quick fix of a symptom.
- Why didn’t we find it? This is not to assign blame to testers, but bugs going unnoticed can point to weaknesses in the tests.
Make sure to address the three questions in the given order. Too often I see people ask “Why did the testers not find this?” This can lead to discord, because the team hasn’t yet determined whether the problem is actually a bug in the software. For our own product, we have defined fields for the three questions in the ALM system that we use. This encourages teams to answer them and keeps the answers readily available to learn from.
When question two has been answered, I typically want to know for question three whether the defect stems from a mistake a developer made while implementing otherwise well-defined functionality (I call these “coding bugs”) or whether it showed up due to an unexpected situation (I call these “jungle bugs”). Unfortunately, the latter are more common.
It is in catching jungle bugs early that I think testers can really shine. Coding problems are often caught by unit tests already, but only when a system enters the real world (the “jungle”) will its true resilience show.
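To make the distinction concrete, here is a small Python sketch; parse_age and both tests are invented for illustration:

```python
import pytest

def parse_age(text: str) -> int:
    """Stand-in application code: parse a user-supplied age field."""
    return int(text.strip())

def test_specified_behavior() -> None:
    # A "coding bug" would surface here: the spec clearly says "42" -> 42.
    assert parse_age("42") == 42

def test_jungle_input() -> None:
    # A "jungle bug" hides in input nobody specified, such as a
    # thousands separator typed by a real user; today this raises.
    with pytest.raises(ValueError):
        parse_age("1,024")
```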