WHO WE ARE
Founded in 1994 by top thought leaders in the software testing industry, LogiGear has completed software testing and development projects for prominent companies across a broad range of industries and technologies.
LogiGear provides leading-edge software testing technologies and expertise, along with software development services that enable our customers to accelerate business growth while having confidence in the software they deliver.
LogiGear is headquartered in the heart of Silicon Valley, with the majority of the software testing and software development staff located in Ho Chi Minh City and Da Nang, Vietnam. We are among the largest employers of software testing and development professionals in Vietnam, and our close partnerships with universities throughout the country allow us to attract and recruit top software engineering talent.
LogiGear continues to grow as companies realize the benefits of outsourcing their software testing and development. We have been listed among the fastest growing privately held companies by Inc. 500|5000 in 2009, 2012, 2013 and 2014.
The senior executive team has co-authored several top-selling books on software testing and test automation, including:
- Testing Computer Software, by Cem Kaner, Jack Falk and Hung Q. Nguyen
- Testing Applications on the Web, by Hung Q. Nguyen, Michael Hackett and Robert Johnston
- Integrated Test Design and Automation, by Hans Buwalda, Dennis Janssen, Iris Pinkster, and Paul Watters
- Global Software Test Automation, by Hung Q. Nguyen, Michael Hackett, and Brent K. Whitlock (foreword by Apple Computer co-founder Steve Wozniak)
LOGIGEAR CTO TO LEAD SESSIONS AT STAREAST 2016
LogiGear’s Hans Buwalda shares expertise in three conference sessions
Foster City, CA, May 1st-6th, 2016— LogiGear, a world leader in software testing solutions, today announced that its Chief Technology Officer, Hans Buwalda, will be speaking at STAREAST 2016, scheduled May 1st-6th at the Renaissance Orlando at SeaWorld in Orlando, Florida.
Hans will address conference attendees first on Monday, May 2nd at 1:00 pm in a half-day tutorial session. His presentation, “Better Test Design for Great Test Automation,” will explore how better test design makes the difference between test automation success and failure. Hans will share a template attendees can use to organize tests and learn efficient automation. He will discuss techniques—including action-based testing and behavior-driven development—that will help attendees achieve better test design and automation.
On Tuesday, May 3rd, at 8:30 am, Hans will lead a full-day tutorial titled “The Challenges of BIG Testing: Automation, Virtualization, Outsourcing, and More,” where he will share his experiences and present strategies for organizing and managing testing on large projects. Hans will discuss how to design tests specifically for automation, including how to incorporate techniques like keyword testing and behavior-driven development. Additionally, he will review what roles virtualization and the cloud play—and the potential pitfalls of each. Other takeaways include tips for stabilizing automation and recommendations on how to manage the numerous versions and configurations common in large projects.
Hans’ final presentation of STAREAST takes place on Wednesday, May 4th at 11:30 am and is titled “Anti-Patterns for Automated Testing.” Hans will discuss anti-patterns: common responses to recurring problems that tend to be counterproductive, yet that he regularly sees in automated test design and that, in his view, inhibit scalability and maintainability. Attendees are encouraged to bring their own anti-patterns and other counterproductive situations that stand in the way of manageable and maintainable automation.
STAREAST is one of the longest-running and most respected conferences on software testing and quality assurance. The event week features over 100 learning and networking opportunities and covers many of the most in-demand topics, including Test Management, Test Techniques, Test Automation, Agile Testing, DevOps & Testing, Testing the Internet of Things, Mobile Testing, Testing Metrics, Cloud Testing, and Performance Testing.
For additional information about the conference, visit stareast.techwell.com. For more information about testing solutions, and the full range of software testing and development services provided by LogiGear, please visit LogiGear.com.
Multi-Station Testing with Actions—The Lead Deputy Model
Today I want to share a model we use for "multi-station" testing in Action Based Testing. Action Based Testing is a method for defining and automating tests. It relies heavily on designing tests in such a way that they are easily readable for humans while at the same time being fully automated in a maintainable way. To do this, tests are organized into "test modules" as sequences of keyword-based "actions." I have written about this in various articles, including in Better Software Magazine (March 2011).
Let's take as an example a system for bank tellers. These are the friendly people in the bank offices who can help you with transactions, such as depositing checks or withdrawing cash. A typical action based test could look like this:
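The article's original table is not reproduced here, but a business-level test module in this keyword style might look like the following sketch (action names follow common Action Based Testing conventions; the accounts, user, and amounts are invented for illustration):

```
TEST MODULE      Teller transactions

    user           password
login            mary           forest01

    account        amount
deposit check    2345-1223      200.00

    account        expected balance
check balance    2345-1223      1200.00
```

Each row starts with an action keyword, followed by its arguments; the indented lines above each action name the arguments for readability.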
Notice that the test is written in business terms. You don't see UI details; you cannot tell from this test what platform the application is built on, such as web-based, .NET, Java Swing, a classic mainframe, etc.
Note: A key recommendation in Action Based Testing is to keep such business tests completely separate from "interaction tests," which verify whether a user—or another system, in the case of a non-UI application—can interact with the application under test. Such tests contain more detailed actions, such as "select list item" or "check window exists."
Now let's make the case a bit more complicated: if a requested withdrawal is larger than $10,000, a supervisor needs to approve it. The supervisor can give this approval from his or her own workstation.
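The example table from the original article is not reproduced here; the sketch below is an assumed reconstruction of how a lead/deputy test might be written, using the actions and notation described in the text (the machine name, account, and "approve transaction" action are invented for illustration):

```
    deputy           machine
use deputy         supervisor     TESTPC02

    account          amount
withdraw cash      2345-1223      15000.00      >> tx

    deputy           action                  transaction
run on deputy      supervisor     approve transaction     # tx
```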
In this example, "deputy machines" are used. These work like normal test machines: they are able to interpret and execute actions, but they are in a special mode and listen to commands from the lead machine—or possibly from another deputy machine. They also have a temporary logical name assigned to them—in this case "supervisor." Notice that the transaction number is kept in a variable "tx" that is made available to the deputy (">>" denotes a variable assignment, "#" denotes an expression).
A variation on the model is to allow deputies to work in parallel: for example, letting a deputy load a large database while the lead machine performs more UI interaction, such as logging into the application. To do this, we have an extra argument "parallel" for "use deputy," and additional actions "wait for deputy" and "wait for all deputies" to specify when the lead and the deputies should meet again.
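A parallel variant might be sketched like this (again an assumed layout; the "loader" deputy, machine name, and "load database" action are invented for illustration):

```
    deputy           machine        parallel
use deputy         loader         TESTPC03      yes

    deputy           action             database
run on deputy      loader         load database     customers_large

    user             password
login              mary           forest01

    deputy
wait for deputy    loader
```

Here the lead machine logs in while the loader deputy fills the database; "wait for deputy" is the point where the two meet again.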
I hope this shows how actions can be used to make a relatively complex task like multi-station testing available at a business level, where even non-technical users, such as business domain experts, can easily follow the thought process. The examples included here are logical models. We have implemented them in our own automation framework (TestArchitect), but it should not be overly complicated to support them in other automation frameworks as well.
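As a rough illustration of what supporting the lead/deputy model in another framework could involve, here is a minimal Python sketch. This is not TestArchitect's actual implementation; the class, method, and action names are invented. A deputy consumes action lines from a queue on its own thread, and "wait for deputy" blocks until the deputy's inbox is drained:

```python
import queue
import threading

class Deputy:
    """Minimal stand-in for a deputy test machine: it listens for
    action lines sent by the lead and executes them in order."""

    def __init__(self, name):
        self.name = name            # temporary logical name, e.g. "supervisor"
        self.inbox = queue.Queue()  # action lines dispatched by the lead
        self.log = []               # actions "executed" so far
        threading.Thread(target=self._listen, daemon=True).start()

    def _listen(self):
        while True:
            action = self.inbox.get()
            self.log.append(action)  # a real deputy would interpret and run it
            self.inbox.task_done()

    def send(self, action, *args):
        """Lead side: dispatch an action line to this deputy."""
        self.inbox.put((action, args))

    def wait(self):
        """'wait for deputy': block until the deputy has finished its queue."""
        self.inbox.join()

# Lead-machine side of the withdrawal scenario: the transaction number
# produced by the withdraw action (the "tx" variable) is passed along
# to the supervisor deputy for approval.
supervisor = Deputy("supervisor")
tx = "TX-1001"  # in a real test this would come from "withdraw cash"
supervisor.send("approve transaction", tx)
supervisor.wait()
print(supervisor.log)  # [('approve transaction', ('TX-1001',))]
```

The queue gives the "listen for commands" behavior described above, and `queue.Queue.join()` provides a simple synchronization point for the "wait for deputy" action in the parallel variant.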