WHO WE ARE
Founded in 1994 by top thought leaders in the software testing industry, LogiGear has completed software testing and development projects for prominent companies across a broad range of industries and technologies.
LogiGear provides leading-edge software testing technologies and expertise, along with software development services that enable our customers to accelerate business growth while having confidence in the software they deliver.
LogiGear is headquartered in the heart of Silicon Valley, with the majority of the software testing and software development staff located in Ho Chi Minh City and Da Nang, Vietnam. We are among the largest employers of software testing and development professionals in Vietnam, and our close partnerships with universities throughout the country allow us to attract and recruit top software engineering talent.
LogiGear continues to grow as companies realize the benefits of outsourcing their software testing and development. We have been listed among the fastest growing privately held companies by Inc. 500|5000 in 2009, 2012, 2013 and 2014.
The senior executive team has co-authored several top-selling books on software testing and test automation, including:
- Testing Computer Software, by Cem Kaner, Jack Falk and Hung Q. Nguyen
- Testing Applications on the Web, by Hung Q. Nguyen, Michael Hackett and Robert Johnston
- Integrated Test Design and Automation, by Hans Buwalda, Dennis Janssen, Iris Pinkster, and Paul Watters
- Global Software Test Automation, by Hung Q. Nguyen, Michael Hackett, and Brent K. Whitlock (foreword by Apple Computer co-founder Steve Wozniak)
LogiGear Executives to Speak at Software Testing Conference
Attendees to learn successful strategies for Test Automation and DevOps in Continuous Testing
Foster City, CA, March 7th, 2017 — LogiGear, a world leader in software testing solutions, today announced that its Chief Technical Officer Hans Buwalda and Senior Vice President Michael Hackett will speak at the Software Test Professionals Conference (STPCon) Spring 2017 in Phoenix, Arizona, March 14th-17th, 2017. Software Test Professionals is a global software testing and quality assurance community that empowers professionals with educational and networking opportunities.
Buwalda, a renowned software testing expert, will present a workshop, “What Makes Automated Testing Successful?”, on Tuesday, March 14th, 2017 from 1 p.m. to 5 p.m. MT. Attendees at this workshop will learn about the processes that make automation successful, including the importance of test design in automation and how a modularized keyword-driven approach can contribute to automation success.
Hackett, a software testing industry veteran, will also present a workshop, “Move Into DevOps – Experiences From the Real World on Continuous Testing”, on Tuesday, March 14th, 2017 from 8 a.m. to noon MT. Drawing on three real-life implementations, Hackett will cover test strategy, processes in the DevOps cycle, and the top issues, along with solutions, that test teams face when moving to Continuous Testing and DevOps.
Hackett will also be premiering “Case Study for the 21st Century – Building a Mobile & IoT Development and Test Lab for an Offshore Team”, on Thursday, March 16th, 2017 from 2:30 p.m. to 3:30 p.m. MT. In this talk, he will share details of how a Mobile and IoT lab for development, testing and test automation became a training and innovation platform for LogiGear.
“The Software Test Professionals Conference has long been the choice of testing practitioners managing the testing and QA practice in their organizations. We have seen a substantial increase in the attendance of both Automation and DevOps workshops and sessions as more organizations try to streamline and consolidate their testing practice. At STPCon we are excited to have two well known, experienced, and respected testing professionals lead workshops and sessions that deliver ideas, suggestions, and learning which can be taken back to their organization and used immediately,” said Peggy Libbey, CEO, Software Test Professionals, STPCon.
STPCon is a premier educational event for the testing industry. Leaders, strategists and professionals converge at this conference to exchange and discuss the hottest trends. The topics covered include agile testing, performance testing, test automation, mobile application testing, and test team leadership and management.
Why thinking like a customer is so important for QA teams
Quality assurance traditionally consisted only of teams evaluating the code and ensuring that nothing was breaking or performing abnormally. In these setups, there wasn't time to worry about how end users would receive the software; it just mattered that the app worked. However, in today's world of continuous software delivery, quality assurance must make the customer the priority driving its work. Thinking like the customer is going to be crucial for QA teams for a few important reasons.
Uncover gaps in understanding
Development and QA teams must work together to create the best solution for their users, but they can only operate on the information that they are given. Before a project begins, teams meet with stakeholders and the customer to discuss requirements and answer any initial questions. However, if the groups come out of this gathering without all of the data they need, that could lead to major issues in design and functionality.
TechBeacon contributor Joe Colantonio suggested using domain-specific language to communicate in the same terms as the user. This will help create a more meaningful conversation on both sides to identify ambiguities or misunderstandings before teams start working on the code. Changing areas of an app can be expensive, so it’s essential to have all of the specifics fleshed out before the real effort begins. Thinking like a customer in this way also enables QA to fully visualize what the user is looking for and what types of tests should be created to ensure that deliverables are thoroughly assessed.
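To illustrate the idea of communicating in the user's own terms, here is a minimal sketch of a keyword-driven test in Python. The action names, the tiny registry, and the cart scenario are all hypothetical and stand in for whatever vocabulary a real project shares with its customer; in a real suite each keyword would drive the application under test.

```python
# Minimal keyword-driven test sketch. ACTIONS maps business-domain
# phrases (the customer's vocabulary) to test-step implementations.
ACTIONS = {}

def action(name):
    """Register a test step under a business-domain keyword."""
    def register(fn):
        ACTIONS[name] = fn
        return fn
    return register

# Hypothetical step implementations for a shopping-cart scenario.
@action("add item to cart")
def add_item(state, item):
    state.setdefault("cart", []).append(item)

@action("check cart count")
def check_count(state, expected):
    assert len(state.get("cart", [])) == expected, "cart count mismatch"

def run_test(steps):
    """Execute a test written as (keyword, argument) pairs."""
    state = {}
    for keyword, arg in steps:
        ACTIONS[keyword](state, arg)
    return state

# The test case reads in the user's terms, not the code's:
final = run_test([
    ("add item to cart", "book"),
    ("add item to cart", "pen"),
    ("check cart count", 2),
])
```

Because the test case is written as plain domain phrases, stakeholders can review it directly and flag ambiguities before any application code is changed.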
Deliver true value
It’s a common issue that developers and testers do more work than they need to, or complete items that add no value to the project. These efforts consume resources that should be reserved for essential work within software creation. However, it can be difficult for teams to know where to draw the line and when they have created enough tests for their set. StickyMinds contributor Paul Fratellone suggested reviewing the approach and its risks with stakeholders to identify any excessive testing, as well as the risk associated with the strategy. By understanding the greatest risks, the number of test cases, and how long it will take to deliver quality to the user, organizations can assess their costs and plan around these numbers. With this data, there should be no surprises when a tough decision appears, and teams can be prepared to act quickly. This will help deliver true value to the customer and demonstrate QA’s worth.
“Reliability, usability, and accuracy will manifest in the number of test cases and techniques used to satisfy the level of quality that the end-user is expecting and on which the business owner must plan to spend,” Fratellone wrote. “Complete transparency enables the team to make sound business decisions and decide on appropriate levels of risk and tradeoffs when plans are not being met.”
Discover possible defects
Testers understand by now that not all defects can be found. However, teams should still use capable quality testing tools and processes to discover as many as they can. With automation and other setups, tests are only as smart as we program them to be, meaning that there could be gaps in coverage. Relying only on repetitive scripts exposes businesses and their development projects to significant risk. Instead, QA should also use exploratory and manual testing methods to break the system the way a user would. Software Testing Help noted that QA must be aware of what happens on the customer end if users enter the wrong information. If the app responds in an unexpected way, it’s essential to fix this before it is delivered to the user.
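The wrong-information point above can be sketched as a simple negative-input test. The `validate_age` function here is a hypothetical stand-in for whatever input handling the application under test performs; the point is that the test deliberately feeds in the kinds of values a real user might type, not just the happy path.

```python
def validate_age(raw):
    """Hypothetical input validator: returns an int age or raises ValueError."""
    try:
        age = int(raw)
    except (TypeError, ValueError):
        raise ValueError("age must be a whole number")
    if not 0 <= age <= 130:
        raise ValueError("age out of range")
    return age

# Negative test: every bad input a user might enter should produce a
# controlled error, never a crash or an unexpected acceptance.
bad_inputs = ["", "abc", "-5", "999", None]
for value in bad_inputs:
    try:
        validate_age(value)
        raise AssertionError(f"expected rejection of {value!r}")
    except ValueError:
        pass  # the app responded with a controlled error, as a user should see

# The happy path still works.
assert validate_age("42") == 42
```

Automated scripts like this catch the predictable bad inputs; exploratory testing then probes the stranger paths no one thought to script.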
In these situations, it’s critical for QA to drive the initiative and fully think as a customer would. Computers cannot think subjectively and may not catch the same errors a person would. This applies to things like the user interface, navigability and overall feel of the application. Test automation frameworks might pass the code, but there could be glaring issues with how the software looks or how one function flows to another. By thinking like a customer, QA can uncover these flaws and deliver true value, building better user relationships.
QA must try to break apps in the same way a user might.