
Articles

By Michael Hackett, Co-Founder of LogiGear Corporation

Staffing is hard. Getting the right mix of skill sets, hiring good employees with great communication skills, creating positive team chemistry, and finding people of technical competence – all of these factors are crucial to product and project success. In the US, this task has become increasingly difficult.

1. Blockchain

Blockchain technology has the potential to be disruptive, and blockchain skills will be in high demand as the technology becomes more popular.

With the arrival of Continuous Integration/Continuous Delivery (CI/CD), the notion of continuous testing (CT) is taking center stage. Knowing that comprehensive tests are running smoothly can be of great benefit to the CI/CD pipeline. But running tests can be both time- and resource-consuming, not to mention that tests can become boring and rigid. Using the repetitive character of CI/CD for testing can be a way to address this.

How voice-first designed apps for devices like the Echo and Google Home can be tested for security and application flaws.

Alexa and Echo are household names, and someday soon most people will have these devices in their homes, ordering takeout, picking out a song, answering trivia questions, and more.

Testing the Internet of Things is one thing, but AI takes it to the next level. A LogiGear executive shares what the company learned from its first serious foray into this world.

A complicated new game required a complicated testing strategy and help from the outside. Here's what game maker Anki learned from working with LogiGear on AI testing.

According to LogiGear's State of Software Testing Survey, almost one-third of the respondents are experiencing classic test automation issues.

One problem commonly cited among respondents was that management didn’t fully understand what it takes to run a successful automation program. This included everything from process and team frustration to tool choice.

By Michael Hackett, Co-Founder of LogiGear Corporation

There is a great deal of content around the topic of software test automation, the subject of the third survey in LogiGear’s State of Software Testing Survey series. The sectors are numerous: tool choice, jumpstart platforms, cross-platform, services, cloud. With all of this great test automation innovation comes great change.

Test teams feel the need to adopt DevOps, but that migration is not always seamless, according to a new survey by LogiGear. That may be because only 25 percent of respondents said their Ops/IT team is always helpful to the test team and its needs; 37 percent said Ops teams regularly help bring about good test environments; and 27 percent said Ops can be "slow or difficult."

Why a 23-year-old software company transitioned to a freemium model

In the product business, capturing user adoption and market share is key to achieving success. If a product is well designed and solves a specific problem effectively, then it’s simply a marketing and sales game.

Setting aside the philosophical debate of why, this guide unfolds the noble yet laborious journey of arming yourself with the knowledge to successfully transition from manual tester to full-stack automation engineer. It’s a bold endeavour that requires time, good navigation, and practice. The good news is that you don’t have to become an effective automation engineer overnight. The journey should be understood as a continuous spectrum rather than the flip of a switch.

Transitioning from manual testing to full-stack automation is a noble, yet laborious journey. Below is a guide to aid in a successful transition from manual testing to full-stack automation.

It wasn’t long ago that the Dev and test teams would work late hours, focused and rushed to meet a deadline: rapid fixing, reprioritizing and deferring bugs to close out the bug list, move everything to the staging server, do one last run of the regression and pass it over to Ops/IT to move to production. What happened after?

Automated testing has never been more important, and it is gradually developing from a nice-to-have into a must-have, particularly with the influence of DevOps. In essence, DevOps means that the deployment process is engineered just as the system itself is. This allows rebuilding and redeploying, in one form or another, to happen whenever the code changes.

Fresh off his speaking engagement at Great Wide Open, Hans Buwalda has written an exclusive article for opensource.com. “What makes test automation successful?” explores the key factors that contribute to the success of a testing project in an open-source environment.

Recently featured in the SD Times buyer’s guide, and written by LogiGear Co-founder, Michael Hackett, the article “Understand the mobile ecosystem before you test” explores the complexity of the mobile testing environment when it comes to choosing the best device mix for a robust testing strategy, and lays out the best plan to ensure that you test your mobile app effectively.

To scale automated functional testing for large, complex systems, you need to look at the design of the tests, how to organize the process, how the various players cooperate, the software's testability and stability—and, importantly, management's commitment. Hans Buwalda shares some testing tips.

Interactive exploratory testing and organized automated testing seem to be on opposing ends of a spectrum, but much of that depends on how you apply them. Automated tests don't have to be shallow and boring. You can still explore, learn, and create good tests. Read on for more from Hans Buwalda.

Behaviour-driven development tests can be efficiently automated with keywords, avoiding the need for a programming language and minimizing the involvement of developers. Hans Buwalda details how to support BDD scenarios with actions and keywords and switch between formats depending on your needs.

Hans Buwalda highlights the scalability of unit, functional, and exploratory tests—the three kinds of tests used to verify functionality. Since many automation tools and strategies traditionally focus on functional testing, Hans provides some strategies to make functional testing more manageable.

Just like with design patterns, anti-patterns can benefit from a short and catchy name to make them easy to remember and talk about. Hans Buwalda shares a list of typical situations seen in tests that can harm automation and names for them.

When automated tests are well-organized and written with the necessary detail, they can be very efficient and maintainable. But designing automated tests that deal with data can be particularly challenging. Tests need certain base data to be available and in a predictable state when they run.
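One common way to keep base data available and predictable, sketched below in Python under assumed names and an illustrative in-memory database, is to rebuild the base data in a setup step before each test instead of letting tests share mutable state:

```python
import sqlite3

def fresh_database():
    # Rebuild the base data from scratch so every test starts from the
    # same known state; the table and rows here are purely illustrative.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
    db.executemany("INSERT INTO accounts VALUES (?, ?)",
                   [("alice", 100), ("bob", 50)])
    db.commit()
    return db

def test_transfer():
    db = fresh_database()  # predictable base data, created per test
    db.execute("UPDATE accounts SET balance = balance - 30 "
               "WHERE name = 'alice'")
    (balance,) = db.execute(
        "SELECT balance FROM accounts WHERE name = 'alice'").fetchone()
    assert balance == 70   # the check is meaningful because the start state is known

test_transfer()
```

Because each test constructs its own data, tests can run in any order and a failing test cannot corrupt the state another test depends on.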

A good test design is important because it improves the quality of the tests, helping to add breadth and depth, and it facilitates efficiency, in particular for automation. These points are obvious when starting a project from scratch, but what do you do when tackling a project with existing tests?

Automating functional testing is almost never easy. As testers, how we organize and design tests has a big impact on outcomes, but developers can—and should—have a role in making automation easier. This ease or lack of ease is part of what is known as "testability."

Hans Buwalda describes five common misconceptions about test automation.

Hans Buwalda shares a model used for multi-station testing with actions—the lead-deputy model—and shows how actions can make a relatively complex task like multi-station testing available at a business level, where even non-technical users can easily understand the thought process.

A major contributor to success in test automation is test design. If tests have many unnecessary detailed steps and checks, even a skilled automation engineer will not be able to make the automation efficient and maintainable. Hans Buwalda shares an example of a test design that is automation friendly.

When executing test modules, an interesting question to ask is “What needs to happen with issues that are found?” Hans Buwalda suggests making a distinction between issues found during a sprint and after the team has declared the functionality under test "done"—and describes how to proceed from there.

Testing and automation have various paradoxes that are interesting to look at for insight into the challenges and limitations of our profession. Hans Buwalda describes these paradoxes and offers methods to bring about cooperation in teams, helping them achieve great automation results together.

For testers, virtual machines can be a game changer. To what degree the game really changes depends largely on how an organization decides to work with virtual machines and how active the testers themselves are in recognizing and leveraging virtual machines’ possibilities.

Software tests have to meet quality and robustness criteria similar to those of the application under test, but tests seldom get the attention and investment that applications get. Hans Buwalda outlines why you should consider tests as products.

Testers have an important responsibility to protect and further their craft. Many people who want to be considered testers should engage in career development more than they might have in the past. Hans Buwalda highlights four areas that testers need to understand to stay relevant.

When looking at what the software market is currently talking about, the top item is DevOps and Continuous Integration/Deployment, which seems to be taking over some of the spotlight from agile and is now a widely accepted new normal. Hans Buwalda looks at where the future of software testing is going.

The cloud is metered—you pay by the hour, by the gigabyte, or by some other metric. The numbers might not necessarily be high, but they draw attention from managers. As testers we should look at these numbers as well. Hans Buwalda looks at how cloud-induced metering can impact testing.

In this Guest Blog for DevOps Digest, Michael Hackett discusses the surprising results of LogiGear's Testing Essentials Survey.

Hung Nguyen, CEO of LogiGear Corporation, discusses six key factors for building a long-lasting enterprise, with strategies for businesses to continuously evolve, create, and sustain growth for the long term.

To address the challenges and fears of implementing automation in agile projects, LogiGear CTO Hans Buwalda presents Action Based Testing as the answer.

This article focuses on the first principle, the effective breakdown of the tests. I also like to refer to it as the "high-level test design." In this step you divide the tests that have to be created into manageable sets like chapters in a book, which I call "test modules."

This article discusses the "second Holy Grail," namely finding the right approach per test module. This step focuses on developing the individual modules. When a good job is done on the module breakdown, each test module should now have a clear scope.

This is the last in a series of articles that outline how to do effective and efficient test design. This last crucial step is to write down the test cases as clearly and efficiently as possible.

Test design is the single biggest contributor to success in software testing. Not only can good test design result in good coverage, it is also a major contributor to efficiency. The principle of test design should be "lean and mean." The tests should be of a manageable size, and at the same time complete and aggressive enough to find bugs before a system or system update is released.

Keyword-driven testing is a software testing technique that separates much of the programming work of test automation from the actual test design. This allows tests to be developed earlier and makes them easier to maintain. Key concepts in keyword-driven testing include reusable keywords (or actions) with arguments, tests expressed as readable tabular steps, and the separation of high-level test design from automation code.
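The separation described above can be sketched in a few lines of Python. The keyword names, functions, and test rows below are illustrative, not from any particular framework: test steps are plain data (a keyword plus arguments), and a small interpreter dispatches each keyword to the function that implements it.

```python
# The "automation" layer: each keyword is implemented once, by a programmer.
def open_page(url):
    print(f"opening {url}")

def enter_text(field, value):
    print(f"typing {value!r} into field {field!r}")

def check_title(expected):
    # A real framework would query the application under test here;
    # this stand-in just compares against a fixed title.
    return expected == "Welcome"

KEYWORDS = {
    "open page": open_page,
    "enter text": enter_text,
    "check title": check_title,
}

# The "test design" layer: rows of keywords and arguments that a
# non-programmer can write and maintain without touching the code above.
test_module = [
    ("open page", "https://example.com/login"),
    ("enter text", "username", "alice"),
    ("check title", "Welcome"),
]

def run(test):
    # Interpret each row by looking up its keyword and applying the arguments.
    return [KEYWORDS[keyword](*args) for keyword, *args in test]

run(test_module)
```

Because the test rows contain no code, they can be written before the keyword implementations exist, and a change in the application usually means updating one keyword function rather than every test.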

Companies generally consider the software they own, whether it is created in-house or acquired, as an asset (something that could appear on the balance sheet). The production of software impacts the profit and loss accounts for the year it is produced: The resources used to produce the software result in costs; methods, tools or practices that reduce those costs are considered profitable.

Hans Buwalda discusses “bonus bugs,” bugs caused by fixes or code changes and how to avoid them from the point of view of the developer, tester and manager. 
Bonus bugs are the major rationale for regression testing in general and test automation in particular, since test automation is the best way to quickly retest an entire application after each round of code changes.

In a previous newsletter, I discussed Test Governance, the topic of organizing and managing testing activities in an organization. In this article, I want to discuss something called "business test policies." These are statements that serve as the basis for Test Governance, and they describe how testing is positioned in the overall company strategy, environment, and culture.

Hans Buwalda, LogiGear, 12/29/2005 
Software testing is commonly perceived as a chore: products made by other developers have to be verified, and chores are something you don’t want to devote too much attention or money to. 
With our Action Based Testing method we have shifted the focus from “testing” to “test development” (with automated execution). This succeeds because creating tests becomes a systematic activity that is easier to plan and control, resulting in tangible and valuable products.