3 Things to know when applying Machine Learning & Artificial Intelligence to QA Test Automation


Mark Moore


Today, some forward-thinking Agile teams are starting to explore new ways of applying machine-learning techniques. Just think about a typical day in the life of a software developer writing code to support an application build. There is pressure to understand instantly how that code affects test cases, and to deliver it continuously into production.

 

Can emerging machine-learning algorithms help us forecast, enhance, and accelerate test automation with a particular emphasis on continuous integration and continuous application quality? What would that return on investment look like, and what are the risks of a new machine-learning approach?

 

Here are three steps you can take to begin to incorporate machine learning in ways that will benefit your organization and the quality of the software you produce.

 

Step 1 - Recognize Patterns in Test Automation

 

There are opportunities to leverage machine-learning methods to optimize continuous integration of an application under test. They begin the instant Jenkins detects a code change and triggers a regression test suite, and they continue all the way through the continuous testing and continuous deployment cycle.

 

Today, collecting data from testing is straightforward; what remains elusive is making practical use of all that data within reasonable time limits. One particular example is the ability to recognize patterns formed within test automation cycles. Why is this important? Well, patterns are present in the way design specifications change and in the methods programmers use to implement those specs. Patterns lurk in the results of load testing, performance testing, and functional testing.

 

Machine-learning algorithms are great at pattern recognition. But to make pattern recognition possible, human developers must determine which features in the data might express valuable patterns, collect and wrangle the data into a consistent form, and know which of the many ML algorithms to feed the data into. Getting each of these choices right is critical to success.

 

What this means is that human engineers must introduce the intelligence that makes machine learning work. And it’s no small feat! Finding the right data and the right algorithm to feed it into are tough tasks.
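To make that human side of the work concrete, here is a minimal sketch of the process: picking candidate features from past test cycles, wrangling them into a consistent table, and comparing a few candidate algorithms. The CSV file and column names are hypothetical stand-ins for whatever your test platform actually exports.

```python
# A minimal sketch of feature selection and algorithm comparison for
# test-cycle data. The CSV and its columns are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical export of past regression-test cycles, one row per test run.
runs = pd.read_csv("test_cycle_history.csv")

# Feature selection is a human decision: which signals might carry a pattern?
features = runs[["lines_changed", "files_changed", "test_duration_sec",
                 "prior_failures_30d", "dependency_updates"]]
labels = runs["failed"]  # 1 if the test run failed, else 0

# Algorithm selection is also a human decision; compare a few candidates.
for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(n_estimators=200))]:
    scores = cross_val_score(model, features, labels, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```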

 

How will a team member or a project leader know when it is practical to engage machine learning in a continuous integration pipeline? To answer that question, let’s look at recent successes and challenges, as illustrated in Steps 2 and 3 below.

 

Step 2 - Establish Predictability in Data for Continuous Integration

 

On the success side, research now shows that ML can predict which parts of an app a given set of tests verifies accurately. This relieves some of the burden on developers to rethink test coverage at every iteration. Predictability is the key: in an environment where continuous testing is part of a continuous delivery pipeline, developers need to be able to count on a predictable level of test coverage for each new system build.

 

According to a 2018 study, it is now possible to use test data generation tools in conjunction with machine-learning algorithms to predict the minimum subset of classes you need to test to achieve branch coverage. And the same study showed it is both practical and profitable to do so in a traditional Test Center of Excellence (CoE).
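The study’s exact tooling isn’t reproduced here, but the core idea of finding a minimum subset of test classes that still achieves branch coverage can be illustrated with a simple greedy sketch over historical coverage data. The test class names, branch IDs, and coverage map below are all hypothetical.

```python
# Simplified stand-in for the study's approach: greedily pick a small
# subset of test classes that together cover every branch touched by a
# change, using historically observed coverage. All data is hypothetical.

# Historical observation: which code branches each test class exercised.
coverage = {
    "LoginTests":    {"auth.1", "auth.2", "session.1"},
    "CheckoutTests": {"cart.1", "cart.2", "payment.1"},
    "SmokeTests":    {"auth.1", "cart.1", "payment.1"},
}

# Branches modified by the current commit (from diff plus instrumentation).
changed = {"auth.1", "auth.2", "cart.1", "payment.1"}

selected, uncovered = [], set(changed)
while uncovered:
    # Pick the test class covering the most still-uncovered branches.
    best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
    gained = coverage[best] & uncovered
    if not gained:
        break  # remaining branches have no covering test class
    selected.append(best)
    uncovered -= gained

print("Run:", selected, "| still uncovered:", uncovered or "none")
```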

 

Many of these learning models can predict user behavior or how code changes will affect load-test results. If you want to explore such models within your own organization, you’ll want to engage QA engineers with a data science background.
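As one hypothetical illustration of the second kind of model, the sketch below fits a regressor that predicts a load-test metric (p95 latency) from code-change features. The feature names and the data are synthetic, not drawn from any real project.

```python
# A hedged sketch: predicting a load-test metric from code-change
# features. Features, target, and data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
# Synthetic features: lines changed, DB queries added, cache hits removed.
X = rng.uniform(0, 100, size=(n, 3))
# Synthetic target: p95 latency in ms, loosely driven by the features.
y = 200 + 1.5 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 10, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
pred = model.predict(X_test)
print(f"MAE: {mean_absolute_error(y_test, pred):.1f} ms")
```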

 

Step 3 - Apply Human Ingenuity to Complex Algorithms

 

In a recent interview with an MIT-trained data scientist who works for a popular car manufacturer, we learned how progressive companies are applying machine-learning methods with more optimism for tomorrow than success today.

 

David is the lead developer and researcher on a project whose task is to use ML to classify and predict driver gestures and behavior at the vehicle’s dashboard - from turning on the air conditioning to listening to the radio or activating the vehicle GPS for directions. Using a model of embedded systems in his lab, he first warehouses data gathered from cloud-based orchestrations of thousands of iterations of system tests. He then uses K-means clustering to group user gestures into meaningful patterns of action and reaction. The underlying theory is that if the algorithm fails to predict a user gesture, there is a bug in the app that powers the embedded dashboard systems.
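The sketch below illustrates that clustering step in miniature, with synthetic gesture features standing in for David’s warehoused test data; the feature set and the anomaly threshold are hypothetical choices, not his actual pipeline.

```python
# Minimal K-means sketch: group dashboard gestures into action/reaction
# patterns, then flag gestures far from every cluster as potential bugs.
# The sensor features and data are synthetic stand-ins.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
# Each row: [touch x, touch y, gesture duration, system response time]
# for three gesture types (aircon, radio, GPS), plus noise.
centers = np.array([[0.2, 0.8, 0.3, 0.1],
                    [0.5, 0.2, 0.6, 0.2],
                    [0.9, 0.5, 1.2, 0.4]])
gestures = np.vstack([c + rng.normal(0, 0.05, (200, 4)) for c in centers])

# Scale features so no single sensor dominates the distance metric.
scaled = StandardScaler().fit_transform(gestures)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaled)

# Gestures that sit far from every learned cluster center are flagged
# as anomalies worth investigating as potential dashboard-app bugs.
distances = kmeans.transform(scaled).min(axis=1)
threshold = distances.mean() + 3 * distances.std()
print("flagged as anomalous:", int((distances > threshold).sum()))
```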

 

As the model traverses the graph of user-gesture and system-response pairs, the test automation platform starts to learn and generate a full suite of regression tests that are applied automatically. This is the goal, though arriving at this end state will still take some time.

 

“Every test run generates a treasure trove of data,” says David. “In the past we thought there must be something we could do with all that data. Now we are doing something useful with it. But it’s going to take time to refine…a lot of experimentation.”

 

So, we asked the crucial question, “How feasible is this machine-learning method for implementation in a continuous integration context today?”

 

“Right now, the current version of K-means clustering achieves about 76% accuracy for predicting user gestures,” David says. “It straddles the fence between useful and impractical. It depends on whether or not you’re willing to go the extra risky mile to extend this accuracy in your own lab!”

 

Take the Leap into the Future

 

David is fortunate that his visionary company enables him to fold exploration into practical development methods. The hope is that this research investment will one day automate the lion’s share of orchestration and deployment for application development in his organization, a workload that is increasingly growing beyond human capacity to manage efficiently. Although this outcome isn’t feasible today, David remains hopeful that a breakthrough in data feature analysis will improve future outcomes and bring that automation to life, giving his company a significant competitive edge in the race to deliver products to customers in a continuous integration environment.

 

Not all companies can afford to gamble on breakthroughs or employ an MIT-trained data scientist to search for miracles. But perhaps we can benefit from the experiences of those companies that can – beginning by adopting test automation best practices that will prepare us for the advent of machine learning and artificial intelligence.

 


 

For this to happen, you must take decisive steps to accelerate continuous test automation in the Test Center of Excellence and to safeguard software quality across the CI/CD pipeline. That starts with building out a repository of test data that will help fuel ML and AI adoption. A pragmatic approach for the Test Center of Excellence is to extend test coverage at scale and to help distributed Agile development teams across the business shift testing left in the software delivery lifecycle, generating the critical data needed for ML-driven testing along the way. To learn more about how to test early and often, visit us at www.blazemeter.com/shiftleft.
