We all know that continuous software delivery has its risks, and that’s why canary testing exists. It serves as an “early warning” system for your Agile development pipeline, letting you test early and often with small, automated releases that quickly verify whether your code is ready for production.
Making the most of Canary Testing
Canary testing tools are typically open source, backed by rapidly evolving communities. And now innovators are taking the next step: bringing machine learning to canary test analysis, driving test automation to the next level.
To understand why machine learning is important, let’s look at how canary tests are conducted today. You begin by releasing your new code to a small set of actual users, while the rest of your user base remains on your existing, stable version. A canary test suite operates in the background to compare functional and performance results for both the new and existing releases. After the data is crunched, you get metrics that help you decide whether to roll back the new version for further work or release it to your full customer base.
The one dilemma, though, is sorting through all that information from your canary test to make “go/no-go” decisions. It can be overwhelming. How do you decide which results are important and which aren’t? And how do you turn all that data into actionable decisions?
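In its simplest form, a go/no-go decision is just a comparison of canary metrics against the stable baseline. Here is a minimal sketch of that idea — the metric names and the 10% regression threshold are illustrative assumptions, not values from any particular platform:

```python
# Minimal sketch: turn canary-vs-baseline metrics into a go/no-go signal.
# Metric names and the 10% threshold are illustrative assumptions.

def canary_verdict(baseline, canary, max_regression=0.10):
    """Compare canary metrics against the stable baseline.

    Returns "go" only if no metric regresses by more than
    `max_regression`. Lower values are better for every metric
    in this simplified example.
    """
    regressions = {}
    for name, base_value in baseline.items():
        delta = (canary[name] - base_value) / base_value
        if delta > max_regression:
            regressions[name] = round(delta, 3)
    return ("no-go", regressions) if regressions else ("go", {})

baseline = {"p95_latency_ms": 220, "error_rate": 0.010}
canary   = {"p95_latency_ms": 310, "error_rate": 0.009}

verdict, details = canary_verdict(baseline, canary)
print(verdict, details)  # latency regressed ~41%, so: no-go
```

Even this toy version hints at the real problem: with dozens of metrics and noisy data, a fixed threshold either fires constantly or misses real regressions — which is exactly where machine learning comes in.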
The benefit of machine learning
A solution to the data overload dilemma can be found in machine learning algorithms. These powerful tools enhance today’s automated canary analysis to reduce manual interactions, simplify decision making, and help you improve the speed and quality of application delivery.
Here’s how machine learning fits into your canary testing workflow. As your test suite runs and collects performance and functional metrics comparing your new and existing code, machine learning algorithms can detect subtle variations in user behavior. Patterns can then be uncovered and flagged for your review – assisting you in the decision to release your new code or not.
Importantly, these algorithms continue to learn over time. With enough data, they can build a model of typical user gestures and then flag instances where gestures for a canary release differ substantially from known patterns. A QA engineer can then investigate and resolve any bugs.
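The core idea — learn what “typical” looks like from the stable release, then flag canary observations that deviate — can be illustrated with a toy model. Real platforms use far richer techniques; the running mean/variance (Welford’s algorithm) and the 3-sigma rule below are assumptions chosen for clarity:

```python
# Toy illustration of "learning" typical behavior and flagging deviations.
# The running mean/variance and 3-sigma rule are simplifying assumptions.

import math

class BehaviorBaseline:
    """Incrementally learns the typical value of a metric (Welford's
    algorithm) and flags canary observations that deviate strongly."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0

    def observe(self, value):
        # Update running mean and variance with one stable-release sample.
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)

    def is_anomalous(self, value, sigmas=3.0):
        # Flag a canary sample more than `sigmas` std devs from the mean.
        if self.n < 2:
            return False  # not enough history to judge yet
        std = math.sqrt(self.m2 / (self.n - 1))
        return abs(value - self.mean) > sigmas * std

baseline = BehaviorBaseline()
for sample in [101, 99, 100, 102, 98, 100, 101, 99]:  # stable release
    baseline.observe(sample)

print(baseline.is_anomalous(100))  # False: within normal variation
print(baseline.is_anomalous(160))  # True: flag for QA review
```

Notice that the baseline improves with every stable-release sample it sees — the same property that lets production systems get sharper with each wave of canary testing.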
Preparing for automated rollback
With each new wave of canary testing, the algorithms have new data to improve their learning and their forecasting accuracy. It’s easy to imagine a day when machine learning algorithms can decide on their own whether to roll back a new release or to move it into production – all without human intervention.
In fact, steps are already being taken in that direction. Many application performance monitoring platforms now have user interfaces that let a developer click on a chart of flagged anomalies, and the machine learning program uses that interaction to offer up insights. Armed with this information, the developer then decides whether to instruct the system to ignore a particular issue. The machine learning program learns from that decision, too – a continuous loop of learning and interaction that improves its ability to provide richer insights and add further value to the test cycle. Some platforms even include toolkits that allow you to select cells and apply a neural network function to columns on a spreadsheet. This human interaction then becomes further training data for a machine-learning model – used to forecast and interpret regressions.
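The shape of that feedback loop – flag, human decision, retrain – can be sketched very simply. In this hypothetical example the “model” is just a memory of issue fingerprints the developer has dismissed; real platforms use statistical models, but the loop is the same:

```python
# Minimal sketch of the learn-from-the-developer loop. The "model"
# here is just a set of dismissed fingerprints; real platforms use
# statistical models, but the feedback shape is identical:
# flag -> human decision -> feed the decision back in.

class AnomalyTriage:
    def __init__(self):
        self.ignored = set()  # fingerprints the developer dismissed

    def flag(self, anomalies):
        # Surface only anomalies the developer hasn't taught us to ignore.
        return [a for a in anomalies if a["fingerprint"] not in self.ignored]

    def record_decision(self, anomaly, ignore):
        # Feed the developer's decision back into the "model".
        if ignore:
            self.ignored.add(anomaly["fingerprint"])

triage = AnomalyTriage()
noisy = {"fingerprint": "gc-pause-spike", "metric": "p99_latency_ms"}
real  = {"fingerprint": "login-error-rate", "metric": "error_rate"}

print(len(triage.flag([noisy, real])))      # 2: both surfaced at first
triage.record_decision(noisy, ignore=True)  # developer dismisses the spike
print(len(triage.flag([noisy, real])))      # 1: only the real issue remains
```

Each dismissal makes the next canary run quieter, which is how human interaction compounds into better signal over time.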
Additional evolution and innovation are required, of course. But with growing momentum, success seems inevitable – helping to reduce risk across your CI/CD pipeline. Strong strides have already been made in:
- Forecasting failures based on previously identified risk scenarios
- Detecting anomalies by comparing existing metrics with canary metrics
- Visualization to simplify the human interaction needed to resolve predicted failures
- Training models that use data from user feedback to inform machine-learning algorithms
Getting started with Canary Testing
At first glance, canary testing may seem too complex to tackle. But it’s actually much easier than you might think. You can begin by defining and orchestrating small-scale tests on your local machine with Taurus – an open source project that provides a framework for test automation.
Taurus allows you to describe tests in as few as 10 lines of simple syntax and run open source test tools (e.g., JMeter, Selenium, Gatling, and Locust) from your development environment of choice. You can create tests for reuse, execute them to evaluate quality before committing your code, and then review the results – all from a single environment.
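To make that concrete, here is a sketch of the kind of small test configuration the text describes. Taurus accepts YAML or JSON configs; this snippet generates a JSON one (the URL is a placeholder, and the load values are illustrative) that you would then run with the `bzt` command:

```python
# Sketch: generate a small Taurus test config. Taurus reads YAML or
# JSON; this writes JSON. The URL and load numbers are placeholders.

import json

config = {
    "execution": [{
        "concurrency": 10,        # 10 virtual users
        "ramp-up": "30s",         # reach full load over 30 seconds
        "hold-for": "2m",         # sustain load for two minutes
        "scenario": "canary-smoke",
    }],
    "scenarios": {
        "canary-smoke": {
            "requests": ["http://example.com/health"],
        }
    },
}

with open("canary-test.json", "w") as f:
    json.dump(config, f, indent=2)

print("wrote canary-test.json")  # then run it with: bzt canary-test.json
```

The same config, scaled up, is what you would later hand to a cloud test platform – which is exactly the transition described next.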
Once you’re ready, you can scale the same tests on demand using an enterprise-ready toolchain built on open source, like CA BlazeMeter, for performance, functional, and API testing. BlazeMeter gives you an easy way to extend regression testing across distributed Agile teams and to accelerate test automation for continuous quality across a CI/CD pipeline.