Noga Cohen is a Senior Product Marketing Manager for CA BlazeMeter. She manages the BlazeMeter blog and other content activities, focusing on technical content in the fields of performance, load testing and API testing, both independently and by managing writers who are developers. Noga has more than five years of experience across a wide range of writing styles: hi-tech, business, journalistic and academic.


Overcoming Performance Testing Challenges in Continuous Delivery Pipelines

Companies that want to speed up delivery, lower costs, improve product quality and lead the tech industry are moving towards Continuous Integration (CI) and Continuous Delivery (CD). Incorporating load testing tools like JMeter or CA BlazeMeter into the CI process is a crucial part of “shifting left” and of ensuring the full development process is connected rather than divided.


In this blog post, we will go over the challenges that DevOps engineers, developers and QA engineers face when performance testing in Continuous Delivery pipelines. This post is based on a talk by our Chief Scientist, Andrey Pokhilko, at Jenkins World 2016, which you can check out here.


Performance Testing in CI/CD - Challenges and Solutions


1. The Test Environment


The Challenge


Performance testing environments are complex. Layered applications, multiple dependencies and third-party APIs require different storage systems and CDNs for testing. This takes up time and resources, complicating the CI process instead of simplifying it.


The Solution


Simplify the test environment. Trade some realism for practicality and shorter build times. This is in line with the goals of performance testing in CI: using KPIs to examine trends and comparisons over time, so you can see whether you are improving. So stub all third-party APIs and run the application in the smallest configuration possible, so that it fits on a single machine and deployment stays simple.
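As a sketch of what this looks like in practice, here is a minimal Taurus config for a small, single-machine smoke test. The address and request paths are hypothetical; the assumption is that all third-party calls have been pointed at a local stub rather than real services:

```yaml
# Hypothetical minimal Taurus test: small enough to run on one machine,
# aimed at a locally stubbed environment instead of production.
execution:
- concurrency: 5        # keep the load small for CI
  ramp-up: 30s
  hold-for: 2m
  scenario: smoke

scenarios:
  smoke:
    default-address: http://localhost:8080   # stubbed environment (assumed address)
    requests:
    - /
    - /api/checkout
```

Because the environment is stubbed and tiny, absolute numbers matter less than the trend from build to build.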


2. Time Consumption


The Challenge


Ideally, the complete CI cycle should take no longer than 10 minutes, and up to 30 minutes is still acceptable. But if the process takes more than an hour, we lose the “continuous” part, which we can all agree is pretty important. Yet a large number of functional tests, or the addition of performance tests, can stretch the CI cycle considerably.

The Solution

- Prepare upfront and overnight. Don’t spend your working day generating test data sets.

- Reuse what you can, like generated data sets, deployed environments, etc. Instead of deploying everything at the same time, divide the job and take parts out of the CI cycle.

- Make short tests - 1-5 minute load tests can reveal many insights and show trends and changes.

- Don’t put spike and endurance tests in CI. You can use Jenkins as an automation platform, but don’t put them in the CI cycle for every build.

- Parallelize tests by using Jenkins 2.0 and Taurus, an open-source test automation tool. Taurus can split tests and run the parts in parallel. CA BlazeMeter can provide resources for parallelizing in the cloud.
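To illustrate the parallelization point, a Taurus file can list several executions, each with its own scenario; with local provisioning, Taurus starts them concurrently by default. The scenarios and URLs below are illustrative:

```yaml
# Two hypothetical scenarios run side by side instead of back to back.
execution:
- concurrency: 10
  hold-for: 3m
  scenario: search
- concurrency: 10
  hold-for: 3m
  scenario: checkout

scenarios:
  search:
    requests:
    - http://example.com/search?q=test
  checkout:
    requests:
    - http://example.com/checkout
```

Splitting one long test into parallel parts like this is one of the simplest ways to keep the CI cycle inside the 10-30 minute budget.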


[Screenshot: Taurus open-source automated testing dashboard]


3. Debugging CI Jobs


The Challenge


As can be expected, automating tests will result in problems and failing jobs caused by issues in the jobs themselves, which need maintenance. This effect should be minimized, because it spoils your build history and your ability to gather system health information and analyze trends.


The Solution


Take the debugging out of the CI process. Taurus enables you to debug locally and then commit the fixed test back into the repository. This keeps the history of your performance testing results clean, because failures from debugging runs stay on the local machine.
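One way to picture this workflow, assuming the same Taurus file is both run by Jenkins and checked out locally, is a single config whose provisioning is overridden per environment:

```yaml
# Sketch: the same committed test.yml serves both purposes.
#   Locally, while debugging:   bzt test.yml
#   From the CI job, in cloud:  bzt -o provisioning=cloud test.yml
provisioning: local   # default; debugging runs never touch the build history
```

Because the local runs use the identical file that CI consumes, a fix verified on your machine can be committed with reasonable confidence that the CI job will pass.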


4. Results Analysis


The Challenge


Analyzing performance testing results is difficult on CI machines, because these machines need to support multiple technologies. Even Jenkins, which is very flexible, has its limits. In addition, the results analysis needs to be accessible to non-technical stakeholders, and go/no-go decisions can’t be left to a machine alone.


The Solution

- Integrate services that can analyze the data: static analysis, artifact storage, bug tracking and cloud services.

- Keep the overview in Jenkins, but use the CA BlazeMeter reporting system for rich, configurable reporting, such as comparisons and easier trend lines.


[Screenshot: CA BlazeMeter reports dashboard]

- Use pass/fail criteria. This is complex because you are examining graph curves and multiple factors rather than yes/no questions. Taurus has a mechanism for expressing and evaluating complex pass/fail conditions.
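As a sketch of such criteria, Taurus's passfail reporting module accepts threshold expressions over the measured KPIs; the thresholds below are illustrative, not recommendations:

```yaml
# Hypothetical pass/fail rules evaluated during the run.
reporting:
- module: passfail
  criteria:
  - avg-rt>250ms for 30s, stop as failed     # abort if responses stay slow
  - failures>5% for 1m, continue as failed   # finish the run but mark the build failed
```

Expressing the criteria in the test file itself means the build turns red for performance regressions the same way it does for failing unit tests, without a human reading graphs on every build.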


[Screenshot: Taurus pass/fail criteria]


While performance testing in CI cycles has its challenges, they can be overcome and provide you with new possibilities. Click here to learn more about Continuous Integration with Taurus, or request a demo to see CA BlazeMeter in action.


You might also be interested in viewing our on-demand webcast on Continuous Testing for Containerized Applications, featuring special guest Codefresh. 

