Powering Performance Testing With Observability Tools
June 21, 2023



Any experienced developer or tester will tell you that the greatest challenge with performance tests is not building or running a test suite. JMeter and other modern performance testing platforms have simplified that process — to the extent that any developer with a bit of free time can put together a rudimentary suite. The greatest challenge with performance testing does not occur before or even during the test. It happens right after.  

How do you create a performance test pipeline that boosts team productivity rather than sending the team chasing down recurring issues? How can you truly understand what it means when tests fail, and then apply code changes based on those learnings?

Zeroing in on specific, meaningful code or architecture deficiencies can often feel like searching for a needle in a haystack. Fortunately, there is a solution: observability tools. In this blog, we will examine what observability tools are, how they work with test data to improve code, and why BlazeMeter and Digma make a powerful combination to improve performance testing at the code level.

Observability tools provide root-cause analysis by aggregating logs, metrics, and traces. They process this data into events that show software testing teams where issues are cropping up so that they can fix them more quickly.

These tools and platforms can help gather and surface useful information about the application's backend following performance test runs. However, observability tools are not a guaranteed catch-all. Without a quick and easy way to resolve failing tests, testing initiatives are doomed to fail. Even hardened engineering teams find it easier to ignore a failure, adjust thresholds, or take almost any other conceivable action than to spend another lengthy session trying to get to the bottom of what a failing test means.

This is exactly where the synergy between BlazeMeter and Digma, a code runtime linter, can make the difference between a successful performance testing initiative and a failed one. By using observability tools like Digma, testers can identify and diagnose issues caused by coding errors or other factors that may impact system performance under stress. 

BlazeMeter is a powerful SaaS platform for executing performance and load testing. Testers can upload JMeter scripts (or scripts from other frameworks) to the cloud-based engines and use hundreds or even thousands of Virtual Users (VUs) to generate load on the system being tested. Metrics like response time and bandwidth can be measured, and BlazeMeter can even be integrated with APM tools to gain additional insight into the system under test. Every time the test does indicate a problem, however, the team is on the clock to get to the root cause as quickly as possible.

That's where Digma comes in. As an observability tool, Digma integrates with IDEs and can bridge the gap between code and system performance. By placing tags within code blocks, Digma correlates measurements taken from OpenTelemetry with the methods in the code.
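For illustration, here is roughly what such a tag can look like in a Spring service, using OpenTelemetry's @WithSpan annotation. The class and method names below are invented for this sketch, and it assumes the opentelemetry-instrumentation-annotations dependency is on the classpath:

import io.opentelemetry.instrumentation.annotations.WithSpan;
import org.springframework.stereotype.Service;

// Hypothetical service from a PetStore-style app; names are illustrative only.
@Service
public class OwnerLookupService {

    // @WithSpan asks the OpenTelemetry agent to record a span for every call
    // to this method, which Digma can then correlate back to this exact
    // location in the code.
    @WithSpan
    public String findOwnerName(long ownerId) {
        // ... the real lookup logic would live here ...
        return "owner-" + ownerId;
    }
}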

How Does OpenTelemetry Fit Into This?  

While Digma provides advanced processing of the test telemetry, having observability data at hand is still a prerequisite. Without a way to collect application data during the test run, you're effectively running blind. This is where OpenTelemetry has made a huge impact on the observability landscape. Simply put, OpenTelemetry (OTEL) is a new and easy way to collect data about your code execution. It has wide support, to the extent that you can often activate it without any code changes. It is free, open, and extremely easy to 'turn on'. Instead of embarking on a long observability project to increase your visibility of how the code works, you can start collecting data right away.

BlazeMeter and Digma work together to provide deep insights into performance issues in the underlying code. The BlazeMeter platform generates load on the system, OpenTelemetry gathers information on system performance, and Digma maps that data back to the tagged code in the system under test. This enables developers to receive Continuous Feedback (CF) from the system in real time as it's being tested and use this information to determine how well the underlying code is functioning under stress.

 


Observability Tools in Action: Digma & BlazeMeter

Now that we know the background of these two powerful tools and how they work together, let’s see them in action. 

To get started, we’ll take a sample Spring app (the familiar PetStore example), run some JMeter tests, then see what we can learn about specific areas in the code and the type of feedback we should expect as developers when working with a Continuous Feedback platform such as Digma. 

We’ll demonstrate an end-to-end example in three easy steps. 

Step One: Setting up Digma and collecting OTEL data in testing

Digma can easily be installed locally to provide developers with data about their code within minutes. However, this time we want to install Digma in a central location so it can ingest data from multiple testing environments. This can be achieved with the Digma Helm chart, which we'll use to deploy Digma on a Kubernetes cluster. Digma's deployment is similar to a pipeline, receiving telemetry data from the application under test, processing that data, and producing insights about the code at the other end of the pipeline.

 

Once Digma is deployed, all we need to do is install the IDE plugin and configure the Digma endpoint to match the service endpoint created by the Helm chart (as described in the Helm documentation).

 

Finally, we’ll need to instrument our application with OpenTelemetry. In the IDE, Digma can collect data automatically. For the test environment, we can set up observability by setting a few environment variables to enable the OpenTelemetry agent. You can find the documentation on the OTEL website. Here’s a naive example: 

# Attach the OpenTelemetry Java agent so traces are collected without code changes
export JAVA_TOOL_OPTIONS="-javaagent:path/to/opentelemetry-javaagent.jar"
# Name the service so its telemetry is easy to identify
export OTEL_SERVICE_NAME="your-service-name"
java -jar myapp.jar

We can also define the deployment environment so that the results are grouped under it. That is done very simply by setting another environment variable:

export OTEL_RESOURCE_ATTRIBUTES="digma.environment=LOAD_TEST" 

That’s it! We now have a working instance of Digma waiting to accept data from the application under test. 

Step Two: Setting up a BlazeMeter performance test

If you do not have a BlazeMeter account, it is quick and easy to set up. Log into the BlazeMeter interface and execute a test run. You can use the test script that is already present in the repo. 

Execute the test with some load parameters; in this example, we'll just start with about 20 simulated users. The test results will soon appear:

 

The report looks good at surface level, but do we know exactly what is happening here? Which parts of the code are not scaling well? This is precisely where 90% of performance testing initiatives fail. For such testing to be effective and sustainable, the cognitive effort required to process the results must be manageable. Each test result needs to map instantly to actionable items, insights about the application, or a better understanding of its limitations.

Step Three: Using Digma to make sense of the results

Now that we have used BlazeMeter to generate some data, let's see what Digma can tell us about it. With Digma, we go directly to the code to understand where the issues are and what could be some of the weak areas. 

 

Looking at our API code, we can immediately see that there is an issue, and we can click the annotation to see more details about the problem:

 

While we were busy launching the test, Digma was analyzing the test results, comparing them with the normal benchmark for each function, removing outliers, and identifying bottlenecks and key areas of concern. Specifically, it identified a scaling issue that was costing the system about one extra second for each concurrent user. Not only that, Digma traced the root cause of the issue to the ValidateOwnerWithExternalService function.
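To make that finding concrete, here is a purely hypothetical sketch of the kind of pattern that produces this behavior. It is not the actual PetStore code, just an illustration of a method whose callers all queue behind a single slow, serialized external call, so each additional concurrent user adds roughly a fixed amount of wait time:

import org.springframework.stereotype.Service;

// Hypothetical illustration only; not the real implementation flagged by Digma.
@Service
public class OwnerValidationService {

    private final Object externalServiceLock = new Object();

    // All requests serialize on the same lock while waiting for a slow external
    // call, so response time grows roughly linearly with the number of
    // concurrent users, the kind of scaling degradation described above.
    public boolean validateOwnerWithExternalService(long ownerId) {
        synchronized (externalServiceLock) {
            try {
                Thread.sleep(1000); // stands in for a ~1 second external validation call
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return ownerId > 0;
        }
    }
}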

This is a transformative step in the testing process. Digma essentially eliminates the manual and painful step of processing the result data and understanding what each negative result means. Instead, the results are forwarded straight to the developers and into the code. Now, each developer can be completely aware of how their code is faring in the performance testing wilderness. They can identify key weak points and fix them continually without running any deep statistical analysis themselves. The insights can simply be applied to assess and improve complex code.
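For example, once a bottleneck like the hypothetical one sketched above is visible in the code, a developer might remove the shared lock and cache the validation result so concurrent users no longer queue behind one another. Again, this is just an illustrative sketch, not the project's actual fix:

import java.util.concurrent.ConcurrentHashMap;
import org.springframework.stereotype.Service;

// Hypothetical fix sketch: memoize validation results and drop the shared lock.
@Service
public class CachedOwnerValidationService {

    private final ConcurrentHashMap<Long, Boolean> validationCache = new ConcurrentHashMap<>();

    public boolean validateOwnerWithExternalService(long ownerId) {
        // The slow external call now happens at most once per owner, and
        // unrelated owners no longer block one another.
        return validationCache.computeIfAbsent(ownerId, this::callExternalService);
    }

    private boolean callExternalService(long ownerId) {
        return ownerId > 0; // placeholder for the real external validation call
    }
}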

When using BlazeMeter to run performance tests, Digma can be integrated with the platform to capture and analyze metrics such as response time, throughput, and error rates. This integration allows for real-time monitoring and analysis of performance data during load tests, providing insights into the performance of the application or system under different levels of stress. 

Digma also provides advanced analytics capabilities, such as machine learning-powered anomaly detection, which can automatically identify unusual patterns of behavior in performance data. This can help teams identify and investigate issues more quickly, improving their ability to resolve them before they impact end users. 

Accelerate your performance testing. See how powerful BlazeMeter can be when paired with observability tools like Digma. Start testing for FREE today!

Try Digma for yourself >>
