Why You’re Blowing $$$ With Every Load Test
Are your load tests unnecessarily draining the company budget? Chances are, the answer is yes!
Why? Creating reliable test scenarios for performance tests requires many temporary resources to support replicas of your application stack. This is especially true if you work for a large, modern enterprise whose stacks contain hundreds or even thousands of cloud resources. If you run multiple test environments in an uncontrolled, unoptimized way, you’re probably wasting a great deal of money.
So, what can you do? A well-organized, controlled test scenario is obviously vital. But another very easy way to stop wasting money is to stop your load tests early. You don’t need to run a test for its entire duration if you spot a bottleneck just minutes after starting it. By stopping a load test as soon as a failure is detected, you keep your cloud-based performance test environments lean and cost-efficient.
The Hassle-Free Way to Stop Your Test at the Right Point
Repeatedly trying to manually find the exact point where a load test fails is tiresome and inconvenient. That’s why, at BlazeMeter, we’ve recently introduced a tool that allows you to stop load tests as soon as you hit a bottleneck...and it’s pretty simple to use.
Before you start your load test, you can define thresholds and criteria that determine when it should be considered a failure. As soon as the system fails to stay within a threshold, you receive an automatic alert indicating the failure, and the test can be stopped. Tests can also be configured to stop automatically.
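Conceptually, this kind of criteria check can be sketched in a few lines of Python. The metric names and limits below are illustrative assumptions, not BlazeMeter’s actual configuration schema:

```python
# Hypothetical failure criteria; names and limits are illustrative only.
CRITERIA = {
    "avg_response_ms": 800,   # fail if average response time exceeds 800 ms
    "error_rate": 0.05,       # fail if more than 5% of requests error out
}

def check_criteria(samples):
    """Return a description of the first violated criterion, or None if all pass."""
    times = [s["elapsed_ms"] for s in samples]
    errors = sum(1 for s in samples if not s["ok"])
    avg = sum(times) / len(times)
    if avg > CRITERIA["avg_response_ms"]:
        return f"avg_response_ms {avg:.0f} > {CRITERIA['avg_response_ms']}"
    if errors / len(samples) > CRITERIA["error_rate"]:
        return f"error_rate {errors / len(samples):.2%} > {CRITERIA['error_rate']:.0%}"
    return None

# Average response time here is 925 ms, so the first criterion trips.
samples = [{"elapsed_ms": 900, "ok": True}, {"elapsed_ms": 950, "ok": True}]
violation = check_criteria(samples)
if violation:
    print(f"Stopping test early: {violation}")
```

A real runner would evaluate such a check against each incoming batch of samples and abort the test the moment it returns a violation.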
Of course, you can also view all metrics throughout the test. To run a load test, engines generate load, then consolidate the aggregated data while counting hits and measuring latency. This data is constantly streamed to BlazeMeter’s app servers, making metric and threshold data available almost immediately. The aggregation supports statistical calculations such as standard deviation, min/max, first/last, and more, applied to metrics such as latency, response time, and response data size. Metrics are continuously calculated, updated, and displayed throughout a test. As you can see below, once the test is done you get bottom-line results, including any exceeded metrics. In this case, we see a specific exception that didn’t impact the overall response time. But is that enough to understand your test’s overall results?
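The kind of streaming aggregation described above, keeping count, min/max, first/last, mean, and standard deviation up to date without storing every sample, can be sketched with Welford’s online algorithm. This is a generic technique for illustration, not BlazeMeter’s actual implementation:

```python
import math

class RunningStats:
    """Streaming aggregation of one metric: count, min/max, first/last,
    mean, and standard deviation via Welford's online algorithm."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0       # sum of squared deviations from the running mean
        self.min = math.inf
        self.max = -math.inf
        self.first = None
        self.last = None

    def add(self, x):
        self.n += 1
        if self.first is None:
            self.first = x
        self.last = x
        self.min = min(self.min, x)
        self.max = max(self.max, x)
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def stddev(self):
        return math.sqrt(self.m2 / self.n) if self.n else 0.0

# Feed latency samples (ms) as they stream in; stats stay current after each one.
latency = RunningStats()
for sample in [120, 95, 310, 105]:
    latency.add(sample)
print(latency.mean, latency.min, latency.max)
```

Because each update is O(1) in time and memory, the same accumulator works whether a test produces thousands or billions of samples.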
BlazeMeter’s “Sliding Window” feature performs the same aggregation for each minute of the test. It aggregates data and calculates metrics in order to detect specific response peaks that might not affect the test’s final average results. With the Sliding Window capability, users can set a specific threshold, per minute, so that the system can report on and stop the test at any point during the run.
For example, as shown in the image below, the system reports on a specific minute that shows a peak in response time:
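A per-minute peak check of this kind can be sketched as follows. The sample format and threshold are hypothetical, and this is a simplified illustration of the idea rather than BlazeMeter’s implementation:

```python
from collections import defaultdict

def minute_averages(samples):
    """Group (timestamp_s, elapsed_ms) samples into per-minute buckets
    and return the average response time for each minute."""
    buckets = defaultdict(list)
    for ts, elapsed in samples:
        buckets[ts // 60].append(elapsed)
    return {minute: sum(v) / len(v) for minute, v in sorted(buckets.items())}

def find_peaks(samples, threshold_ms):
    """Return the minutes whose average exceeds the threshold, even when
    the overall test average stays under it."""
    return [m for m, avg in minute_averages(samples).items() if avg > threshold_ms]

# Minutes 0 and 2 are healthy; minute 1 spikes.
samples = [(10, 100), (30, 120), (70, 900), (80, 950), (130, 110)]
print(find_peaks(samples, threshold_ms=500))  # minute 1 exceeds the threshold
```

Note that the overall average of these samples is well below the worst minute’s average, which is exactly the kind of anomaly a final, end-of-test summary would hide.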
What This Means for You
With the Sliding Window feature, testers can clearly see if a test has reached its goals. This applies to the overall average of the test as well as local response peaks and performance anomalies.
In addition, dev and test environments require more capacity and can be more expensive to maintain than production environments. These environments change constantly, scaling up before a release and shrinking back down afterwards, so they need to be continuously controlled and optimized. Stopping load tests early and using the Sliding Window mechanism gives companies a chance to respond quickly to test failures, save testers’ and developers’ precious time, conserve funds, and keep infrastructure usage optimized.
Want to find out more? Check out our documentation to learn how to set your test performance criteria for various metrics, such as response times, errors, hits/s, test duration, and more.