As a performance testing expert and the person who drives performance testing across my team, I am in charge of testing all the media platforms my company delivers and uses. These platforms include TV streaming, apps, our website, the Wi-Fi connection, and more. In this blog post I would like to share how I use BlazeMeter to run load, stress and mobile tests to ensure our systems perform well and don’t crash under heavy traffic.
Our Performance Tests
Our company develops multiple media platforms, and my team is in charge of testing them all, both separately and as a whole. Therefore, I have to plan my testing carefully. I have three main goals I want our performance testing to achieve:
- Ongoing assurance that all of our systems are performing at all times - that the apps respond, the web services are responsive, etc.
- Periodic in-depth examination of our system from end to end. I need to know that the ongoing changes we make to our code have not disrupted the multi-platform user journey and that the system will not crash before major events.
- Responsiveness to major releases throughout the year. The goal is the same as the end-to-end test from point 2, only these tests aren’t periodic. Instead, they follow development release schedules.
To achieve these goals we run three types of BlazeMeter performance tests:
- Continuous runs of smaller tests 2-3 times a week.
- A large, multi-platform, end-to-end load test once a quarter.
- A large, multi-platform, end-to-end load test after every major release.
The scenarios for these tests are created by my team. We align with each platform’s developers and with the business flow of the product, and create testing scripts in Apache JMeter™. The JMX files are then uploaded to BlazeMeter and configured according to the test goals: the number of users to test, geo-locations, the APM tools to monitor with, etc. We are also currently looking at the open source Taurus tool for script creation, automation and ease of use.
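For readers curious about the Taurus route, a minimal test definition that wraps an existing JMeter script can look like the sketch below. The scenario name, script file, user counts and durations here are illustrative, not our actual setup:

```yaml
# Sketch of a Taurus test definition wrapping an existing JMX script.
# All names and numbers are illustrative.
execution:
- scenario: tv-streaming
  concurrency: 500        # virtual users
  ramp-up: 10m
  hold-for: 1h

scenarios:
  tv-streaming:
    script: tv-streaming.jmx   # JMeter script created by the team
```

Running `bzt tv-streaming.yml` executes the JMeter script locally; adding `provisioning: cloud` to the same file sends the test to BlazeMeter for cloud execution.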
Running Continuous Testing
We maintain different test environments, which we are adding to the Jenkins continuous testing pipeline we are developing. With the help of SVN repositories we can store all the scripts in one place and then run tests automatically, which reduces manual intervention.
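In a setup like this, the Jenkins job typically boils down to a shell step that checks the scripts out of SVN and runs them with Taurus, which can emit a JUnit-style report for Jenkins to publish. A hedged sketch, with illustrative file paths:

```yaml
# Sketch: a CI-friendly Taurus config, run from a Jenkins shell step
# with `bzt ci-smoke.yml`. Paths and numbers are illustrative.
execution:
- scenario: smoke
  concurrency: 50
  hold-for: 10m

scenarios:
  smoke:
    script: scripts/smoke.jmx      # checked out from SVN

reporting:
- module: junit-xml                # report format Jenkins can publish
  filename: results/taurus-report.xml
```

The `junit-xml` reporting module writes results in a format the standard Jenkins JUnit publisher understands, so a failed load test can fail the build.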
The continuous tests are handed off to the developers to run on their own, but it’s a whole different ball game when we run our quarterly tests.
Running and Monitoring Large Scale Load Tests
The unique aspect of our end-to-end company test runs is that the developers from the 7-8 platforms we test participate together in running and monitoring the test. Because our developers are spread out globally, each quarter we hold special testing events with dozens of people online from all over the world, together with BlazeMeter’s Professional Services and Support teams. We are all online together to monitor our large scale stress tests, which simulate millions of concurrent virtual users. This joint run is possible thanks to BlazeMeter’s sharing capabilities for both tests and test results.
After the large multi-test finishes running, we share the digital reports with everyone who was on the call. These reports enable drilling down into labels and KPIs, and analyzing trends over time. From these reports, each platform team can learn which issues they need to fix on their side.
For example, we were able to identify key bottlenecks like firewall issues and queuing issues on the web servers. These were flagged to the programme manager and helped us optimise the code and the server configurations.
After everyone fixes their bugs and errors, we run the load tests again, to see if the fixes helped.
The Importance of Successful Testing
We keep testing and fixing until the tests are successful. A successful test means that our SLA response time was met at both the target throughput (requests per second) and the target number of concurrent users we aim to support.
For the large-scale tests, we need to ensure the platforms and systems can cope with the load within acceptable response times and server stats. For ongoing tests, we sign off a load test as successful if all the metrics are within acceptable limits. Once the tests are successful, we give the team a sign-off to release.
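Pass/fail criteria like these can also be encoded directly in a Taurus configuration via its passfail module, so a run is marked failed automatically the moment the SLA is breached. The thresholds below are made-up examples, not our real SLAs:

```yaml
# Sketch: automatic SLA checks with the Taurus passfail module.
# Thresholds are illustrative, not our actual SLA values.
reporting:
- module: passfail
  criteria:
  - avg-rt>2s for 30s, stop as failed    # response-time SLA
  - fail>5% for 1m, stop as failed       # error-rate ceiling
```

With criteria in place, the same config works unattended in CI: a breached SLA stops the test and returns a non-zero exit code.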
Recently we’ve also started using BlazeMeter’s end user experience feature, so we know what the user actually experiences on their device under load. All of the KPIs above ensure we provide quality services that don’t crash on our users.
In the future, we plan on integrating more scripts into Jenkins, for automation that will enable testing and releasing more frequently.
To start testing with BlazeMeter yourself, just put your URL in the box below and your test will start in minutes.
Interested in writing for our Blog? Send us a pitch!