How Piksel Uses BlazeMeter for Continuous Performance Testing and Analysis

ABOUT PIKSEL

Piksel is a video solutions organization for the broadcast media industry. Designing, building, developing, and managing video products, Piksel delivers over 3 billion on-demand streams each year. In addition, Piksel stores over 24 billion objects on its platform and manages more than 50,000 live broadcasts a year.

Piksel’s R&D division consists of six development teams, each concentrating on a different part of the product and each with its own QA function. As part of Piksel’s shift-left strategy, the QA engineers’ responsibilities include creating guidelines for test automation tools and mentoring the teams on them.

“We previously used a load test company that did not give us the flexibility to write and maintain our own test code. This company wrote the tests and ran them on their platform. We believe we can move faster by owning the test code as well as the application code.”


THE CHALLENGE

Anu Johar, QA practice lead, describes two main development challenges for Piksel:

Ensuring every new product version will work when deployed
Piksel needs confidence that every new release will work when deployed. This includes ensuring that every function still operates, that response times have not regressed, and that no new issues have been introduced. This is especially important for customer-facing areas of the code, which are critical by nature.

Ensuring the environment is stable at all times
In addition to code-change stability, Piksel also needs to ensure that the environment is stable and continues to perform at all times. The environment needs to hold up under heavy load, over long periods, and across multiple services.

THE SOLUTION

Piksel chose BlazeMeter as its solution for implementing automated, continuous performance testing.

Running performance tests with every Git push
Piksel runs automated Gatling load testing scripts in BlazeMeter for high-profile services every time new code is pushed to Git, and with every release. These tests are run five times in parallel, with 100 users each, from a single geo-location; the purpose of the parallel runs is to benchmark the results against each other. Pass/fail criteria are set so that if the tests fail, the pipeline fails, and the development teams can then investigate the cause of the failure before releasing.

Piksel’s QA engineers created various Gatling scripts that test different user scenarios. The question “What to test?” was answered by looking at the website areas and actions that customers use most, such as content browsing, which the QA team identified from the application logs.
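To illustrate, a minimal Gatling simulation (Scala DSL) along these lines might look like the sketch below. The base URL, endpoints, and assertion thresholds are placeholders chosen for the example, not Piksel’s actual scripts; the assertions play the role of the pass/fail criteria that fail the pipeline.

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class ContentBrowsingSimulation extends Simulation {

  // Placeholder base URL; a real script would target the service under test
  val httpProtocol = http
    .baseUrl("https://example-video-platform.test")
    .acceptHeader("application/json")

  // User journey modelled on a common customer action: browsing content
  val browseContent = scenario("Content browsing")
    .exec(http("List content").get("/content"))
    .pause(1)
    .exec(http("Open item").get("/content/123"))

  setUp(
    // 100 virtual users, matching the per-run load described above
    browseContent.inject(atOnceUsers(100))
  ).protocols(httpProtocol)
    // Assertions act as pass/fail criteria: if they are not met, the run (and the pipeline) fails
    .assertions(
      global.successfulRequests.percent.gt(99),
      global.responseTime.max.lt(2000) // example threshold, in milliseconds
    )
}
```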

The BlazeMeter performance tests run together with unit tests and acceptance tests. In the future, these tests will be integrated across the entire CI/CD pipeline.

Running nightly performance tests on the environment
Every night at 20:00, Piksel automatically runs longer performance tests against its environment. These tests cover key user journeys, apply a heavier load with more concurrent virtual users, target more services, and check the environment’s stability. Piksel’s teams get the test feedback in the morning through a Slack alert they integrated with BlazeMeter; if a test fails, the environment is investigated.
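For comparison, the nightly run’s longer and heavier profile could be expressed with Gatling’s open-model injection steps, roughly as sketched below. The ramp, arrival rate, and durations are illustrative assumptions, not Piksel’s actual configuration.

```scala
import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class NightlySoakSimulation extends Simulation {

  // Placeholder protocol; a real run would cover more services in the environment
  val httpProtocol = http.baseUrl("https://example-video-platform.test")

  // Kept to a single request here; the real scripts cover several key user journeys
  val keyJourney = scenario("Key user journey")
    .exec(http("List content").get("/content"))

  setUp(
    keyJourney.inject(
      rampUsers(500).during(10.minutes),       // ramp up to a heavier concurrent load
      constantUsersPerSec(20).during(2.hours)  // sustain the load to check environment stability
    )
  ).protocols(httpProtocol)
    .assertions(
      global.successfulRequests.percent.gt(99) // a failure here triggers investigation the next morning
    )
}
```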

THE RESULTS

Continuous analysis of performance test results
At Piksel, tests now run automatically with every release and every night. BlazeMeter’s reporting dashboards and Slack integration enable engineers to analyze the results, giving them a complete picture of both the code they developed and the application as a whole, whenever they need it. Shareable test report links also make it possible to send the KPIs to product managers, who can then see the application’s status.

Continuous code fixing
If the results show a test failure, for example in concurrency or throughput, developers can investigate the cause, fix the code, and run the test again until it is ready for release.

More professional and involved developers
BlazeMeter’s easy-to-use GUI, shareable results, and integration with Taurus have helped the R&D team understand the whole system better and, in turn, develop better applications. How? Failed tests have encouraged developers to research and understand the root cause of each failure. As a result, developers now understand the environment and the whole code release thoroughly, instead of looking only at their own change. This allows the team to focus on the entire application, including non-functional requirements.

In addition, developers now understand the testing process better. Teams own the test code and can change it as their application changes.

Scaling performance testing adoption among teams
Automated performance testing started out in one of the development teams as a proof of concept. After learning together the best ways to build performance testing into the development process, Piksel is now in a mature position to replicate the solution and scale it to the other teams. The QA engineers will work with those teams to create Gatling performance scripts, which the teams will eventually write on their own. Ultimately, automated performance tests will run for the complete product.
