September 27, 2023

Performance Testing Innovation: How AI and ML Will Change the Game

Ever since ChatGPT took the world by storm, it has become clear that the AI movement is not something to be overlooked. AI and ML are expected to change every technological aspect of software development, and performance testing is no exception. With AI, performance testing can become more efficient, accurate, and swift, ensuring high-quality applications and services in production.

The new AI tools are also democratizing the use of AI. In the past, companies had to rely on a limited number of data scientists to develop algorithms, build models, and implement AI in products. This dependency, along with an industry-wide skills shortage, impeded the adoption of AI across products. Today, however, AI tools have made AI accessible to users with little to no data science knowledge. This makes it possible to implement AI in products and services with minimal effort, in a matter of months.

In this blog post, we share practical information for testers on how to implement AI in their day-to-day. We cover potential use cases, the benefits and pitfalls, best practices, and recommendations. At the end, we share how BlazeMeter has implemented AI for the creation of test data and other purposes.

This blog post is a must-read for any performance tester or software leader. With AI permanently changing the game, engineering professionals can either embrace this new technology or fall behind. In this blog, we explain how.

To learn more, you can watch the fascinating webinar this blog is based on. The webinar, “The Future of Testing: A Conversation About the Use of AI and ML”, features Rod Cope, CTO, Stephen Feloney, VP Product Management, and Lana Truong, Sr. Product Marketing Manager, of Perforce Software.


AI & ML in Software Testing: The Benefits

Using AI and ML for testing provides significant advantages for testers, developers, and businesses. It can streamline the testing process, improve accuracy, and contribute to a more robust, high-quality software product in production.

The main advantages of using AI and ML in performance testing are:

  • Improving quality - AI helps ensure the application or service is successful in production.
  • Increasing efficiency - AI shortens the time it takes to test and eliminates manual errors.
  • Democratizing testing - AI allows users of any skill level to run and understand performance tests.
  • Providing confidence - AI supports testers and all users with tools and knowledge that can improve their testing abilities.

ML & AI in Software Testing Use Cases

How can testers enjoy the aforementioned benefits? Testers can use AI to*:

  • Auto-generate tests (see the sketch below)
  • Maintain existing test scripts
  • Understand test results
  • Pinpoint failures, errors, duplicates, and bottlenecks in the code
  • Create test data
  • Cleanse test data to remove PII and other issues
  • And more.

In addition, AI can help developers write code with fewer (or no) performance issues, which also helps ensure optimal application and service performance.

*This is a non-exhaustive list. The opportunities are vast; just use your imagination (more on this below).
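
To make the "auto-generate tests" use case more concrete, here is a minimal sketch that asks a large language model to draft a load test script from a plain-English description of an endpoint. It assumes the OpenAI Python client; the model name, prompt, endpoint, and output file are illustrative only, and the generated script is a draft that still needs human review.

```python
# Minimal sketch: draft a load test from a feature description with an LLM.
# Assumes the OpenAI Python client (pip install openai) and OPENAI_API_KEY set;
# the model name, prompt, and endpoint below are illustrative only.
from openai import OpenAI

client = OpenAI()

feature_description = (
    "A GET /api/v2/orders endpoint that returns a paginated list of orders "
    "and must sustain 200 requests per second with p95 latency under 300 ms."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {
            "role": "system",
            "content": "You write Locust load test scripts in Python. "
                       "Return only runnable code, no explanations.",
        },
        {
            "role": "user",
            "content": f"Write a Locust test for: {feature_description}",
        },
    ],
)

generated_script = response.choices[0].message.content

# Always review the generated script before running it (hallucinations happen).
with open("generated_load_test.py", "w") as f:
    f.write(generated_script)
print("Draft test written to generated_load_test.py -- review before running.")
```

The same pattern can target JMeter or Taurus test definitions instead of Locust; the key point is that the model's output is treated as a draft to review, not a finished test.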


Drawbacks of Using AI and ML for Testing

However, using AI and ML for testing also calls for exercising caution. Some of the potential pitfalls of using AI and ML for testing include:

  • Hallucinations - AI models are far from perfect. They often produce errors, known as “hallucinations”. Yet we often rely on their output as if it were flawless, instead of verifying the results.
  • Lack of transparency - We do not always understand why an AI model produced a particular output for a given prompt. When you need to recreate or fix a response, this lack of transparency makes it difficult.
  • Risk of plagiarism - The AI model’s response could be based on data that is subject to copyright protection. If this data is used as is, the organization could be at risk.
  • Job loss - AI is expected to create a significant change in the job market, impacting many people’s jobs. The more it is used and optimized, the bigger the risk becomes.

Despite these significant drawbacks, the advantages outweigh the disadvantages, and we highly recommend using AI and ML for testing.


Implementing This Testing Innovation: Recommendations for Teams

AI is a game-changer, and it is here to stay. Therefore, we recommend that engineering teams embrace AI and ML; otherwise, they will get left behind. Based on our extensive experience working with testers, developers, and engineering leaders across enterprises, SMBs, and startups, here are best practices for teams getting started with AI:

  • Learn - Start by playing around with different AI tools like ChatGPT. Educate yourself on what they can offer you. Understand that using them requires a different and new way of thinking. Get your team to learn as well.
  • Plan - Don’t implement AI blindly. Think about how AI can benefit you and your team and where it fits in your organization, processes, and methods. Be innovative and leverage AI for your requirements and KPIs.
  • Automate - Incorporate AI tools into your workflows in an automated manner rather than as one-offs. For example, instead of asking ChatGPT to create a single test case, build a testing plan with ChatGPT and then automate the generation of a new test for every newly released feature (see the sketch after this list).
  • Verify - Expect hallucinations with AI. Verify results before adopting them.
  • Secure - Check your company policies and security guidelines regarding which data you are allowed to use in public AI tools. For example, don’t share sensitive source code with ChatGPT. Compliance is also an important factor to consider; for instance, there should be policies in place to protect customers’ privacy.
  • Share - Let your users know you have incorporated AI in your product. They also need to adhere to their company standards; otherwise, they may inadvertently be using third-party AI without knowing it.
  • Get buy-in - Implementing AI in products requires buy-in from the CEO and, ideally, the board. Since there is still a lot of industry-wide suspicion towards this new technology, it needs to be approved top-down (unlike many developer tools, which infiltrate bottom-up). One way to convince leadership is to suggest starting with heavy, restrictive security policies and relaxing them as needed. Another is to demonstrate quick wins and ROI.
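
As a rough illustration of the "Automate" and "Verify" points above, the sketch below loops over newly released features (read from a hypothetical features.json changelog), asks an LLM to draft a test for each, and runs a basic syntax check before anything is saved. The OpenAI client usage, model name, file layout, and the compile-based check are all assumptions for illustration, not a BlazeMeter or Perforce API.

```python
# Sketch: generate a draft test per newly released feature, then verify it.
# Assumes the OpenAI Python client and a hypothetical features.json changelog;
# model name and file layout are illustrative only.
import json
import pathlib

from openai import OpenAI

client = OpenAI()
OUT_DIR = pathlib.Path("generated_tests")
OUT_DIR.mkdir(exist_ok=True)

def draft_test(feature: dict) -> str:
    """Ask the model for a Python load test covering one feature."""
    prompt = (
        f"Feature: {feature['name']}\nDescription: {feature['description']}\n"
        "Write a Locust load test in Python for this feature. Return only code."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def looks_like_valid_python(source: str) -> bool:
    """Cheap 'Verify' step: reject drafts that are not even valid Python."""
    try:
        compile(source, "<generated-test>", "exec")
        return True
    except SyntaxError:
        return False

features = json.loads(pathlib.Path("features.json").read_text())
for feature in features:
    draft = draft_test(feature)
    if looks_like_valid_python(draft):
        # Feature names are assumed to be filesystem-safe in this sketch.
        (OUT_DIR / f"test_{feature['name']}.py").write_text(draft)
    else:
        print(f"Draft for {feature['name']} failed verification; review manually.")
```

A real pipeline would also strip any markdown fences from the model output, run the generated test against a sandbox environment, and route failures to a human reviewer, in line with the "Verify" recommendation above.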

Advice for Testers About Testing Innovation

This rapidly changing landscape might feel intimidating for testers who feel uncertain about what the future holds and their role in it. But not so long ago, the open source revolution sparked similar reactions and fears. Then, as now, businesses, leaders, professionals, and users were asking themselves:

  • Can we trust this?
  • What about copyright issues?
  • Does it have malware?
  • And more

At the time, many organizations didn’t allow open source in their code because of these apprehensions. Now, open source is one of the fundamental building blocks of software, present in commercial tools, production environments, and data centers around the world. This will happen with AI as well.

We recommend embracing this change, even if you are a person who hates change. To do so, think about how you can benefit from this revolution and drive progress in your organization and for yourself.

It’s important not to pretend that testers’ jobs will not be impacted. This doesn’t mean the testing role will go away, but it will change: the flavor of the day-to-day will be different. AI models and tools will do more of the actual work, while human testers will do more of the supervising, directing, and iterating. This requires testers to step up a level, perform more abstract activities, and become more strategic. AI is not a panacea, but it is changing the game.

This is not the first time the testing world has had to adapt to new technologies amid warnings that testers would become redundant and obsolete. For example:

  • Automation - When automation penetrated the testing world, testers’ jobs changed. Shifting left transformed the testing industry, but developers did not end up doing all the testing and leaving testers without jobs.
  • Cloud - When the cloud emerged, scaling didn’t happen automatically, despite the dystopian prophecies. Testers and performance testing are still needed to ensure quality and reliability at scale.

In addition, it’s important to remember that the impact of AI will go way beyond testing. For example, AI can help legal departments sift through tens of thousands of contracts, identifying anomalies, alerting when they need to be renewed, highlighting problematic clauses, and more. It can also help customer success, by pulling accurate answers from the wealth of written knowledge accumulated across emails, Slack, and documents. There are many more examples.

AI doesn’t have to be scary. But it’s happening, faster than we thought. Make sure you’re on board.


Bottom Line

At BlazeMeter, we’re thrilled to share that we’ve implemented AI as part of BlazeMeter Test Data Pro, the AI-driven test data automation suite. BlazeMeter Test Data Pro enhances test data through its AI-Driven Data Profiler, boosts system resilience with Chaos Testing, and streamlines data generation with its AI-Driven Data Creator. These AI-driven capabilities offer unparalleled improvements in testing efficiency, coverage, and quality.

To join the testing wave of the future, you can request a custom demo of Test Data Pro and also start testing with BlazeMeter for free.

