Lukas Rosenstock is an independent API design, development and integration expert and founder at CloudObjects.

Apr 07 2021

3 Things to Look Out for When Stress Testing Your API

When you have an API, and you want to guarantee its performance, running a stress test is a valuable technique to determine the existing system's boundaries. In this article, I'm sharing three things that you should pay special attention to when designing and running your load tests: the test environment, traffic patterns, and test assertions. Before we go into them, let's get the basic terms straight to make sure we're on the same page.

 

Stress Tests and API Tests 

First of all, what is a stress test? The practice of stress testing is a variety of performance testing that determines how robust a system is when it's under stress. For software, that means that the server running the software receives a hefty load, for example, twice as many concurrent users as you usually expect. Stress testing aims to identify breaking points beyond which the software no longer works as expected. Depending on the test's length and the ramp-up and ramp-down time for the number of virtual users hitting the software, stress tests can be further classified into spike tests (a sudden, short burst of load) and soak tests (moderate load sustained over a long period).
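To make ramp-up and ramp-down concrete, here is a minimal sketch (in Python, with hypothetical numbers) of how a load profile might assign a number of concurrent virtual users to each second of a stress test:

```python
def virtual_users(t, ramp_up, hold, ramp_down, peak):
    """Concurrent virtual users at second t of a stress test.

    Load climbs linearly to `peak` during ramp-up, holds, then falls
    linearly back to zero during ramp-down. A spike test uses a very
    short ramp-up; a soak test uses a very long hold phase.
    """
    total = ramp_up + hold + ramp_down
    if t < 0 or t > total:
        return 0
    if t < ramp_up:
        return round(peak * t / ramp_up)       # ramping up
    if t < ramp_up + hold:
        return peak                            # holding at peak
    return round(peak * (total - t) / ramp_down)  # ramping down
```

With a 60-second ramp-up, a 120-second hold, and a 60-second ramp-down to a peak of 200 users, this profile puts 100 users on the system at second 30 and 200 users throughout the hold phase.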

 

Next, what is an API test? Well, API testing is a process that validates that an API is working as expected. Compared to other forms of software testing, e.g., UI and end-to-end testing, API testing works on the level of API calls and doesn't include the frontend or client-side code of the application. In other words, your test design uses raw HTTP requests instead of user interactions. 
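As a sketch of what "raw HTTP requests instead of user interactions" looks like in practice, the following Python snippet (standard library only; the /status endpoint and its JSON body are made up for illustration) starts a stand-in server and runs one API test against it:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A stand-in for the API under test (hypothetical /status endpoint).
class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), StatusHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The API test itself: one raw HTTP request plus assertions on the reply.
url = f"http://127.0.0.1:{server.server_port}/status"
with urllib.request.urlopen(url, timeout=5) as resp:
    assert resp.status == 200
    payload = json.loads(resp.read())
    assert payload["status"] == "ok"
server.shutdown()
```

Note that nothing here touches a frontend: the test talks to the API contract directly, which is exactly what makes API tests fast enough to run at stress-test volumes.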

 

With that said, let's look at three critical considerations for stress tests: the test environment, the traffic patterns, and the test assertions.

 

Test Environment 

Any performance test can make requests against the production environment, i.e., your live API, or a specifically created test or staging environment. 

 

Of course, going with your production environment saves you from maintaining another copy of your system and infrastructure. But with stress tests, there's a huge chance that things will break. After all, the whole purpose of the stress test is to bring your software to its knees. It's crucial to consider the implications for users who rely on your API in production. The applications consuming your API should handle errors such as your API breaking down, but you never know how good their error handling is unless you've tested that, too. When you have specific usage patterns, such as only getting substantial traffic on weekdays, you can reduce the risk through scheduling, for example by running your tests on weekends. Ensure you set up dedicated test users in your production environment, delete or disable them after the test runs, and do not include them in business metrics (these aren't real customers!).

 

When you set up a dedicated test environment, you have to look at the data inside that environment. A common practice is cloning your production system so you have an equally sized database. However, if there's a chance that real customer data leaks during testing (e.g., through more extensive logging or security holes opening up under stress), you are at risk of violating your users' privacy and infringing upon data protection regulations. A test environment with unique test data resembling production data but without personal data works best, even if it's more time-consuming to set up. 

 

Here's a pro tip: Depending on how your backend works, a request to your API could result in calls to other APIs, either your own or a third party's. Sometimes that is desired, but sometimes you want to test one API without straining your other APIs or calling third-party APIs that may charge you for each request. When setting up a test environment to stress a single API, you can deploy that component (e.g., a microservice) and replace its dependencies with mocks instead of calling actual APIs. BlazeMeter's mock services may come in handy.
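The same idea applies at the code level. Here is a small Python sketch (the `geocode` dependency and the order data are invented for illustration) that patches a third-party call with a canned response, so stressing the service never hits, or bills, the real API:

```python
from unittest import mock

# Hypothetical service code: an order lookup that calls a third-party
# geocoding API for every single request.
def geocode(address):
    raise RuntimeError("real third-party call -- not wanted in a stress test")

def order_summary(order):
    location = geocode(order["address"])
    return {"id": order["id"], "lat": location["lat"], "lng": location["lng"]}

# In the test environment, replace the dependency with a canned response.
with mock.patch(f"{__name__}.geocode",
                return_value={"lat": 52.52, "lng": 13.405}) as fake_geocode:
    result = order_summary({"id": 42, "address": "Berlin"})
```

Full-blown mock services like BlazeMeter's do the same thing one level up, at the HTTP boundary, which also works when the component under test isn't written in a language you control.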

 

Traffic Patterns 

When you're running API tests, you can always choose between testing individual API requests or chains of API requests following your API consumers' typical interaction patterns. Both approaches have their rightful place in a testing strategy. 

 

The advantage of testing individual requests is that these tests are easier to configure, and they help you identify operations that act as a bottleneck for the whole application. I recommend starting with these as you run your first stress tests. If these already lead to failures, it may indicate fundamental problems with your API or the infrastructure. You can compare different API operations to see which is the most stressful for your software to handle. However, if you don't see problems with simple endpoint tests, this doesn't necessarily mean everything's fine. 

 

Thanks to caching, computers are great at doing the same thing many times over, so hammering a single endpoint repeatedly can paint an overly optimistic picture. That's why you should ideally simulate realistic traffic patterns. You can go through typical use cases for your API and design a test around them, for example by looking at applications with high consumption or at the documentation for a public API. Even better, you can take your log files from production, extract the requests actual users made, identify common patterns, and replay them against your API. These chains of requests provide the most insight into potential breaking points in the workflows of real users.
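Extracting requests from logs can be as simple as a regular expression over each line. Here is a sketch in Python (the log lines and `/api/...` paths are made up; real logs would be read from a file) that pulls the method and path out of Common Log Format entries:

```python
import re
from collections import Counter

# A few lines in Common Log Format, standing in for a real access log.
LOG = """\
203.0.113.5 - - [07/Apr/2021:10:00:01 +0000] "GET /api/products HTTP/1.1" 200 512
203.0.113.5 - - [07/Apr/2021:10:00:02 +0000] "GET /api/products/17 HTTP/1.1" 200 204
203.0.113.5 - - [07/Apr/2021:10:00:03 +0000] "POST /api/cart HTTP/1.1" 201 64
198.51.100.9 - - [07/Apr/2021:10:00:04 +0000] "GET /api/products HTTP/1.1" 200 512
"""

REQUEST = re.compile(r'"(GET|POST|PUT|PATCH|DELETE) (\S+) HTTP')

def extract_requests(log_text):
    """Return (method, path) pairs in the order real users made them."""
    return [m.groups() for m in REQUEST.finditer(log_text)]

requests = extract_requests(LOG)
# The most common requests tell you which patterns to replay first.
top = Counter(requests).most_common(1)[0]
```

From there, the extracted sequence can be replayed as-is for workflow realism, or aggregated into frequencies for the ratio-based approach described next.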

 

A third approach, somewhere in the middle, is to take your logs and group your API endpoints by the percentage of requests they receive. Then, you can run your stress test with comparable ratios. Your system won't run actual workflows, but it will see a similar request profile overall.
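Turning those percentages into a request plan is a weighted random draw. A sketch in Python (the endpoints and traffic shares are hypothetical):

```python
import random
from collections import Counter

# Share of production traffic per endpoint, e.g. derived from log analysis.
TRAFFIC_MIX = {
    "GET /api/products": 0.6,
    "GET /api/products/{id}": 0.3,
    "POST /api/cart": 0.1,
}

def sample_requests(n, mix, seed=1):
    """Draw n requests so the stress test matches production's ratios."""
    rng = random.Random(seed)  # fixed seed keeps test runs reproducible
    endpoints = list(mix)
    weights = list(mix.values())
    return rng.choices(endpoints, weights=weights, k=n)

plan = sample_requests(10_000, TRAFFIC_MIX)
counts = Counter(plan)
```

Over 10,000 sampled requests, the observed shares land very close to the configured 60/30/10 split, so the system under test sees a realistic overall profile even though no individual user journey is replayed.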

 

With BlazeMeter, you can upload or enter your API URLs directly and set up a test from the web-based dashboard if you just want basic endpoint tests. For advanced test scenarios, you can use tools like Taurus or JMeter to build test scripts.

 

As you can see, designing good stress tests - or really any API performance tests - requires that you have some insights into your API. I already mentioned log files, but API monitoring tools, which BlazeMeter also provides, and more general APM (application performance monitoring) tools like New Relic are also helpful. 

 

Test Assertions 

What does it mean that an API breaks down? Well, the meaning of that behavior could vary greatly. It could return one of the well-defined error messages in its OpenAPI description. Alternatively, API requests may stay open without receiving a reply until some timeout threshold is reached. Your server might stop responding altogether and return a low-level TCP error, or the web server in front of your application might send HTTP errors like 502 Bad Gateway. Your tests must handle and report all of these conditions.
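Each of those failure modes surfaces differently to an HTTP client. The Python sketch below (labels and the throwaway 502 server are my own, for illustration) classifies a single request into one of those buckets so a test harness could count and report them:

```python
import socket
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def classify_response(url, timeout=5):
    """Label the outcome of one request so the stress test can report it."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return "ok"
    except urllib.error.HTTPError as err:
        return f"http_error_{err.code}"   # well-defined reply, e.g. 502
    except (TimeoutError, socket.timeout):
        return "timeout"                  # no reply before the threshold
    except urllib.error.URLError:
        return "connection_error"         # low-level TCP failure, no reply

# Demo: a throwaway local server that always answers 502 Bad Gateway,
# playing the role of an overloaded gateway in front of the application.
class FailingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_error(502)

    def log_message(self, *args):
        pass

server = HTTPServer(("127.0.0.1", 0), FailingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
outcome = classify_response(f"http://127.0.0.1:{server.server_port}/")
server.shutdown()
```

Note that `HTTPError` must be caught before `URLError` (it's a subclass), otherwise a clean 502 reply would be misreported as a TCP-level failure.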

 

Functional API tests help you validate that your API returns correct responses and appropriate error messages for input errors. APIs are a contract. Like any agreement, they should not just define the expected outcome but also what happens when things go wrong. Error handling is a part of API design. 

 

With stress testing, you can determine whether your API also fails gracefully in cases of overload. Hence, make sure you include assertions on API responses in your stress tests, and don't be satisfied with just testing that you get a response at all.

 

A stress test may reveal internal error messages that you don't expect to see under normal circumstances. Those error messages could include stack traces that reveal your application architecture or expose personal data, both of which are security issues.

 

Sometimes applications fail with empty responses, or with error messages accompanied by a 200 OK status instead of the 5xx response they should have returned. Make sure you watch out for those, and use the stress test as an opportunity to improve your error messages! Your API consumers will thank you.
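A status-only check would wave such a response through. Here is a small Python sketch of an assertion (the JSON shape with an "error" key is an assumption for illustration) that inspects the body as well as the status code:

```python
import json

def assert_healthy(status, body):
    """A stress-test assertion that is not satisfied by status code alone."""
    assert status == 200, f"expected 200 OK, got {status}"
    assert body, "empty response body"
    payload = json.loads(body)
    assert "error" not in payload, f"200 OK but error body: {payload}"
    return payload

# A response that fools a status-only check: 200 OK wrapping an error body.
try:
    assert_healthy(200, b'{"error": "internal server error"}')
    caught = False
except AssertionError:
    caught = True  # the body check flags what the status code hid
```

The exact check depends on your API's contract; the point is that the assertion reads the payload, so a 200 wrapping an error (or an empty body) fails the test instead of inflating your success rate.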

 

In summary, when you start creating and executing stress tests, pay attention to your test environment and how it interacts with your production setup. Review your traffic to make sure you run realistic test scripts, or understand the limitations of your simplifications. Finally, make sure you test your API responses and pay close attention to the unexpected ways that errors appear. Then, you can relax, confident in knowing how robust your API is!

 

Learn more about API testing in BlazeMeter. To get started for free, sign up here. 

   