Shift API Testing Right with API Monitoring
November 2, 2021


If you’re a good test automation engineer, you know all about the role API testing plays within broader test automation routines. You simulate APIs under heavy load when you run tests prior to releasing an application, which lets you assess how APIs impact application performance before the application is deployed into production.

But if you’re a really good test automation engineer, you know that no amount of pre-deployment API testing will guarantee flawless API performance in a production environment. That’s why you also perform API monitoring in production as part of a “shift-right” testing strategy.

Here’s the why and how of shift-right API monitoring, along with guidance on incorporating API monitoring into a larger test automation strategy.


API Testing vs. API Monitoring

API testing is the use of simulated API calls to evaluate how an application responds to those calls. Given that the typical modern cloud-native application relies on a variety of internal and external APIs to operate, API testing has become a common feature of most software testing routines.

In contrast, API monitoring is the monitoring of API availability and performance for applications that have already been deployed into production. Instead of simulating API calls, API monitoring tracks actual API requests within production environments in order to alert teams to several types of problems, including:

  • API availability: API monitoring detects situations where an API becomes completely unresponsive. That could happen due to problems like service mesh failure in the case of internal APIs, or failure of an upstream service provider in the case of external APIs.
  • API latency: API monitoring allows you to track how long it takes APIs to respond to requests. If APIs become slow to respond, the user experience can degrade.
  • Data responses: By tracking API responses as part of API monitoring, you’ll know if the data that APIs send in response to requests is improperly formatted, incomplete, or otherwise flawed in ways that could compromise application performance and quality. (A minimal sketch of all three checks follows this list.)
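
To make those three categories concrete, here is a minimal sketch of a single monitoring probe, written in Python with the requests library. The endpoint, latency budget, and expected field are all hypothetical placeholders; a real monitoring tool runs checks like these continuously and feeds the results into alerting.

```python
import time

import requests

# Hypothetical endpoint; substitute the API you want to monitor.
ENDPOINT = "https://api.example.com/v1/orders"
LATENCY_BUDGET_SECONDS = 0.5  # hypothetical latency budget

def check_api():
    """Run one monitoring probe covering availability, latency, and data."""
    start = time.monotonic()
    try:
        response = requests.get(ENDPOINT, timeout=10)
    except requests.RequestException as exc:
        return f"AVAILABILITY: request failed ({exc})"

    elapsed = time.monotonic() - start
    if elapsed > LATENCY_BUDGET_SECONDS:
        return f"LATENCY: {elapsed:.2f}s exceeds {LATENCY_BUDGET_SECONDS}s budget"

    # Data response check: verify the payload is well-formed JSON
    # and contains the fields the application depends on.
    try:
        body = response.json()
    except ValueError:
        return "DATA: response body is not valid JSON"
    if "orders" not in body:
        return "DATA: expected 'orders' field missing from response"

    return "OK"

if __name__ == "__main__":
    print(check_api())
```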

API Monitoring and Shift-Right Testing

It may be tempting to think of API monitoring as a task that falls to the IT operations engineers who manage production environments, not the test engineers who are responsible for ensuring application quality prior to release. After all, as long as test engineers can demonstrate that they did their part by including simulated API calls in test automation routines, they can’t be held responsible for problems that APIs cause in production, right?

Not quite. Beyond the fact that the “production-is-someone-else’s-problem” mindset ignores the principles of shared ownership and collective responsibility on which modern teams are supposed to thrive, test engineers can benefit in several key respects from the visibility that API monitoring provides. By embracing these principles, they can implement a shift-right testing strategy that extends test operations into production environments, which in turn strengthens their ability to maximize application quality.

📕 Related Resource: Learn more in What Is Shift-Right Testing?

Writing Better, Broader API Tests

First and foremost, you can’t optimize API testing if you aren’t sure how APIs perform in production.

By performing API monitoring for production applications in addition to running simulated API tests, test engineers are in a stronger position to identify potential API issues that they may not have anticipated. They can then write tests that cover those issues, which leads to broader API testing.

 

Monitoring APIs under Non-Peak Load

Typically, when you simulate APIs during pre-production testing, you try to simulate peak API load. In other words, you generate millions of simulated API calls in order to assess what happens when an application and its APIs are placed under heavy load.
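
As a rough illustration of what such a simulation does (dedicated tools like JMeter and Taurus handle this at far larger scale and across many machines), here is a minimal Python sketch that fires a burst of concurrent calls at a hypothetical endpoint and reports the success rate:

```python
from concurrent.futures import ThreadPoolExecutor

import requests

# Hypothetical endpoint; a real peak-load test targets your own staging API.
ENDPOINT = "https://api.example.com/v1/orders"

def call_api(_):
    """Issue one simulated API call and report whether it succeeded."""
    try:
        return requests.get(ENDPOINT, timeout=10).status_code == 200
    except requests.RequestException:
        return False

# Fire a burst of concurrent requests to approximate peak load.
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(call_api, range(1_000)))

print(f"{sum(results)}/{len(results)} simulated calls succeeded under load")
```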

That makes sense to the extent that availability and performance problems tend to occur most often during times of peak load. However, non-peak conditions may trigger certain API issues, too. An infrequent type of API request may result in high latency due to lack of data caching associated with the request, for instance. Or, there may be API calls that don’t happen frequently enough to be covered by simulated tests, but still occur occasionally in production.

API monitoring alerts test engineers to these types of risks, which they would otherwise overlook if they ran tests only under conditions of high API load.

 

External API Issues

API tests are often predicated on the assumption that third-party APIs will remain available and high-performing. They further assume that application problems are caused by the way applications interact with third-party APIs rather than the APIs themselves.

In reality, of course, external APIs can degrade or fail for any number of reasons. Because those failures are typically caused by issues with a third-party service provider, it’s difficult to anticipate, let alone simulate, all of them.

From this perspective, API monitoring is important because it provides test engineers with an opportunity to understand how unexpected problems with external APIs can impact application performance. Even if they can’t reliably simulate all of those issues during testing, simply having insight into how external APIs affect their application under real-world conditions is invaluable.
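
While no test suite can reproduce every third-party outage, individual failure modes can still be simulated once monitoring has revealed them. A minimal sketch, assuming a hypothetical get_exchange_rate() function in your application that depends on a hypothetical external rates API:

```python
import unittest
from unittest import mock

import requests

def get_exchange_rate(currency):
    """Hypothetical application code that depends on an external API."""
    try:
        resp = requests.get(
            f"https://rates.example.com/v1/{currency}", timeout=5
        )
        return resp.json()["rate"]
    except requests.RequestException:
        return None  # graceful fallback when the third party is down

class ExternalApiFailureTest(unittest.TestCase):
    def test_fallback_when_external_api_is_down(self):
        # Simulate a third-party outage by making the HTTP call fail.
        with mock.patch(
            "requests.get", side_effect=requests.ConnectionError("down")
        ):
            self.assertIsNone(get_exchange_rate("EUR"))

if __name__ == "__main__":
    unittest.main()
```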

 

Fix Customer-Impacting Problems Faster

Finally, API monitoring is important for the simple reason that it’s essential for ensuring that customer-impacting issues can be mitigated as quickly as possible. And while the IT Ops engineers who monitor production environments in general may bear primary responsibility for detecting and responding to API problems, test engineers who work with simulated API calls during testing may understand how APIs impact application performance in ways that an IT Ops engineer – who would typically not play a role in testing – doesn’t.

In that sense, performing API monitoring can help test engineers to contribute more positively to the collective goal of maximizing application quality and optimizing the end-user experience, no matter which types of problems arise in production.

 


API Monitoring as a Key to Continuous Quality

API testing is great, but it’s not sufficient on its own to protect against every type of application performance issue that may emerge in a production environment. DevOps teams, and especially test engineers, must also leverage API monitoring to ensure that they can detect and respond to problems triggered by APIs that their tests failed to anticipate.

 


Continuous API Testing and Monitoring with BlazeMeter

 With BlazeMeter’s new Continuous Testing functionality, you can test and monitor your APIs throughout the entire SDLC. I’ll talk you through how to do it.

API Testing and Monitoring with BlazeMeter

 

3 Easy Ways to Run API Tests in BlazeMeter

There are three easy ways to create an API test in BlazeMeter:

1. Create it directly within the UI.

API Testing with BlazeMeter UI

2. Toggle to add your own Taurus script.

API Testing with Taurus

3. Import from file types such as JMeter and Swagger.

 

API Test with Swagger

These tests can be data-driven. As the scenario-level view in the next screenshot shows, data can be imported from a CSV file, and variables can be defined at the scenario level for use by all of the tests within the scenario. (A plain-Python sketch of the same pattern follows the screenshot.)

 

API Test with JMeter
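
Outside the BlazeMeter UI, the same data-driven idea looks roughly like this sketch, assuming a hypothetical users.csv file with user_id and expected_name columns and a hypothetical endpoint:

```python
import csv

import requests

BASE_URL = "https://api.example.com/v1/users"  # hypothetical endpoint

# Each CSV row drives one API test case.
with open("users.csv", newline="") as f:
    for row in csv.DictReader(f):
        resp = requests.get(f"{BASE_URL}/{row['user_id']}", timeout=10)
        assert resp.status_code == 200, (
            f"user {row['user_id']}: HTTP {resp.status_code}"
        )
        assert resp.json()["name"] == row["expected_name"], (
            f"user {row['user_id']}: unexpected name in response"
        )
        print(f"user {row['user_id']}: OK")
```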

Now we click Run Test.

Running API Tests in BlazeMeter

 You can see all your API test results here. You can also see more detailed reporting.

API Reporting

We can also inspect the content of the response to validate that the data returned by the API is correct within the API test.
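
Outside the UI, that kind of content validation boils down to assertions on the response body. A minimal sketch, assuming a hypothetical order endpoint and hypothetical field names:

```python
import requests

# Hypothetical endpoint that should return a JSON order object.
resp = requests.get("https://api.example.com/v1/orders/42", timeout=10)

# Validate the status code first, then the shape and content of the payload.
assert resp.status_code == 200, f"unexpected status {resp.status_code}"
order = resp.json()
assert order["id"] == 42, "response describes the wrong order"
assert isinstance(order["items"], list) and order["items"], "order has no items"
assert order["currency"] in {"USD", "EUR", "GBP"}, "unexpected currency code"
```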

 


 API Monitoring within BlazeMeter

Having an API go down can be devastating to your business. We need to understand not only whether the API is down, but also whether the vendor is sticking to the SLA they supplied to us.

We don’t always have access to third-party infrastructure, so we need a way to be alerted if a third-party API is no longer available.

We can check this in the API Monitoring screen of the BlazeMeter platform.

API Monitoring with BlazeMeter

 You can click on the test itself within the application to look at the output.

 

API Monitoring with BlazeMeter

 

Here we sent a request, got back a “200”, and we can see the details of the response, for both passed and failed API tests, as in the example below.

BlazeMeter API Monitoring

You can also enable the scheduler to run API Monitoring tests against a specific API at regular intervals.
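
Conceptually, the scheduler behaves like a loop that probes the API at a fixed interval and records each result. A minimal Python sketch, assuming a hypothetical endpoint and a five-minute interval:

```python
import time

import requests

ENDPOINT = "https://api.example.com/v1/orders"  # hypothetical endpoint
INTERVAL_SECONDS = 300  # probe every five minutes

while True:
    try:
        status = requests.get(ENDPOINT, timeout=10).status_code
        print(f"{time.strftime('%H:%M:%S')} HTTP {status}")
    except requests.RequestException as exc:
        print(f"{time.strftime('%H:%M:%S')} UNAVAILABLE: {exc}")
    time.sleep(INTERVAL_SECONDS)
```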

Easy API Monitoring for Developers

BlazeMeter’s API Monitoring feature provides access to enterprise-level reporting, so you can continuously monitor and track multiple reports, which can then easily be shared with other users in your workspace.

Enterprise API Monitoring

These reports show:

  • Average response rate
  • Success rate

You can change the time frame to adjust the view, and you can drill into the failures to better understand what the problem was.

Enterprise API Monitoring

You can also connect the monitoring features to your daily workflow tools to be notified if an API goes down; a sketch of such a notification hook follows the list of integrations below.

Current integrations include:

  • Slack
  • PagerDuty
  • AWS CodePipeline
  • Datadog
  • HipChat
  • Ghost Inspector
  • Jenkins
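
To illustrate what such an integration does under the hood, here is a minimal sketch that posts an alert to a Slack incoming webhook when a monitored API stops responding. Both URLs are placeholders: the endpoint is hypothetical, and the webhook URL is one you would generate in Slack. In practice, BlazeMeter's built-in integrations handle this for you rather than requiring a script of your own.

```python
import requests

MONITORED_API = "https://api.example.com/v1/orders"  # hypothetical endpoint
SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def api_is_up():
    """Return True if the monitored API answers with a successful status."""
    try:
        return requests.get(MONITORED_API, timeout=10).ok
    except requests.RequestException:
        return False

if not api_is_up():
    # Slack incoming webhooks accept a simple JSON payload with a "text" field.
    requests.post(
        SLACK_WEBHOOK,
        json={"text": f"ALERT: {MONITORED_API} is down"},
        timeout=10,
    )
```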


 
