May 2nd, 2018

API Testing: Best Practices

In 2002, software bugs cost the United States economy approximately $59.5 billion. By 2016, that number had jumped to $1.1 trillion.

The longer a software bug survives in the product life cycle, the more it costs. If the bug is caught during development, the cost of fixing it as part of the implementation is next to nothing.

 

When a bug is discovered in a finished version of the product before the release, the cost grows, as locating the source and finding a solution in now “finished” code may be more complicated. A bug found post-release is likely to cost far more, whether through SLA violation penalties, complicated patches to an already released product, or the loss of potential customers who are evaluating several products that serve the same purpose.

The fastest tests to integrate into online product development processes are API tests

 

 

The advantage of API tests is the speed with which they can give us a picture of the product's status during development. They also give developers the ability to run self-tests at a low cost, which is especially valuable for young startups that can't devote large resources (time, people) to building wider coverage.

 

API testing can also help you find breaches that would most likely be missed by any other kind of test, because you can quickly flood your server with parameterized requests. And it's not just flooding: API tests can perform actions that aren't exposed in the UI, even if your backend supports them. Once you have an API, your users may have access to it, which can open you up to security breaches. API testing lets you exercise your software beyond its usual UI capabilities.
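
For instance, here is a minimal sketch of flooding an endpoint with parameterized requests in Python, using the requests library (the /search endpoint and the input values are hypothetical stand-ins for your API):

    import requests

    # Hypothetical endpoint and inputs; the point is that an API test can
    # fire many input variations far faster than any UI-driven test could.
    BASE_URL = "https://api.example.com"
    payloads = ["", "a" * 10_000, "';--", "<script>", "%00"]

    for q in payloads:
        resp = requests.get(f"{BASE_URL}/search", params={"q": q}, timeout=10)
        # A 5xx response here points at input handling that the UI
        # would never have exercised.
        assert resp.status_code < 500, f"q={q!r} caused {resp.status_code}"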


To run API tests successfully, it is recommended to uphold the same principles you would apply to any software development.

 

 

Here are 4 of the best practices that we apply at BlazeMeter to make your testing process quicker, smoother and more collaborative.

 

1. DRY (don’t repeat yourself)

Create a client for your SUT (System Under Test) before adding tests.

 

You want to avoid repeating your code, but many tests need to address the same components or perform similar actions. In these cases you can create a common library that wraps your test requests, making them shorter and simpler to use.
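
As an illustration, here is a minimal sketch of such a client in Python with the requests library (the /users endpoints are hypothetical and stand in for your SUT's API):

    import requests

    class SutClient:
        """A thin, shared wrapper around the SUT's API, so individual
        tests never repeat URL construction or error handling."""

        def __init__(self, base_url, timeout=10):
            self.base_url = base_url.rstrip("/")
            self.timeout = timeout
            self.session = requests.Session()  # reuse connections across calls

        def create_user(self, name, email):
            resp = self.session.post(
                f"{self.base_url}/users",
                json={"name": name, "email": email},
                timeout=self.timeout,
            )
            resp.raise_for_status()
            return resp.json()

        def get_user(self, user_id):
            resp = self.session.get(
                f"{self.base_url}/users/{user_id}", timeout=self.timeout
            )
            resp.raise_for_status()
            return resp.json()

With a client like this in place, each test reads as a few intention-revealing calls instead of a page of request boilerplate.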

 

2. Clarity

Write clear tests that easily enable debugging.

 

While tests are running successfully, they require no attention or time. When tests start failing, resources need to be allocated to find the cause of the failure. This process can be very time consuming during product development, going as far as pushing deadlines or cutting new features from your product.


Why might your tests fail?

 

  • Flawed automated test
  • Unstable test (race condition in test)
  • Environment failure or limitation
  • Changes to product without changes to test
  • Flawed functionality
  • Unstable functionality (race condition in product)

 

 

To save resources and optimize the process, make debuggability a priority when you create the test:

 

  • Each SUT component should be tested separately for each possible configuration.
  • The cause of a failure should be clear from the test and shown in the report (see the sketch after this list).
  • All additional information (descriptors, IDs, etc.) should be included in the report.
  • If possible, the results of the failure should be saved and accessible in the system, so that the data can be traced back.
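
As a small illustration, a pytest assertion can carry that context directly in its failure message (the sut_client fixture continues the hypothetical client sketched above):

    def test_created_user_is_retrievable(sut_client):
        created = sut_client.create_user("Alice", "alice@example.com")
        fetched = sut_client.get_user(created["id"])

        # On failure, the report shows exactly which user diverged and how.
        assert fetched == created, (
            f"GET /users/{created['id']} returned {fetched}, expected {created}"
        )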

 

3. Mapping and Execution


Design the tests to run under different SUT configuration options.

These tests should be designed so that you are not limited by the system. They should be configurable to run in any of your working environments under different configurations, and provide a clear picture of each of them with a split report.

The more flexible the tests are, the less effort is required when they are split across versions based on different customer requests, or when there is a major change in the product's behavior under different configurations or conditions.
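
One way to achieve this with pytest is to parameterize a fixture over the configuration matrix, so that every test runs once per configuration and the report is split accordingly (the configuration names and environment URLs are hypothetical, and SutClient is the sketch from the DRY section above):

    import pytest

    # Hypothetical configuration matrix; each entry becomes a separately
    # labeled test case in the report, e.g. test_known_user_exists[clustered].
    CONFIGURATIONS = ["single-node", "clustered"]

    @pytest.fixture(params=CONFIGURATIONS)
    def sut_client(request):
        # Map each configuration to its own environment.
        base_url = f"https://{request.param}.staging.example.com"
        return SutClient(base_url)

    def test_known_user_exists(sut_client):
        # The same test body runs once per configuration.
        assert sut_client.get_user(1)["id"] == 1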

 

4. Prerequisites and Cleanup

 

To focus on the components currently under test, split the test into three sections:

 

  • Prepare (Setup)

 

Setup creates all the conditions and resources required for testing the component under test. A failure in setup should not mark the test as “Failed” but as “Not Executed”, because it means the failure we encountered is in a different component, not the one currently under test.

  • Execution (Test)

 

Execute the test that uses or relies on the prepared components, and verify/assert that the outcome matches the expected result.

 

  • Cleanup (Teardown)

 

Teardown is the post-test stage in which we delete or reset system resources that were created or modified as part of the test.
The exception to this is data that may be required for the debugging process in case of a test failure.

 

A pytest framework example of setup and teardown using a yield fixture
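
Here is a minimal sketch of that pattern, reusing the hypothetical SutClient from earlier (the delete_user method is assumed here):

    import pytest

    @pytest.fixture
    def user(sut_client):
        # Setup: create the resource the test needs. If this raises, pytest
        # reports the test as an error rather than a failure, matching the
        # "Not Executed" idea above.
        created = sut_client.create_user("Alice", "alice@example.com")

        # Hand the resource to the test; everything after `yield` runs
        # as teardown, even if the test itself failed.
        yield created

        # Teardown: remove the resource (skip this deliberately when the
        # data is needed for debugging a failure).
        sut_client.delete_user(created["id"])

    def test_fetch_user(sut_client, user):
        assert sut_client.get_user(user["id"])["email"] == "alice@example.com"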

 

Getting started with API Testing

 

There are 4 key stages for successfully getting started with API Testing:

 

1. Mapping the system into representative components


Before you start writing tests, you need a clear picture of which test suites you will need and what they are going to look like. To do that, you need to understand your system from a holistic view, so you can break it into concrete components that you can start planning tests for.


2. Choosing the test type and parameters for each SUT


Once we have a clear picture of the system's components and the integrations between them, we can plan tests that cover all the possible interactions and usages of those components. This is where the different types of functional testing come into the picture.

 

3. Combining the results from each SUT into one big picture


At the end of the testing cycle, we want a clear picture of what works, what doesn't work, when it stopped working and, ideally, why it didn't work.

 

4. Continuous testing


Once the tests and reporting mechanisms are in place, the next stage is to create a continuous process that runs alongside product development and provides a view of the product's status at minimal cost.


This should provide the tools for testing existing features as well as new ones and, in case something does not function properly, raise alerts during the development stages, when the cost of fixing it is lowest.


API Functional Testing with BlazeMeter

 

BlazeMeter has a new, intuitive way to easily create API Functional Tests, using either the UI or configuration snippets.

 

 

 

Now you can use the same platform to create performance tests and massive scale load tests for your API, as part of your continuous integration process.

 

BlazeMeter is built for DevOps and development teams who are looking to incorporate testing into their continuous delivery approach. It is based on open source technology and built for test automation, with dedicated Continuous Integration plugins for Jenkins, TeamCity, Bamboo and any other CI system.

 

BlazeMeter also provides comprehensive, detailed reporting at the request level, as well as historic trend reports, making it easy to collaborate and share test results with your team.



To start your API Functional tests, you can request a BlazeMeter demo, or simply enter your URL and expected response to start testing.

 

 

     
