[Test Data] Introducing Test Data Orchestration  

BlazeMeter Test Data extends its functionality with Test Data Orchestration – the ability to interact with a test environment or external systems and components in order to: 

  1. Seed (write, publish, …) generated test data into the test environment so that data-driven tests run against consistent test data. Say goodbye to invalid test runs caused by out-of-sync test data. 

  2. Read (fetch) test data from external systems during test start-up, either to drive the test with data already seeded into the test environment or to obtain test data provided by external systems. 

  3. Combine both sources of test data – generate and publish some of the data, read other data back from the test environment, and use the combined set in your test. You can even sync it with associated data-driven Mock Services. 

  4. Validate the correctness of existing test data at the beginning of every test run to avoid invalid test runs caused by expired test data. 

  5. Clean up your test environment automatically when test execution finishes, so your environment is ready for the next test run (see the sketch below). 
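As a rough illustration of how these capabilities combine in a single run, here is a conceptual Python sketch; every function in it is a hypothetical placeholder for illustration, not the BlazeMeter API:

```python
# Conceptual sketch of a test-data-orchestrated run; all functions below are
# hypothetical placeholders, not BlazeMeter's actual API.

def seed_test_data(environment: dict, generated_rows: list) -> None:
    """Write generated test data into the test environment (e.g. via its API or DB)."""
    environment.setdefault("rows", []).extend(generated_rows)

def read_test_data(environment: dict) -> list:
    """Fetch existing test data back from the environment at test start-up."""
    return environment.get("rows", [])

def validate_test_data(rows: list) -> bool:
    """Check that the data is still valid (e.g. not expired) before running the test."""
    return all(not row.get("expired", False) for row in rows)

def clean_up(environment: dict) -> None:
    """Remove seeded data so the environment is ready for the next run."""
    environment["rows"] = []

def run_orchestrated_test(environment: dict) -> None:
    generated = [{"customer_id": 1, "expired": False}]
    seed_test_data(environment, generated)      # 1. seed generated data
    data = read_test_data(environment)          # 2./3. read it (plus any external data) back
    try:
        if not validate_test_data(data):        # 4. validate before running
            raise RuntimeError("Test data expired - aborting run")
        print(f"Running data-driven test with {len(data)} rows")
    finally:
        clean_up(environment)                   # 5. always clean up afterwards

run_orchestrated_test({})
```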

For more details, check the BlazeMeter Test Data Orchestrator deep-dive video, the blog post, and the BlazeMeter Test Data Orchestration documentation. 

 

 

[API Monitoring] Optionally disable DNS pre-check  

The Radar agent performs a DNS pre-check before sending any actual HTTP(S) request. If the request's hostname cannot be resolved by the local DNS resolver of the system the agent is running on, the entire test fails with an error message. This causes issues for users who route requests through a proxy to reach hosts that are not directly accessible from the system running Radar. 

As of Radar agent version 1.9, you can disable the DNS pre-check through the command-line interface or the config file. 

Follow the BlazeMeter Guide for details on how to disable DNS pre-checks. 
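For context, the pre-check amounts to asking the local resolver for the hostname before any request is sent. A minimal Python sketch of that idea (illustrative only, not the Radar agent's implementation) shows why it fails when a host is reachable only through a proxy:

```python
import socket

def dns_precheck(hostname: str) -> bool:
    """Return True if the local resolver can resolve the hostname.

    Illustrates the general idea of a DNS pre-check; this is not the
    Radar agent's actual code.
    """
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False

# A host that is only reachable via a proxy may not be resolvable locally,
# so a pre-check like this fails even though the proxied request would succeed.
print(dns_precheck("intranet-only.example.internal"))
```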

 

[API Monitoring] Display user who executed test  

When a test is executed ad hoc via the UI, it is useful to know who actually executed that specific test run. The API Monitoring UI now displays this information for test executions triggered from the UI – both on the test run details screen and in the Results table. 

 

 

[API Monitoring] Show test retry indication 

Scheduled tests can be set to “Retry on Failure” mode. In that case, a failed test execution is followed by one additional test run as a “retry”. Such retry test runs are now indicated in the Results table as well as in the Recent test runs view. In addition, the “Retry of” indication is clickable and opens the detail screen of the parent test run that failed and triggered that particular re-run. 

 

[API Monitoring] Display agent version 

API requests triggered by API Monitoring now include the specific version of the agent that triggered the request. This information is especially useful for debugging and investigation when behavior differs between agent versions.
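
If you need to confirm which agent version reached your endpoint, one option is to log the request headers on the receiving side. A minimal sketch, assuming the version is carried in the User-Agent header (my assumption, not confirmed by this note):

```python
# Standard-library-only test endpoint that logs the User-Agent of incoming
# monitoring requests. The header that carries the agent version is assumed
# here to be User-Agent; check the actual headers your agent sends.
from http.server import BaseHTTPRequestHandler, HTTPServer

class LoggingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        agent = self.headers.get("User-Agent", "unknown")
        print(f"Monitoring request from agent: {agent}")
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), LoggingHandler).serve_forever()
```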