Tame Your Test Data With BlazeMeter Test Data Orchestration
October 31, 2022


Having challenges with test data is a widely recognized problem that blocks shift-left testing and forces unnecessary, risky trade-offs in application quality. These challenges typically mean that testers and developers have little or no access to test data good enough to drive their tests and cover multiple testing scenarios. 

That is why BlazeMeter introduced its Test Data functionality – to help developers and testers quickly and easily define, share, and manage their test data. Using the power of synthetic data generation and the dynamic nature of the data, BlazeMeter helps to create test data that is always fresh and conforms to the required rules and properties. 

This blog will explore our Test Data functionality in depth, with a particular focus on our Test Data Orchestration capability and how it helps address the ever-present issue of test data consistency. 


The Struggle to Build Proper Test Data 

There are various reasons why the lack of proper test data is so rampant. Manufacturing high-quality test data with all the necessary properties like uniqueness and structural and contextual consistency (e.g. birthday and age must match, US state and zip code must match, credit card numbers must follow specific rules) may be time-consuming and difficult.  

Developers or testers must spend time on this overhead. Yet in many cases, the work is cut short by using trivial test data that does not properly support the test scenarios that need to be performed. In other cases, testers and developers depend on a dedicated team that is supposed to craft and deliver test data sets for them, and depending on anything external invites unexpected delays and faulty execution. 

The synthetic nature of BlazeMeter Test Data helps generate variety in the test data to cover more cases than basic happy paths, including edge and negative ones. Additionally, the synthetic data generation approach guarantees no headaches with PII accidentally being used for testing. And all of that is available out of the box for BlazeMeter tests (performance and functional) as well as BlazeMeter Mock Services. 

The Main Problem: Test Data Consistency  

We now know that BlazeMeter Test Data enables shift-left (testing is not blocked by limited availability of test data anymore), saves time, and leads to much better coverage (therefore fewer issues with application quality). However, the test data problem is much broader than just “making” the test data.  

Another major test data concern is known as “Test Data Consistency,” which ensures that the test data that drives the test is aligned with the test data seeded to the test environment. This alignment can eventually extend to the environment dependencies (services, components) that could be real or mocked by BlazeMeter Mock Services. 

Let us illustrate test data consistency with the help of the following diagram: 

Test data consistency in BlazeMeter Mock Services

This diagram describes a simple test case in which a specific user logs in to the application. The assertion then expects specific content to be displayed on the home page for that user – in this case, the orders submitted by the logged-in user. Evidently, if the test uses “Joe” to log in, the application must have some order data for that user – i.e. it is not possible to log in as “Joe” while only having order data for, say, “Jack”. There must be some order entries for “Joe” in order to proceed.  

The same applies to the actual authentication, performed by, say, an LDAP service – in this case, a mocked one. Again, that service must be able to authenticate “Joe” in order to let our scenario proceed. This is why “Joe” is highlighted across the environment: it is the exact data point that must be consistent. 
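The consistency constraint in this scenario can be made concrete with a minimal sketch. The user record, orders store, and mocked LDAP lookup below are illustrative stand-ins, not BlazeMeter APIs:

```python
# Hypothetical sketch: one shared record keeps the test, the seeded
# environment, and the mocked LDAP service consistent.
user = {"username": "Joe", "password": "secret"}

# Seeded application data: orders must exist for the same user.
orders_db = {"Joe": [{"order_id": 101, "item": "laptop"}]}

# Mocked LDAP service must be able to authenticate the same user.
ldap_mock_users = {"Joe": "secret"}

def login(username, password):
    # Authentication succeeds only if the mock knows the user.
    return ldap_mock_users.get(username) == password

def home_page_orders(username):
    # The home page shows orders seeded for the logged-in user.
    return orders_db.get(username, [])

# Test scenario: log in as "Joe" and assert his orders are displayed.
assert login(user["username"], user["password"])
assert len(home_page_orders(user["username"])) > 0
```

If the seeded orders or the mock had been prepared for “Jack” instead, either the login or the home-page assertion would fail even though the application itself is healthy.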

Test data consistency challenges like the previous example are typically the most difficult problems to deal with as part of shift-left testing. These challenges cause delays in testing, unnecessary tradeoffs in quality, and prevent full shift-left from occurring. 


The Solution: Test Data Orchestration 

BlazeMeter now offers its Test Data Orchestration feature, which is designed to solve test data consistency challenges. Beyond keeping test data and Mock Services data consistent, BlazeMeter now lets users define how the test data that drives a test is synchronized with their test environments. Test Data Orchestration thereby creates a consistent test data plane for your testing.  

You can say goodbye to invalid test runs caused by inconsistent data, as well as to the hours of manual effort spent keeping test data consistent: preparing data, writing scripts, reading instructions on wiki pages, and maintaining distributed data sets. 

BlazeMeter can now solve that problem automatically, behind the scenes, through its built-in features. Any Test Data Entity in BlazeMeter can have one or more “data targets” and associated API request sequences that ensure the test data is orchestrated (written, read) between the test and its test environment for the specific test run. Moreover, BlazeMeter Test Data Orchestration can also clean up this test data inside the test environment once the test is complete. 

BlazeMeter Test Data Orchestration Data Targets

BlazeMeter Test Data Orchestration Use Cases 

This section will delve into three fundamental use cases that BlazeMeter Test Data Orchestration supports. But note that there are many use cases beyond these three where BlazeMeter Test Data Orchestration is either a huge help or the only solution. 

Test Data Publish 

In the test data publish scenario, a tester relies on test data synthetically generated by BlazeMeter to ensure that the test gets fresh and dynamic test data whenever it is executed. However, it is necessary to make sure that this test data is not only available for BlazeMeter tests, but also seeded into the test environment (e.g. to allow synthetically generated user accounts to log in to the application, or to seed specific test data prerequisites in the application backend). 
BlazeMeter Test Data Orchestration allows users to define a sequence of API steps to “publish” (write) generated test data to the test environment. Once the test executes, test data is synthetically generated and published to the test environment, allowing for seamless test execution. 

Test data publish example for BlazeMeter Test Data Orchestration
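A publish flow of this kind can be sketched as follows, with an in-memory dictionary standing in for the test environment's seeding API (all names below are hypothetical, not BlazeMeter APIs):

```python
# Hypothetical publish (push) step: generated test data is written to
# the test environment before the test runs.
import random

def generate_user(seq):
    # Simplified synthetic generation: fresh, rule-conforming data.
    return {"username": f"user{seq}", "age": random.randint(18, 100)}

environment = {}  # stands in for the application backend

def publish(record):
    # In a real setup this would be an API POST to the environment.
    environment[record["username"]] = record

test_data = [generate_user(i) for i in range(3)]
for row in test_data:
    publish(row)

# Every generated row now exists in the environment, so the test can
# log in with any of the generated accounts.
assert all(u["username"] in environment for u in test_data)
```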

Test Data Fetch 

The “Test Data Fetch” use case applies when test data (or a subset of it) has already been seeded into the test environment. Unlike the previous “publish” use case, which is a push model, this use case requires BlazeMeter to respect the test data that already exists in the test environment and use it to drive the test.

BlazeMeter Test Data Orchestration is used to define steps that “fetch” (read) test data from a remote system. This fetched test data is then used to drive the test execution.  

Test data fetch example for BlazeMeter Test Data Orchestration
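The pull model can be sketched the same way, again with an in-memory store standing in for the remote system (hypothetical names, not BlazeMeter APIs):

```python
# Hypothetical fetch (pull) step: the test respects data that already
# exists in the environment and reads it to drive the test.
environment = {
    "users": [
        {"username": "alice", "has_card": False},
        {"username": "bob", "has_card": True},
    ]
}

def fetch_users():
    # In a real setup this would be a GET against the environment's API.
    return environment["users"]

# The test is driven by whatever the environment already contains.
test_data = fetch_users()
usernames = [u["username"] for u in test_data]
```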

It is also possible (and in many cases also necessary) to combine both the publish and fetch approaches. While part of the test data is fetched, the other part must be generated and published.

In some cases, test data must also be fetched after other information has already been published (e.g. a new user account published to the system gets a unique ID that can only be assigned by the remote system, but has to be used later by the test execution). 

Test data publish vs. Test data fetch
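The combined flow, where a remote-assigned ID must be read back after publishing, can be sketched like this (the account store and ID scheme are illustrative assumptions):

```python
# Hypothetical combined publish-then-fetch flow: the remote system,
# not the test, assigns the unique account ID.
import itertools

_id_counter = itertools.count(1000)
accounts = {}  # stands in for the remote system's store

def publish_account(name):
    # Publish step: the remote system assigns and returns the ID.
    account_id = next(_id_counter)
    accounts[account_id] = name
    return account_id

def fetch_account_id(name):
    # Fetch step: read back the remote-assigned ID for later use.
    for account_id, owner in accounts.items():
        if owner == name:
            return account_id
    return None

publish_account("Joe")
joe_id = fetch_account_id("Joe")  # used later by the test execution
```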

Ensuring Existing Test Data Validity 

Another scenario where BlazeMeter Test Data Orchestration is especially helpful is in the case of ensuring the validity of existing test data. In this case, the focus is not on how to create or retrieve test data from a remote system – BlazeMeter Test Data Orchestration could be used to fetch data, or there can simply be a CSV file with the test data available within a test.  

Yet the challenge is that this test data – or specifically, some of its entries – may become invalid over time. Perhaps other tests or testers used the same data, making it obsolete. For example, in a set of users who are all expected to have no credit card, some of them got a credit card assigned. That change invalidates part of the test data set, simply because the test expects only users without credit cards. 

In these types of use cases, it is possible to define BlazeMeter Test Data Orchestration steps that check all test data entries and selectively exclude specific ones from the test data set for a particular test execution, based on criteria such as specific response content. This way, it is possible to prevent failed or invalid test runs caused simply because some test data entries were accidentally changed. 

Ensuring test data validity example for BlazeMeter Test Data Orchestration
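The filtering idea can be sketched as follows, using the credit-card example above (the validity check and data shapes are hypothetical stand-ins for the real orchestration steps):

```python
# Hypothetical validity check: before the run, every entry is verified
# against the environment, and invalid entries are excluded.
environment_cards = {"alice": None, "bob": "4111-xxxx"}  # bob got a card

test_data = [{"username": "alice"}, {"username": "bob"}]

def is_valid(entry):
    # The test only expects users *without* a credit card.
    return environment_cards.get(entry["username"]) is None

# Selectively exclude entries that would cause invalid test runs.
usable_data = [e for e in test_data if is_valid(e)]
```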

Test Data Orchestration Deep Dive

After covering a few test data orchestration use cases at a high level, let’s take a closer look at one of them: confirming that test data is consistent with the test environment before the test is executed. 

In this use case, we are testing a financial application’s mortgage application process. When a user applies for a mortgage, an application form needs to be completed. Completing the form requires test data including:

  • Applicant’s first and last name.
  • Email address.
  • Date of birth.
  • Street address.
  • State and zip code.
  • Salary.
  • Type of mortgage. 

During test execution, the mortgage type and current interest rate need to be selected from a drop-down. The interest rate changes regularly, so the test data must be updated with the current rate, which can only be retrieved from the application server. 

Step 1: Building a Test Data Model

When creating synthetic test data, we need to first create a test data model that consists of the synthetic data generation rules needed to generate the relevant test data. These rules will create realistic names and email addresses, as well as dates of birth and physical addresses. This data can also be combined with data from a CSV file generated from the test environment if needed. 

At execution time, the executed orchestration job will retrieve the mortgage interest rate from the application under test based on the type of mortgage that has been generated by the data generation rules. That interest rate will then be incorporated into the test data.

The following table shows the test data requirements for the mortgage application form, the corresponding data generation rules, and examples of the generated data. BlazeMeter provides 60+ data generator functions and more than 50 built-in seed lists that make it easy to get names, addresses, or other structured data in the quantity your testing needs – including arithmetical, string, and JavaScript functions to fine-tune your data requirements. 

Data Requirements and Data Generation Rules

  • Date Of Birth must be between 18 and 100 years old.
    Rule: datetime(dateOfBirth(18, 100, "1990-02-10"), "MM-DD-YYYY")

  • User Suffix must be consistent with First Name.

  • Realistic First Name, not a random string.
    Rule: ifCondition(${suffix} == "Mr.",randFromSeedlist("firstnamemaleamerican"),randFromSeedlist("firstnamefemaleamerican")).replace(/ /g,".")

  • Realistic Last Name, not a random string.
    Rule: randFromSeedlist("lastnames").replace(/ /g,".")

  • Properly formed Email Address that is consistent with the First and Last Name.

  • Realistic Street Address.
    Rule: valueFromSeedlist("usaddressbig-multicol",${seq},2)+" "+valueFromSeedlist("usaddressbig-multicol",${seq},3)
    Example: 345 Bankhall street Pittsburgh

  • State must match the corresponding Zip Code.

  • Zip Code must match the corresponding State.

  • US Phone Number.
    Rule: "("+randDigits(3, 3)+")"+" "+randDigits(3,3)+"-"+randDigits(4,4)
    Example: (314) 501-8058

  • Phone Type.

  • Annual Salary between 50K and 100K.

  • Mortgage Type distributed as 40% Fixed Rate, 30% Adjustable Rate, 20% Home Equity, and 10% Jumbo Loan.
    Rule: randFromList(perclist("40%Fixed-Rate", "30%Adjustable-Rate", "20%Home-Equity", "10%Jumbo-Loan"))

  • Mortgage Length must be between 10 and 30 years.

  • Mortgage Interest Rate (no generation rule; retrieved from the application server at execution time).


The following table shows a sample of the synthetically generated data produced by the corresponding data model. As you can see, the Interest Rate column has no generated data because no rule is applied to that parameter. This data needs to be gathered from the mortgage application before the test execution. 

Synthetic generated data example.
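To give a feel for how such rules behave, here is a rough Python approximation of a few of them. The seed lists and helper names are simplified stand-ins, not BlazeMeter's built-in generator functions:

```python
# Rough Python approximation of a few of the generation rules above.
import random
from datetime import date, timedelta

FIRST_NAMES_MALE = ["James", "Robert", "John"]
FIRST_NAMES_FEMALE = ["Mary", "Linda", "Susan"]
LAST_NAMES = ["Smith", "Jones", "Brown"]

def date_of_birth(min_age=18, max_age=100):
    # Age constraint: between 18 and 100 years old, formatted MM-DD-YYYY.
    days = random.randint(min_age * 365, max_age * 365)
    dob = date.today() - timedelta(days=days)
    return dob.strftime("%m-%d-%Y")

def consistent_name(suffix):
    # Suffix and first name must be consistent ("Mr." -> male list).
    pool = FIRST_NAMES_MALE if suffix == "Mr." else FIRST_NAMES_FEMALE
    return random.choice(pool)

def us_phone():
    # "(###) ###-####", like the randDigits rule above.
    digits = lambda n: "".join(str(random.randint(0, 9)) for _ in range(n))
    return f"({digits(3)}) {digits(3)}-{digits(4)}"

def mortgage_type():
    # Weighted pick: 40/30/20/10, like the perclist rule above.
    return random.choices(
        ["Fixed-Rate", "Adjustable-Rate", "Home-Equity", "Jumbo-Loan"],
        weights=[40, 30, 20, 10],
    )[0]
```

Each call produces fresh, rule-conforming values, which is the property BlazeMeter's generators give you without writing any of this code yourself.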

In the following section, we will show how to build the Test Data Orchestrator rules that return the mortgage interest rate from the application server and assign it to the variable interestRate.

Step 2: Build Test Data Orchestrator Rules

In addition to generating test data, BlazeMeter can also use the Test Data Orchestration engine to run a series of API calls before and after the test execution. Data returned from these API calls can be incorporated into the test data and used by the test. Some typical use cases for using the Test Data Orchestrator are:

  • Use the newly generated data to populate an application using the application’s API(s).
  • Ensure the newly generated data meets the test requirements by validating against the application under test.
  • Remove any generated data from the test data set that will cause a test failure.
  • Make a call to an authentication server to return a token to be used within the test.

So, we now have a data entity that generates the test data. By using the Test Data Orchestrator to execute a series of API calls against the mortgage application server, the application will return the current mortgage rate for each mortgage type. This data can be incorporated into the data set, ensuring that the interest rate value is always correct.
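As a rough sketch of that merge step, where the rate table and field names are hypothetical stand-ins for the mortgage application's API:

```python
# Hypothetical merge step: the orchestrator asks the application for
# the current rate per mortgage type and folds it into the data set.
current_rates = {  # stands in for responses from the mortgage API
    "Fixed-Rate": 6.5, "Adjustable-Rate": 5.9,
    "Home-Equity": 7.2, "Jumbo-Loan": 6.8,
}

generated_rows = [
    {"name": "Mary Smith", "mortgagetype": "Fixed-Rate", "interestRate": None},
    {"name": "John Brown", "mortgagetype": "Jumbo-Loan", "interestRate": None},
]

def get_rate(mortgage_type):
    # In the real flow this is a GET request per generated mortgage type.
    return current_rates[mortgage_type]

for row in generated_rows:
    row["interestRate"] = get_rate(row["mortgagetype"])

# Every row now carries the current rate from the server, so the test
# data is consistent with the application at execution time.
```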

The rules that define the API calls are part of the test data orchestration Data Target, which belongs to the BlazeMeter test data entity. A data entity can have multiple data targets, so a test can run one or several data orchestration rules. Creating a Data Target is as simple as adding it to an existing data entity. 

A publish API request in the data target is formed of four main components:

  • API Request Structure. Made up of the API Method, URL, Headers, and Body. Local variables and variables from the data entity can be used as inputs and passed between API calls. For our use case, we are taking the “mortgagetype” from the generated data entity and using this as a parameter in the GET request.
API Request Structure
  • Extract from Response. This can be used to extract data from the response and assign it to a variable. The mortgage rate API request returns a JSON body, and the Extract from Response functionality finds the JSON Pointer value (/mortgageRate) and assigns it to a variable name (interestRate). This variable corresponds to the parameter in the data model.
Extract from Response
  • Response Actions. These control how Test Data Orchestrator should respond when certain response codes or response body content is returned. This functionality can be used to exclude any of the generated data that is invalid or for any application/network errors.
Response Actions
  • Clean Up API Request. Clean up API requests are run after the test has completed. During the test of the Mortgage form application, multiple mortgage applications will be submitted to the application server. At the end of the test execution, the clean-up process can be used to call the mortgage request API to remove these mortgage application requests using the First / Last name, email address, date of birth, and mortgage type to find the relevant mortgage applications to be removed.
Clean Up API Request
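The Extract from Response step above can be sketched as follows. The minimal JSON Pointer resolver and the sample response body are illustrative assumptions, not BlazeMeter internals:

```python
# Sketch of "Extract from Response": resolve a JSON Pointer
# (/mortgageRate) in the response body and bind it to a variable name.
import json

response_body = json.loads('{"mortgagetype": "Fixed-Rate", "mortgageRate": 6.5}')

def json_pointer(doc, pointer):
    # Minimal RFC 6901-style resolver for simple object paths.
    value = doc
    for token in pointer.lstrip("/").split("/"):
        value = value[token]
    return value

variables = {}
variables["interestRate"] = json_pointer(response_body, "/mortgageRate")
```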

To test the API requests, the Publish button can be used. It executes the Publish API requests, and the “Publish Logs” tab shows a log of the API requests and their corresponding responses. 

The Test Data Orchestration rules are part of the data model associated with the test, which ensures that test data generation and orchestration execute whenever the test is run. Within the test definition, there is also a “Test Data Orchestrator” section where the user can select the defined Data Targets to be run when the test is executed. 

Test Data Orchestrator data targets

When a test is executed, the first phase is initialization. During this phase, the BlazeMeter engines are provisioned and BlazeMeter generates the test data using the data generation rules.

BlazeMeter will then run any selected Data Targets from the “Test Data Orchestration” section of the test. The test data file shipped with the test will contain the combined data from the data generation and the executed data targets.

When the test execution is complete, the Logs section of the BlazeMeter test results will contain the logs for the pre- and post-test data target executions, and the artifact.zip file will contain the test data artifacts, ensuring that all details of the test data generation are available to the tester.


Bottom Line 

With its latest Test Data Orchestration feature, BlazeMeter now solves both the problem of creating and obtaining the right test data for a test and the problem of achieving complete test data consistency. This capability makes building and using dynamic test data easier than ever.  

Experience BlazeMeter Test Data in Action 

Try BlazeMeter for free today and see how we can take your shift-left testing to the next level. You can also take a deeper dive into our test data feature at BlazeMeter University. 

Start Testing Now

Explore BlazeMeter University

