[General] API Functional testing deprecation process started 

We have some exciting news for you. Going forward, there will be a single place for all your API Functional Testing and API Monitoring. That means one way to rule them all! :-)

Starting February 6th, 2022, please use the API Monitoring capabilities to create and run your API Functional Tests. Creating new API tests via the Functional tab will no longer be available, either through the GUI or the API. You’ll still be able to run existing API Functional Tests for the coming months.

This change sets the stage for an exciting enhancement to speed up your API Testing/API Monitoring that we are currently working on and that will become available in the next few months. Stay tuned!

 

[General] Private Location configuration change - Run Type and Dedicated/Shared terminology removed from the UI 

You can now freely use the “Parallel engine runs” field without constraints.

The behavior remains the same for existing and new Private Locations: 

  • Private Locations that were configured as “Dedicated” will maintain their original “Parallel engine runs” value of 1.  

  • Private Locations that were configured as “Shared” will maintain their original defined “Parallel engine runs” value. 

The Type field will now be used to indicate whether or not a Private Location is shared with other workspaces.

 

[GUI Functional testing] Notes field added for tests in a test suite 

There was a minor discrepancy between a GUI Functional Test Suite report and a Single Test report: in the Test Suite case, there was no way to add custom comments at the Test level.

Now you can add custom comments for each Test inside your Test Suite regression report. 

Notes field added

[Performance] BlazeMeter Test Data for Performance tests 

Use BlazeMeter Test Data features in your Performance tests. Leverage synthetic data generation to let BlazeMeter generate real-looking data in the quantity needed for your tests, so you do not have to worry whether your test execution has fresh data – BlazeMeter will do it for you. If your tests already use CSV files, you can benefit from the built-in CSV editor and the ability to augment existing CSV data with dynamic data as needed. Data definitions created for a single test can be saved to the workspace to enable reuse in other BlazeMeter tests (performance, functional) and in Mock Services.
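To illustrate the idea (not BlazeMeter’s actual implementation – synthetic data generation is configured in the BlazeMeter UI, and all field names below are invented for this sketch), here is a minimal stand-alone example of producing “real-looking” rows in whatever quantity a test needs:

```python
import random
import string

# Illustrative pools of values; BlazeMeter's generators are far richer.
FIRST_NAMES = ["Alice", "Bob", "Carol", "Dan"]
LAST_NAMES = ["Smith", "Jones", "Lee", "Patel"]

def synthetic_rows(count, seed=None):
    """Generate `count` rows of plausible-looking test data."""
    rng = random.Random(seed)  # seeding makes the data reproducible
    rows = []
    for _ in range(count):
        first = rng.choice(FIRST_NAMES)
        last = rng.choice(LAST_NAMES)
        rows.append({
            "firstName": first,
            "lastName": last,
            "email": f"{first.lower()}.{last.lower()}@example.com",
            "orderId": "".join(rng.choices(string.digits, k=8)),
        })
    return rows

rows = synthetic_rows(100, seed=42)
print(len(rows), rows[0]["email"])
```

The point of generating (rather than hand-maintaining) such data is exactly what the feature promises: every run can get fresh, well-formed values without anyone editing CSV files by hand.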

Follow the BlazeMeter Guide for more details.

BlazeMeter Test Data for Performance tests

[Mock Services] Make your Mock Services data-driven with BlazeMeter Test Data  

Do your Mock Services have to support multiple data points, making it difficult and time-consuming to hardcode as many transactions as there are data points? Now you can drive your Mock Services by data – let BlazeMeter generate data synthetically for you, or use any existing CSV files you have. Instead of hardcoding values into transactions, use parameters that refer to data definitions to set up data lookup behavior driven by your data tables. In addition, when your Mock Service is added to a BlazeMeter test, you can combine test data with data from your Mock Service to achieve automatic data alignment between the Mock Service and the test.
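The data-lookup idea can be sketched in a few lines. This is a conceptual illustration only – the table, column names, and response template below are invented, and real Mock Services configure this through data definitions and parameters, not Python code. The key point is that one parameterized transaction replaces one hardcoded transaction per data point:

```python
# A data table such as BlazeMeter Test Data might generate, or a CSV might hold.
DATA_TABLE = [
    {"userId": "1001", "name": "Alice", "plan": "premium"},
    {"userId": "1002", "name": "Bob", "plan": "basic"},
]

# One response template with parameters, instead of one hardcoded
# transaction per user.
RESPONSE_TEMPLATE = '{{"name": "{name}", "plan": "{plan}"}}'

def mock_response(user_id):
    """Look up the row matching the request value and fill the template."""
    row = next((r for r in DATA_TABLE if r["userId"] == user_id), None)
    if row is None:
        return 404, "{}"
    return 200, RESPONSE_TEMPLATE.format(**row)

print(mock_response("1002"))  # → (200, '{"name": "Bob", "plan": "basic"}')
```

Adding a new supported data point now means adding a row to the table, not authoring another transaction.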

Mock Services data-driven with BlazeMeter Test Data

[Mock Services] Randomized think time option and think time per transaction settings   

Mock Services can simulate response delays via think time options. Up until now, think time was defined globally at the level of the entire Mock Service – every request processed by a Mock Service followed the same fixed delay. Now you can define random think time behavior – e.g. let the delay be randomly selected within a provided range, which better simulates real-world conditions (like network glitches) if needed. In addition, you can define a specific think time per transaction to better simulate cases where some requests are expected to respond slower or faster than others within the same Mock Service.

Randomized think time option

[Mock Services] CPU and Memory thresholds in BlazeMeter VSE 

Based on customer feedback, we are now recommending minimum CPU and memory settings for a BlazeMeter VSE. The guidance is here on BlazeMeter Docs. The Environments section in the Mock Services tab will now show individual alerts as required for each BlazeMeter VSE. A red alert message is flagged when the current memory allocation for the BlazeMeter VSE is configured higher than the memory configured for the private location. A yellow alert message is flagged when the BlazeMeter VSE memory and CPU allocation is below the minimum recommended threshold, which differs between a functional and a performance VSE. Instructions are provided for configuring the memory limit for a Private Location as well as the CPU, Memory and JVM JMX allocations.
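The red/yellow alert rules described above can be summarized in a short sketch. The threshold numbers here are invented placeholders – the real recommended minimums live on BlazeMeter Docs and differ for functional vs. performance VSEs – but the decision logic mirrors the text:

```python
# Hypothetical recommended minimums; see BlazeMeter Docs for actual values.
RECOMMENDED_MIN = {
    "functional":  {"cpu": 2, "memory_gb": 4},
    "performance": {"cpu": 4, "memory_gb": 8},
}

def vse_alert(kind, cpu, memory_gb, location_memory_gb):
    """Return 'red', 'yellow', or None for a VSE's resource configuration."""
    if memory_gb > location_memory_gb:
        # VSE asks for more memory than the private location has configured.
        return "red"
    minimum = RECOMMENDED_MIN[kind]
    if cpu < minimum["cpu"] or memory_gb < minimum["memory_gb"]:
        # Below the recommended minimum for this kind of VSE.
        return "yellow"
    return None

print(vse_alert("performance", cpu=2, memory_gb=4, location_memory_gb=16))
```

Note that the red check takes precedence: an over-allocated VSE is misconfigured regardless of how it compares to the recommended minimums.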