During GUI Functional testing, you may need to configure your browser for specific cases, such as emulating mobile devices, emulating web cameras, and more. You can achieve this by passing appropriate browser options and arguments, including experimental options. The Taurus YAML file syntax now allows you to pass such options to the browser: https://gettaurus.org/docs/Selenium/#Browser-Options
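As an illustrative sketch only (the key names below are assumptions; the linked Taurus documentation has the exact schema), a scenario that emulates a mobile device and a fake web camera in Chrome might look like this:

```yaml
scenarios:
  mobile-check:
    browser: Chrome
    # Illustrative keys -- consult the Taurus Browser Options docs for the exact schema
    options:
      arguments:
        - "--use-fake-device-for-media-stream"   # emulate a web camera
      experimental-options:
        mobileEmulation:
          deviceName: "Pixel 2"                  # emulate a mobile device
```

These values are ultimately forwarded to the browser driver, so anything Chrome or Firefox accepts on the command line can in principle be passed through.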
We’ve added new browser versions, including Chrome version 90 and Firefox version 88; these versions are set as the default.
We’ve added a new Search field to the Performance and Functional tabs! Now you can quickly search within the tab for tests and reports by name, without having to go through the “Show All Tests” or “Show All Reports” sidebars.
The field displays the top 5 recently updated tests and the top 5 recently executed reports that match your search. To see the full list of tests or reports, click “Show All Results”, which opens the full list in a sidebar.
Looking for a more complex search, such as finding a test created by a specific user or during the last month? Stay tuned; we will add advanced search options to this field soon.
Taurus has added support for the MQTT protocol. Now you can load test your IoT product with BlazeMeter.
Installing software updates requires free disk space, and now you can get an alert and take action before free disk space drops below a threshold of your choice. To create an Agent alert, go to the Alerts page in the Workspace Settings menu and create a new alert. As with Test alerts, the notification channels for Agent alerts are Email, Slack, or both. Once an Agent alert is defined, you will be notified whenever any agent in any private location in your workspace goes below the threshold.
We have enhanced PEM-encoded client certificate authentication for API Monitoring Tests to also accept a key file or a passphrase.
You can optionally set up the passphrase as a Secret at the team or bucket level using the Secrets Management feature. This ensures that the passphrase need not be shared with every member of the team and is also not visible in Tests. This ties together two powerful features within BlazeMeter API Monitoring, designed to monitor your secure APIs and keep your API Tests secure.
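The underlying TLS handshake is the same one Python’s standard library exposes, which makes the cert/key/passphrase trio easy to picture. A minimal sketch (file paths and the function name are hypothetical, not BlazeMeter’s API):

```python
import ssl
from typing import Optional


def make_client_context(cert_path: str,
                        key_path: Optional[str] = None,
                        passphrase: Optional[str] = None) -> ssl.SSLContext:
    """Build a TLS context that presents a PEM client certificate.

    key_path may be None when the private key is bundled into the cert file;
    passphrase is used to decrypt an encrypted private key.
    """
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.load_cert_chain(certfile=cert_path, keyfile=key_path, password=passphrase)
    return ctx
```

Keeping the passphrase in a Secret rather than in the test itself mirrors the `password` argument here: the key file can be stored encrypted, and only the secret store knows how to unlock it.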
Comparison capabilities in performance testing are now enhanced with the option to define a Baseline for a test. Defining a test run as a baseline helps testers make sure the application performance remains stable as code changes.
A test run can be defined as a Baseline from the (redesigned) test history tab, or from the report itself:
Once a baseline is defined, subsequent test runs are compared to it, making it easier to identify degradations and bottlenecks, find the related code changes, and act quickly to resolve them. You will find comparison data in the report Summary and in the Request Stats tab:
The Compare Report page and the Trend Charts tab in the test will display a visual representation of the comparison to the baseline.
BlazeMeter also helps you automate the decision-making process by letting you configure the failure criteria Threshold as a deviation from the baseline, so that test runs that deviate significantly from the baseline are automatically marked as "Failed". To do that, check the “Use from Baseline” checkbox in the failure criteria section of the test and enter the deviation you are willing to accept; the Threshold will be calculated automatically based on your selection.
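The threshold arithmetic reduces to a simple relative-deviation check. A minimal sketch of the idea (function and parameter names are hypothetical, and this assumes a metric where higher is worse, such as response time):

```python
def deviates_from_baseline(current: float, baseline: float, allowed_pct: float) -> bool:
    """Return True when `current` is worse than `baseline` by more than `allowed_pct` percent.

    The derived threshold is baseline * (1 + allowed_pct / 100); a run whose
    metric exceeds it would be marked as failed.
    """
    threshold = baseline * (1 + allowed_pct / 100)
    return current > threshold


# With a 200 ms baseline and 10% allowed deviation, the derived threshold is 220 ms:
failed = deviates_from_baseline(230, 200, 10)   # 230 ms run -> exceeds threshold
passed = deviates_from_baseline(215, 200, 10)   # 215 ms run -> within threshold
```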
Watch a brief demo on the Baseline Comparison feature:
Understanding which requests and responses were handled by your Mock Services is critical in cases when there is a need to debug why certain requests were or were not returned by a Mock Service. BlazeMeter Mock Services now provide an "Inspection view" for transaction-based Mock Services, which displays details about recent traffic handled by a specific Mock Service.
You can now quickly identify specific requests by full-text search and display the corresponding response details. The Inspection view also includes responses served by the real service when the “redirect to live system” no-match mode is selected. Such responses from the live system can be saved as new transactions and pushed to the running Mock Service directly from the Inspection view screen.
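Conceptually, the full-text search filters recorded request/response entries on any field. A small sketch of the idea (the data shapes and field names are hypothetical, not the actual Inspection view schema):

```python
from typing import Dict, List


def search_traffic(entries: List[Dict[str, str]], query: str) -> List[Dict[str, str]]:
    """Case-insensitive full-text match across all fields of each recorded entry."""
    q = query.lower()
    return [e for e in entries if any(q in str(v).lower() for v in e.values())]


traffic = [
    {"method": "GET", "path": "/orders/42", "matched": "stub", "status": "200"},
    {"method": "POST", "path": "/orders", "matched": "live", "status": "201"},
]

# Entries served by the live system (no-match redirect) vs. by a stubbed transaction
live_hits = search_traffic(traffic, "live")
```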
Mock Services view now provides a “Filter by Status” option which helps in cases where there are many Mock Services defined. By using this filter option, it is easy to display a subset of Mock Services based on a certain status. For example, you can display only Mock Services that are running or stopped.
Reporting for Mock Services running on BlazeMeter VSE now gives you the option to display reports for Mock Services that were previously running but are no longer available. This is useful when you need historical reports for Mock Services that have been deleted, but still want to understand how many transactions they handled or what their hits-per-second characteristics were.
To use this functionality, simply open a new analytics tab, define the desired time period, and select the desired Mock Service from the list of Mock Services that were running in that particular time frame.
Sharing reports with others, or storing them for your own reference, is a very common need. Reports for Mock Services running on BlazeMeter VSE can now be exported and downloaded as PNG images or PDF documents.
You can now download vse_matches logs for a selected BlazeMeter VSE directly from the Environments screen.
You can now use Istio to route ingress traffic to the desired pod in your cluster.
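As a hedged illustration (all names and hosts below are placeholders, not BlazeMeter defaults), an Istio VirtualService that routes ingress traffic to the Kubernetes Service fronting a specific pod might look like:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: agent-ingress            # placeholder name
spec:
  hosts:
    - "agent.example.com"        # placeholder external host
  gateways:
    - my-ingress-gateway         # placeholder Istio Gateway
  http:
    - match:
        - uri:
            prefix: /
      route:
        - destination:
            host: agent-service  # the Service fronting the desired pod
            port:
              number: 8080
```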
The Parallel Controller plugin is now compatible with the latest versions of JMeter.
[GUI Functional testing] Location and browsers configuration is now available on a Test Suite level
After grouping individual GUI Functional tests into a Test Suite, you may want to change the location and browsers at the Test Suite level, so that all of the tests inside the Suite are executed in the same location and on the same browsers. Previously, you had to change the location and browsers for each test in the Suite individually.
Setting a location and browsers for a Test Suite does not affect individual test configuration. In other words, you may have different locations and browsers set at the individual test level and at the Test Suite level.
We’ve added new browser versions, including Chrome version 90 and Firefox version 88; these versions are set as the default.
We have enhanced the API Monitoring Secrets Management feature to also support secrets at the bucket level. Each bucket can now have secrets that are not shared with other buckets. Secrets specified at a bucket level can only be used by Tests in that bucket.
Secrets specified at the team level can continue to be used by Tests in all buckets. Check out the docs to learn more. Requires a qualifying plan.
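The scoping rule can be pictured as a two-level lookup: bucket-level secrets shadow team-level ones, and a secret defined in one bucket is invisible to others. A sketch of that precedence (names and data shapes are hypothetical):

```python
from typing import Dict, Optional


def resolve_secret(name: str,
                   bucket_secrets: Dict[str, str],
                   team_secrets: Dict[str, str]) -> Optional[str]:
    """Bucket-level secrets take precedence; fall back to team level; else None."""
    if name in bucket_secrets:
        return bucket_secrets[name]
    return team_secrets.get(name)


team = {"API_KEY": "team-wide-value"}
bucket_a = {"API_KEY": "bucket-a-override", "DB_PASS": "bucket-a-only"}
bucket_b = {}  # defines no secrets of its own
```

A Test in bucket A sees the override and the bucket-only secret; a Test in bucket B sees only the team-wide value.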
We have enhanced email notifications for API Monitoring Tests to also support sending notifications to email distribution lists or non-member emails.
First, the team owner must specify a whitelist of email domains (e.g., mycompany.com, xyz.com) on the Team Settings and Usage page. This ensures that only emails belonging to admin-approved domains can be added to buckets to receive test notifications.
Bucket owners can navigate to the Bucket Settings page and add one or more distribution lists or non-member emails that can then be used by every API Test within that bucket.
The Email Notifications section of each API Test in the bucket will display all the distribution lists or non-member emails specified for that bucket. Test owners can select one or more of these emails or distribution lists to be notified.
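The domain check this whitelist implies is straightforward. A sketch (function name and sample domains are hypothetical, reusing the example domains from above):

```python
from typing import Set


def is_allowed_recipient(email: str, approved_domains: Set[str]) -> bool:
    """Accept an address only when its domain is on the admin-approved whitelist."""
    _, sep, domain = email.rpartition("@")
    return bool(sep) and domain.lower() in approved_domains


approved = {"mycompany.com", "xyz.com"}
```

Distribution lists pass the same check as individual addresses, since a list such as qa-team@mycompany.com is just another address in an approved domain.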
[Mock Services] Search transaction by ID
Searching for a particular Mock Services transaction is a very common task. You may need to find the right one in order to edit its response and add it to your running Mock Service, or to double-check whether it is part of your Mock Service template.
Until now, you could only search transactions by name or tag. However, a transaction name can be long and complex, and tags may not be unique. A transaction ID, on the other hand, is unique and short. BlazeMeter now enables you to search transactions by transaction ID.
[Test Data Management] Share test data models within workspace
Test data models defined in Scriptless tests are now reusable between Scriptless tests and are no longer tied to one particular test. With the ability to save and open a test data model, you can save it in your workspace and load it into a different test within the same workspace.
[Test Data Management] Export/Import test data models
You can now export test data models into files, store them in an external system and let other teams import them into their workspaces.
[Test Data Management] Download generated data as CSV file
Every test data model can now be downloaded as a CSV file, even before the test is executed. You can review the generated data in the CSV file, make adjustments where needed, or use the file as test data input for your other tests.
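A downloaded data model is plain CSV, so it is easy to post-process or regenerate. A sketch of writing synthetic rows to that format with Python’s standard csv module (the column names and values are hypothetical examples, not a real model):

```python
import csv
import io

# Hypothetical generated rows, as a downloaded data model might contain
rows = [
    {"firstName": "Avery", "email": "avery@example.com"},
    {"firstName": "Noah", "email": "noah@example.com"},
]

# Write header + rows in standard CSV form (an in-memory buffer stands in for a file)
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["firstName", "email"])
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()
```

The resulting text starts with the header line `firstName,email`, followed by one line per generated record, which is the shape other tests consuming the CSV would expect.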
[Test Data Management] Integration with Broadcom Test Data Manager Find & Reserve feature
In many cases, test data may already be seeded in a database within your test environment. You can now use the integration with Broadcom Test Data Manager (TDM): Find & Reserve models defined in your TDM can be linked to BlazeMeter Scriptless tests, including the ability to provide specific search query criteria to narrow down your test data sets. Prior to test execution, the BlazeMeter Scriptless test retrieves test data from the database via TDM according to the Find & Reserve model and criteria specification.
You can manage test data and parametrize your GUI Functional Scriptless test with synthetic data, which is randomly generated each time you execute a test.
We’ve added the ability to rerun a GUI Functional test with the same data that was generated for a specific test execution.
For example, if a GUI Functional test failed on a specific set of data, you can now rerun the same test with the same data set to confirm that your application fails on that specific data.
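Reproducible reruns are possible because the synthetic data is tied to the execution. The same effect can be sketched with a seeded generator, which is a simplification for illustration, not BlazeMeter’s actual mechanism:

```python
import random
from typing import Dict, List


def generate_users(seed: int, count: int) -> List[Dict[str, object]]:
    """Deterministically generate synthetic user records from a seed.

    Re-running with the seed stored for a failing execution reproduces the
    exact data set that execution saw.
    """
    rng = random.Random(seed)  # isolated RNG so the seed fully determines output
    names = ["Avery", "Blake", "Casey", "Drew"]
    return [{"name": rng.choice(names), "age": rng.randint(18, 65)}
            for _ in range(count)]


first_run = generate_users(seed=42, count=3)
rerun = generate_users(seed=42, count=3)  # identical data set on rerun
```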
We’ve added new browser versions, including Chrome version 89 and Firefox version 86; these versions are set as the default.
We have made it easier to take a closer look at the logs, especially when it comes to tests with multiple engines! The new Logs tab in the report now allows you to focus on a certain group of engines and download all their logs at once:
On download request, a zipped file will be generated, containing all the selected logs. Once the file is ready to download, you will also get a link to the file - which you can share with others or download directly.
We’ve added a new Test Data view for UI Functional Scriptless testing. Get a better overview of the data from attached CSV files, and define data parameters to be synthetically generated in place to support specific test scenario needs. There are 50+ functions available to produce realistic, random, or dynamic data for your test, and you can perform calculations and apply conditional logic on top of this data. For more information, see our documentation.
Better visualize how each iteration will combine data from various CSV files or synthetically generated data according to different iteration options. You now get a built-in data preview in the new Test Iterations settings dialog available in UI Functional Scriptless testing.
We’ve added new browser versions, including Chrome version 88 and Firefox version 85; these versions are set as the default.
The report shows the number of test sessions grouped by browser/version for a selected period.
Managing Mock Services is even easier with new bulk start, stop, and delete functionality. For more information, see our documentation.
From the Asset Catalog, you can view which Mock Services are running on a BlazeMeter VSE and deploy new ones. For more information, see our documentation.
You can now run a VSE in performance mode and track metrics such as the total number of concurrent performance VSEs and the total number of functional transactions. Additionally, we have added the ability to review reports and inspect MAR Mock Services deployed to a BlazeMeter VSE. For more information, see our documentation.