
Changelog

What’s new for January 2022?

[Performance] JMeter versions 3.1 - 3.3 removed

JMeter versions 3.1, 3.2, and 3.3 have been removed from BlazeMeter.

These versions no longer appear in the “JMeter version” dropdown.

Existing tests that have JMeter 3.1 - 3.3 selected will still run, but you’ll see warnings about the deprecated version usage:

 


[General] Refill Credits

If you are on a Basic or Pro plan, you might have found yourself running out of credits before the end of your subscription while you still need to continue testing. You no longer need to reach out to BlazeMeter support to renew your subscription. Instead, we’ve added the option to renew your existing subscription from the Billing page:

The renew option is available at any time and takes effect immediately. Upon renewal, your credit balance is rolled over, so you can renew early without losing your credits.

 


[Performance] Started removing legacy JMeter versions

JMeter 3 has been removed from BlazeMeter.

This version no longer appears in the “JMeter version” dropdown.

Existing tests that have JMeter 3 selected will still run, but you’ll see warnings about the deprecated version usage:

 

What’s new for December 2021?

 

[GUI Functional] Reduced number of supported browser versions

From a functional testing perspective, it doesn’t make sense to run GUI Functional tests on old browser versions, so we have reduced the browser versions available in BlazeMeter to the last six.

You’ll also start receiving warnings in tests and reports if a test uses a browser version older than the last three.

Please find more info in the guides article.

 

What’s new for November 2021?

[GUI Functional] Scriptless and Taurus actions correlation with test report

We’ve made it easier to analyze GUI Functional test reports by adding correlation with Scriptless/Taurus actions.

Before this improvement, the test report consisted of low-level WebDriver commands that were hard to read and, most importantly, difficult to analyze in case of failures.

You can still see the low-level WebDriver commands by switching on the “All commands” toggle.

Below are several screenshots for different states of a test report:

[GUI Functional] Safari browser added

Now you can run GUI Functional tests in the Safari browser.

Just add Safari in the “BROWSER SELECTION” section for a test:

 

Alternatively, set “safari” as the “browserName” capability for your Remote WebDriver instance if you are using BlazeMeter as a cloud provider for your Selenium Grid infrastructure.
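For example, here is a minimal Selenium 4 sketch in Python. The grid URL is a placeholder for the Remote WebDriver endpoint that BlazeMeter provides for your test, and SafariOptions sets the “browserName” capability to “safari” for you:

from selenium import webdriver
from selenium.webdriver.safari.options import Options as SafariOptions

# Placeholder: use the Remote WebDriver endpoint BlazeMeter gives you.
GRID_URL = "https://<your-blazemeter-grid-endpoint>/wd/hub"

options = SafariOptions()  # implies browserName = "safari"

driver = webdriver.Remote(command_executor=GRID_URL, options=options)
try:
    driver.get("https://www.example.com")
    print(driver.title)
finally:
    driver.quit()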

 

If you are using a Private Location, you’ll need to enable Safari on the “Functionalities” tab in your Location’s settings, either by selecting the “Default Firefox, Chrome, Microsoft Edge and Safari” option or by choosing the “Select versions” option and selecting specific browsers:

[GUI Functional] Upgrade to Selenium 4

We are now using Selenium 4 for GUI Functional tests executed in BlazeMeter.

Besides laying the groundwork for future support of new features like CDP, Selenium 4 provides better support for environments with proxies, as well as compliance with the W3C standards.

[Performance] Tagging

Tags are now supported on Performance tests and reports!

Tagging is the most convenient way to index your data - once you put relevant tags on your records, you can easily pull them and get your data aggregated. You can use tags for any purpose, such as branch, release, tested APIs, and application under test. Basically, for everything.

 

 

Testers and Managers can tag performance tests and reports, create new tags, and use the Tags filter in the advanced search page to get the full list of results containing the selected tag(s).

 

 

We are also helping you stay on the safe side and avoid tag overload by defining tags as case-insensitive and by allowing managers to control the list on the new Tags page in the Workspace settings.

 

For more info, refer to the Tags article in BlazeMeter’s guide: https://guide.blazemeter.com/hc/en-us/articles/4411747835409

 

[Mock Services] Indication of Template applied to Mock Service

The Mock Services list now displays an indication when a Mock Service Template is applied to a Mock Service. This helps you quickly identify whether your Mock Services follow the specific behavior defined by Templates.

 

In addition, when updating a Mock Service that has a particular Template applied, you can optionally let BlazeMeter update the associated Mock Service Template as well.

 

[Mock Services] Add New Transaction from Mock Service Detail

Recently we introduced the ability to edit Transactions directly from the Mock Service detail view – a small but important change that saves time, mouse clicks, and screen transitions. On top of that, you can now add new Transactions directly from the Mock Service detail view, which saves even more time and clicks.

 

 

[Mock Services] Enhancements in Inspection View

The Mock Services Analytics Inspection View has received a couple of useful UI enhancements.

 

First of all, the Request path column no longer contains the hostname part, which always had the same value. The Request path now contains only the path part of the URL – the part whose content actually varies per request. This enhancement helps you quickly spot differences between requests.

 

In addition, the full request URL can now be copied to the clipboard using a dedicated copy action, so you can easily paste the full URL of a request wherever it is needed.

 

[Mock Services] Add New Service Action

The Service selection now contains an “Add Service” action, enabling quick and easy creation of new Services as needed.

 

[Performance] Advanced Search Options in Performance Testing Tab 

Sometimes searching for a test by its name is just not enough. Perhaps you are looking for all the tests created by a team member who just left, or for a test you created last month whose name you can’t remember.

 

With the new advanced search options, you can search for performance tests and reports by their attributes such as the creation date, executing user, locations used or the total number of users in the test. In addition, you can customize the fields displayed in the results and focus on the data you’re interested in. You can also export the results to CSV so that you can further manipulate the data and share the results with non-BlazeMeter users.

 

To navigate to the new search page, click “More search options” in the search bar. You will also find a new “Go to advanced search” button in the Performance Tests and Reports menus. This way, if you didn’t find what you were looking for in the simple search, you can always navigate to the advanced search page to get the extended, full results or refine your search.

 


For more info, see the documentation.

 

[Performance] 900 labels are now supported in the Request Stats 

Historically, we supported a maximum of 100 labels per engine and a total of 300 labels in a report. The limit is now increased to 300 labels per engine and a total of 900 labels per report, allowing you to get detailed Request Stats for all labels in the report.

 

For more information, see the documentation.

 

[API Monitoring] Enhanced Test Concurrency Limit Management

While you can run hundreds of tests in a single bucket across multiple locations worldwide, a large number of tests running concurrently in a single location within a bucket can lead to slow performance, timeouts, or failures. We have enhanced both the bucket Dashboard in the GUI and the Bucket Detail API to help you identify how close you are to hitting the test concurrency limit per location per bucket, so you can take the necessary steps to manage your tests more optimally.

 

For each bucket with locations running at greater than 85% of the maximum limit for concurrent tests per location per bucket, a warning message indicating the location name and the percentage of test concurrency utilization is displayed on that bucket’s dashboard, as follows:

 


 

We have also enhanced the Bucket Detail API to list the percentage of concurrent tests running in each location compared to the limit of concurrent tests per location per bucket. The API can be called with the list_utilizations_gt parameter, which accepts a number from 1 to 100. For example, api.runscope.com/v1/buckets/<bucket_key>?list_utilizations_gt=80 will have the following response:

 

{
    "data": {
        "auth_token": null,
        "default": false,
        "key": "xxxxxxxx",
        "name": "Mobile Apps",
        "team": {
            "name": "Mobile Team",
            "uuid": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
        },
        "verify_ssl": true,
        "locations_utilization_%": {
            "remote": 87,
            "us california": 82,
            "us iowa": 90
        }
    },
    "meta": {
        "status": "success"
    }
}
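For example, here is a small Python sketch that calls this endpoint and prints each location’s utilization. The bucket key and token are placeholders, and it assumes your API Monitoring access token is sent as a Bearer token in the Authorization header:

import requests

BUCKET_KEY = "<bucket_key>"                   # placeholder
API_TOKEN = "<your_api_monitoring_token>"     # placeholder

resp = requests.get(
    f"https://api.runscope.com/v1/buckets/{BUCKET_KEY}",
    params={"list_utilizations_gt": 80},      # only return locations above 80%
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)
resp.raise_for_status()

# Print each location's share of the per-location concurrency limit.
for location, pct in resp.json()["data"].get("locations_utilization_%", {}).items():
    print(f"{location}: {pct}%")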

 

For more information, see the documentation.

 

[API Monitoring] Agents API Enhancements

For enterprises using a large number of Radar agents, we have enhanced the Agents API with more details to enable better management of the agents.

 

The Agents API now also lists the IP address, hostname, host OS, and install directory location for all Radar Agents in a Team (see the sketch after the list below). The details included are:

  • Name of the Remote Agent
  • Remote Agent ID
  • Team ID
  • Agent Status (Up or Down at a minimum)
  • Location
  • Agent Version
  • Host Name
  • IP Address
  • Host OS
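Below is a small Python sketch of how you might pull these details. The team-scoped agents endpoint and the Bearer authorization header are assumptions based on the response conventions shown above, and the IDs are placeholders:

import requests

TEAM_ID = "<team_uuid>"                       # placeholder
API_TOKEN = "<your_api_monitoring_token>"     # placeholder

# Assumed team-scoped agents endpoint; check the Agents API documentation.
resp = requests.get(
    f"https://api.runscope.com/v1/teams/{TEAM_ID}/agents",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)
resp.raise_for_status()

# Each agent entry now includes host details (hostname, IP address, host OS)
# alongside its name, ID, status, location, and version.
for agent in resp.json()["data"]:
    print(agent)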


 

For more information, see the documentation.