BlazeMeter vs. LoadRunner Performance Test Results: How to Compare
Customers considering converting from LoadRunner to BlazeMeter might want to compare BlazeMeter performance testing results to LoadRunner test results. One natural point of focus is comparing transaction metrics for throughput, average response time, and additional load testing KPIs. This blog post will show you how to generate a report in BlazeMeter so you can compare it to LoadRunner, helping you confidently migrate from LoadRunner/Performance Center to BlazeMeter.
The process described in this post is intended for validating the similarity of the results obtained from a test run both in LoadRunner and in BlazeMeter. Once the results are evaluated and you feel comfortable with BlazeMeter reporting, you will no longer need to follow this process.
To be able to go through the steps of this post, you need to have a basic understanding of JMeter and be able to create your load testing scripts there. If you need help getting started, we offer free courses at the BlazeMeter University.
Transaction Metrics Reporting BlazeMeter vs. LoadRunner
The first thing we’ll do is understand how transaction metrics are reported in each of the two products:
- LoadRunner transaction statistics (such as hits, average response time, etc.) are filterable by transaction status, such as ‘successful’ or ‘failed’.
- BlazeMeter transaction statistics are by default combined for all transactions, both successful and failed.
As you can see, there is a difference between the two. The steps in this article will show you how to generate a report in BlazeMeter that is similar to what LoadRunner provides in the Performance Center for “successful” transactions. This ensures we are comparing apples to apples.
Step 1: Installing Taurus
Taurus is a free, open source automation tool; install it on your local machine (for example, with pip install bzt) before proceeding. Once a test is run via BlazeMeter, we will use Taurus to import the stats for ‘successful’ transactions as a separate report that can be viewed within BlazeMeter. This is the report that should be used to compare to what LoadRunner/Performance Center provides.
Step 2: Setting Up the JMeter Test Script
Start by updating the JMeter script with Aggregate Report listeners. In this example we will cover setting up listeners for both ‘successful’ and ‘failed’ transactions.
Adding JMeter Listeners for Successful Transactions
Here is an example of the configuration of the Aggregate Report listener for passed transactions only: set the Filename to PassAgg.jtl and, under ‘Log/Display Only’, check only ‘Successes’.
Adding JMeter Listeners for Transactions with Errors
To get statistics for erroneous transactions from JMeter, add another Aggregate Report listener. Set the Filename to FailAgg.jtl and, under ‘Log/Display Only’, check only ‘Errors’.
Here is an example of the configuration of the Aggregate Report listener for transactions that had errors:
Adding JMeter Listeners for All Transactions
To completely exhaust the logical options, check neither ‘Errors’ nor ‘Successes’. The listener will then generate statistics for all transactions, regardless of success or error. This is the default behavior in BlazeMeter.
Step 3: Separating Successful and Failed Results
BlazeMeter lets you run the test and observe the statistics for all transactions. It will look like this:
Performance Test Summary
Performance Test Timeline Report
These statistics cover all transactions, both successful and failed.
However, we want to separate the results so we can compare the test to LoadRunner. To do so, we will import the .jtl files into BlazeMeter.
The following steps explain how to organize the needed .jtl files from each engine and upload their information into BlazeMeter via Taurus.
Copy the “fail” and “pass” .jtl files that were generated by the test and included in the artifacts zip file, and save them to a folder on your system. For the purpose of this demonstration, we put them into a folder called “reports.”
Next, create a .yml file that Taurus will run to import the statistics from your .jtl file into BlazeMeter. The file includes the .jtl file name, your authentication token information, and the name of the report in BlazeMeter.
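A minimal report.yml for this purpose might look like the following sketch. It uses the Taurus external-results-loader executor; the token value and the test name are placeholders you should replace with your own (the report name shown is hypothetical):

```yaml
execution:
- executor: external-results-loader   # Taurus executor that uploads existing results
  data-file: PassAgg.jtl              # the .jtl file with passed transactions

modules:
  blazemeter:
    token: '<api-key-id>:<api-key-secret>'  # placeholder: your BlazeMeter API key
    test: Passed Transactions Report        # hypothetical report name in BlazeMeter
```

Running bzt on this file with the -report flag sends the loaded results to BlazeMeter under the configured test name.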
If you have multiple engines, upload this information from each engine and name each file in a way that helps you keep things clear (see the example at the end of this article for further information).
In this example there was one engine, so the “reports” folder contains the “Fail” and “Pass” .jtl files as well as report.yml, all in preparation for uploading to BlazeMeter.
Step 4: Upload Successful Transaction Stats to BlazeMeter with Taurus
Now we are ready to upload the contents of the previously generated pass and fail .jtl files into BlazeMeter. To do so, use Taurus and the bzt command with the external-results-loader executor.
Generate a Passed transaction report using the Taurus result loader. Remember, in the last step we defined the data file in the report.yml as PassAgg.jtl.
Now generate the report:
- Open a command prompt or terminal and navigate to that folder
- Execute the command “bzt report.yml -report”
Step 5: Review the Report in BlazeMeter
You just ran bzt to upload the PassAgg.jtl information to BlazeMeter. It is finally time to review this information. Again, this is just for successful transactions from the previous test.
In BlazeMeter, review the Summary, Timeline Report, Request Stats, Errors, and Logs views for this report.
BlazeMeter Summary Report for Successful Results
BlazeMeter Timeline Report for Successful Results
BlazeMeter Request Stats for Successful Results
This is the key information we want: the statistics for successful transactions from the BlazeMeter test, ready to compare against the statistics from the LoadRunner test.
Because we filtered the errors out, you can see there are none in the report.
Logs for Successful Results
Step 6: Upload Error Transaction Stats to BlazeMeter
If you’re a testing marathoner and you want to double down and make sure the test was complete, this step is for you. The previous step’s deliverable was to provide the statistics for the successful transactions. This step is to show the other side of the coin, the statistics for the failed transactions.
To upload the statistics for errors (FailAgg.jtl) to BlazeMeter, we are going to follow the same process as before, but instead of uploading the .jtl file for successful transactions, we are going to upload the .jtl file for failed transactions. To do this, just update the file name in your .yml file.
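For example, the execution section of the .yml would now point at the failed-transactions file (a sketch, assuming the Taurus external-results-loader executor is used as in the earlier step):

```yaml
execution:
- executor: external-results-loader
  data-file: FailAgg.jtl   # was PassAgg.jtl; now upload the failed transactions
```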
Just like for successful transactions, run the command “bzt report.yml -report” in the terminal/command prompt.
The following sequence of screenshots shows the progression through the various parts of the report: Summary > Timeline Report > Request Stats > Engine Health > Errors > Logs.
BlazeMeter Summary Report for Unsuccessful Results
BlazeMeter Timeline Report for Unsuccessful Results
Request Stats for Unsuccessful Results
Step 7: Compare the Results to LoadRunner
Now that you’ve generated a performance test report in BlazeMeter that shows only successful results, compare it to your LoadRunner test. You should see that the results match.
Multiple Load Engines Use Case
If you have multiple .jtl files from different load engines, download all the artifacts, assemble all the .jtl files in one folder, and configure the .yml file to load each of them.
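One way to sketch such a configuration is to list one external-results-loader entry per .jtl file. The engine-numbered file names below are hypothetical (adjust them to however you named the files from each engine), and the token and test name are placeholders:

```yaml
execution:
- executor: external-results-loader
  data-file: PassAgg-engine1.jtl   # hypothetical file name from engine 1
- executor: external-results-loader
  data-file: PassAgg-engine2.jtl   # hypothetical file name from engine 2

modules:
  blazemeter:
    token: '<api-key-id>:<api-key-secret>'  # placeholder: your BlazeMeter API key
    test: Passed Transactions Report        # hypothetical report name
```

Running bzt on this file with the -report flag should combine the loaded results into a single BlazeMeter report under the configured test name.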
Get Started with BlazeMeter
Ready to create your BlazeMeter test? Sign up for an account, if you don’t have one, and start testing now!