Michael Sage is Chief Evangelist for BlazeMeter. He has over 15 years' experience as a solutions architect and consultant helping teams of all sizes with software delivery and performance management. Prior to joining BlazeMeter, Michael worked with industry-leading companies like Mercury Interactive, Hewlett-Packard, and New Relic. A native of Philadelphia, he’s made San Francisco his home for over 10 years.


A Load Tester’s Guide to JMeter and BlazeMeter

 

A common question we get from folks interested in our solution is how they can transition from LoadRunner and other legacy tools to JMeter and BlazeMeter. They see the benefits of open source, cross-platform software, and easier integration into evolving trends like Agile and Continuous Delivery. They recognize that those legacy tools have reached the end of the road, but they’re unsure how to make the leap into the future.

 

How can load testers carry the skills, knowledge, and practices they’ve developed over years or decades into the next generation of tools?

 

We thought it might be helpful to have a handy guide for load testing professionals who are new to JMeter and BlazeMeter. We’ll take some of the familiar tasks and ideas from performance testing in general and shed a bit of light on how to approach those tasks in these next-generation tools.

 

(You can request a 1-on-1 live demo to discuss how to ensure a smooth and easy transition from LoadRunner to JMeter.)

 

Let’s start with creating scripts.

 

Recording



 

Among the main tasks a performance tester faces is creating the right test scripts. These may be simple lists of GET requests, or complex session-based interactions with lots of POST data and headers and cookies, or a series of calls to a REST API with JSON messages going back and forth.



 

A common approach is to have the testing software record key use cases or business processes that represent what the users will do with the app.

 

A business process for a retail site might be:

 

 

  1. Login
  
  2. Search for an item
  
  3. Browse results
  
  4. Add an item to a cart
  
  5. Begin checkout
  
  6. Provide shipping and payment info 
  
  7. Place order

 


In legacy tools, capturing these interactions is often done with dedicated recording and scripting software, for example the Virtual User Generator (VuGen) in LoadRunner. The recorder is baked right into the tool in a GUI-centric way: you push the familiar red “Record” button, set a couple of options, and then simply interact with the browser session that opens. You may puzzle over the resulting code in the script, but the recording process itself is pretty easy.



 

BlazeMeter provides a GUI-based recorder as a Chrome extension. It’s quite a bit easier to use than JMeter’s built-in recording proxy, handles things like SSL more gracefully, and can export the recorded script to JMeter’s JMX format for further editing and customization. You can even run a load test right there in the Chrome extension, which makes it a full-blown solution implemented as a sidebar in your browser, no other apps required.
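As a sketch of what that exported JMX looks like, each recorded step in the business process becomes an HTTP sampler element. The domain, paths, and sampler names below are hypothetical, and the fragment is heavily abridged: a real JMX file wraps samplers in TestPlan, ThreadGroup, and hashTree elements and carries many more properties.

```xml
<!-- Two steps from the retail business process, as recorded samplers (abridged) -->
<HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy"
                  testname="01 - Login" enabled="true">
  <stringProp name="HTTPSampler.domain">shop.example.com</stringProp>
  <stringProp name="HTTPSampler.path">/login</stringProp>
  <stringProp name="HTTPSampler.method">POST</stringProp>
</HTTPSamplerProxy>
<HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy"
                  testname="02 - Search for an item" enabled="true">
  <stringProp name="HTTPSampler.domain">shop.example.com</stringProp>
  <stringProp name="HTTPSampler.path">/search</stringProp>
  <stringProp name="HTTPSampler.method">GET</stringProp>
</HTTPSamplerProxy>
```

Because JMX is plain XML, scripts like this diff cleanly in version control, which is one of the practical wins over binary legacy script formats.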



 

 

Read more about the Chrome extension here

 

Assertions



 

Sometimes a server will return a status code of 200 OK, indicating a successful response to a web request, even though an underlying application error has occurred and the page content isn’t what was expected. An example might be a user’s session timing out because of an expired token (we’ll cover dynamic values below): the server simply returns the user to the login page with some text indicating the issue, but no HTTP error status.

 

The scripting practice we use to ensure that the right content is being returned for each step is typically called a verification, a checkpoint, or an assertion. In LoadRunner, this is typically a C function like web_reg_find().

 

In JMeter, you don’t need to write any C code since it’s all handled by a script element called a Response Assertion. You simply tell the Assertion where to check (such as the response body or the headers) and what to look for: raw text, a dynamic variable used in the script, or a regular expression. There are quite a few options to ensure you get exactly what you need.
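As a sketch, here’s roughly what a Response Assertion looks like in the saved JMX. The "Welcome back" text is a hypothetical example; the property names follow JMeter’s JMX format, where a test_type of 2 means “Contains”.

```xml
<!-- Checks that the response body contains the expected text.
     Note: "Asserion.test_strings" is a long-standing misspelling
     preserved in JMeter's JMX format for compatibility. -->
<ResponseAssertion guiclass="AssertionGui" testclass="ResponseAssertion"
                   testname="Check login succeeded" enabled="true">
  <collectionProp name="Asserion.test_strings">
    <stringProp name="0">Welcome back</stringProp>
  </collectionProp>
  <stringProp name="Assertion.test_field">Assertion.response_data</stringProp>
  <boolProp name="Assertion.assume_success">false</boolProp>
  <intProp name="Assertion.test_type">2</intProp> <!-- 2 = Contains -->
</ResponseAssertion>
```

You rarely hand-edit this XML; the GUI writes it for you. But knowing the shape helps when reviewing scripts in version control.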

 

 


 

Read more about JMeter Assertions here

 

Think Time



 

Another important consideration in driving realistic load traffic to the application under test is the idea of concurrent users and think time. A concurrent user is different from concurrent hits per second or some other representation of load: it refers to a person engaged in a session with the application, which may involve, as in our example above, logging in, searching and browsing, purchasing, and so forth.

 

Since humans are much slower than computers, we need to account for things like visually scanning a page full of search results, or reading an item’s description, or even getting up from the computer in mid-session to get a cup of coffee or take a phone call.

 

So, to make our scripts more realistic, we need to add delays between requests. Some pages take longer to digest than others, and some users are faster than others: power users might fly through a business process, while new users might take substantially longer to complete the same operations.

 

In LoadRunner, this is called “think time”, and it can be captured during recording and replayed at different values, including ranges and percentages of the original captured number.



 

In JMeter, we use Timers. There are a few different options, from hardcoded simple values (always wait 500 milliseconds on this page), to more dynamic ranges of values (choose a random number of milliseconds between 500 and 3000).

 

The Uniform Random Timer is a good one to start with.
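In JMX terms, a Uniform Random Timer that waits between 500 and 3000 milliseconds (a constant 500 ms plus a random 0–2500 ms) looks roughly like this, abridged from how JMeter saves it:

```xml
<!-- Total delay per request = ConstantTimer.delay + random(0 .. RandomTimer.range), in ms -->
<UniformRandomTimer guiclass="UniformRandomTimerGui" testclass="UniformRandomTimer"
                    testname="Think Time" enabled="true">
  <stringProp name="ConstantTimer.delay">500</stringProp>
  <stringProp name="RandomTimer.range">2500</stringProp>
</UniformRandomTimer>
```

Placed inside a sampler, the timer applies to that request only; placed at the Thread Group level, it applies before every request in the group.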

 

 

Parameters and Correlation



 

As most load testers know, it’s the unique, dynamic, session-specific values that most modern web apps rely on for session and content management that are often the trickiest parts of scripting. It can be a hair-pulling exercise in frustration trying to figure out exactly what those values are and where they occur. When scripts don’t work but the app appears to be functioning properly, it’s almost always one of these dynamic values that is the root of the snag. They might be authentication tokens, framework-specific content identifiers, timers of some sort, or just about anything else that attempts to manage session state in the otherwise stateless HTTP exchange.



 

Legacy tools have various ways of handling these challenges, usually called Correlation Scans or something similar. The tool will attempt to identify these values by replaying the script, comparing the data to what was originally recorded, and highlighting the differences. Sometimes they work nicely, other times they offer up unimportant garbage. A good load tester will already have some idea where and what to inspect.

 

While it has no automatic correlation scanning tool, JMeter is actually a little bit easier to use in this regard than the legacy tools if you already understand correlation basics. It amounts to three steps that you iterate through for each value you need to parameterize.

 

First, you need to identify them.



 

With each request that you capture in a JMeter recording, the request parameters are presented clearly in a text box, making it pretty easy to spot obvious candidates: they’re often labeled something like “auth-token” or have a value that’s a long string of random alphanumeric characters, like “h83ke0d8kgb8xow9cxynb84jSK”. Also, the errors returned by the app when the script replay fails often reveal the value, with a message like “auth-token expired”. With a little trial and error, or if you already know the app, it's pretty easy to identify them.

 

Second, once a value is identified, the trick is to find which request issues it in its response. An authentication token is probably issued in the response to the login request. A shopping cart identifier is probably issued after clicking an “Add to Cart” button, and so forth.

 

But it's not always the immediately preceding request that issues the value, and this can require quite a bit of careful investigation. JMeter’s View Results Tree is where you can inspect the responses and even test regular expressions.



 

 

The third step is to use a Regular Expression Extractor with a pattern that matches the expected content. In our simple example, the token might appear in the response text as <auth-token="h83ke0d8kgb8xow9cxynb84jSK">, in which case one easy regex would be auth-token="(.*?)". The parentheses tell JMeter to retain the value found inside them and place it into the Reference Name you supply for the extractor, which we might call something like “authToken”.
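Sketched in JMX, the extractor for that token might look like the fragment below (abridged; the default value is a deliberately recognizable string so a failed match is easy to spot in the results):

```xml
<RegexExtractor guiclass="RegexExtractorGui" testclass="RegexExtractor"
                testname="Extract auth token" enabled="true">
  <stringProp name="RegexExtractor.refname">authToken</stringProp>
  <stringProp name="RegexExtractor.regex">auth-token="(.*?)"</stringProp>
  <stringProp name="RegexExtractor.template">$1$</stringProp>   <!-- use capture group 1 -->
  <stringProp name="RegexExtractor.default">TOKEN_NOT_FOUND</stringProp>
  <stringProp name="RegexExtractor.match_number">1</stringProp> <!-- first match only -->
</RegexExtractor>
```

Attach the extractor as a child of the request that issues the token, so it runs against that response.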

 

 

The JMeter variable format would then be ${authToken}, so you finally go and replace the original captured value wherever it appears in subsequent requests with the variable, and you’re good to go.
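For example, a recorded request parameter that originally carried the hardcoded token would be edited to reference the variable instead. An abridged sampler argument, with a hypothetical parameter name:

```xml
<!-- Inside an HTTP sampler's arguments: the captured literal value
     is replaced with the variable set by the Regular Expression Extractor -->
<elementProp name="auth-token" elementType="HTTPArgument">
  <boolProp name="HTTPArgument.always_encode">true</boolProp>
  <stringProp name="Argument.name">auth-token</stringProp>
  <stringProp name="Argument.value">${authToken}</stringProp>
</elementProp>
```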



 

Admittedly, this can be an exercise in frustration, and is probably one of the hardest challenges with scripts. But once you get the hang of it, when the script starts working as expected it’s a pretty nice feeling of accomplishment!



 

Logic and Flow Control

 

Whereas tools like LoadRunner are mostly driven programmatically, JMeter is mostly driven through the GUI. When you need to introduce logic into your script (decision branching, looping, even randomness), you can use JMeter’s Logic Controllers to do so.


 

Some examples are the Transaction Controller, which allows you to measure a group of requests as a single unit, or a Once Only Controller, which can be useful for scripts where an operation only needs to happen one time even though many others might be repeated, for example, a one-time login followed by a looping series of searches and results.
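A sketch of that one-time-login shape in JMX is below. In JMeter’s file format, each element is paired with a hashTree that holds its children; the sampler names and loop count here are hypothetical and the fragment is abridged.

```xml
<!-- Login executes once per thread; the search loop repeats -->
<OnceOnlyController guiclass="OnceOnlyControllerGui" testclass="OnceOnlyController"
                    testname="Login (runs once per thread)" enabled="true"/>
<hashTree>
  <!-- the login sampler lives here -->
</hashTree>
<LoopController guiclass="LoopControlPanel" testclass="LoopController"
                testname="Search and browse (repeats)" enabled="true">
  <boolProp name="LoopController.continue_forever">false</boolProp>
  <stringProp name="LoopController.loops">10</stringProp>
</LoopController>
<hashTree>
  <!-- search and results samplers repeat ten times -->
</hashTree>
```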

 

Virtual Users



 

As you already know, a virtual user is just a software representation of a real user. Basically, a virtual user performs the steps defined in the script as a single process, over and over, based on the script and scaling logic.

 

In LoadRunner, you have the option of running a virtual user as either a process or a thread. Because JMeter is a Java application, its virtual users always run as threads; in fact, JMeter just calls them that. You add a Thread Group to your script that wraps around your requests, then set the options in that thread group for the number of virtual users, how they should ramp up, and how long they should run.
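In the JMX file, those options live on the Thread Group element. An abridged sketch for a hypothetical scenario of 50 virtual users ramping up over two minutes and running for ten:

```xml
<ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup"
             testname="Retail Shoppers" enabled="true">
  <stringProp name="ThreadGroup.num_threads">50</stringProp>  <!-- virtual users -->
  <stringProp name="ThreadGroup.ramp_time">120</stringProp>   <!-- seconds to start them all -->
  <boolProp name="ThreadGroup.scheduler">true</boolProp>
  <stringProp name="ThreadGroup.duration">600</stringProp>    <!-- total run time in seconds -->
  <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
</ThreadGroup>
```

With a 120-second ramp-up, JMeter starts one new thread roughly every 2.4 seconds until all 50 are running.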

 

 

There are many other topics and areas of overlap, but those are some of the main ones. Really, JMeter is quite rich with the features and functionality demanded by experienced performance engineers. It’s definitely worth investigating further if you’re looking for a replacement for LoadRunner and other tools that were born in a previous era. 

 

Interested in a demo from one of our performance engineers about adopting new-generation open source performance testing? Get a 1-on-1 live demo where you can dig into anything from scripting to cloud and on-premise testing to CI and APM integrations.

 

If you have any questions or comments, please leave them below. 

 

     