Vincenzo Marrazzo is a Test Automation Specialist. He has over 14 years of experience in various contexts both with open-source technologies and commercial ones. His primary activities are Test Automation and Performance Test. Vincenzo currently works at Global Business Line (GBL) Engineering and R&D Italy of Capgemini.

Jan 11 2021

How to Use JMeter as a Monitoring Tool

Application monitoring alerts developers when a production service is failing, together with information that helps detect why. To answer these questions, monitoring reports include resource consumption KPIs, like CPU load and IO bytes, and business logic validation KPIs, like logging in to a portal or checking out a shopping basket. This blog post will explain how you can adopt JMeter as part of your monitoring workflow.

 

I will:

  • Explain when to use JMeter for monitoring
  • Describe how to structure a JMeter monitoring script
  • Show examples of how to apply customized scripting based on Groovy 
  • Present the results in graphic tools
  • Provide scheduling tips

Why and When to Monitor with JMeter

JMeter was not built as a monitoring tool. However, if you are already using JMeter or if you have a validation script that covers part of the monitoring requirements, there are advantages to building a monitoring script with it.

 

JMeter is:

  • Flexible - JMeter has dozens of components ready to perform various actions.
  • Customizable - It’s possible to develop in Groovy when there isn’t a specific component out of the box.
  • Scalable - In the case of many monitoring actions, it is possible to split execution between more threads and/or instances.

How to Monitor with JMeter: A Practical Session

In this section of the blog post, I will show a JMeter monitoring example based on the OMDb API RESTful web service. This service is similar to the Internet Movie Database, but is not affiliated with it.

 

Our JMeter script does the following:

  1. Extract a subset of known Star Wars movies for monitoring.
  2. Monitor the core part. We will verify:
    1. If the movie is discovered according to its release year
    2. The correctness of the poster picture available online
  3. Publish the total/pass/duration information on InfluxDB.

 

Since the OMDb service is free, the only required action is to register an account and obtain an “apikey”, which will be used in the JMeter script.
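As a quick sanity check, you can query the service from the command line before building the script. The key value below is a placeholder for your own registered apikey:

```shell
# query OMDb by title (t) and year (y); replace YOUR_APIKEY with your registered key
curl "http://www.omdbapi.com/?apikey=YOUR_APIKEY&t=Star%20Wars&y=1977"
```

The response is the same JSON payload the JMeter script will parse later.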

Creating the Monitoring JMeter Script 

In our script, the “Thread Group” component is used to separate different execution phases:

  1. setUp Thread Group - the preliminary activity for dataset retrieving
  2. Iterate Dataset Thread Group - a core script that iterates the monitoring dataset
  3. tearDown Thread Group - the final activity to publish the monitoring result

 

It will look like this:

 

 

Now let me show you how to build the script.

 

1. Install the JMeter plugin jmeter-listener if you don’t have it. Our JMeter script requires the InfluxDB Java client to be present on the JMeter classpath, and this plugin provides that dependency.

 

2. Create a new script in the JMeter user interface.

 

3. Define the following “User Defined Variables”:

 

 

These variables mean the following:

  • load_msg_sec - a load barrier to avoid overloading the monitored system with a high rate of requests
  • setupDone - a control variable used to ensure setUp is executed only one time in the script
  • stopDone - a control variable used to ensure tearDown is executed only one time in the script
  • dumpFile - the input data file used to iterate the core of the script. The input subset file is a CSV where each row describes one entity to be checked.
  • InfluxDB parameters - they are necessary to publish data to the dedicated InfluxDB instances

 

4. Add the “HTTP Request Defaults” component and fill it with the data below.

 

 

Now, let’s centralize the HTTP Request configuration.

 

5. Add a “setUp Thread Group” component and configure it like you see in the image:

 

 

This configuration is used to ensure that only one thread is allocated. If an error is encountered, the entire JMeter script execution is stopped.

 

6. Add an “If Controller” component under the “setUp Thread Group” with the following configuration:

 

 

Using the “setupDone” variable value, this component ensures the setup part of the current script is executed only once.

 

7. Add the “JSR223 Sampler” under the “If Controller”. Use the following code in Groovy:

 

import java.util.concurrent.atomic.AtomicInteger


// the sanity check step can be more complex ;-P
log.info("#### Input dataset is OK");

// track shared variables for final stats to InfluxDB
System.getProperties().put("total", new AtomicInteger(0))
System.getProperties().put("pass", new AtomicInteger(0))
System.getProperties().put("fail", new AtomicInteger(0))

// track setup time
System.getProperties().put("setupTime", System.currentTimeMillis())

// track that setup is completed correctly
vars.put("setupDone", "true")

 

This code allocates shared variables to track monitoring results without encountering “race condition” issues caused by multi-threaded execution. For this reason, the shared variables are all of type AtomicInteger.
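To illustrate why AtomicInteger matters here, the following standalone Java sketch (class name and thread counts are my own) increments one shared counter from several threads, just like the script does with the "total"/"pass"/"fail" properties, and always obtains the exact total:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SharedCounterDemo {
    // Increment one shared counter from several concurrent threads.
    static int countWithThreads(int threads, int incrementsPerThread) {
        AtomicInteger counter = new AtomicInteger(0);
        Thread[] pool = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            pool[i] = new Thread(() -> {
                for (int j = 0; j < incrementsPerThread; j++) {
                    counter.getAndIncrement(); // atomic read-modify-write: no lost updates
                }
            });
            pool[i].start();
        }
        for (Thread t : pool) {
            try {
                t.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return counter.get();
    }

    public static void main(String[] args) {
        // A plain int counter could lose updates under contention;
        // AtomicInteger always yields the exact total.
        System.out.println(countWithThreads(8, 100_000)); // prints 800000
    }
}
```

A plain `int` (or even a non-atomic `Integer` property) would make the final totals nondeterministic under load.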

 

Let’s consider a simple data subset composed of three columns:

  • Year - the year the movie was released
  • Title - the distribution name
  • PosterChk - the checksum of the poster image available online

 

Please note: this example focuses on a limited subset for demonstrative purposes (see here). In a real monitoring process, you will need to extract the dataset dynamically from a third-party system (e.g. a query to a DB, etc.).
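For illustration, the input file could look like the sketch below; the checksum values are placeholders, not real poster checksums, and the mapping from columns to the variables used in the script (year, title, poster_chk) is defined later in the “CSV Data Set Config” component:

```csv
Year,Title,PosterChk
1977,Star Wars,0f3e1a9c8b7d6e5f4a3b2c1d0e9f8a7b
1980,The Empire Strikes Back,7a6b5c4d3e2f1a0b9c8d7e6f5a4b3c2d
```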

 

8. Add the “tearDown Thread Group” component and configure it like in the image:

 

 

This configuration is used to ensure that only one thread is allocated. If an error is encountered, the entire JMeter script execution is stopped.

 

9. Add an “If Controller” component under the “tearDown Thread Group” with the configuration below.

 

 

Using the “stopDone” variable value, this component avoids double execution of the teardown part of the current script.

 

10. Add the “JSR223 Sampler” under the “If Controller”. Use the following code in Groovy:

 

import org.influxdb.InfluxDBFactory
import org.influxdb.InfluxDB
import org.influxdb.BatchOptions
import org.influxdb.dto.Point
import org.influxdb.dto.Query
import java.util.concurrent.TimeUnit

def stopTime = System.currentTimeMillis()

def serverURL = vars.get("influxdb_url")
def username = vars.get("influxdb_user")
def password = vars.get("influxdb_password")
def databaseName = vars.get("influxdb_db")

def influxDB = InfluxDBFactory.connect(serverURL, username, password)

// create the database if it does not exist yet, then select it
influxDB.query(new Query("CREATE DATABASE " + databaseName))
influxDB.setDatabase(databaseName)

def retentionPolicyName = "one_month_only"
def queryString = "CREATE RETENTION POLICY ${retentionPolicyName} ON ${databaseName} DURATION 4w REPLICATION 1 DEFAULT"
influxDB.query(new Query(queryString))
influxDB.setRetentionPolicy(retentionPolicyName)

// Enable batch writes to get better performance.
influxDB.enableBatch(BatchOptions.DEFAULTS)

def duration = stopTime - System.getProperties().get("setupTime")

// Write points to InfluxDB.
influxDB.write(Point.measurement("monitoring_omdb_api")
	.time(System.currentTimeMillis(), TimeUnit.MILLISECONDS)
	.tag("platform", "jmeter")
	.addField("total_test", System.getProperties().get("total").get())
	.addField("pass_test", System.getProperties().get("pass").get())
	.addField("fail_test", System.getProperties().get("fail").get())
	.addField("duration", duration)
	.build())

// track that tear down is completed correctly
vars.put("stopDone", "true")

 

The code above establishes a connection to the remote InfluxDB instance, creates the database and retention policy if needed, and publishes the results acquired by the monitoring core of the script.
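For reference, each execution appends a single point to the measurement; in InfluxDB line protocol it looks roughly like this (field values and timestamp are illustrative):

```text
monitoring_omdb_api,platform=jmeter total_test=25i,pass_test=24i,fail_test=1i,duration=61234i 1610361600000
```

One tag (platform), four integer fields, and a millisecond timestamp: a small, cheap write per monitoring run.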

 

11. Add a “Thread Group” component with the following configuration:

 

 

This component will allocate a fixed number of threads that run in an infinite loop without taking special action in case of an error (see “Continue” selected).

 

12. Add a “Constant Throughput Timer” component to the thread group with the following configuration:

 

 

This component throttles execution to a target throughput. Note that the Constant Throughput Timer expresses its target in samples per minute, so a limit of load_msg_sec requests per second corresponds to load_msg_sec × 60. The limitation applies to all threads of the current Thread Group.

 

13. Add a “CSV Data Set Config” component to the thread group with the following configuration:

 

 

This configuration is used to attach the input file “dumpFile” with monitoring data as a CSV file and split each row into variables allocated separately for each executed thread.

 

14. Add an “HTTP Request” component to the thread group with the following configuration:

 

 

This configuration enriches the default configuration with the “year” data from the CSV file. Each thread can now perform the customized HTTP Request with the dedicated data assigned via the “CSV Data Set Config” component.

 

15. Add two “JSON Extractor” components under the “HTTP Request” component. These two components will extract two strings from the JSON returned by the monitored system. Their configurations are described below:

 

 

 

The first one extracts the movie title from the JSON via a JSON Path expression. It is saved in a thread-local variable called “ret_title”.

 

 

The second one extracts the movie poster URL from the JSON and saves it in a thread-local variable called “ret_poster_url”.

 

16. Add the “JSR223 Assertion” component under the “HTTP Request” component with the following code in Groovy:

 

System.getProperties().get("total").getAndIncrement()

def ret_title = vars.get("ret_title")
def ret_poster_url = vars.get("ret_poster_url")

def exp_title = vars.get("title")
def exp_poster_chk = vars.get("poster_chk")

if (ret_title.equals("NOT_FOUND") || ret_poster_url.equals("NOT_FOUND")) {
    System.getProperties().get("fail").getAndIncrement()
    def errMsg = "There is missing data between title (${ret_title}) or poster url (${ret_poster_url})!"
    AssertionResult.setFailureMessage(errMsg)
    AssertionResult.setFailure(true)
} else if (!ret_title.equals(exp_title)) {
    // a title mismatch must also be reported as a failure
    System.getProperties().get("fail").getAndIncrement()
    AssertionResult.setFailureMessage("Mismatch expected title (${exp_title}) and obtained title (${ret_title})!")
    AssertionResult.setFailure(true)
} else {
    try {
        def content = ret_poster_url.toURL().getBytes()
        // digest the raw bytes: decoding them to a String first is charset-dependent
        def calculated_chk = content.md5()

        if (!calculated_chk.equals(exp_poster_chk)) {
            System.getProperties().get("fail").getAndIncrement()
            def errMsg = "Mismatch expected checksum (${exp_poster_chk}) and obtained checksum (${calculated_chk})!"
            AssertionResult.setFailureMessage(errMsg)
            AssertionResult.setFailure(true)
        } else {
            // all OK!
            System.getProperties().get("pass").getAndIncrement()
        }
    } catch (java.net.MalformedURLException ex) {
        // probably the attached URL is wrongly formatted
        System.getProperties().get("fail").getAndIncrement()
        AssertionResult.setFailureMessage("Obtained url is malformed -> ${ret_poster_url}")
        AssertionResult.setFailure(true)
    }
}

// just to log execution
log.info("#### Returned Title -> " + ret_title)
log.info("#### Returned Poster Url -> " + ret_poster_url)
 

This code:

  • validates the variables returned from the JSON payload
  • fetches the poster image and validates its checksum, comparing it with the value provided in the CSV
  • tracks the result of this iteration in the shared variables
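The checksum comparison can be reproduced outside JMeter; here is a minimal plain-Java sketch (class and method names are my own) that computes the same kind of hex-encoded MD5 over raw bytes as would be stored in the PosterChk column:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class PosterChecksum {
    // MD5 of the raw image bytes, hex-encoded, comparable to the PosterChk CSV column.
    static String md5Hex(byte[] content) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5").digest(content);
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b)); // two hex digits per byte
            return hex.toString();
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 is always available in the JDK
        }
    }

    public static void main(String[] args) {
        // well-known MD5 test vector
        System.out.println(md5Hex("abc".getBytes(StandardCharsets.UTF_8)));
        // prints 900150983cd24fb0d6963f7d28e17f72
    }
}
```

Digesting the downloaded bytes directly (rather than a decoded String) keeps the checksum stable for binary content such as images.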

 

17. Add “Aggregate Report” and “View Results Tree” listeners. It’s important to highlight:

  • “Aggregate Report” can be left active at all times, because it provides useful monitoring status information
  • “View Results Tree” should be used only for developing and debugging scripts; in production it must be deactivated to avoid unnecessary resource consumption

 

Here is the final structure of the monitoring script:

 

Running the JMeter Monitoring Script

 

When the monitoring script has finished running, the output data must be published to a reporting system. We will use a “quick & dirty” method for our testing: InfluxDB running in a container, started with the Docker command below:

 

docker run \
    -dit --rm \
    -p 8086:8086/tcp \
    -v <volume_folder>:/var/lib/influxdb \
    -v <volume_folder>/influxdb.conf:/etc/influxdb/influxdb.conf:ro \
    -e INFLUXDB_HTTP_AUTH_ENABLED=true \
    -e INFLUXDB_ADMIN_USER=admin \
    -e INFLUXDB_ADMIN_PASSWORD=admin \
    --name influxdb-dev \
    influxdb:1.8.1
 

 

Once we execute this command, we will have a local InfluxDB service ready to be the endpoint of the monitoring data produced by our JMeter script!

 

To visualize the recorded monitoring data, I propose using Grafana, also run as a Docker container:

 

docker run \
    -dit --rm \
    -p 3000:3000/tcp \
    -v <grafana_volume>:/var/lib/grafana \
    --name grafana-dev \
    grafana/grafana:7.3.0-ubuntu 
 

JMeter Monitoring Report Results

The graphs are ready!

 

The execution results are published to InfluxDB by the “tearDown Thread Group” and the Groovy code inside its “JSR223 Sampler”. Each execution publishes a single data point to the defined measurement.

 

This dataset contains the:

  • Execution timestamp
  • Execution duration
  • Executed tests (number of rows in input table) 
  • Number of passed tests
  • Number of failed tests

 

All this data can be aggregated in Grafana in at least two graphs.

 

This graph shows monitoring trends over time. It reports the percentage of passed/failed tests over time:

 

 

This graph shows monitoring duration trends over time. It is useful for tracking unexpected execution time behaviour (e.g. the monitored system works correctly but response time has increased).

 

Scheduling and Aggregating Results

 

Now that you have the monitoring in place, it’s important to schedule monitoring iterations for ongoing and continuous testing. You can use a simple crontab configuration or a more sophisticated tool like Jenkins.
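As a sketch, a crontab entry that launches the script every 15 minutes in JMeter non-GUI mode could look like this (the installation and file paths are assumptions):

```shell
# run the monitoring script every 15 minutes in non-GUI mode
# (note: % must be escaped as \% inside a crontab line)
*/15 * * * * /opt/jmeter/bin/jmeter -n -t /opt/monitoring/omdb_monitor.jmx -l /opt/monitoring/results/run_$(date +\%Y\%m\%d\%H\%M).jtl
```

A tool like Jenkins adds retries, notifications and history on top of this basic scheduling.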

 

There are three technical aspects to take into account when defining the scheduling strategy:

  1. Time resolution - the time frame from the start of one iteration to the start of the following iteration. This factor depends on the application under test; it can even be imposed by a legal agreement for technical support (e.g. detecting when an ATM loses account information). A single iteration must complete within this time, otherwise consecutive iterations will disturb each other.
  2. Time occupation - the average duration of a monitoring iteration. This can depend on many factors, like the tool (e.g. JMeter), the technology (e.g. Java, Groovy) or the implementation (e.g. an existing component versus a customized one). Increasing machine performance will not always decrease the time occupation.
  3. Time variance - the estimated variance around the iterations’ average duration. This is important to ensure monitoring correctness, and it is not simple to define. Typically the best approach is to rely on historical data from existing applications under test. When no history is available, plan a monitoring session with continuous refinement and log analysis.

 

Application monitoring alerts developers when a production service is failing, together with information that helps detect why.

 

This blog post describes how JMeter can be used as part of an automatic monitoring chain, thanks to its flexibility and scalability, which are applicable to various requirements. It also shows that JMeter cannot cover every monitoring need, and that integrations with other tools are required (e.g. crontab for scheduling, InfluxDB for aggregated reporting, etc.). Thanks to Groovy, a JMeter integration with an external service can be handled in the proper manner to answer monitoring requirements.

 

After you complete building your JMeter script, upload it to BlazeMeter for scalability and more advanced integrations. Sign up for free.

   