Dmitri Tikhanski is a Contributing Writer to the BlazeMeter blog.

Sep 29 2021

Open Source Load Testing Tools 2021

BlazeMeter Blog readers might remember the post regarding Open Source Load Testing Tools, which highlighted the main features of the most outstanding performance testing tools in 2013. 

They were:

 

However, in the world of software development (and the associated testing) things change quite fast. New tools appear (like k6), old tools lose popularity, so now is probably a good time to revisit the open source performance tools list and see what the current situation is.

According to the Google Trends report, Apache JMeter is still the most popular tool, and there is growing interest in Locust and k6.

 

 

Open Source Load Testing Tools Feature Comparison Matrix

So let’s see what JMeter, Gatling, Locust, and k6 look like in 2021.

 

| Feature | JMeter | Gatling | Locust | k6 |
| --- | --- | --- | --- | --- |
| OS | Any | Any | Any | Any |
| GUI | Yes | Recorder only | No | No |
| Test Recorder | HTTP, Siebel, Mainframe, Citrix | HTTP | No | HTTP |
| Programming/Extension Language | Groovy (recommended) and any other language the JSR223 specification supports | Scala | Python | JavaScript |
| Load Reports | Console, HTML, CSV, XML | Console, HTML | Console, HTML | CSV, JSON |
| Protocols | HTTP, FTP, JDBC, SOAP, LDAP, TCP, JMS, SMTP, POP3, IMAP | HTTP, MQTT, JMS | HTTP | HTTP, gRPC |
| System under test monitoring | With plugin | No | No | No |
| Clustered mode | Yes | No | Yes | No |

 

Open Source Test Tools Throughput Comparison

Thanks to the Taurus automation framework, BlazeMeter now supports all of these tools, so we can compare their resource consumption using BlazeMeter engines and see how they behave.

In the previous blog post the load was 20 virtual users x 100,000 iterations. Given that Locust doesn’t support running a fixed number of iterations, let’s run 20 virtual users for 1 minute with each tool and see how many requests are executed and what the associated CPU and memory footprint in the BlazeMeter engine is.

Anyone with the BlazeMeter Free Tier will be able to replicate the test execution and get the results. 

The test is really not complex: a single HTTP GET request to a host with Apache HTTP Server installed, hitting the default landing page; the load testing tool repeats the request as fast as it can (a rough plain-Python sketch of this loop follows the list below). Every tool is run via Taurus, so the test consists of:

  1. The specific script for the load testing tool.
  2. The Taurus YAML configuration file, instructing BlazeMeter how to run the script.
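
For illustration only, here is a rough plain-Python sketch of the workload every tool generates, assuming just the standard library; the real tools add 20 concurrent users, timers, and reporting on top of this loop:

# Minimal sketch of the workload: one GET against the default landing page,
# repeated as fast as possible by a single "virtual user" for one minute.
import time
from urllib.request import urlopen

HOST = "http://129.159.202.229/"  # the Apache HTTP Server under test

end = time.time() + 60  # run for 1 minute
requests_done = 0
while time.time() < end:
    with urlopen(HOST, timeout=30) as response:
        response.read()  # download the default landing page
    requests_done += 1

print(f"Completed {requests_done} requests in 60 seconds")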

Apache JMeter

JMeter scripts are basically XML files, so the script body looks kind of scary:

 

<?xml version="1.0" encoding="UTF-8"?>
<jmeterTestPlan version="1.2" properties="5.0" jmeter="5.4.1">
  <hashTree>
    <TestPlan guiclass="TestPlanGui" testclass="TestPlan" testname="Test Plan" enabled="true">
      <stringProp name="TestPlan.comments"></stringProp>
      <boolProp name="TestPlan.functional_mode">false</boolProp>
      <boolProp name="TestPlan.tearDown_on_shutdown">true</boolProp>
      <boolProp name="TestPlan.serialize_threadgroups">false</boolProp>
      <elementProp name="TestPlan.user_defined_variables" elementType="Arguments" guiclass="ArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
        <collectionProp name="Arguments.arguments"/>
      </elementProp>
      <stringProp name="TestPlan.user_define_classpath"></stringProp>
    </TestPlan>
    <hashTree>
      <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Thread Group" enabled="true">
        <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
        <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
          <boolProp name="LoopController.continue_forever">false</boolProp>
          <stringProp name="LoopController.loops">5000</stringProp>
        </elementProp>
        <stringProp name="ThreadGroup.num_threads">20</stringProp>
        <stringProp name="ThreadGroup.ramp_time">1</stringProp>
        <boolProp name="ThreadGroup.scheduler">false</boolProp>
        <stringProp name="ThreadGroup.duration">60</stringProp>
        <stringProp name="ThreadGroup.delay"></stringProp>
        <boolProp name="ThreadGroup.same_user_on_next_iteration">true</boolProp>
      </ThreadGroup>
      <hashTree>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="HTTP Request" enabled="true">
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" enabled="true">
            <collectionProp name="Arguments.arguments"/>
          </elementProp>
          <stringProp name="HTTPSampler.domain">129.159.202.229</stringProp>
          <stringProp name="HTTPSampler.port"></stringProp>
          <stringProp name="HTTPSampler.protocol">http</stringProp>
          <stringProp name="HTTPSampler.contentEncoding"></stringProp>
          <stringProp name="HTTPSampler.path"></stringProp>
          <stringProp name="HTTPSampler.method">GET</stringProp>
          <boolProp name="HTTPSampler.follow_redirects">true</boolProp>
          <boolProp name="HTTPSampler.auto_redirects">false</boolProp>
          <boolProp name="HTTPSampler.use_keepalive">true</boolProp>
          <boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
          <stringProp name="HTTPSampler.embedded_url_re"></stringProp>
          <stringProp name="HTTPSampler.connect_timeout"></stringProp>
          <stringProp name="HTTPSampler.response_timeout"></stringProp>
        </HTTPSamplerProxy>
        <hashTree/>
      </hashTree>
    </hashTree>
  </hashTree>
</jmeterTestPlan>

 

However, you can also open the script in the JMeter GUI and it will make more sense:

 

 

Here is the declarative Taurus YAML configuration that overrides the values defined in the JMeter Thread Group:

 

---
execution:
- executor: jmeter
  concurrency: 20
  hold-for: 1m 
  scenario:
    script: jmeter-script.jmx
 
provisioning: cloud
 
modules:
  cloud:
    test: JMeter 
    report-name: JMeter 20 users for 1 minute
    project: Load Testing Tools 2021

 

Summary Page

 

Request Stats Page

 

 

Engine Health Page

 


 

Gatling

Gatling scripts are Scala source files, so they are somewhat more readable than JMeter’s XML.

 

import io.gatling.core.Predef._
import io.gatling.http.Predef._
 
import scala.concurrent.duration._
 
class Gatling extends Simulation {
  val httpProtocol = http 
    .baseUrl("http://129.159.202.229/") 
 
  val scn = scenario("BasicSimulation") 
    .exec(
      http("http://129.159.202.229/") 
        .get("/")
    ) 
 
  setUp(
    scn.inject(
      constantConcurrentUsers(20).during(60.seconds), 
    ).protocols(httpProtocol)
  )
}

 

Here is the relevant Taurus YAML; no overrides this time, it just tells Taurus what to run:

 

execution:
- executor: gatling
  scenario: gatling
 
scenarios:
  gatling:
    script: gatling-script.scala
    simulation: Gatling
 
provisioning: cloud
 
modules:
  cloud:
    test: Gatling 
    report-name: Gatling 20 users for 1 minute
    project: Load Testing Tools 2021 

 

Summary Page

 

 

Request Stats

 

 

Engine Health

 

 

Locust

 

Locust scripts are written in Python, so they are probably the easiest to read and understand.

 

from gevent import sleep
from re import findall, compile
from locust import HttpUser, TaskSet, task, constant
 
class UserBehaviour(TaskSet):
    @task(1)
    def generated_task(self):
        self.client.get(timeout=30.0, url="/")
 
 
class GeneratedSwarm(HttpUser):
    tasks = [UserBehaviour]
    host = "http://129.159.202.229/"
    wait_time = constant(0)

 

And again the associated Taurus YAML configuration:

 

execution:
- executor: locust
  concurrency: 20
  hold-for: 1m
  scenario: example
 
scenarios:
  example:
    default-address: http://129.159.202.229/
    script: locust-script.py
 
provisioning: cloud
 
modules:
  cloud:
    test: Locust 
    report-name: Locust 20 users for 1 minute
    project: Load Testing Tools 2021     

 

Summary Page

 

 

Request Stats Page

 

 

Engine Health Page

 

 

k6

 

k6 tests are written in JavaScript, so again our simple test is very small:

 

import http from 'k6/http'
 
export default function () {
  http.get('http://129.159.202.229') 
}

 

The Taurus YAML file is even bigger than the test script itself:

 

---
execution:
- executor: k6
  concurrency: 20
  hold-for: 1m 
  scenario: k6
 
scenarios:
  k6: 
    script: k6.js
 
provisioning: cloud
 
modules:
  cloud:
    test: k6 
    report-name: k6 20 users for 1 minute 
    project: Load Testing Tools 2021    

 

Unfortunately, as of now BlazeMeter doesn’t interpret k6 results very well, so the metrics will be obtained from the BlazeMeter Logs page.

 

 

          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io
 
  execution: local
     script: /tmp/artifacts/k6.js
     output: csv (/tmp/artifacts/kpi.csv)
 
  scenarios: (100.00%) 1 scenario, 20 max VUs, 1m30s max duration (incl. graceful stop):
           * default: 20 looping VUs for 1m0s (gracefulStop: 30s)
 
 
running (0m01.0s), 20/20 VUs, 140 complete and 0 interrupted iterations
default   [   2% ] 20 VUs  0m01.0s/1m0s
 
lines like above were removed to keep the log file short
 
running (1m01.1s), 00/20 VUs, 11049 complete and 0 interrupted iterations
default  [ 100% ] 20 VUs  1m0s
 
     data_received..................: 124 MB 2.0 MB/s
     data_sent......................: 895 kB 15 kB/s
     http_req_blocked...............: avg=1.16ms   min=1.34µs   med=2.93µs   max=111.57ms p(90)=6.31µs   p(95)=9.12µs  
     http_req_connecting............: avg=1.15ms   min=0s       med=0s       max=111.49ms p(90)=0s       p(95)=0s      
     http_req_duration..............: avg=109.13ms min=106.07ms med=106.84ms max=1.1s     p(90)=107.64ms p(95)=108.18ms
       { expected_response:true }...: avg=109.13ms min=106.07ms med=106.84ms max=1.1s     p(90)=107.64ms p(95)=108.18ms
     http_req_failed................: 0.00%   0           11049
     http_req_receiving.............: avg=201.07µs min=30.27µs  med=95.11µs  max=53.55ms  p(90)=247.15µs p(95)=482.01µs
     http_req_sending...............: avg=32.93µs  min=7.71µs   med=16.13µs  max=18.68ms  p(90)=39.59µs  p(95)=59.41µs 
     http_req_tls_handshaking.......: avg=0s       min=0s       med=0s       max=0s       p(90)=0s       p(95)=0s      
     http_req_waiting...............: avg=108.89ms min=105.98ms med=106.65ms max=1.1s     p(90)=107.41ms p(95)=107.86ms
     http_reqs......................: 11049  180.834133/s
     iteration_duration.............: avg=108.62ms min=106.15ms med=106.98ms max=244.45ms p(90)=107.82ms p(95)=108.55ms
     iterations.....................: 11049  180.834133/s
     vus............................: 20     min=20       max=20 
     vus_max........................: 20     min=20       max=20 
 

 

Test Tools Results Comparison

 

| Tool | Requests | Avg. Response Time (ms) | Bandwidth |
| --- | --- | --- | --- |
| JMeter | 5580 | 214 | 1019 |
| Gatling | 5573 | 213 | 1017 |
| Locust | 10544 | 112 | 1873 |
| k6 | 11049 | 109 | 2116 |

 

So far so good: we have two winners and two outsiders. However, I have one doubt regarding the bandwidth reported by Locust.

If we compare the Engine Health pages for the JMeter and Locust tests:

 

 

We see that the throughput reported by Locust is 3x higher than the throughput reported by BlazeMeter. In order to rule out possible BlazeMeter bugs, let’s look at the network metrics on the side of the system under test.
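
One way to capture those server-side numbers, assuming a Linux host and a network interface named eth0 (adjust to your environment), is to sample the kernel’s byte counters before and after the run, along the lines of this sketch:

# Rough sketch: sample received/transmitted bytes for an interface from
# /proc/net/dev before and after the test and derive the average throughput.
# Assumes a Linux system under test and an interface named "eth0".
import time

def read_bytes(interface="eth0"):
    with open("/proc/net/dev") as stats:
        for line in stats:
            if line.strip().startswith(interface + ":"):
                fields = line.split(":", 1)[1].split()
                return int(fields[0]), int(fields[8])  # rx_bytes, tx_bytes
    raise ValueError(f"Interface {interface} not found")

rx_before, tx_before = read_bytes()
time.sleep(60)  # the duration of the load test
rx_after, tx_after = read_bytes()

print(f"received: {(rx_after - rx_before) / 60 / 1024:.1f} KiB/s")
print(f"sent:     {(tx_after - tx_before) / 60 / 1024:.1f} KiB/s")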

 

 

I fail to see a good reason why Locust reports twice as many requests as JMeter yet receives half as many bytes of data. The k6 results are in line with the system under test metrics. It is yet more evidence that you need to pay attention to literally everything and not focus only on the KPIs your load testing tool reports, as they never tell the full story.

So the winner seems to be k6. However, I have one obvious question: how come the response time is half as long, given that JMeter and Gatling are sending the same requests as k6 does? You’re welcome to share your thoughts in the comments section below.
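
As a starting point for that discussion, the request counts and response times are at least internally consistent: with a fixed pool of 20 virtual users and no think time, the achievable request rate is roughly concurrency divided by average response time. A quick back-of-the-envelope check using the numbers from the table above (a sketch only, assuming the response times are averages in milliseconds):

# Sanity check: with 20 users and no think time, expected requests per minute
# is roughly concurrency / avg_response_time * duration (a Little's Law style estimate).
CONCURRENCY = 20
DURATION_S = 60

results = {  # tool: (requests reported, avg response time in ms)
    "JMeter": (5580, 214),
    "Gatling": (5573, 213),
    "Locust": (10544, 112),
    "k6": (11049, 109),
}

for tool, (requests, resp_ms) in results.items():
    expected = CONCURRENCY / (resp_ms / 1000) * DURATION_S
    print(f"{tool:8s} reported {requests:6d}, expected ~{expected:6.0f}")

If this estimate holds, the lower request counts for JMeter and Gatling are simply the flip side of their longer measured response times, which brings us back to the question of why the measured response time differs in the first place.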

You can run all of these test scripts in BlazeMeter to achieve scalability and advanced reporting. Start now.

 

   