Your Ultimate Guide To Automated Visual Testing
January 3, 2024



This blog was originally published June 10, 2021, and has since been updated with relevant information.

Visual testing automatically compares the visible output of an app or website against a baseline image. It helps catch visual discrepancies before they reach production, since functional testing often misses them. This blog post explains why you should run visual tests and how to do it with BlazeMeter and Applitools.


What is Visual Testing?

Visual testing is a software testing method that verifies the appearance and visual behavior of an application's user interface (UI) or graphical user interface (GUI). Also known as snapshot testing, visual testing in its most basic form compares a screenshot against a baseline image and flags pixel-level variations.

Visual testing helps identify visual defects that traditional functional testing cannot catch, preventing them from slipping into production. Visual testing can also cover functional scenarios that are rendered on-screen, which reduces both test creation time and test case maintenance.

Modern approaches incorporate artificial intelligence, known as Visual AI, to evaluate the UI as a human eye would and avoid false positives.
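
To make the basic idea concrete, here is a minimal sketch of a raw pixel comparison in Java. This is only an illustration of the naive approach, not how any particular vendor implements it:

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class PixelDiff {

    // Counts how many pixels differ between a stored baseline and a new checkpoint screenshot.
    public static long countDifferingPixels(File baselineFile, File checkpointFile) throws Exception {
        BufferedImage baseline = ImageIO.read(baselineFile);
        BufferedImage checkpoint = ImageIO.read(checkpointFile);

        // Different dimensions are treated as a complete mismatch.
        if (baseline.getWidth() != checkpoint.getWidth() || baseline.getHeight() != checkpoint.getHeight()) {
            return Long.MAX_VALUE;
        }

        long differingPixels = 0;
        for (int y = 0; y < baseline.getHeight(); y++) {
            for (int x = 0; x < baseline.getWidth(); x++) {
                if (baseline.getRGB(x, y) != checkpoint.getRGB(x, y)) {
                    differingPixels++;
                }
            }
        }
        // A naive visual test would fail when this count exceeds some tolerance threshold.
        return differingPixels;
    }
}

The weakness of this naive approach is that anti-aliasing, font rendering, and dynamic content produce pixel-level noise, which is exactly the false-positive problem Visual AI is designed to avoid.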


Why We Need Visual Testing

Have you ever found a product, added it to your cart, and then hit a UI defect during checkout that prevented you from completing your order? Take the following example:
 

visual testing example

Back when we were traveling, one of my colleagues was trying to book a flight. As they were trying to continue from this screen, they struggled to click the “continue” button due to the UI defect. If you experienced this situation, what would you do?  

  • Would you try a different browser?  
  • Would you try on a different device, maybe mobile?  
  • What if this same defect exists on mobile as well? 
  • What happens if that flight was no longer available as you try to work around this defect? 
  • Would you then try a different airline? 
  • How much revenue do you think a slipped bug like this costs organizations every year?  

The answers to these questions all depend on your technical acumen and your frustration level, and I would argue that even one dollar of lost revenue is too much.

If I experienced this situation, I would probably try a different browser. I might even resize the browser, and then move on if none of these steps worked around the issue. I would also screenshot the behavior and send it to the company so they are aware. And I will tell you: once you start finding these issues, you tend to notice them more and more, and it can become a fun game.

Now back to the issue. I think we all know that major airlines have automation in place and that this scenario was functionally tested many times. In their Selenium script, I am sure they used assertions to validate that key items like “Total Due Now”, the “Continue” button, and the “Terms and Conditions” were present, among others. And you know what? Those assertions would have passed.

Could they have used additional assertions to find the issue above? The answer is they could, but how much additional logic would they need to catch the two elements overlapping? What if the screen size changes? Depending on how they identify the selectors for the elements - what happens if those selectors change? How much maintenance overhead are they going to add by trying to handle this potential behavior? Is it worth it? 
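
To illustrate the overhead, here is a hedged sketch of the kind of extra logic a purely functional Selenium test would need just to detect two overlapping elements; the elements passed in are hypothetical, and every such check has to be written, wired into assertions, and maintained by hand:

import org.openqa.selenium.Rectangle;
import org.openqa.selenium.WebElement;

public class OverlapCheck {

    // Returns true if the two elements' on-screen bounding boxes intersect.
    public static boolean overlaps(WebElement first, WebElement second) {
        Rectangle a = first.getRect();
        Rectangle b = second.getRect();

        boolean separatedHorizontally =
                a.getX() + a.getWidth() <= b.getX() || b.getX() + b.getWidth() <= a.getX();
        boolean separatedVertically =
                a.getY() + a.getHeight() <= b.getY() || b.getY() + b.getHeight() <= a.getY();

        return !(separatedHorizontally || separatedVertically);
    }
}

And even this only covers overlap between two specific elements at one viewport size; it says nothing about missing images, wrong fonts, broken alignment, or any pair of elements you did not think to check.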

With the importance of the digital experience today, it is absolutely worth it: a UI bug like this directly impacts revenue. But how do you find these types of issues without adding a ton of overhead or deploying an army of manual testers? That is where visual testing comes into play.


Visual Testing vs Functional Testing

When it comes to visual testing, the appearance and behavior of the UI is king. Visual issues such as rendering problems, misalignment, broken layout, font changes, and overlapping elements are all things that functional testing cannot identify; catching them requires comparing what is actually drawn on screen, pixel by pixel or with something smarter, against a baseline.

Think of the two types of testing as the what (visual) and the how (functional).

Functional testing is designed to check whether the software works as needed. This method focuses on verifying that the app behaves according to specific functional requirements and meets business needs. Functional testing validates the software's capabilities, features, and communication with various components.


When Should I Use Visual Testing?

Visual testing should be integrated into your existing test scripts and thus your existing CI/CD process. Visual testing helps ensure the UI of your app is working as intended for your users. Several benefits of visual testing include:

  • Detecting UI variations that do not match the baseline snapshots.
  • Finding issues or bugs within the UI.
  • Identifying visual defects on different browsers.
  • Creating specific use cases to test functionality.

When you implement Applitools, you simply augment your existing test cases with a single line / snippet of test code – and let the AI do the rest. 

Why BlazeMeter and Applitools Together?

BlazeMeter provides a continuous testing platform that enables your teams to test performance, APIs, and the functional correctness of your front end. By combining Selenium UI testing with load and performance testing (e.g., JMeter load testing), you see exactly what your users see when your site gets heavy traffic. Reporting is designed for team collaboration, so your team can shift left together. If we add Applitools to the Selenium functional test, we can not only verify the functional correctness of the UI, but also ensure that the UI is visually perfect before releasing to production. And all of this happens in a single test execution, without the time-consuming and error-prone manual testing of the UI.


Running Functional and Visual Tests with BlazeMeter and Applitools

So now that we have learned a little about visual testing, let us see how we can take what we learned and use it in our functional automation. The following diagram depicts how the process flow changes when adding visual testing into our current functional tests:

Diagram of adding visual testing into our current functional tests

1: Testers run the functional test suite in BlazeMeter and the code typically repeats the following steps for multiple application states:

2.1: Simulate user actions (e.g. mouse click, keyboard entry) by using a driver such as Selenium

2.2: Call an Eyes SDK API to perform a visual checkpoint

2.2a: Eyes SDK uses the Driver to obtain the screenshot

2.2b: Eyes SDK then sends the image to the Eyes Server where it, and the other checkpoint images, are compared to the baseline images previously stored on the Server

3: After the images in the test have been processed, the Eyes Server replies with information such as whether any differences were found and a link to the Eyes site where the results can be viewed.

4: Testers use the Eyes Test Manager to view the test results, update the baselines, mark bugs and annotate regions that need special handling. After having viewed all of the results, testers save the baseline which then becomes the basis for comparison in the next test run.

Ok great, now that we understand the flow of events, let’s do it for real:

Executing a Functional Test in BlazeMeter

First, let's start by executing one of our existing BlazeMeter functional tests:

Here is our example Selenium script:

sample Selenium script

This is a simple test that navigates to the login page and asserts on the key items needed to functionally validate its operation. It then clicks the “Sign-In” button without entering any credentials, in order to validate the message that alerts the user to this behavior.
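
Since the script itself appears above as a screenshot, here is a minimal sketch of what such a test typically looks like, assuming JUnit 4 and a RemoteWebDriver pointed at a BlazeMeter-provided grid; the grid URL, application URL, and selectors below are placeholders, not the actual demo code:

import static org.junit.Assert.assertTrue;

import java.net.URL;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;

public class DemoGridTestTraditional {

    private static WebDriver driver;

    @BeforeClass
    public static void setUp() throws Exception {
        // Connect to the remote grid that BlazeMeter provides (placeholder endpoint).
        driver = new RemoteWebDriver(new URL("https://your-grid-endpoint.example.com/wd/hub"), new ChromeOptions());
    }

    @Test
    public void loginPageFunctionalCheck() {
        driver.get("https://example.com/login");  // placeholder application URL

        // Functional assertions on the key items of the login page.
        assertTrue(driver.findElement(By.id("username")).isDisplayed());
        assertTrue(driver.findElement(By.id("password")).isDisplayed());
        assertTrue(driver.findElement(By.id("signin")).isDisplayed());

        // Click "Sign-In" with no credentials and validate that a message alerts the user.
        driver.findElement(By.id("signin")).click();
        assertTrue(driver.findElement(By.className("alert")).getText().length() > 0);
    }

    @AfterClass
    public static void tearDown() {
        driver.quit();
    }
}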

Let’s run the test:

mvn -Dtest=DemoGridTestTraditional test

After running the traditional functional test, you can see the results in the BlazeMeter dashboard. The summary view shows the status, the number of scenarios and iterations, the duration, the location the test was executed from, and the browser that was used.

BlazeMeter reporting dashboard

If you click on the “Details” view, you see the specific steps, the duration of each step, and the recorded video of the scenario. The key takeaway is the result status of “passed”: all of the coded assertions used to functionally validate the application were successful.

Functional validation in BlazeMeter

Benefits of Running Functional Tests in BlazeMeter

Using BlazeMeter for Functional automation provides the following benefits:

  • Test your front end under load in the cloud, and scale up to 2 million virtual users. 
  • See combined reporting and quickly pinpoint problems. 
  • Simplicity - with BlazeMeter's SaaS solution, you'll be testing in minutes.
  • Save time and maintenance on complex scripting: you can easily record a Selenium script directly in your browser with the BlazeMeter Chrome Extension.
  • Democratize your functional testing to enable your developers to shift testing left with Taurus.

Integrating Applitools Visual Testing

Now that we have successfully executed our BlazeMeter functional test and it passed, let's run this same scenario but add visual validation on top of the functional validation of the application.

First, we need to add the Applitools SDK to our project:

Applitools visual testing SDK
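
The SDK reference appears above as a screenshot; in a Maven project like this one, the addition is a dependency along these lines (the artifact shown is the Applitools Eyes SDK for Selenium in Java; check Maven Central or the Applitools documentation for the current artifact name and release):

<!-- pom.xml: Applitools Eyes SDK for Selenium (Java). -->
<dependency>
    <groupId>com.applitools</groupId>
    <artifactId>eyes-selenium-java3</artifactId>
    <version>x.y.z</version>  <!-- replace with the current release -->
</dependency>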

After successfully importing the SDK, we next need to import the eyes libraries into our script so that we can use the appropriate eyes methods in our script.

Imports:

Import the following classes from the SDK; they will be used during the execution of your functional test.

Applitools import
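
The import block appears above as a screenshot; with the Java Eyes SDK it typically looks like the following (exact packages can vary slightly between SDK versions):

import com.applitools.eyes.BatchInfo;
import com.applitools.eyes.RectangleSize;
import com.applitools.eyes.selenium.Eyes;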

Now that we have imported the eyes libraries, we need to use the appropriate methods to configure eyes to be leveraged during our test in the BeforeClass hook.

@BeforeClass

The following lines initialize the Eyes object, authenticate to the Applitools platform using the API key, and name the test report inside Applitools using the set batch method. Get more information on obtaining your Applitools API key.

Applitools API Key Visit
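
The hook itself is shown above as a screenshot; a minimal sketch of it, assuming the Java Eyes SDK, an API key stored in an environment variable, and an illustrative batch name:

private static Eyes eyes;

@BeforeClass
public static void setUpEyes() {
    eyes = new Eyes();
    // Authenticate to the Applitools platform with your API key.
    eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));
    // Name the report ("batch") that groups these results in the Applitools dashboard.
    eyes.setBatch(new BatchInfo("BlazeMeter + Applitools Demo"));
}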

Now that we have configured Eyes for our test, we can use the appropriate methods in our JUnit test case to take a screenshot, send it to the Applitools platform, and get the test results back.

@Test

For the test, we keep all the same Selenium commands to navigate through the application with the following additional commands:

  • eyes.open – opens a connection to tell Applitools this is a test for a given application.
  • eyes.checkWindow – takes a full-page screenshot of the current state of the UI.
  • eyes.close – closes the connection to Eyes and gets the results.

Selenium commands for visual test
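
The modified test appears above as a screenshot; here is a minimal sketch of its shape, reusing the driver and eyes fields set up earlier and the same placeholder URL and selectors as before:

@Test
public void loginPageVisualCheck() {
    // Tell Applitools which application and test this execution belongs to, plus the viewport size.
    eyes.open(driver, "BlazeMeter Demo App", "DemoGridVisual", new RectangleSize(1024, 768));

    driver.get("https://example.com/login");          // placeholder application URL
    eyes.checkWindow("Login Page");                   // visual checkpoint replaces the functional assertions

    driver.findElement(By.id("signin")).click();      // placeholder selector
    eyes.checkWindow("Login Page - no credentials");  // second visual checkpoint

    // Close the Eyes session; this throws if visual differences were detected.
    eyes.close();
}

@AfterClass
public static void tearDownEyes() {
    driver.quit();
    // Ends the Eyes session cleanly if eyes.close() was never reached (for example, on an exception).
    eyes.abortIfNotClosed();
}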

Notice that all of the functional assertions previously used to validate the functionality were replaced by a single visual assertion (eyes.checkWindow). This is another benefit of the visual validation approach: it reduces test creation time and maintenance, and it can increase the stability of your functional tests.

Now that we modified our test to perform visual and functional validation, let's run it.  

mvn -Dtest=DemoGridVisual test

(A different test script was used for demonstration purposes only; the full test code is available here.)

The test will run in BlazeMeter just as it did before, but this time, as it navigates to the various screens of the app, the Eyes SDK takes screenshots and uploads them to the Applitools platform for comparison. In our example we will have two screenshots: one of the login page and one of the result after clicking the “Sign-In” button without entering credentials.

The first time the test runs with Applitools visual testing integrated, it will create the baseline for the application/test/browser/OS/viewport combination.  Here is what that looks like in the Applitools Dashboard:

Applitools dashboard

You can see that the test has a status of “New”, indicating that a new test has come into Applitools. If you drill into a step, you will see that the thumbs up is automatically selected and the result is green for “passed”. This image will now be the baseline for comparison moving forward.

baseline image for visual testing

Great job! We have successfully integrated visual validation into our BlazeMeter functional test and now have a baseline for our scenario!  

Now to show the true power of Applitools, let's run that same test again now that development has released a new build of the application.

mvn -Dtest=DemoGridVisual test

 

The test runs again in BlazeMeter, this time against the current build of the application. When we look at the results this time, we notice that the test has now failed.  

Failed test in BlazeMeter

If we click on the “Details” view to triage the issue further, we can see that the standard assertions still passed; if they had not, BlazeMeter would have highlighted those failures. But if we look closer at the error, it shows that the test failed due to visual differences.

 

Visual differences as reason why the test failed

Wait, development released a new version and all of our coded assertions passed, but the test failed visually? How is that possible? Let's dig in by clicking the Applitools URL shown in the exception:

When we click on the link, it takes us to the Applitools result for that specific test execution from BlazeMeter.  

Applitools URL for BlazeMeter execution

If we click on the test, we will see which specific screen(s) have visual differences. Here we notice that we have two steps, one for the Login Page, and one for the result of the Login Page after clicking the “Sign-In” button without entering credentials. In our case both steps show differences.

Screens with visual differences in Applitools

Now let us click on the first step (Login Page) to see why it failed visual validation.

Login page

The dashboard shows a side-by-side view of the expected result (baseline) versus the current result (checkpoint), with all of the visual differences highlighted in purple. The differences that were detected were:

  • Missing image at the top.
  • A change to the placeholder text in the username field.
  • A new feature was added for the user to specify if they use the device often.
  • Changes to the logos used for the social media links.

If we continue to the second step, we can see the same differences, but we can also see that the message shown when clicking with no credentials overlaps the “Login Form” text.

Differences in login form with no credentials

In Applitools, we can even triage the individual differences by clicking on the root cause analysis (RCA) button. Selecting RCA lets us identify what changed in the DOM/CSS for each detected difference.

Changes in DOM/CSS

In the details of the RCA analysis, we can see that the placeholder text was changed from “Enter your username” to “username@email.com”. This capability not only enables faster feedback to development when changes impact the UI, but also saves developers time by not having to reproduce the issue. Now that is powerful stuff!

If these changes were all anticipated due to a new feature, all we need to do is click the thumbs up button and the current checkpoint becomes the baseline moving forward. If the detected differences were actual defects, we would click thumbs down to reject the test and share the result with the developers to get the issues resolved.

Thumbs up-thumbs down function to differentiate between an anticipated update or a visual testing defect.

Bottom Line

Congratulations, you have now performed functional and visual testing with BlazeMeter and Applitools! If you think through this exercise, I am sure you have questions, but I would like to ask just one: why were these differences not detected by our existing functional test? The answer is that our functional tests did not expect, or “assert” against, these changes in the application.

With Applitools you can now detect changes (both intended and unintended) in your application UI while executing your functional automation. I hope this article helps you leverage the power of BlazeMeter Functional and Applitools to provide an automated functional and visual testing solution that will help you build and release visually perfect applications confidently and at a faster rate.

What’s Next:

To get started with BlazeMeter functional testing, you can sign up for a free account.

START TESTING NOW

 
