Martin Kowalewski is a Sr. Solution Architect at Applitools. He has over 15 years of experience in the tech industry across a variety of disciplines. The last 7 years of his career have been focused on functional, performance, and API testing within the DevOps ecosystem.

Jun 10 2021

Your Ultimate Guide To Automated Functional and Visual Testing

Visual testing automatically compares the visible output of an app or website against a baseline image. It helps catch visual defects that functional testing often misses before they reach production. This blog post will explain why you should run visual tests, and how to do it with BlazeMeter and Applitools.

Why We Need Visual Testing

 

Have you ever experienced a situation where you find a product, add it to your cart, then as you try to complete the ordering process you hit a UI defect that prevents you from completing your order? Take the following example:





Back when we were traveling, one of my colleagues was trying to book a flight. As they were trying to continue from this screen, they struggled to click the “continue” button due to the UI defect. If you experienced this situation, what would you do?  

 

  • Would you try a different browser?  
  • Would you try on a different device, maybe mobile?  
  • What if this same defect exists on mobile as well? 
  • What happens if that flight was no longer available as you try to work around this defect? 
  • Would you then try a different airline? 
  • How much revenue do you think a slipped bug like this costs organizations every year?  

 

The answers to these questions all depend on your technical acumen and your frustration level, and I would argue that one dollar of lost revenue is too much.

 

If I experienced this situation, I would probably try a different browser. I may even resize the browser, and then move on if none of these steps work around the issue. I would also screenshot the behavior and send it to the company so they are aware. I will tell you: once you find these items, you tend to notice them more and more, and it can become a fun game.

 

Now back to the issue. I think we all know that major airlines have automation in place, and this scenario was functionally tested many times. In their Selenium script, I am sure they used assertions to validate that key items like the “Total Due Now”, the “Continue” button, and the “Terms and Conditions” were present, among others. And you know what? Those assertions would have passed.

 

Could they have used additional assertions to find the issue above? The answer is they could, but how much additional logic would they need to catch the two elements overlapping? What if the screen size changes? Depending on how they identify the selectors for the elements - what happens if those selectors change? How much maintenance overhead are they going to add by trying to handle this potential behavior? Is it worth it? 

 

With the importance of the digital experience today, it is absolutely worth it, as the UI bug would impact revenue. But how do you find these types of issues without adding a ton of overhead or using an army of manual testers? That is where visual testing comes into play.

What is Visual Testing?

Visual testing is the automated process of comparing the visible output of an app or website against a baseline image. In its most basic form, visual testing, sometimes referred to as snapshot testing, compares differences in an image by looking at pixel variations.

Modern approaches incorporate artificial intelligence, known as Visual AI, to view the page as a human eye would and avoid false positives.

 

Why Should I Perform Visual Testing?

Visual testing helps identify visual defects that traditional functional testing cannot catch to prevent them from slipping into production. Visual testing also can test for functional scenarios that are rendered on-screen. This can reduce the time it takes to create tests, as well as reduce test case maintenance.

When Should I Use Visual Testing?

Visual testing should be integrated into your existing test scripts and thus your existing CI/CD process. When you implement Applitools, you simply augment your existing test cases with a single line / snippet of test code – and let the AI do the rest. 

 

Why BlazeMeter and Applitools Together?

BlazeMeter provides a continuous testing platform that enables your teams to test performance, APIs, and your front end for functional correctness. By combining Selenium GUI functional testing with load and performance testing (e.g. JMeter load testing), you see exactly what your users see when your site gets heavy traffic. Reporting is designed for team collaboration, so your team can shift left together. If we add Applitools to the Selenium functional test, you will not only be able to verify the functional correctness of the UI, but also ensure that the UI is visually perfect prior to releasing to production. Not to mention, this is done with a single test execution and without manual testing of the UI, which is time consuming and error prone.

Running Functional and Visual Tests with BlazeMeter and Applitools

So now that we have learned a little about visual testing, let us see how we can take what we learned and use it in our functional automation. The following diagram depicts how the process flow changes when adding visual testing into our current functional tests:


 

 

1: Testers run the functional test suite in BlazeMeter. The code typically repeats the following steps for multiple application states:

2.1: Simulate user actions (e.g. mouse click, keyboard entry) by using a driver such as Selenium

2.2: Call an Eyes SDK API to perform a visual checkpoint

2.2a: Eyes SDK uses the Driver to obtain the screenshot

2.2b: Eyes SDK then sends the image to the Eyes Server where it, and the other checkpoint images, are compared to the baseline images previously stored on the Server

3: After the images in the test have been processed, the Eyes Server replies with information such as whether any differences were found and a link to the Eyes site where the results can be viewed.

4: Testers use the Eyes Test Manager to view the test results, update the baselines, mark bugs and annotate regions that need special handling. After having viewed all of the results, testers save the baseline which then becomes the basis for comparison in the next test run.

 

Ok great, now that we understand the flow of events, let’s do it for real:

 

Executing a Functional Test in BlazeMeter:

 

First, let's start by executing one of our existing BlazeMeter functional tests:

 

Here is our example Selenium Script:
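The script itself appears as a screenshot in the published post. A minimal Java sketch along the same lines (assuming JUnit 4 and a local ChromeDriver; the URL, element locators, and class name are illustrative):

```java
import static org.junit.Assert.assertTrue;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class DemoGridTestTraditional {

    private WebDriver driver;

    @Before
    public void setUp() {
        driver = new ChromeDriver();
    }

    @Test
    public void loginPageFunctionalTest() {
        // Navigate to the login page (URL is illustrative)
        driver.get("https://demo.applitools.com/");

        // Assert on the key elements needed to validate the page (locators are illustrative)
        assertTrue(driver.findElement(By.id("username")).isDisplayed());
        assertTrue(driver.findElement(By.id("password")).isDisplayed());
        assertTrue(driver.findElement(By.id("log-in")).isDisplayed());

        // Click "Sign-In" without credentials and validate the alert message
        driver.findElement(By.id("log-in")).click();
        assertTrue(driver.findElement(By.className("alert-warning")).isDisplayed());
    }

    @After
    public void tearDown() {
        driver.quit();
    }
}
```

This sketch requires a running browser, so treat it as a shape for the test rather than a drop-in script.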

 

 

This is a simple test that navigates to the login page and asserts on the key items needed to functionally validate its operation. It then clicks on the “Sign-In” button without entering any credentials, in order to validate the message a user receives to alert them of the behavior.

 

Let’s run the test:

mvn -Dtest=DemoGridTestTraditional test

 

 

After running the traditional functional test, you can see the results in the BlazeMeter dashboard. The summary view shows the status, number of scenarios, iterations, duration, the location the test executed from, and the browser used.

 

 

If you click on the “Details” view, you see the specific steps, the duration of each step, and the recorded video of the scenario. The key takeaway is the result status of “passed”: all of the coded assertions used to functionally validate the application were successful.

 

 

 

Benefits of running Functional Tests in BlazeMeter

Using BlazeMeter for Functional automation provides the following benefits:

  • Test your front end under load in the cloud, and scale up to 2 million virtual users. 
  • See combined reporting and quickly pinpoint problems. 
  • Simplicity - with BlazeMeter's SaaS solution, you'll be testing in minutes.
  • Save time and maintenance on complex scripting - you can easily record a Selenium script directly in your browser with the BlazeMeter Chrome Extension.
  • Democratize your functional testing to enable your developers to shift testing left with Taurus.

Integrating Applitools into Our Existing Functional Tests

Now that we have successfully executed our BlazeMeter functional test and it passed, let's run this same scenario but add visual validation on top of the functional validation of the application.

 

First, we need to add the Applitools SDK to our project:
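For a Maven project, that means adding the Eyes Selenium dependency to the pom.xml. The coordinates below are a sketch; check Maven Central for the current artifact name and version:

```xml
<!-- Applitools Eyes SDK for Selenium Java (artifact name and version
     are illustrative; check Maven Central for the current release) -->
<dependency>
    <groupId>com.applitools</groupId>
    <artifactId>eyes-selenium-java3</artifactId>
    <version>3.24.0</version>
</dependency>
```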

 

After successfully importing the SDK, we next need to import the Eyes libraries into our script so that we can use the appropriate Eyes methods.

Imports:

Import the following methods from the SDK that will be used in the execution of your functional test.
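With the Java Eyes SDK, the imports would look something like this (exact packages may vary by SDK version):

```java
import com.applitools.eyes.BatchInfo;
import com.applitools.eyes.RectangleSize;
import com.applitools.eyes.selenium.Eyes;
```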

 

Now that we have imported the eyes libraries, we need to use the appropriate methods to configure eyes to be leveraged during our test in the BeforeClass hook.

 

@BeforeClass

The following lines initialize the Eyes object, authenticate to the Applitools platform using the API key and name the test report inside of Applitools using the set batch method. For more information on obtaining your Applitools API key visit here.
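A sketch of that hook (the batch name is illustrative, and the API key is read from an environment variable rather than hard-coded):

```java
private static Eyes eyes;

@BeforeClass
public static void setUpEyes() {
    // Initialize the Eyes object
    eyes = new Eyes();
    // Authenticate to the Applitools platform with your API key
    eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));
    // Name the batch that the test results will be grouped under
    eyes.setBatch(new BatchInfo("BlazeMeter Visual Tests"));
}
```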


 

Now that we have configured eyes to be used in our test, we can now use the appropriate methods in our test case to take a screenshot, send it to the applitools platform and get the test results back.

@Test

For the test, we keep all the same Selenium commands to navigate through the application with the following additional commands:

  • eyes.open – opens a connection to tell Applitools this is a test for a given application.
  • eyes.checkWindow – takes a full-page screenshot of the current state of the UI.
  • eyes.close – closes the connection to Eyes and gets the results.
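Putting those commands together, the visual version of the test might look like this sketch (app name, test name, viewport size, URL, and locators are all illustrative):

```java
@Test
public void loginPageVisualTest() {
    // Start the visual test; eyes.open returns a wrapped driver
    driver = eyes.open(driver, "Demo App", "Login Page Visual Test",
            new RectangleSize(1024, 768));

    driver.get("https://demo.applitools.com/");
    // A single visual checkpoint replaces the individual element assertions
    eyes.checkWindow("Login Page");

    // Click "Sign-In" without credentials, then take a second checkpoint
    driver.findElement(By.id("log-in")).click();
    eyes.checkWindow("Login Page - no credentials");

    // Close the test; this throws an exception if visual differences were found
    eyes.close();
}
```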

 

 

Notice that all of the functional assertions that were previously used to validate the functionality were replaced by a single visual assertion (eyes.checkWindow). This is another benefit of the visual validation approach that helps reduce test creation time and maintenance and can increase the stability of your functional tests.

 

Now that we modified our test to perform visual and functional validation, let's run it.  

 

mvn -Dtest=DemoGridVisual test

(a different test script was used for demonstration purposes only; the full test code is available here).

 

The test will run in BlazeMeter just as it previously did, but this time, as it navigates to the various screens of the app, the Eyes SDK will take screenshots and upload them to the Applitools platform for comparison. In our example we will have two screenshots: one of the login page, and one after clicking on the log-in button.

 

The first time the test runs with Applitools integrated, it will create the baseline for the application/test/browser/OS/viewport combination.  Here is what that looks like in the Applitools Dashboard:

 

 

 

You can see that the test has a status of “New”, because a new test has come into Applitools. If you drill into a step, you will see that the thumbs up is automatically selected, the result is green for “passed”, and this image will now be the baseline for comparison moving forward.

 




 

Great job! We have successfully integrated visual validation into our BlazeMeter functional test and now have a baseline for our scenario!  

 

Now to show the true power of Applitools, let’s run that same test again now that development has released a new build of the application.

 

mvn -Dtest=DemoGridVisual test

 

 

The test runs again in BlazeMeter, this time against the current build of the application. When we look at the results this time, we notice that the test has now failed.  

 

 

 

If we click on the “Details” view to triage the issue further, we can see the standard assertions still passed. If they did not, BlazeMeter would have highlighted those failures. But if we look closer at the error, it shows the test has failed due to visual differences.

 

 

 

Wait, development released a new version and all of our coded assertions passed but the test result failed visually?  How is that? Let’s dig in by clicking the Applitools URL shown in the exception:

 

When we click on the link, it takes us to the Applitools result for that specific test execution from BlazeMeter.  

 

If we click on the test, we will see which specific screen(s) have visual differences. Here we notice that we have two steps, one for the Login Page, and one for the result of the Login Page after clicking the “Sign-In” button without entering credentials. In our case both steps show differences.

 

Now let us click on the 1st step (Login Page) to see why it failed visual validation.  

 

 

The dashboard shows a side-by-side view of the expected result (baseline) vs current result (checkpoint). We see all the visual differences highlighted in purple. The differences that were detected were: 

  • A missing image at the top
  • A change to the placeholder text in the username field
  • A new feature added for the user to specify if they use the device often
  • Changes to the logos used for the social media links

 

If we continue to the second step, we can see the same differences, but in addition we also can see that the message when clicking with no credentials overlaps the “Login Form” text.  

 

 

 

In Applitools, we can even triage the individual diffs by clicking on the root cause analysis button. When we select RCA we can then identify what changed in the DOM/CSS for each detected difference. 

 

 

In the details of the RCA analysis, we can see that the placeholder text was changed from “Enter your username” to “username@email.com”. This capability not only enables faster feedback to development when changes impact the UI, but also saves the developer time by not having to reproduce the issue. Now that is powerful stuff!

 

 

If these changes were all anticipated due to a new feature, all we need to do is click the thumbs up button and the current checkpoint will now be the baseline moving forward. If these detected differences were actual defects we then would click thumbs down to reject the test and share the result with the developers to get the issues resolved.

 


 

Wrap Up

 

Congratulations, you have now performed visual and functional validation with BlazeMeter and Applitools! If you think through this exercise, I am sure you have questions, but I would like to ask just one: how come these differences were not detected by our existing functional test? The answer is that our functional tests did not expect, or “assert” on, the elements that changed. With Applitools you can now detect changes (both intended and unintended) in your application UI when executing your functional automation. With this article, I hope that you can now leverage the power of BlazeMeter Functional and Applitools to provide an automated functional and visual testing solution that will help you build and release visually perfect applications confidently and at a faster rate.

 

What’s Next:

To get started with BlazeMeter functional testing, you can Sign up for a free account. If you want to add visual testing to your existing functional tests, you can Sign up for a free account with Applitools, and feel free to clone this repository to try it out on your own.

 

   