As BlazeMeter’s CEO and founder, Alon Girmonsky brings over 20 years of technological expertise and innovation to BlazeMeter. Prior to founding BlazeMeter, he served as CTO for Taldor and was co-founder of iWeb Technologies (acquired by Global Media Online, 2002), a Young & Rubicam-backed NewMedia Company. Alon began his career in the technology sector as an officer in the software division of Israel's Defense Force Intelligence Unit.


Jul 30 2015

Automated Delivery Acceptance Test Nirvana

Powered by AWS CodePipeline, AWS CloudWatch and BlazeMeter

 

A look at a smooth continuous delivery workflow that reduces time to release and increases test coverage during the release for developers.

 

Background

 

In the past few years, the adoption of Continuous Integration has exploded. Software teams all over the world, including high-profile companies like Netflix and Facebook, are learning the benefits of tighter collaboration and the fast feedback that CI provides. It’s become clear that automating the building and testing of the application for every code change can surface defects immediately and with clear accountability. Since defects are found and fixed so quickly, the result of any successful build should become a shippable piece of software, or Release Candidate (RC).

 

That release candidate is not usually approved right then and there, however. Most often it is deployed to a staging environment for further testing. These tests are typically more complex than those run in CI builds, and include various types of acceptance tests that have traditionally been difficult to automate. The benefits realized with automation in CI break down here due to cumbersome ticketing processes and the inherently slow pace of manual testing and deployment.

 

Continuous Delivery, which is the complete automation of the entire release workflow, takes the CI concept to its natural conclusion. If we automate the whole delivery pipeline, we reduce human errors and get great new features in front of users that much more quickly. Everyone benefits from stress-free, push-button releases and fast feedback at every stage.

 

While many teams are practicing some variation of CI, they have generally fallen short of being able to practice CD, largely because of the challenges of automating acceptance tests and the lack of a centralized orchestration platform to manage stages and transitions through the pipeline.

 

The CD Game Changer - Building an Automated Workflow With Build, Test and Deploy Steps

 

With AWS CodePipeline, users can finally streamline delivery to staging and then to production. AWS CodePipeline is a new Continuous Delivery workflow engine that lets users integrate, test and deploy code in staging and then move it into production. Yes, production!

 

AWS CodePipeline enables a user to visually build workflows for different phases of Continuous Delivery: Source, Build, Test and Deploy. Each phase can have many steps and can run either sequentially or in parallel.
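To make the phase/step structure concrete, here is a minimal sketch of such a workflow as a plain data structure. The stage and action names are illustrative assumptions, not taken from any real pipeline; a real AWS CodePipeline definition carries more fields (artifact stores, providers, IAM roles).

```python
# Hypothetical sketch of a four-phase pipeline definition.
# Stage and action names are invented for illustration only.
def build_pipeline(name):
    """Return a pipeline structure with Source, Build, Test and Deploy stages."""
    def stage(stage_name, actions):
        return {"name": stage_name, "actions": actions}

    return {
        "name": name,
        "stages": [
            stage("Source", [{"name": "FetchCode", "runOrder": 1}]),
            stage("Build",  [{"name": "CompileAndUnitTest", "runOrder": 1}]),
            # Two actions sharing the same runOrder run in parallel;
            # different runOrder values run sequentially.
            stage("Test",   [{"name": "BlazeMeterLoadTest", "runOrder": 1},
                             {"name": "ApiSmokeTest", "runOrder": 1}]),
            stage("Deploy", [{"name": "DeployToStaging", "runOrder": 1},
                             {"name": "DeployToProduction", "runOrder": 2}]),
        ],
    }

pipeline = build_pipeline("release-pipeline")
print([s["name"] for s in pipeline["stages"]])
# → ['Source', 'Build', 'Test', 'Deploy']
```

The `runOrder` convention mirrors how CodePipeline decides whether actions within a stage execute sequentially or in parallel.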

 

Add a JMeter Test at Every Step

 

With BlazeMeter - one of the 3rd party integrations supported by AWS CodePipeline - users can build a release workflow that includes automated testing at as many steps as they need. With this new integration, users can set any stage in the workflow as a test, upload a JMeter script, set thresholds and configure AWS CloudWatch. A successful test signals the process to continue, while a failed test marks the process as failed and stops the workflow until it is fixed.

 

 

 

Monitoring Infrastructure Key Performance Indicators (KPIs) During a Test

 

AWS provides a production environment as a service, which means many of the environment's KPIs remain hidden. Enter AWS CloudWatch, which is very valuable as part of any test that involves AWS services: it can provide tremendous visibility into infrastructure that is delivered as a service. However, it can become challenging to use AWS CloudWatch in conjunction with a test, especially an automated one.

 

To give an example, at BlazeMeter, when we use AWS, we end up using hundreds of different service instances (EBS, EC2, ELB, etc.). Some influence production more than others, and it becomes extremely challenging to isolate the services that affect production from those that don't.

 

 

The above is an example of what I see when I go into my AWS console. I see that I have 146,240 total metrics under AWS CloudWatch.

 

The challenge multiplies when we want to automate tests and correlate infrastructure KPIs for each and every test (isolate and store for further analysis).

 

BlazeMeter can help users find the needle in the haystack of metrics. When defining a test, BlazeMeter's intuitive UI lets users select the specific services associated with a given application.

 

 

A user would typically select a handful of services to monitor out of the hundreds that are available. BlazeMeter polls the selected services during a test and stores those KPIs alongside the test’s KPIs, to get a clear picture of how the services behaved during the test.

 

 

The AWS CloudWatch report above is stored with every test run and shows the selected KPIs, so the user can see how the infrastructure behaved during the test.
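Conceptually, this selection-and-storage step might look like the sketch below. The function and field names are assumptions for illustration; in practice the metric list would come from a CloudWatch client call (e.g. `ListMetrics`) rather than a hard-coded list.

```python
# Hedged sketch: isolate a handful of user-selected CloudWatch metrics
# from the full metric list and store them alongside a test run's own KPIs.
def select_metrics(all_metrics, selected_services):
    """Keep only metrics whose namespace matches a service the user selected."""
    return [m for m in all_metrics if m["Namespace"] in selected_services]

def store_run(test_kpis, all_metrics, selected_services):
    """Bundle the test's KPIs with the infrastructure KPIs polled during the run."""
    return {
        "test_kpis": test_kpis,
        "infrastructure_kpis": select_metrics(all_metrics, selected_services),
    }

# Illustrative data standing in for a real CloudWatch metric listing.
all_metrics = [
    {"Namespace": "AWS/EC2", "MetricName": "CPUUtilization", "Value": 71.0},
    {"Namespace": "AWS/ELB", "MetricName": "Latency",        "Value": 0.21},
    {"Namespace": "AWS/SQS", "MetricName": "MessagesSent",   "Value": 1400},
]
run = store_run({"avg_response_ms": 240}, all_metrics, {"AWS/EC2", "AWS/ELB"})
print(len(run["infrastructure_kpis"]))  # → 2
```

Storing the filtered infrastructure KPIs with each run is what makes the later per-run CloudWatch report possible.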

 

Setting Thresholds to Signal Test Success and Failure

 

Furthermore, the user can set thresholds on each and every label/transaction in order to signal a successful or failed test.

 

 

 

 

The graphic above shows that by the end of the test, both thresholds were met and the test stopped.

 

Continuous Delivery nirvana is reached when we put it all together.  A user can run automated tests with AWS CodePipeline, archive results for further review and explore AWS CloudWatch KPIs gathered during the test.

 

With BlazeMeter’s AWS CodePipeline integration, a user now has the ability to set various test stages using JMeter, WebDriver or plain API tests. When configuring AWS CloudWatch, a user can select the KPIs of interest (out of the many hundreds and sometimes thousands of KPIs). The best part? It only takes two minutes to configure such a test. The result? A periodic test run triggered by AWS CodePipeline. Tests automatically fail according to the set thresholds. AWS CloudWatch KPIs are stored alongside test results. Every run generates a report that is available for further investigation at any time after the test run.
 

 

The graphic above shows the trend of the test KPIs through the periodic runs, allowing the user to find abnormalities and further pinpoint problem areas in suspected test runs.
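One simple way to surface such abnormalities across periodic runs is to flag any run whose KPI deviates sharply from the historical mean. The function below is a hypothetical sketch of that idea (the 25% tolerance is an arbitrary assumption), not the detection method BlazeMeter uses.

```python
def flag_abnormal_runs(kpi_history, tolerance=0.25):
    """Return indices of runs whose KPI deviates from the mean of the
    history by more than the given fraction (25% by default)."""
    mean = sum(kpi_history) / len(kpi_history)
    return [i for i, value in enumerate(kpi_history)
            if abs(value - mean) / mean > tolerance]

# Hypothetical average response times (ms) over six periodic runs;
# run 4 is a clear outlier worth investigating.
history = [240, 250, 245, 255, 480, 250]
print(flag_abnormal_runs(history))  # → [4]
```

The flagged indices point the user at the suspect runs, whose archived reports and CloudWatch KPIs can then be examined.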

 

The process of automating a release has never been this easy. With AWS and BlazeMeter, the user now has the capability to:


● Add test stages at every phase of the release
● Configure the proper tests to run in various stages within the release cycle
● Test and gather all of the required information
● Refer back to the information for further investigation at any time.

 

BlazeMeter supports open source driven test automation through the AWS CodePipeline release process. With BlazeMeter, a user can write JMeter scripts for as many steps as they need, configure AWS CloudWatch and let AWS CodePipeline run its course.  The end result is a smooth continuous delivery workflow that will inevitably reduce time to release and increase test coverage during the release.

 

That’s what I call a happy delivery...

 

Get Started with AWS CodePipeline Today

 

Feel free to leave questions or comments below. 

 

 

