I was fortunate enough to attend both Jenkins World 2016 and Velocity New York 2016 last month. Both shows got me thinking about how the definition of “scale” in performance testing is in the midst of being turned on its head.
Up until recently, if you mentioned performance or load testing, the first thing most people would want to know was, “How big a test can you run?” For a long time, scale in terms of “size” was hard (or at least very expensive) to achieve. This goes back to when performance testing was synonymous with HP LoadRunner, which had very expensive virtual user licenses and required tons of dedicated hardware.
Then, thanks to the cloud, suddenly size wasn’t so hard. Large load tests and the cloud were made for each other. Now a new company seems to pop up every week claiming 1,000,000-concurrent-user capacity and real-time graphs.
The new frontier for scale in performance testing isn’t about how “big” your tests can be or how much data you can graph in real time, but about how much performance testing you can do, how fast it gets done, and, most importantly, who is actually creating and executing the tests.
Why the change? Companies are racing towards continuous delivery, which is about producing business value faster. They do this by shipping smaller releases more often and by spreading the work across many small, autonomous teams. That process is focused less on “milestones” than on “flow,” and the closer to continuous the better. Bottom line? Continuous delivery means that performance testing needs to happen during development, not after. You can’t wait two to six weeks for a test when releases happen multiple times per day.
These are exciting times and a LOT is changing all at once. DevOps is becoming the norm. QA is undergoing profound changes. Open-source tools are going mainstream with some large and historically conservative companies mandating consideration of open-source as part of any procurement request.
Up until recently, most larger organizations routed all performance testing through a “Center of Excellence” (COE). The COE had a few specialized performance engineers who held the “keys” and the schedule for the testing infrastructure. As a precious resource, this team couldn’t help but function as a bottleneck, and performance testing was carefully doled out to only the most critical projects. Engineering teams (the people actually designing and building apps) faced a high-friction process of transferring knowledge and waiting for tests to be developed. As a result, test coverage was pretty low.
As agile software development, continuous integration and continuous delivery began to take off, the reduction of handoffs and elimination of waiting for external resources became essential for software teams. Testing was no exception, with “agile testers” joining development teams and in some cases, developers writing their own tests. Functional testing made this transition first.
Now, performance testing is making this same move. What we see happening at our clients is a transformation of the performance testing COE from “Center of Excellence” to “Center of Enablement”: the COE team is transitioning from a scarce resource that does the testing to a band of leaders facilitating the spread of performance testing tools organization-wide.
Democratization puts performance testing in the hands of the many rather than the few. That leads to massive scale of another sort entirely: much more testing and far greater coverage.
Here is a slide I presented in the Developer Theatre at Jenkins World 2016. I didn’t intend to build a pyramid, but in hindsight the shape is perfect. Scaling up a performance testing practice in modern software delivery starts with growing the size and diversity of the performance testing population. From there it’s all about velocity (how quickly you can create and execute tests) and frequency (how often you iterate). Size still matters, but it’s the least challenging piece of the bunch.
A larger and more diverse testing population, many of whom are “coders” at heart and part of the DevOps revolution, means very good things for the open-source community as well. We all want to get our jobs done with greater speed and ease so we can do more important things at ever greater scale. Democratization means an ever larger pool of smart folks solving problems and sharing what they find.
BlazeMeter contributes tools to the open-source community that solve specific problems (such as the XMPP/Jabber plugin set), but we also do our part to support an ecosystem upon which other innovations (whether in code or process) can be built. The test automation framework Taurus (Test Automation Running Smoothly) is one example. The https://jmeter-plugins.org/ site founded by our Chief Scientist, and the JMeter Plugins Manager he recently wrote, are two more. These plugin contributions help ensure that plugins written by many authors are easily and quickly accessible to anyone.
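For readers who haven’t seen Taurus in action, it drives tools like JMeter from a plain YAML file, which is what makes performance tests approachable for the wider developer population described above. Here is a minimal sketch of such a config — the endpoint, load figures, and pass/fail threshold are illustrative placeholders, not values from any real project:

```yaml
# quick-check.yml — run with: bzt quick-check.yml
# Taurus generates and executes a JMeter test from this description.
execution:
- concurrency: 20        # simulated users (placeholder value)
  ramp-up: 1m            # time to reach full concurrency
  hold-for: 5m           # steady-state duration
  scenario: quick-check

scenarios:
  quick-check:
    requests:
    - http://example.com/  # placeholder endpoint

reporting:
- module: passfail       # fail the run on a performance regression
  criteria:
  - avg-rt>500ms for 30s, stop as failed
```

Because the whole test is a small text file, it lives in version control next to the application code and slots naturally into a CI job — exactly the kind of democratization this post is about.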
We are far from alone in this, and that’s what makes the time we are living in so interesting. While we live to make continuous testing a reality, that’s just one part of delivering software faster and more reliably. The team at Capital One wanted a way to easily visualize how well that bigger picture was flowing. The result is Hygieia, a DevOps dashboard tool they have contributed to us all. Now a performance testing bottleneck (or any other bottleneck) has nowhere to hide.
No matter where the contributions come from, the great thing about the new kind of “scaling up” going on in our world is that performance testing is no longer some scarce resource that must be metered out carefully and planned for many weeks in advance. Performance can be with YOU, no matter where you are in the organization. That explains why our developers voted up the following shirt design for us to give out at Jenkins World and Velocity :-)