Interview with Michael Bolton: The Software Tester & the Unexpected, Part 2
Michael Bolton is a thought leader in the world of software testing, with over two decades of experience in the computer industry testing, developing, managing, and writing about software. As an international consultant and testing trainer, Michael gives workshops and conference presentations on testing methodology and critical thinking, specializing in Rapid Software Testing and exploratory testing.
In part one of this interview, Michael Bolton discussed the current issues and challenges with performance testing, as well as factors that testers should keep in mind when load testing a website or application.
In this post, cloud computing blogger Ofir Nachmani will finish his interview, highlighting the tester's function in software development, useful tools, and unexpected results. Take a look:
ON: What does a tester need to achieve? Where do you see testers in the software development chain?
MB: It's my job to investigate a product so that my clients can decide whether the product they’ve got is the product they want. Testers are investigators; their objective should be to discover more than to verify, to be reporters rather than judges, to describe rather than to make the business decisions. Testers do exercise judgment about what might represent a problem to users, to developers, or to the business, and then inform those who are responsible for making the decisions: our testing clients. Clients need information about problems and risks in order to make informed decisions about what to do next with their product and whether or not to deploy.
For example, let’s say we, the testers, observe that our service’s database is getting hammered with dozens of extra handshakes for each transaction. The designer and product owner may or may not find that to be a problem. However, it's possible that the pipe is not going to be big enough to handle all of the expected transactions, and should be scaled up as a result. The product owner would then want to know what happens after scaling up, which we investigate, as well.
Testers should not make the decision of whether or not a product is good to go. They are not the decision makers. They can only provide a piece of the puzzle that the business has to assemble to make release decisions.
ON: So a tester returns with numbers... What about the actual workflow or use case? How does reporting them support the product’s actual value?
MB: Testers tell a story about the product, and numbers are illustrations of that story. They're like the pictures that accompany a newspaper article, or the stats in a sports story. Maybe you’ve been to a football game or some other event, and then seen stories, statistics, and pictures in newspapers and on TV afterwards. A good story describes the event from a number of perspectives, and useful stats and good pictures add depth and support to that story.
We have to be careful, though, to think critically about the stories that we’re telling and the stories that we’re being told. There’s a nice example in Nassim Taleb’s book, The Black Swan, a book that testers should read. On the day that Saddam Hussein was captured, a news headline reported that the price of U.S. Treasury bills had risen over worries that terrorism would not be curbed; half an hour later, when Treasury bills had fallen again (they fluctuate all the time) the explanation was that Hussein’s capture made risky assets more attractive. The same cause was being used to explain two opposite events―the rise and the fall. So it’s important to consider how we arrive at our conclusions, and how we might be fooled.
In the world of performance testing we look at certain numbers and certain patterns and use them to illustrate a story. I would argue that it's the job of a tester to remain skeptical of the numbers and of our explanations of them, especially when the news seems to be good. A single set of performance data based on a single model can fool us, and fail to alert us to potential problems and risks that might be there. We need tools that provide data from a variety of different perspectives, and that help us analyze and visualize it.
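As a hypothetical illustration of why a single summary number can fool us (the data below is invented, not from the interview): two load-test runs can share the same average response time while one of them hides a nasty tail. Looking at percentiles alongside the mean is one of the "different perspectives" that can surface the problem.

```python
import statistics

# Invented response times (ms) from two hypothetical load-test runs.
# Both runs have the same mean, but very different tail behavior.
steady = [100] * 95 + [110] * 5   # consistent responses
spiky  = [80] * 95 + [490] * 5    # mostly fast, with occasional stalls

def summarize(name, samples):
    """Report the mean alongside tail percentiles for one run."""
    ordered = sorted(samples)
    mean = statistics.mean(ordered)
    p95 = ordered[int(0.95 * len(ordered))]  # simple index-based percentile
    p99 = ordered[int(0.99 * len(ordered))]
    print(f"{name}: mean={mean:.1f}ms  p95={p95}ms  p99={p99}ms")
    return mean, p95, p99

summarize("steady", steady)  # mean=100.5ms  p95=110ms  p99=110ms
summarize("spiky", spiky)    # mean=100.5ms  p95=490ms  p99=490ms
```

A report that showed only the averages would tell the same "good news" story for both runs; the percentiles reveal that the second system stalls badly for a fraction of its users.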
ON: Let's discuss tools. How can I know that I'm using the right testing tool?
MB: Instead of thinking of “the right tool”, try thinking about a diversified set of tools. Suppose you want to be alerted about problems in your home while you’re away: a smoke detector won't really help you out when a burglar is the issue; for that, you need a motion detector. However, neither of those is likely to alert you when there is a flood. And they won’t help you if there’s structural weakness in the building and it’s in danger of collapsing.
A good tool is one that helps you extend your ability to do something powerfully with a minimum amount of fuss. I tend to prefer lightweight, easily adaptable tools in combination, rather than one tool to rule them all. There are plenty of dimensions to performance testing―not just driving the product, but generating and varying, or randomizing, data; monitoring and probing the internals of the system; visualizing patterns and trends; aggregating, parsing and reporting results.
I like to use tools not only to alert me about the problems that I anticipated, but to help me anticipate problems I hadn't. Ultimately, I’m most interested in the surprises, and the unexpected.
We hope you all enjoyed learning about the world of testing as much as we did. Testers are a vital part of the software development process, and it is crucial that we all understand the important role they play. We would like to thank Michael for taking the time to have this great conversation with Ofir.
We also invite you to take a look at Ofir’s interview with Alex Podelko, a prominent thought leader in the world of load and performance testing.
Are you a software tester? Check out our educational resource library for testers.