Three Common QA Pitfalls To Avoid When Testing Applications for the Internet of Things


Collin Chau


Most companies developing software for today's application economy and the Internet of Things have very similar Quality Assurance (QA) challenges. Whether you work for a small startup or a billion-dollar corporate giant, the scenario below will likely seem very familiar:


Management commissions five new software features for one of your flagship applications. Along the way, though, several developers are pulled off the project to work on a separate high-profile initiative. By the midway point of the project, code is complete for only one of the five features. Rather than shift the release date or reduce the number of features – both of which have been publicly announced – management insists on making up for the delays by reducing the time allotted for QA. When your test team discovers a significant blocking bug just two days before the application's launch, the entire team is thrown into a tizzy, and QA is blamed for failing to uncover the problem sooner.


Take Proactive Steps to Keep Quality Front & Center


No doubt your own QA team has experienced similar challenges. The trick, though, is to address them without compromising application quality or your team's reputation. Let's take a quick look at three of the issues in the scenario above and explore how they can be avoided.


1. Development time takes longer than estimated. Estimating is one of the hardest things software engineers do, and inevitably, guesswork is involved. There is a propensity to be overly optimistic – whether out of confidence or to avoid being quizzed about why things take so long. In addition, estimates are based on an expected level of resources. If team members are reassigned to other projects, all bets are off.  


Solution: Take steps to bring rigor to project estimates. Begin by understanding what has happened in the past, including the deviation in past estimates, the number of staff reassignments and the impact. Conduct post-mortem discussions and analyses to help the team make better-informed time estimates and to set deadlines that are more realistic.
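
To make that analysis concrete, here is a minimal sketch (in Python) of how a team might compute its historical overrun factor from past projects. The project records, field names and figures are hypothetical placeholders:

```python
# Minimal sketch: quantify how far past estimates drifted, using
# hypothetical project records. Names and figures are placeholders.

past_projects = [
    {"name": "payments-v2", "estimated_days": 30, "actual_days": 45},
    {"name": "mobile-login", "estimated_days": 20, "actual_days": 26},
    {"name": "search-reindex", "estimated_days": 15, "actual_days": 24},
]

def average_overrun(projects):
    """Mean ratio of actual duration to estimated duration."""
    ratios = [p["actual_days"] / p["estimated_days"] for p in projects]
    return sum(ratios) / len(ratios)

overrun = average_overrun(past_projects)
print(f"Historical overrun factor: {overrun:.2f}x")
print(f"A 20-day estimate realistically means about {20 * overrun:.0f} days")
```

Even a simple number like this gives the team data to point to when a new estimate looks optimistic.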


2. QA and test time are reduced to accommodate schedule slips. It's easy to see why management would be reluctant to reduce features or to push back a release date if schedules are lagging. Typically, plans will have been communicated up the chain of command and may even have been announced to customers. Delivering anything less than what you promised can result in a breach of trust. In addition, there is the domino effect to consider. When you delay a project, team members can't be released to begin the next project in line – creating a cascade of delays. So QA becomes a likely target for picking up the slack.


Solution: Have clearly defined QA entry and exit criteria that reduce the possibility of quality being sacrificed to speed. On the entry side, insist on testing small, incremental code changes well before development is complete so you can uncover and resolve issues much earlier. Also, focus on reusing and automating continuous regression tests across the entire development lifecycle. On the exit side of the equation, get upfront buy-in for the quality bar the project must meet before software is released to production. 
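
As an illustration, a single automated regression check written against Selenium's Python bindings might look like the sketch below. The staging URL, expected page title and test name are hypothetical placeholders, and the sketch assumes a Chrome WebDriver is installed:

```python
# Minimal sketch of one automated regression check (pytest-style).
# The URL, expected title and test name are hypothetical placeholders.
from selenium import webdriver

def test_homepage_loads():
    driver = webdriver.Chrome()  # assumes chromedriver is on the PATH
    try:
        driver.get("https://staging.example.com")  # hypothetical staging URL
        assert "Example App" in driver.title       # hypothetical page title
    finally:
        driver.quit()
```

Checks like this can run against every incremental code change in a continuous integration job, so entry criteria are enforced automatically rather than by hand.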


3. QA is blamed for not finding issues earlier. It isn’t QA’s job to test everything at every point, but rather to test critical factors in an organized and efficient manner given the time and resources at hand. That can be hard for a stressed management team to understand, though, when commitments are made, schedules are on the line and significant issues are found. 


Solution: Be transparent. No one should ever be mystified by what your QA team is doing at any point during a project. Review your test plan with developers and management to ensure that the testing approach, level of effort, and entry/exit criteria are known, understood and signed off on. Share progress reports and metrics at a regular cadence – the number of bugs found, the percentage of testing complete, and so on. Be poised to push back calmly – and with data – if the blame game begins.
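
A progress snapshot doesn't require elaborate tooling. A minimal sketch like the following – all figures below are illustrative placeholders – can be generated at each reporting cadence:

```python
# Minimal sketch of a recurring QA progress snapshot.
# All figures below are illustrative placeholders.
executed, planned = 184, 240
open_bugs = {"blocker": 1, "major": 4, "minor": 11}

print(f"Testing complete: {executed / planned:.0%} ({executed}/{planned} cases)")
for severity in ("blocker", "major", "minor"):
    print(f"  open {severity} bugs: {open_bugs[severity]}")
```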


Continuously Test for Performance


One overarching solution to the QA challenges in today's high-pressure, application-driven economy is continuous performance testing. And with open-source test technologies like Jenkins, Selenium and Apache JMeter™, it's easier than you might think. These open-source tools are developer-friendly and known for their reliability and flexibility. They are also backed by a large developer community and a wide variety of utilities and plug-ins for various integrated development environments (IDEs). Developers can easily define and orchestrate tests locally in their IDE of choice without any special training – failing fast and building better quality into each incremental code change with a test automation framework like Taurus.
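
For example, a build step can invoke a Taurus run and fail fast on a bad result. The sketch below assumes the bzt command-line tool (Taurus) is installed and that load_test.yml is a hypothetical Taurus configuration checked into the repository:

```python
# Minimal sketch: run a Taurus test as a gating build step.
# Assumes the `bzt` CLI (Taurus) is installed and that
# load_test.yml is a hypothetical Taurus config in the repo.
import subprocess
import sys

result = subprocess.run(["bzt", "load_test.yml"])
sys.exit(result.returncode)  # a non-zero exit fails the build fast
```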


While these popular, open-source tools may be sufficient for most of your basic testing needs, your Test CoE does need to make them an integrated, enterprise-ready part of your existing test environment. And that's where CA BlazeMeter excels. It equips you to reuse test scripts and automate regression tests across your continuous integration/continuous delivery (CI/CD) pipeline, addressing quality shortfalls earlier in the application development cycle. You can broker the right resources for each development team, test at scale on demand, automate test flows across diverse projects and generate comprehensive reports. Request your free demo today!


Learn more: www.blazemeter.com/shiftleft

