4 Pitfalls to Avoid With Continuous, Automated Testing of Your Electronic Health Record (EHR) System
I once served as a healthcare IT consultant on a large software implementation team for an electronic health record (EHR) system. The task was to install a clinical documentation and medication management platform to serve the pediatrics department at a large university hospital. The software vendor was the market leader in the field.
During the implementation period, though, we discovered that the standard test scripts provided by the firm simply weren’t suitable for a highly specialized healthcare environment. One example: On a medication management platform expected to provide clinical decision support, there were no configuration tests that made use of complex, multistage protocols or dosing regimens for subspecialties like oncology, nephrology and cardiology.
Due to patient safety and liability concerns, our software implementation team had to develop its own test scripts to cover not only common workflows, but also the most complex scenarios a care team might encounter. Here are some of the pitfalls we identified as we worked, as well as a recommendation for how you can avoid them as you plan your EHR project.
1. Manual creation of complex test scripts
We had teams of nurses, doctors and pharmacists who spent over six months developing test scripts in an Excel worksheet for an early version of the EHR. Instructions were written in plain English, explaining every step of the interaction between the user and the EHR system. These manually written scripts might read like this:
“Move the mouse cursor to the upper left-hand corner of the screen, open the ‘CATEGORY’ dropdown box and select ‘MEDICATION’ from the list. A dialog box will pop up. Use the keyboard to type the name of the drug: Amoxicillin.”
Some test scripts contained more than 200 instructions. Some used decision points to branch into other test scripts. Others introduced a best-practice approach for common tasks, like prescribing standard medications. Clearly, it would have been of great help if the EHR vendor had provided macro recorder functionality for test script development.
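Plain-English steps like the one above can only be executed by a human reader. A minimal sketch of the alternative: capturing each step as structured data, so that a program (or a macro recorder) can both replay it and render it back into a readable instruction. The action names, field names and the medication-ordering scenario here are hypothetical illustrations, not a real EHR test format.

```python
# Hypothetical structured representation of the plain-English steps above.
# Each step is data, so a runner can execute it and a template can render
# it back into a human-readable instruction for review.
TEST_SCRIPT = [
    {"action": "select_dropdown", "target": "CATEGORY", "value": "MEDICATION"},
    {"action": "type_text", "target": "drug_name", "value": "Amoxicillin"},
    {"action": "verify_dialog", "target": "order_confirmation", "value": None},
]

def describe(step):
    """Render a structured step back into a human-readable instruction."""
    templates = {
        "select_dropdown": "Open the '{target}' dropdown and select '{value}'.",
        "type_text": "Type '{value}' into the '{target}' field.",
        "verify_dialog": "Confirm that the '{target}' dialog appears.",
    }
    return templates[step["action"]].format(**step)

if __name__ == "__main__":
    for number, step in enumerate(TEST_SCRIPT, start=1):
        print(f"{number}. {describe(step)}")
```

The same structured steps could then feed a branching decision point or be shared across scripts, instead of being retyped in a spreadsheet.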
2. Lack of test automation and script reuse
The next phase was to manually apply the test scripts we developed and evaluate the software installation before go-live. A single script, typically consisting of 75 to 100 steps, would often take several hours to complete – and it’s easy to see why.
Clinical testers worked with a two-screen computer setup, one monitor showing the EHR application, the other an Excel worksheet with manually written test scripts. They would constantly switch between the two monitors, reading an instruction on one screen and then executing it on the other. For each step, testers confirmed success or failure. They were expected to capture comments in the Excel worksheet, as well as screenshots, if necessary.
Typically, each script would be tested several times by multiple users to ensure validity of the results. The whole process was tedious, making it a challenge to stay focused.
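The manual execute-then-record loop described above is exactly what an automated runner eliminates: it drives every step and captures pass/fail and comments itself. A minimal sketch, assuming a hypothetical `FakeEHRClient` stand-in for the system under test; a real adapter would drive the EHR user interface or its API instead.

```python
# A minimal sketch of automated script replay. FakeEHRClient is a
# hypothetical stand-in for the system under test; run_script executes
# every step and records pass/fail with a comment, replacing the manual
# two-monitor read-then-execute loop.
class FakeEHRClient:
    """Stand-in for the EHR under test; a real adapter would drive the UI."""
    def __init__(self):
        self.formulary = {"Amoxicillin", "Ibuprofen"}

    def order_medication(self, drug):
        return drug in self.formulary

def run_script(client, steps):
    """Execute each step, capturing a pass/fail result with a comment."""
    results = []
    for step in steps:
        ok = client.order_medication(step["drug"])
        results.append({
            "step": step["drug"],
            "result": "pass" if ok else "fail",
            "comment": "" if ok else "drug not found in formulary",
        })
    return results
```

Because the runner is deterministic, rerunning a script several times to validate results becomes cheap rather than tedious.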
3. Costly delays
As you might expect, many of the tests we conducted failed in the early stages since the software code was not yet fully configured. The testing team would use a rating system that classified the errors into three categories: minor, moderate and severe. The last category was seen as a “go-live breaker.”
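The three-tier rating described above is simple enough to encode directly, so that go-live breakers surface automatically in weekly stats rather than through manual review. The severity names come from our classification; the `Defect` fields and the rule that any severe defect blocks go-live are illustrative assumptions.

```python
# Encoding the three-tier defect rating described above. The severity
# names come from the article's classification; the Defect fields and
# the "any severe defect blocks go-live" rule are assumptions.
from dataclasses import dataclass

SEVERITIES = ("minor", "moderate", "severe")

@dataclass
class Defect:
    title: str
    severity: str  # one of SEVERITIES

def go_live_blockers(defects):
    """Return the defects that would count as go-live breakers."""
    return [d for d in defects if d.severity == "severe"]

def weekly_stats(defects):
    """Tally defects by severity for status reporting."""
    return {s: sum(1 for d in defects if d.severity == s) for s in SEVERITIES}
```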
The testing manager who oversaw the process would use the vendor’s reporting system to document errors and request fixes. And weekly stats were compiled for the vendor, hospital executives and our implementation team.
Based on this rigorous manual testing process, go-live dates were rescheduled several times. Temporary workarounds that might have kept us on schedule were considered unsafe since so many interactions had multiple, complex dependencies. As a result, delays were needed to accommodate bug fixes and new software releases that would ensure safe and efficient workflows.
4. Limited preproduction testing for cross-system dependencies
After developing test scripts and resolving bugs, our software moved to Quality Assurance (QA) within our Test Center of Excellence (CoE). The CoE evaluated performance against acceptance criteria and determined whether important details or actions were missed.
But important questions remained unanswered. How do you determine whether all the cross-system dependencies are trouble-free before go-live? How do you ensure flawless performance as mainframe data and applications are tweaked over time to add new features, while still meeting predefined SLAs?
How automated continuous testing can help
To overcome the four pitfalls above, healthcare organizations can benefit from incorporating Agile principles – testing early and often in the development cycle at the preproduction phase. By managing defects earlier, you can avoid the risks and high costs of bad user experiences. Ideally, this preproduction testing should include:
- Evaluation of cross-system transactional dependencies involving patient data used by your EHR and maintained on your mainframe as a system of record.
- End-to-end testing of performance and load to ensure a quality user experience.
- Testing of incremental code changes, including the coding of DB2 database calls and modified application logic written in COBOL.
- Regression testing to validate your latest software build.
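The first bullet above – validating cross-system transactional dependencies between the EHR and the mainframe system of record – amounts to reconciling the same patient data across two systems. A minimal sketch, with a hypothetical record layout and field names:

```python
# A minimal sketch of cross-system data validation: comparing patient
# records in the EHR against the mainframe system of record before
# go-live. The record layout and field names are hypothetical.
def find_mismatches(ehr_records, mainframe_records):
    """Compare records keyed by patient ID; return (patient, field) pairs
    that disagree, plus patients missing from the system of record."""
    mismatches = []
    for patient_id, ehr_row in ehr_records.items():
        source_row = mainframe_records.get(patient_id)
        if source_row is None:
            mismatches.append((patient_id, "missing in system of record"))
            continue
        for field, value in ehr_row.items():
            if source_row.get(field) != value:
                mismatches.append((patient_id, field))
    return mismatches
```

Run as part of a regression suite, a check like this flags divergence as soon as an incremental change to the database calls or application logic breaks the data contract.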
Select the right testing platform
Modern electronic medical records and patient management systems are complex and highly customizable, which makes it difficult to write reusable and standardized test scripts. With a flexible testing platform, though, you can record and replay test scripts for efficiency, while addressing your organization’s unique preferences and challenges.
With the right test automation framework, you can write test scripts in simple syntax from a local development environment of choice – equipping your team to test early and often in the software development lifecycle to evaluate both your software and the cross-system dependencies between your EHR and mainframe system of record.
Many industry-leading organizations now use CA BlazeMeter for the comprehensive, end-to-end, preproduction testing they need. With BlazeMeter, you can define and execute a full-blown script with about ten lines of text from your local machine, then seamlessly switch and run tests at enterprise scale in the cloud. And you can do it while keeping sensitive patient data and load generation behind your firewall – anonymized for cloud-based testing.
QA teams in your Test CoE benefit from a single, integrated test platform with a plug-in architecture. For example, they can leverage open-source JMeter plug-ins that emulate remote terminals transacting with your mainframe. They can reuse test scripts. They can easily scale test automation on-demand for multiple projects across a continuous delivery pipeline. And they can ensure a quality user experience.
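Conceptually, scaling test automation means replaying one scripted transaction across many virtual users in parallel, the way a load-testing tool does. A toy illustration, with a stand-in transaction body; a real test would drive the EHR or a terminal emulator rather than a local function.

```python
# A toy illustration of running one scripted transaction at scale, the
# way a load-testing tool replays a scenario with many virtual users.
# scripted_transaction is a stand-in; a real test would exercise the
# EHR or a mainframe terminal emulator instead.
from concurrent.futures import ThreadPoolExecutor

def scripted_transaction(user_id):
    """Stand-in for one scripted EHR interaction; returns a result record."""
    return {"user": user_id, "status": "ok"}

def run_load(concurrency, iterations):
    """Replay the transaction `iterations` times across a worker pool."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(scripted_transaction, range(iterations)))
    failures = [r for r in results if r["status"] != "ok"]
    return len(results), len(failures)
```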
Take the next step
Start your continuous testing journey today by requesting a free BlazeMeter demo. We also encourage you to join our Apache JMeter™ Training Academy to learn tips and techniques that can save time and help you work more efficiently in an open-source environment. To learn more, visit www.blazemeter.com/shiftleft.