Think through System Changes to Anticipate Quality Issues

[Image: computer with a "Retired!" sign]

Once there was a big, complex system that shared information among different business units. It was an old, slow legacy system that had been custom built for the organization, and it worked pretty well.

One day, one of the business units was sold, becoming a “business partner” rather than part of the family. The people who maintained the legacy system arranged to bill the business partner for its share of the ongoing operation and maintenance, and all was well. Data flowed and people were happy. 

Time passed. Management changed at the business partner. The new management wondered why it was paying to maintain a tired old system and decided to purchase a shiny new one off the shelf. The implementation took about eighteen months. 

The result was that a system that had previously been integrated—a custom system exchanging trusted data among integrated subcomponents—was now “disintegrated.” The partner’s new system had a few interface glitches that corrupted shared data. Although the new system had swoopy features that saved users time and provided new capability, there were a few complex functions that had previously been automated that now required manual data entry. Manual data entry took time, resulting in delays getting data from the business partner.

There is a cliché that a chain is only as strong as its weakest link. The metaphor may be overused, but it is a powerful one. The new subsystem disrupted operations profoundly. The data quality issues could be attributed to two sources: the new subsystem's interfaces, which were developed hastily and sloppily, and legacy system code written long ago that "trusted" the data from other parts of the system.

More interesting to me were the data timeliness issues and the extreme disruption the latency introduced. The propagation delay could create significant safety and liability issues. This wasn't a medical system, but imagine that it was, and that the delay slowed the updating of a patient's medical records to show which medications they had been given, changing the update time from milliseconds to hours.

I’ve changed the details to protect client confidentiality, but this drama really did unfold, and this story is vitally important to test designers, analysts, programmers, and managers. 

When we replace or significantly modify components of a larger system, too frequently we focus only on whether the code we are building functions correctly. This is important, but it's also tactical and short-sighted. When we change part of a larger system, it's easy to introduce errors and issues, because we are changing the way the components interact.

How do you identify and test assumptions? How will you discover whether delaying or accelerating the turnaround of a transaction will have unexpected effects?
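One lightweight way to surface a timing assumption is to write it down as an automated check. The sketch below is a minimal, hypothetical example (the five-minute staleness budget and the function names are illustrative assumptions, not details from the story): it encodes "data from the partner must be fresh" as a test you can run against either system, before and after a replacement.

```python
import datetime as dt

# Hypothetical staleness budget agreed with the partner (an assumption
# for illustration, not a figure from the story): shared data must be
# no more than 5 minutes old when it arrives.
MAX_STALENESS = dt.timedelta(minutes=5)

def is_fresh(record_updated_at: dt.datetime,
             now: dt.datetime,
             budget: dt.timedelta = MAX_STALENESS) -> bool:
    """Return True if the record's age is within the agreed budget."""
    return (now - record_updated_at) <= budget

# Quick self-check: a record updated 2 minutes ago passes;
# one updated 2 hours ago (the post-replacement reality) fails.
now = dt.datetime(2024, 1, 1, 12, 0, 0)
assert is_fresh(now - dt.timedelta(minutes=2), now)
assert not is_fresh(now - dt.timedelta(hours=2), now)
```

A check like this would not have prevented the partner's manual data entry, but it would have made the latency assumption explicit and flagged the regression the day the new system went live, rather than after downstream users noticed.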

Coding errors are only one aspect of software quality problems. Understanding the behavior of the existing system and the effects of changes we make is an often-overlooked source of error—and one that is much more difficult to anticipate.
