Stop Making the Same Mistakes

I am amazed at our industry today. We continue to make the same mistakes we have been making for more than forty years and refuse to admit it. We keep putting “dresses on pigs,” as a friend of mine is fond of saying. We give a new name to the same old issue, hoping the new name will fix the problem.

Consider the lifecycle of the software development process. In the ’70s we set the project date, reverse engineered the schedule to fit the date, and then got to work on the requirements. Within the timeline, development was done sequentially but somewhat free-form, driven by the amount of code completed. We would reach the end of the project (after a year or two) with something that “sort of” worked, depending on your viewpoint. The problem was too much work and not enough time. Testing in particular was typically underestimated and then compressed at the end of the schedule to compensate for changing requirements that endangered the due date.

Soon after that we modified the development model and became “structured programmers.” This slightly altered the way code was written but did not fundamentally address the underlying problems: artificial dates and constantly changing requirements that inevitably pushed the work past the preset delivery date. We continued to compress and reduce testing at the end of the schedule.

Then, we developed “rapid” development processes. We began to do rapid prototyping and rapid application development (RAD). Sometimes we even did a joint application design (JAD) session as part of the RAD process. This was all supposed to help us define the requirements and get to the target date with a product that worked. In truth, nothing really changed. The schedule still slipped, testing was compressed, and we still did not deliver a quality product.

In the last fifteen years, we have again changed the development process. We now have agile models. Like the rapid models that preceded them, the goal is to meet customers’ expectations and requirements and to reach the date with something that works. Again, we tend to fail in many instances because we still underestimate the testing effort within the schedule. Handling changing requirements is more effective, but there is still more work allocated than there is time available, especially when testing is included in the assessment.

We keep changing the names of the development processes we use, but we do not fix the fundamental error they all suffer from: the failure to set the date and control the scope of the project—including proper estimation of testing efforts.

The single most useful development model I have seen for delivering rapidly is the timebox approach: set the date, then reduce the scope of the project to fit the date and the resources available.

Time to market is an essential element of many projects; however, delivering substandard quality or less functionality than the customer expects is not acceptable. Customers and IT must work together to set dates and relevant scope limits for both development and testing if we truly want to be successful.

Dale Perry is presenting the tutorial Getting Started with Risk-Based Testing at STARWEST, October 12–17, 2014.
