Finding the Right Ratio of Software Testers to Developers for Your Team
Many organizations struggle with finding the optimum ratio of testers to developers. Some boast 1:1 while others limp along with 1:10 or worse. But what is the right ratio?
That’s easy: It depends.
If you are developing a completely new application, the minimum is one or two testers per five developers, and that assumes the developers are doing the unit- and integration-level testing. If the testers are doing it, then 1:1 is a must, because the testers are intimately involved in the development process as well as the downstream test phases. New applications in particular require lots of retesting, because the requirements may still be in flux and the functionality has yet to stabilize.
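To make the arithmetic concrete, here is a minimal sketch of those guidelines as a staffing calculation. The testers_needed function and its parameters are invented for illustration, and the ratios are just the rough guidelines above, not a formula to apply blindly.

```python
import math

def testers_needed(developers: int, testers_own_unit_testing: bool) -> int:
    """Rough tester headcount for a brand-new application.

    Illustrative only: uses the upper end of the 1-2 testers per 5
    developers guideline when developers own unit/integration testing,
    and a 1:1 ratio when the testers own those phases too.
    """
    if testers_own_unit_testing:
        return developers                          # 1:1
    return max(1, math.ceil(developers * 2 / 5))   # ~2 per 5

# A ten-developer team:
print(testers_needed(10, testers_own_unit_testing=False))  # 4
print(testers_needed(10, testers_own_unit_testing=True))   # 10
```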
But if you are delivering a maintenance release, that is, changes to an existing application, then believe it or not, testers should probably outnumber developers. The reason is that while developers may be modifying or adding only a fraction of the original code, the entire application is at risk of unintended impact. Hence the concept of regression testing, which holds that you have to test both what you expect to change and what you don't.
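To make that concrete, here is a minimal, hypothetical sketch in pytest style: one test covers the behavior a maintenance release intends to change, and another pins down behavior that should not move. The calculate_premium function, the discount rule, and the numbers are all invented for illustration.

```python
# Hypothetical maintenance release: the multi-policy discount rule changes,
# but the base premium calculation must stay exactly as it was.

def calculate_premium(base_rate: float, multi_policy: bool = False) -> float:
    """Stand-in for the application code under test."""
    discount = 0.15 if multi_policy else 0.0   # new rule in this release
    return round(base_rate * (1 - discount), 2)

def test_changed_behavior_multi_policy_discount():
    # Covers what we expect to change.
    assert calculate_premium(200.0, multi_policy=True) == 170.0

def test_unchanged_behavior_base_premium():
    # Regression check: what we do not expect to change.
    assert calculate_premium(200.0) == 200.0
```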
Here is an extreme example: An insurance company added functionality to its claims adjudication process, which was not only complex and high-risk but also a key component of all fifty-six versions of its property insurance platform, one for every state plus six territories. The risk ripple effect was far-reaching, requiring a test effort that dwarfed the development project in both scope and schedule.
In the case of licensed software, the picture changes again. Some packages are plug and play, while others require extensive customization. A plug-and-play package may simply require compatibility testing with the target operating platform. Deploying customizations may require a small development effort relative to the total application functionality, yet the impact is potentially significant: testers have to ensure that the overall application behaves as expected, which involves testing code that was developed by the vendor.
There is one more variable in this mix: integrations. Whether the application is new, existing, licensed, or customized, the degree to which it is integrated with other applications in the enterprise is an important factor in the test effort. Understanding the exchanges of information with other systems and ensuring that changes do not compromise them can be a monumental undertaking for the test organization. Some applications sit like a spider at the center of a vast web, and seemingly innocuous changes, say, to the size of a field, can cascade throughout the entire enterprise.
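One lightweight way to catch that kind of ripple is a contract check between the producing system and its downstream consumers. The sketch below is hypothetical, with the field names and width limits invented for illustration, and it only hints at what full integration testing involves.

```python
# Hypothetical contract check between an upstream application and a
# downstream consumer that still expects fixed field widths. The field
# names and the 10-character limit are invented for illustration.

DOWNSTREAM_FIELD_LIMITS = {"policy_id": 10, "state_code": 2}

def contract_violations(record: dict) -> list:
    """List any fields that exceed the widths the downstream system expects."""
    problems = []
    for field, limit in DOWNSTREAM_FIELD_LIMITS.items():
        value = str(record.get(field, ""))
        if len(value) > limit:
            problems.append(f"{field} is {len(value)} chars; limit is {limit}")
    return problems

def test_widened_policy_id_is_caught():
    # A seemingly innocuous change widened policy_id from 10 to 13 characters.
    record = {"policy_id": "AZ-2024-00017", "state_code": "AZ"}
    assert contract_violations(record) == ["policy_id is 13 chars; limit is 10"]
```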
The reality is that testing isn't just running tests; it demands planning, test environment and data management, requirements analysis, test design, execution (and often re-execution), diagnosis, reporting, and defect management. And, ironically, the more defects the testers uncover, the harder their job becomes: defects have to be tracked, tests have to be re-executed, and reporting has to be repeated.
So the right ratio of testers to developers is not a pat number; it depends on the type of project, the type of application, the role of the tester, and the scope of potential impact.