Which Test Metrics Are Useful in Agile Projects?

I am based in Florida, which was struck by Hurricane Matthew this weekend. As I watched the storm work its way up the coast, the volume of weather information generated by the minute was of critical value in tracking its progress, intensity, and path. Our weather forecasting and tracking capabilities continue to get better and better.

Just as our measurement and metrics capabilities improve as our technology and knowledge evolve, we must also refine and adapt our software quality and testing measures as we embrace an agile culture.

I am often asked, “What are the best measures for software quality and testing in an agile environment?” It’s impossible to answer this question directly, as the answer is always context-specific for each company, software application, and business objective. Generally, I believe we can develop our software quality and test measurement and metrics strategy similarly to the way we view our agile test automation strategy.

For those of you familiar with the test automation pyramid, you know that we aim to drive test automation to lower levels of the technology stack, putting more focus on automation at the unit and service levels and less at the user interface level. We can adopt this conceptual model when defining our software measurement and metrics plan for agile projects, too, and apply it to our strategy of what to measure.

Traditionally, our measures of a given application or system under test have included product quality (such as the number and rate of defects), test effectiveness (such as requirements, functional, or code coverage), test status (such as the number of tests run, passed, or blocked), test resources (such as time and cost), and test issues (such as qualitative risks and open issues). For the most part, all five dimensions of this traditional dashboard focus on the software application as a whole.
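
To make this concrete, the short Python sketch below aggregates a few of these whole-application numbers into dashboard-style figures. The counts, the defects-per-KLOC normalization, and the variable names are assumptions chosen purely for illustration, not recommended metrics or thresholds.

    # A deliberately simplified sketch of a traditional, whole-application dashboard.
    # Every count below is a hypothetical placeholder, not real project data.
    defects_found = 42        # product quality: defects logged this release
    lines_of_code = 58_000    # size figure used to normalize defect density
    tests_run = 310           # test status: executed test cases
    tests_passed = 287
    tests_blocked = 6

    defect_density = defects_found / (lines_of_code / 1000)  # defects per KLOC
    pass_rate = tests_passed / tests_run
    blocked_rate = tests_blocked / tests_run

    print(f"Defect density: {defect_density:.2f} defects/KLOC")
    print(f"Pass rate:      {pass_rate:.1%}")
    print(f"Blocked rate:   {blocked_rate:.1%}")

The point is not the arithmetic but the scope: every figure describes the application as a whole rather than any lower level of the stack.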

As we embrace agile, our measurement and metrics strategy shifts to lower levels of the application or system under test. For example:

  • Measuring code complexity can help us understand areas of the code that have higher risk or are candidates for refactoring (see the sketch that follows this list)
  • Reporting on code coverage provides insight into what areas of the code have been exercised
  • Running static code analysis tells us whether the code adheres to established coding guidelines and standards and flags risky code constructs and potential vulnerabilities
  • The percentage of unit, service-layer, and story tests that are automated, along with the amount of technical debt accumulated within those automated tests, helps us understand the degree and quality of our automation
  • Mapping unit and service layer tests to stories provides traceability information
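
To make the first bullet above a little more concrete, here is a minimal sketch of what such a lower-level measurement could look like. It is an illustration under stated assumptions rather than a real analysis tool: it computes only a rough complexity proxy from Python source using the standard library's ast module, and the threshold used to flag refactoring candidates is arbitrary.

    # A minimal sketch of one lower-level measurement: a rough cyclomatic-complexity
    # proxy for Python code, built only on the standard library.
    # The decision-node list and the threshold of 10 are illustrative assumptions.
    import ast
    import sys
    from pathlib import Path

    # Node types that add a decision point to the rough complexity score.
    DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)

    def rough_complexity(func_node: ast.AST) -> int:
        """Start at 1 (like cyclomatic complexity) and add one per decision point."""
        return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(func_node))

    def report(source_dir: str, threshold: int = 10) -> None:
        """Print functions whose rough score exceeds the assumed refactoring threshold."""
        for path in Path(source_dir).rglob("*.py"):
            tree = ast.parse(path.read_text(encoding="utf-8"))
            for node in ast.walk(tree):
                if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    score = rough_complexity(node)
                    if score > threshold:
                        print(f"{path}:{node.lineno} {node.name} score={score}")

    if __name__ == "__main__":
        report(sys.argv[1] if len(sys.argv) > 1 else ".")

Run against a source tree (for example, python rough_complexity.py src/, where the script name is simply what we chose here), it lists functions whose score suggests higher risk or refactoring candidates. In practice a team would more likely lean on an established complexity or static analysis tool and watch how these numbers trend over time.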

Tracking the trends in these measures helps determine whether further testing is required and which specific areas of the system may need it. None of the above methods is sufficient individually, but taken together, they can provide useful guidance on areas of remaining risk. While we may still choose to retain some of the traditional measures, a strategy of driving our measures to lower levels is helpful in an agile culture.
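
As one final, hedged illustration, the snippet below sketches how such trend watching might be automated; everything in it (the component names, the coverage history, and the three-point drop rule) is made up for the example.

    # A toy sketch of trend tracking: code coverage per component over recent sprints.
    # Component names, percentages, and the drop rule are all assumptions.
    coverage_by_sprint = {
        "payments":  [78, 74, 71],   # percent coverage in the three most recent sprints
        "search":    [65, 66, 69],
        "reporting": [82, 82, 81],
    }

    for component, history in coverage_by_sprint.items():
        non_increasing = all(b <= a for a, b in zip(history, history[1:]))
        if non_increasing and history[0] - history[-1] >= 3:
            print(f"{component}: coverage falling ({history[0]}% -> {history[-1]}%); consider more testing here")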
