How to Improve Software Testing Through Measurement

There is a saying, “What you measure is what you get.” Yet another way to look at measurement is “What you measure is what you can improve.” In this blog, we’ll look at how measurement is used for multiple purposes in software testing.

What is software testing measurement?

Typical software testing measurement focuses on managing testing activities and defect resolution during a software project, whether the methodology is Agile, iterative or waterfall. Test execution metrics may include number of new test cases created, volume of tests executed, and pass/fail rates.

For defects, typical counts are provided for the number of defects opened, the number of defects resolved, and distributions of the number of defects by state or severity. Beyond measuring to manage, measurement should be used to baseline software quality and to drive improvement.

What can we learn from defects?

Defects provide a wealth of insight into software quality and software development activities. Production defects are a measurement of the software’s quality as experienced by the users of the software.

When the number of defects is stated relative to the size of the software (in terms of functional size or lines of code), a simple quality ratio can be calculated by dividing the number of defects recorded during a release by the software size.

This allows the software quality of an application to be compared to the software quality of other applications in the company’s portfolio. Trends in an application’s quality (i.e. defect rate) can be tracked over time, release to release, to answer the question “Is quality getting better or worse?” Defects can be weighted by severity to factor in their impact on the end-user, giving a more user-centric perspective into software quality.
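The quality ratio and its severity-weighted variant can be sketched as below. This is a minimal illustration with hypothetical counts and weights; the severity categories and weight values are assumptions a team would calibrate to its own defect scheme, and "size" is whatever sizing convention the team uses (function points or KLOC).

```python
# Sketch of the defect-density quality ratio described above,
# using hypothetical release data.

def defect_density(defect_count: int, software_size: float) -> float:
    """Defects recorded during a release divided by software size
    (e.g. function points or KLOC)."""
    return defect_count / software_size

# Hypothetical severity weights (higher = greater end-user impact).
SEVERITY_WEIGHTS = {"critical": 10, "high": 5, "medium": 2, "low": 1}

def weighted_defect_density(defects_by_severity: dict, software_size: float) -> float:
    """Severity-weighted variant: each defect counts per its weight,
    giving a more user-centric view of quality."""
    weighted = sum(SEVERITY_WEIGHTS[sev] * n for sev, n in defects_by_severity.items())
    return weighted / software_size

# Example: 24 defects against 400 function points.
print(defect_density(24, 400))  # 0.06 defects per function point
print(weighted_defect_density({"critical": 2, "high": 6, "medium": 10, "low": 6}, 400))
```

Tracking this ratio release over release, for each application in the portfolio, supports the "better or worse?" comparison described above.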

Root Cause Analysis

In addition to measuring application quality, defects can pinpoint gaps in software delivery activities through root cause analysis (RCA). The purpose of RCA is to understand trends in common causes of defects, so improvements target eradicating these causes to prevent future defects. Why repeat the same mistake twice?

When a production defect occurs, there are two gaps. First, there is a gap in the software development activities that allowed the defect to be created in the first place. Second, there is a gap in the testing activities that allowed the defect to go undetected. The causes of both gaps can be identified by RCA.

Root cause analysis should be performed on both production defects and pre-production (known) defects that are deferred for resolution and become part of the technical debt backlog. A simple root cause category scheme should be used with 5 to 7 unique causes, such as requirement, code, data, configuration, etc. The number (or percentage) of defects by root cause can be reported as a metric. Improvements are then focused on the root cause with the highest percentage.
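The root-cause metric above can be sketched as a simple tally and percentage report. The category names here are illustrative examples of the 5-to-7-cause scheme, not a prescribed taxonomy.

```python
from collections import Counter

# Sketch: tally defects by a simple root-cause scheme and report
# the percentage per cause, highest first. Categories are illustrative.

def root_cause_report(defect_causes: list) -> dict:
    """Return percentage of defects per root cause, sorted by frequency."""
    counts = Counter(defect_causes)
    total = len(defect_causes)
    return {cause: round(100 * n / total, 1)
            for cause, n in counts.most_common()}

causes = (["requirement"] * 9 + ["code"] * 6 + ["data"] * 3 +
          ["configuration"] * 2)
print(root_cause_report(causes))
# {'requirement': 45.0, 'code': 30.0, 'data': 15.0, 'configuration': 10.0}
```

With the report sorted by frequency, the first entry is the root cause with the highest percentage, where improvement effort should be focused.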

How to improve testing efficiency

Measurement can also be used to quantify improvement in testing effectiveness and efficiency. Testing efficiency can be improved by ‘shifting left’ test activities earlier in the development cycle through more unit testing and API/services testing and less UI testing.

To measure the value of ‘shifting left’, the number (or percentage) of defects discovered by each type of test can be tracked. The goal is to find a greater number of defects through unit and API/services tests, which are executed earlier in the process; measurement determines if ‘shift left’ is being realized. Test automation is another way to improve testing efficiency.
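A sketch of the shift-left measurement described above: track what share of defects each type of test finds in a release. The counts are hypothetical; a rising unit and API/services share across successive releases is the signal that 'shift left' is being realized.

```python
# Sketch: percentage of defects discovered by each type of test,
# using hypothetical counts from one release.

def defect_share_by_test_type(defects_by_type: dict) -> dict:
    """Return the percentage of all defects found by each test type."""
    total = sum(defects_by_type.values())
    return {t: round(100 * n / total, 1) for t, n in defects_by_type.items()}

release = {"unit": 18, "api_services": 12, "ui": 10}
print(defect_share_by_test_type(release))
# {'unit': 45.0, 'api_services': 30.0, 'ui': 25.0}
```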

Measurement can be used to help quantify the movement toward more automation and less manual testing. Automation coverage can be measured by the number of automated tests compared to the total number of tests (both manual and automated).

The effectiveness of automation can be measured by the number of defects found by automation versus defects found by manual tests. An automated test suite that discovers very few defects may have become stale (i.e. testing features that never change) and need to be refreshed with new tests.
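The two automation metrics described above can be sketched together. The counts are hypothetical; what matters is the trend, with coverage rising over time and effectiveness staying well above zero.

```python
# Sketch of the two automation metrics above, with hypothetical counts.

def automation_coverage(automated_tests: int, manual_tests: int) -> float:
    """Share of the total test suite that is automated."""
    return automated_tests / (automated_tests + manual_tests)

def automation_effectiveness(defects_auto: int, defects_manual: int) -> float:
    """Share of all discovered defects found by automation. A very low
    value may signal a stale suite testing features that never change."""
    total = defects_auto + defects_manual
    return defects_auto / total if total else 0.0

print(automation_coverage(300, 200))    # 0.6 -> 60% of tests automated
print(automation_effectiveness(5, 45))  # 0.1 -> automation finds 10% of defects
```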

“What you measure is what you get”

Measuring provides focus and visibility, which impacts people’s behavior. If a team is measured on meeting deadlines, it will make project dates while growing the backlog of deferred defects (technical debt). If you measure productivity on the number of test cases prepared, you may get a test case for each data condition instead of one data-driven test case where conditions are defined by the data.

Yet, as noted at the beginning of this post, another way to look at measurement is “What you measure is what you can improve.” Measurement can highlight gaps for improvement and be the evidence of goals achieved.

Nancy Kastl

Nancy Kastl is Director of the Testing Services Practice at SPR Consulting. She is an evangelist for the software quality and testing profession, serving as President of the Chicago Quality Assurance Association (CQAA), instructor for the QAI Global Institute, and conference chairperson for the North America Quality Engineered Software and Testing (QUEST) conference.