A SHORT HISTORY OF THE COST PER DEFECT METRIC
May 5, 2009
The oldest metric for software quality economic study is that of “cost per defect.” While there may be earlier uses, the metric was certainly used within IBM by the late 1960s for software, and probably as early as the 1950s for hardware.
As commonly calculated, the cost-per-defect metric takes the hours associated with defect repairs, multiplies them by a burdened cost per hour, and divides the result by the number of defects repaired.
The cost-per-defect metric has developed into an urban legend, with hundreds of assertions in the literature that early defect detection and removal is cheaper than late defect detection and removal by more than 10 to 1. This is true mathematically, but there is a problem with cost-per-defect calculations that will be discussed in this article. As will be shown, cost per defect is always cheapest where the greatest numbers of defects are found. As quality improves, cost per defect rises until zero defects are encountered, at which point the metric goes to infinity.
More importantly, the cost-per-defect metric tends to ignore the major economic value of improved quality: shorter development schedules and reduced development costs outside of explicit defect repairs.
Capers Jones, President, Capers Jones & Associates LLC
Copyright © 2009 by Capers Jones & Associates LLC. All rights reserved.
The cost-per-defect metric has been in continuous use since the 1970s for examining the economic value of software quality. Hundreds of journal articles and scores of books include stock phrases such as “it costs 100 times as much to fix a defect after release as during early development.”
Typical data for cost per defect varies from study to study but resembles the following pattern circa 2009:
Defects found during requirements =
Defects found during design =
Defects found during coding and testing =
Defects found after release =
While such claims are often true mathematically, there are three hidden problems with cost per defect that are usually not discussed in the software literature:
1. Cost per defect penalizes quality and is always cheapest where the greatest numbers of bugs are found.
2. Because more bugs are found at the beginning of development than at the end, the increase in cost per defect is artificial. Actual time and motion studies of defect repairs show little variance from end to end.
3. Even if calculated correctly, cost per defect does not measure the true economic value of improved software quality. Over and above the costs of finding and fixing bugs, high quality leads to shorter development schedules and overall reductions in development costs. These savings are not included in cost per defect calculations, so the metric understates the true value of quality by several hundred percent.
Let us consider these problem areas using examples that illustrate the main points.
Why Cost per Defect Penalizes Quality
The well-known and widely cited “cost per defect measure” unfortunately violates the canons of standard economics. Although this metric is often used to make quality economic claims, its main failing is that it penalizes quality and achieves the best results for the buggiest applications!
Furthermore, when zero-defect applications are reached there are still substantial appraisal and testing activities that need to be accounted for. Obviously, the “cost per defect” metric is useless for zero-defect applications.
As with KLOC metrics discussed in another paper, the main source of error is that of ignoring fixed costs. Three examples will illustrate how “cost per defect” behaves as quality improves.
In all three cases, A, B, and C, we can assume that test personnel work 40 hours per week and are...
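The fixed-cost argument above can be sketched numerically. The figures below (burdened rate, fixed test-preparation hours, repair hours per defect) are illustrative assumptions, not data from the article; the point is only the shape of the curve: because appraisal costs are fixed while repair effort scales with defect volume, cost per defect rises as quality improves, and the metric diverges at zero defects.

```python
# Illustrative sketch (assumed figures, not the article's data): how fixed
# appraisal costs make "cost per defect" rise as quality improves.

BURDENED_RATE = 75.0            # assumed burdened cost per hour (USD)
FIXED_TEST_HOURS = 400.0        # test preparation + execution; fixed per release
REPAIR_HOURS_PER_DEFECT = 5.0   # repair effort; roughly flat per time-and-motion studies

def cost_per_defect(defects_found: int) -> float:
    """Total appraisal-plus-repair cost divided by defects found."""
    if defects_found == 0:
        return float("inf")     # the metric breaks down at zero defects
    total_hours = FIXED_TEST_HOURS + REPAIR_HOURS_PER_DEFECT * defects_found
    return total_hours * BURDENED_RATE / defects_found

for n in (500, 50, 5, 0):
    print(f"{n:>3} defects -> ${cost_per_defect(n):,.2f} per defect")
# 500 defects -> $435.00 per defect
#  50 defects -> $975.00 per defect
#   5 defects -> $6,375.00 per defect
#   0 defects -> $inf per defect
```

Note that total cost actually falls as quality improves ($217,500 down to $31,875 in this sketch), yet cost per defect climbs steeply, which is precisely why the metric penalizes quality.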