Over the last decades, a considerable amount of empirical knowledge about the efficiency of defect-detection techniques has been accumulated. A few surveys have also summarized those studies with different focuses, usually for a specific type of technique. This work reviews the results of empirical studies and associates them with a model of software quality economics. This allows a better comparison of the different techniques and supports the application of the model in practice, as several parameters can be approximated with typical average values. The main contributions are the provision of average values of several interesting quantities w.r.t. defect detection and the identification of areas that need further research because of the limited knowledge available.
The economics of software quality assurance (SQA) are a highly relevant topic in practice. Many estimates assign about half of the total development costs of software to SQA, of which defect-detection techniques, i.e., analytical SQA, constitute the major part. Moreover, an understanding of the economics is essential for project management to answer the question of how much quality assurance is enough. For example, Rai, Song, and Troutt state that a better understanding of the costs and benefits should be useful to decision-makers. However, the relationships regarding those costs and benefits are often complicated and the data is difficult to obtain. The costs for analytical SQA are significant. Many estimates say that analytical SQA constitutes about 50% of the total development costs. This figure is attributed to Myers . Jones  still assigns 30–40% of the development costs to quality assurance and defect removal. In a study from 2002, the National Institute of Standards and Technology of the United States  assigns even 80% of the development costs to the detection and removal of defects. There is a huge opportunity for cost savings in this area, and that is why we focus on analytical techniques, also called defect-detection techniques, in the following. A further point of view is the distribution of defects over the components of software. It is often observed that this distribution follows a Pareto principle, with 20% of the components being responsible for 80% of the defects [4, 5]. This suggests that SQA should not be uniformly distributed over the components but be concentrated on the components that contain the most defects.

Problem
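The Pareto observation above can be checked on any project that records defects per component. The following sketch, using purely hypothetical defect counts, computes which share of the defects the top 20% of components account for:

```python
# Hypothetical per-component defect counts (illustrative only; real
# numbers would come from a project's defect-tracking data).
defects = [120, 95, 60, 15, 12, 8, 6, 4, 3, 2, 1, 1, 1, 1, 1]

total = sum(defects)
counts = sorted(defects, reverse=True)

# Take the top 20% of components (at least one) and compute their
# share of all recorded defects.
k = max(1, round(0.2 * len(counts)))
share = sum(counts[:k]) / total
print(f"Top 20% of components account for {share:.0%} of defects")
```

With these invented counts the top 3 of 15 components hold 275 of 330 defects, i.e., roughly 83%, in line with the 80/20 observation cited in the text.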
The main practical problem is how we can optimally use defect-detection techniques to improve the quality of software. Hence, the two main issues are in which order and with what effort the techniques should be used. This paper concentrates on the subproblem that collecting all the relevant data for a well-founded answer to these questions is not always possible.
Because of that, Rai et al. identify in  mathematical models of the economics of software quality assurance as an important research area: “A better understanding of the costs and benefits of SQA and improvements to existing quantitative models should be useful to decision-makers.” The question of how to distribute the effort over the components is also very hard to answer, as we cannot know beforehand which components contain the most defects. However, there are various approaches that try to predict the fault-proneness of components or classes based on several metrics. Hence, an approach that helps to distribute the optimal effort calculated using the cost model over the components would be helpful.

Contribution
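One simple way to connect the two ideas, distributing a total QA effort derived from a cost model according to predicted fault-proneness, is proportional allocation. The sketch below is a minimal illustration; the component names, scores, and budget are hypothetical, and in practice the scores would come from a metrics-based prediction model:

```python
def distribute_effort(scores, total_effort):
    """Allocate a total QA effort budget over components in
    proportion to their predicted fault-proneness scores."""
    total_score = sum(scores.values())
    return {c: total_effort * s / total_score for c, s in scores.items()}

# Hypothetical fault-proneness scores for three components.
scores = {"parser": 0.7, "ui": 0.2, "logging": 0.1}
allocation = distribute_effort(scores, total_effort=100.0)
print(allocation)  # parser receives 70.0 units, ui 20.0, logging 10.0
```

This is, of course, only one possible allocation rule; a full cost model would also weigh the cost of applying each technique against the expected benefit per component.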
Reliability is one of the major factors that affect users and customers and is therefore of particular importance. Nevertheless, this does not mean that other quality aspects are totally ignored, as a complete separation is not possible. For example, the effort for corrective maintenance also...