Software reliability modeling has, surprisingly to many, been around since the early 1970s with the pioneering works of Jelinski and Moranda (1972), Shooman (1972, 1973, 1976, 1977), and Coutinho (1973). We present the theory behind software reliability and describe some of the major models that have appeared in the literature from both historical and applications perspectives. Emerging techniques in the software reliability research field are also included. We address the following four key components in software reliability theory and modeling: historical background, theory, modeling, and emerging techniques. These topics are introduced briefly below.
1. Historical Background
1.1. Basic Definitions
Software reliability is centered on a very important software attribute: reliability. Software reliability is defined as the probability of failure-free software operation for a specified period of time in a specified environment (ANSI, 1991). We note the three major ingredients in this definition: failure, time, and operational environment. We now define these terms and other related software reliability terminology.
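The definition above can be made concrete with a small numerical sketch. Under the common (but simplifying) assumption of a constant failure rate λ, the probability of failure-free operation over a mission time t is R(t) = exp(−λt); the function name and parameter values below are illustrative, not from the source.

```python
import math

def reliability(lam: float, t: float) -> float:
    """Probability of failure-free operation for time t,
    assuming a constant failure rate lam (exponential model)."""
    return math.exp(-lam * t)

# e.g., a failure rate of 0.001 failures/hour over a 100-hour mission
print(round(reliability(0.001, 100.0), 4))  # 0.9048
```

Note that real software rarely has a constant failure rate; the reliability-growth models discussed later let λ change as faults are removed.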
A failure occurs when the user perceives that a software program ceases to deliver the expected service.
The user may choose to identify several severity levels of failures, such as catastrophic, major, and minor, depending on their impact on system service and on the consequences that the loss of a particular service can cause, such as dollar cost, human life, and property damage. The definitions of these severity levels vary from system to system.
A fault is uncovered when either a failure of the program occurs, or an internal error (e.g., an incorrect state) is detected within the program. The cause of the failure or the internal error is said to be a fault. It is also referred to as a “bug.”
In most cases the fault can be identified and removed; in other cases it remains a hypothesis that cannot be adequately verified (e.g., timing faults in distributed systems).
In summary, a software failure is an incorrect result with respect to the specification, or unexpected software behavior perceived by the user at the boundary of the software system, while a software fault is the identified or hypothesized cause of the software failure.
When the distinction between fault and failure is not critical, “defect” can be used as a generic term to refer to either a fault (cause) or a failure (effect). Chillarege and co-workers (1992) provided a complete and practical classification of software defects from various perspectives.
The term “error” has two different meanings:
1. A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition. Errors occur when some part of the computer software produces an undesired state. Examples include exceptional conditions raised by the activation of existing software faults, and incorrect computer status due to an unexpected external interference. This term is especially useful in fault-tolerant computing to describe an intermediate stage between faults and failures. ...
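The fault–error–failure chain described above can be illustrated with a deliberately buggy snippet. All names here are hypothetical and chosen only to show how a fault in the code produces an error (an incorrect internal state) that propagates to a failure visible to the user.

```python
def average(values):
    """Intended to return the arithmetic mean of a non-empty list."""
    total = sum(values)       # correct intermediate computation
    count = len(values) - 1   # FAULT: off-by-one in the divisor
    # ERROR: count now holds an incorrect internal state (2 instead of 3
    # for a three-element list)
    return total / count      # FAILURE: the user observes a wrong mean

# The correct average of [2, 4, 6] is 4.0, but the program delivers 6.0
print(average([2, 4, 6]))  # prints 6.0
```

Here the fault is the off-by-one in `count`, the error is the resulting incorrect state, and the failure is the wrong result at the program's boundary; a fault-tolerant design would aim to detect the error before it becomes a failure.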