26-07-2012, 04:52 PM
System reliability modeling and simulation techniques
Software reliability
Software reliability is a special aspect of reliability engineering. System reliability, by definition, includes all parts of the system: hardware, software, supporting infrastructure (including critical external interfaces), operators and procedures. Traditionally, reliability engineering focuses on critical hardware parts of the system. Since the widespread adoption of digital integrated-circuit technology, software has become an increasingly critical part of most electronics and hence of nearly all present-day systems. There are significant differences, however, in how software and hardware behave. Most hardware unreliability is the result of a component or material failure that leaves the system unable to perform its intended function; repairing or replacing the hardware component restores the system to its original operating state. Software does not fail in the same sense. Instead, software unreliability is the result of unanticipated outcomes of software operations. Even relatively small software programs can have astronomically large numbers of input and state combinations that are infeasible to test exhaustively, and restoring software to its original state only works until the same combination of inputs and states produces the same unintended result. Software reliability engineering must take this into account.
Software reliability depends on good requirements, design and implementation. Software reliability engineering relies heavily on a disciplined software engineering process to anticipate and design against unintended consequences. There is more overlap between software quality engineering and software reliability engineering than between hardware quality and reliability. A good software development plan is a key aspect of the software reliability program. The software development plan describes the design and coding standards, peer reviews, unit tests, configuration management, software metrics and software models to be used during software development.
Software reliability models
A proliferation of software reliability models has emerged as people try to understand how and why software fails and try to quantify software reliability. More than 200 models have been developed since the early 1970s, yet how to quantify software reliability remains largely unsolved. Despite the many models available, and more still emerging, no model can capture a satisfying amount of the complexity of software; constraints and assumptions have to be made for quantification. Therefore there is no single model that can be used in all situations, and no model is complete or even representative. A model may work well for one class of software yet be completely off track for other kinds of problems.
Most software reliability models contain the following parts: assumptions, factors, and a mathematical function that relates reliability to those factors. The mathematical function is usually a higher-order exponential or logarithmic function.
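As a concrete illustration of such an exponential function, one widely cited example is the Goel-Okumoto mean-value function, mu(t) = a(1 - e^(-bt)), where a is the expected total number of failures and b the per-fault detection rate. The sketch below evaluates it and derives a reliability figure; the parameter values in the test are purely hypothetical, not taken from any real project.

```python
import math

def mean_failures(t, a, b):
    """Goel-Okumoto mean-value function: expected cumulative
    failures observed by test time t, for a total fault content a
    and per-fault detection rate b."""
    return a * (1.0 - math.exp(-b * t))

def reliability(t, x, a, b):
    """Probability of observing no failure in the interval (t, t+x],
    under the model's Poisson-process assumption: exp(-(mu(t+x) - mu(t)))."""
    return math.exp(-(mean_failures(t + x, a, b) - mean_failures(t, a, b)))
```

Because mu(t) flattens as t grows, the model predicts that reliability over a fixed future interval improves the longer testing (and fault removal) has gone on.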
Software modelling techniques can be divided into two subcategories: prediction modelling and estimation modelling. Both kinds of modelling techniques are based on observing and accumulating failure data and analysing them with statistical inference. The major difference between the two model types is shown in the table below:
ISSUE | PREDICTION MODELS | ESTIMATION MODELS
Data reference | Uses historical data | Uses data from the current software development effort
When used in development cycle | Usually made prior to the development or test phases; can be used as early as the concept phase | Usually made later in the life cycle (after some data have been collected); not typically used in the concept or development phases
Time frame | Predicts reliability at some future time | Estimates reliability at either the present or some future time
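An estimation model in the sense of the table fits its parameters to failure data from the current effort. The sketch below fits the exponential mean-value function to a purely hypothetical series of weekly cumulative failure counts using a crude least-squares grid search (a real tool would use maximum-likelihood estimation); all numbers are illustrative assumptions.

```python
import math

# Hypothetical cumulative failure counts at the end of test weeks 1..6
observed = [12, 21, 28, 33, 37, 40]

def mu(t, a, b):
    """Exponential mean-value function: expected cumulative failures by week t."""
    return a * (1.0 - math.exp(-b * t))

def sse(a, b):
    """Sum of squared errors between the model and the observed counts."""
    return sum((mu(t, a, b) - n) ** 2 for t, n in enumerate(observed, start=1))

# Crude grid search over plausible parameter ranges
best = min(((sse(a, b / 100.0), a, b / 100.0)
            for a in range(40, 80)         # candidate total fault content
            for b in range(5, 60)),        # candidate detection rate * 100
           key=lambda x: x[0])
_, a_hat, b_hat = best

# a_hat - observed[-1] is then the estimated number of faults still latent
latent = a_hat - observed[-1]
```

With parameters estimated from current data, the same fitted function can be extrapolated forward, which is exactly the "present or some future time" usage the table attributes to estimation models.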
Software reliability simulation
A simulation model describes a system being characterized in terms of its artefacts, events, interrelationships, and interactions in such a way that one may perform experiments on the model, rather than on the system itself, ideally with indistinguishable results.
Simulation presents a particularly attractive computational alternative for investigating software reliability because it averts the need for overly restrictive assumptions and because it can model a wider range of reliability phenomena than mathematical analyses can cope with. Simulation does not require that test coverage be uniform, that a particular fault-to-failure relationship exist, or that failures occur independently, if these are not actually the case.
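To make this concrete, the following minimal Monte Carlo sketch simulates a test campaign over a hypothetical fault population in which each fault has its own activation rate, so the resulting failure process is neither homogeneous nor tied to any closed-form model; the rates and horizon are invented for illustration only.

```python
import random

random.seed(1)

# Hypothetical fault population: each fault has its own exposure
# (activation) rate per unit of test time.
fault_rates = [0.5, 0.2, 0.1, 0.05, 0.02]

def simulate_run(horizon, rates):
    """One simulated test campaign: each remaining fault would fire after
    an exponential activation time; the earliest-firing fault causes the
    next failure and is then repaired (removed from the population)."""
    remaining = list(rates)
    t, failures = 0.0, []
    while remaining:
        # Draw a candidate activation time for every remaining fault;
        # by memorylessness this is valid to redo after each repair.
        waits = [random.expovariate(r) for r in remaining]
        i = min(range(len(waits)), key=waits.__getitem__)
        t += waits[i]
        if t > horizon:
            break
        failures.append(t)
        remaining.pop(i)  # the triggered fault is fixed
    return failures

# Average number of failures seen in the first 20 time units,
# over many replications
runs = [simulate_run(20.0, fault_rates) for _ in range(2000)]
avg_failures = sum(len(f) for f in runs) / len(runs)
```

Because each fault's rate, the repair policy, or even dependence between faults can be changed directly in the event loop, the simulation explores scenarios that an analytical model would have to assume away.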
But power and generality are ineffective where ignorance reigns. Scientific philosophy directs us to seek the simplest models that explain poorly understood phenomena. For example, when we do not understand how fault attributes relate to consequent failures, we may as well simplify the model by assuming that faults produce independent failures, at least until our experiments prove otherwise.
But objective validation of even a simple reliability model may be problematic, because controlled experiments, while easy to simulate, are impossible to conduct in practice. However, if we can build an overall model upon simple and plausible submodels that integrate cleanly to simulate the phenomenon under study, then we may gain some aggregate trust from the combined levels of confidence we have in the constituent submodels.