• Hermann Kopetz
Part of the Real-Time Systems Series book series (RTSS)


This chapter deals with assessment technologies. These technologies must convince a designer, user, or certification authority that the developed computer system is safe to deploy and will fulfill its intended function in the planned real-world environment. In Sect. 12.1 we elaborate on the differences between validation and verification. Validation deals with the consistency between the informal model of the user’s intention and the behavior of the system-under-test (SUT), while verification deals with the consistency between a given (formal) specification and the SUT. The missing link between validation and verification is formed by errors in the specification. The following section deals with the challenges of testing, the preferred validation technique. At the core of testing are the interference-free observability of results and the controllability of the inputs. Design for testability provides a framework that supports these characteristics. In most cases, only a tiny fraction of the input space can be examined by test cases. The proper selection of test cases should justify the assumption that, given the results of the test cases are correct, the system will operate correctly across the entire input domain. In digital systems the validity of such an induction is doubtful, since digital inputs are not continuous but discrete; a single bit-flip can make a correct result erroneous. The decision whether the result of a test input is correct is delegated to a test oracle. The automation of test oracles is another challenge in the domain of testing. Model-based design, where a model of the plant and a model of the computer controller are interconnected to study the performance of closed-loop control systems, is a promising route towards the automation of the test oracle. Given that a complete formal model of a design is available, formal methods can be deployed to check whether selected properties hold in all possible states of the model.
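To illustrate how model-based design can automate the test oracle, the following Python sketch closes the loop between a reference plant model and a controller under test, and derives the pass/fail verdict from the plant model rather than from a hand-computed expected value per test input. The plant dynamics, controller gain, and tolerance are assumptions made for this example, not taken from the chapter.

```python
# Sketch of a model-based test oracle. The plant model, controller, gain,
# and tolerance below are illustrative assumptions, not from the chapter.

def plant_model(state, u):
    """Reference model of the plant: a discrete-time first-order lag."""
    return 0.9 * state + 0.1 * u

def controller_under_test(setpoint, measurement):
    """Stand-in for the SUT: a proportional controller."""
    return 10.0 * (setpoint - measurement)

def oracle(setpoint, steps=50, tolerance=0.1):
    """Model-based oracle: run the closed loop against the plant model and
    judge the SUT by whether the loop settles near the setpoint, instead of
    hand-computing an expected output for every test input."""
    state = 0.0
    for _ in range(steps):
        u = controller_under_test(setpoint, state)
        state = plant_model(state, u)
    # Proportional control leaves a small steady-state error, which the
    # tolerance band deliberately admits.
    return abs(state - setpoint) <= tolerance * abs(setpoint)

print(oracle(10.0))  # True: the loop settles within 10% of the setpoint
```

Because the verdict is computed from the plant model, the same oracle can judge any test input in the controller's operating range, which is what makes large automated test campaigns feasible.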
In the last few years, the technique of model checking has matured to the point that it can handle systems of industrial size. The correct operation of the fault-masking mechanisms of a fault-tolerant system can only be assessed if the input space is extended to include the faults the system is supposed to tolerate. The last section covers the topics of physical fault injection and software-based fault injection. Since any physical sensor or actuator will eventually fail, fault-injection campaigns must establish the safe operation of a system even when any particular sensor or actuator has failed.
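To make the idea of software-implemented fault injection concrete, the sketch below injects single bit-flips into one replica of a triple-modular-redundant (TMR) computation and checks that the majority voter masks every injected fault, thereby extending the tested input space to include faults. The voter and the campaign are hypothetical illustrations, not the mechanisms described in the chapter.

```python
# Sketch of software-implemented fault injection (illustrative names,
# not from the chapter): exercise a TMR majority voter with bit-flips.

def voter(a, b, c):
    """Majority voter over three replicated integer results."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise RuntimeError("no majority: fault not masked")

def inject_bit_flip(value, bit):
    """Inject a transient fault by flipping one bit of an integer result."""
    return value ^ (1 << bit)

def fault_injection_campaign(correct=42, bits=range(16)):
    """Verify that the voter masks any single bit-flip in one replica."""
    for bit in bits:
        faulty = inject_bit_flip(correct, bit)
        # One replica is corrupted; the two healthy replicas must outvote it.
        assert voter(faulty, correct, correct) == correct
    return True

print(fault_injection_campaign())  # True: every single bit-flip was masked
```

The same structure scales to campaigns over double faults or faults in the voter itself, where masking is no longer guaranteed and the campaign instead documents the failure behavior.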





Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  Department of Computer Engineering, Real-Time Systems Group, Vienna University of Technology, Vienna, Austria
