Runtime Verification for Ultra-Critical Systems

  • Lee Pike
  • Sebastian Niller
  • Nis Wegmann
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7186)


Abstract

Runtime verification (RV) is a natural fit for ultra-critical systems, where correctness is imperative. In such systems, even fault-free software cannot be trusted in isolation: commodity hardware is inherently unreliable and operational environments are harsh, so processing units (and their hosted software) are replicated and fault-tolerant algorithms compare their outputs. We investigate both software monitoring in distributed fault-tolerant systems and the implementation of fault-tolerance mechanisms using RV techniques. We describe the Copilot language and compiler, designed specifically for generating monitors for distributed, hard real-time systems, and we present a case study of a Byzantine fault-tolerant airspeed sensor system.
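To make the abstract's "fault-tolerant algorithms compare the outputs" concrete, the following is a minimal sketch in C (not code from the paper, and not Copilot's generated output) of majority voting over replicated sensor readings, using the linear-time, constant-space Boyer-Moore majority-vote algorithm. Bounded time and space are exactly the properties a hard real-time monitor needs. The function name `majority_vote` is illustrative, not from the paper.

```c
#include <stddef.h>

/* Illustrative sketch: vote over n replicated sensor readings.
 * Returns 1 and writes the majority value to *out if a strict
 * majority exists; returns 0 otherwise. O(n) time, O(1) space. */
int majority_vote(const int *readings, size_t n, int *out)
{
    size_t i, count = 0, occurrences = 0;
    int candidate = 0;

    /* Pass 1: find the only possible majority candidate. */
    for (i = 0; i < n; i++) {
        if (count == 0) {
            candidate = readings[i];
            count = 1;
        } else if (readings[i] == candidate) {
            count++;
        } else {
            count--;
        }
    }

    /* Pass 2: confirm the candidate is a strict majority;
     * without this check, a faulty minority could win. */
    for (i = 0; i < n; i++)
        if (readings[i] == candidate)
            occurrences++;

    if (2 * occurrences > n) {
        *out = candidate;
        return 1;
    }
    return 0;
}
```

The second pass is essential in a fault-tolerant setting: pass 1 alone yields an arbitrary value when no majority exists, and a monitor must be able to report "no agreement" rather than silently trust a faulty channel.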


Keywords: Linear Temporal Logic, Processing Node, Pitot Tube, Hardware Fault, Host Language





Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Lee Pike (Galois, Inc., USA)
  • Sebastian Niller (National Institute of Aerospace, USA)
  • Nis Wegmann (University of Copenhagen, Denmark)
