Testing the IPC Protocol for a Real-Time Operating System

  • Achim D. Brucker
  • Oto Havle
  • Yakoub Nemouchi
  • Burkhart Wolff
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9593)


In this paper, we adapt model-based testing techniques to concurrent code, namely to test generation for an (industrial) OS kernel called PikeOS. Since our data models are complex, the problem is out of reach of conventional model-checking techniques. Our solution is based on symbolic execution implemented inside the interactive theorem proving environment Isabelle/HOL, extended by a plugin with test-generation facilities called HOL-TestGen.

As a foundation for our symbolic computing techniques, we refine the theory of monads to embed interleaving executions with abort, synchronization, and shared memory into a general but still optimized behavioral test framework.
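The monadic framework itself is not spelled out in this abstract. As a rough illustration of the underlying idea only, here is a minimal sketch in Python (the paper's actual framework is formalized in Isabelle/HOL; `run`, `incr`, `guard`, and the error code `E_STATE` are our hypothetical names): atomic actions over a shared state that may abort with an error code, together with an enumeration of all interleavings of two action sequences.

```python
def interleavings(xs, ys):
    """All order-preserving merges of two sequences of atomic actions."""
    if not xs:
        return [list(ys)]
    if not ys:
        return [list(xs)]
    return ([[xs[0]] + rest for rest in interleavings(xs[1:], ys)]
            + [[ys[0]] + rest for rest in interleavings(xs, ys[1:])])

def run(actions, state):
    """Run atomic actions over a shared state; each action yields
    ('ok', new_state) or ('error', code), and an error aborts the run."""
    for act in actions:
        tag, val = act(state)
        if tag == 'error':
            return ('error', val)
        state = val
    return ('ok', state)

# Two illustrative atomic actions on a shared counter (hypothetical):
def incr(st):
    return ('ok', {**st, 'x': st['x'] + 1})

def guard(st):                      # aborts once the counter exceeds 1
    return ('error', 'E_STATE') if st['x'] > 1 else ('ok', st)
```

Enumerating `interleavings([incr, incr], [guard])` yields three schedules; running each from `{'x': 0}` shows that exactly one of them aborts. Such schedule-dependent failures are precisely what a behavioral test framework for concurrent code must cover.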

This framework is instantiated with a model of the PikeOS inter-process communication (IPC) system calls. Inheriting a micro-architecture going back to the L4 kernel, the system calls of the IPC API are internally structured into atomic actions; according to a security model, these actions can fail and must then produce error codes. Indeed, our tests reveal errors in the enforcement of the security model.
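To give a flavor of this decomposition, consider the following hypothetical sketch (`prep`, `check`, `xfer`, the flag `policy_allows`, and the error code `E_PERM` are our illustrative names, not PikeOS identifiers): an IPC send viewed as a sequence of atomic actions in which a security check may fail and must yield an error code.

```python
def prep(st):
    # Atomic action 1: lock the message buffer.
    return ('ok', {**st, 'buf_locked': True})

def check(st):
    # Atomic action 2: enforce the security model; a denied
    # channel must fail with an error code.
    if not st['policy_allows']:
        return ('error', 'E_PERM')
    return ('ok', st)

def xfer(st):
    # Atomic action 3: transfer the message.
    return ('ok', {**st, 'delivered': True})

def run_call(actions, state):
    """Run the atomic actions of one system call; abort on first error."""
    for act in actions:
        tag, val = act(state)
        if tag == 'error':
            return ('error', val)
        state = val
    return ('ok', state)

SEND = [prep, check, xfer]
```

A generated test then checks that a denied channel produces the expected error code (`run_call(SEND, {'policy_allows': False})` yields `('error', 'E_PERM')`), while an allowed one completes the transfer.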


Keywords: Test program generation · Symbolic test case generation · Black-box testing · Testing operating systems · Certification (CC) · Concurrency · Interleaving



This work was partially supported by the Euro-MILS project funded by the European Union’s Programme [FP7/2007-2013] under grant agreement number ICT-318353.



Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Achim D. Brucker (1)
  • Oto Havle (3)
  • Yakoub Nemouchi (2)
  • Burkhart Wolff (2)
  1. SAP SE, Karlsruhe, Germany
  2. LRI, Université Paris Sud, CNRS, CentraleSupélec, Université Paris-Saclay, Orsay, France
  3. SYSGO AG, Klein-Winternheim, Germany
