Zero Overhead Runtime Monitoring

  • Daniel Wonisch
  • Alexander Schremmer
  • Heike Wehrheim
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8137)

Abstract

Runtime monitoring aims at ensuring program safety by observing the program’s behaviour during execution and taking appropriate action before the program violates some property. It is particularly important when exhaustive formal verification fails. While the approach allows programs to be executed safely, it may impose a significant runtime overhead.

In this paper, we propose a novel technique combining verification and monitoring which incurs no overhead during runtime at all. The technique proceeds by using the inconclusive result of a verification run as the basis for transforming the program into one where all potential points of failure are replaced by HALT statements. The new program is safe by construction, behaviourally equivalent to the original program (except for unsafe behaviour), and has the same performance characteristics.

Keywords

Safety Property, Original Program, Concrete State, Successor Node, Runtime Overhead

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Daniel Wonisch¹
  • Alexander Schremmer¹
  • Heike Wehrheim¹
  1. University of Paderborn, Germany
