Collaborative Runtime Verification with Tracematches

  • Eric Bodden
  • Laurie Hendren
  • Patrick Lam
  • Ondřej Lhoták
  • Nomair A. Naeem
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4839)

Abstract

Perfect pre-deployment test coverage is notoriously difficult to achieve for large applications. With enough end users, many more test cases will be encountered during an application’s deployment than during testing. The use of runtime verification after deployment would enable developers to detect and report on unexpected situations. Unfortunately, the prohibitive performance cost of runtime monitors prevents their use in deployed code.

In this work we study the feasibility of collaborative runtime verification, a verification approach which distributes the burden of runtime verification onto multiple users. Each user executes a partially instrumented program and therefore suffers only a fraction of the instrumentation overhead.

We focus on runtime verification using tracematches. Tracematches are a specification formalism that allows users to specify runtime verification properties via regular expressions with free variables over the dynamic execution trace. We propose two techniques for soundly partitioning the instrumentation required for tracematches: spatial partitioning, where different copies of a program monitor different program points for violations, and temporal partitioning, where monitoring is switched on and off over time. We evaluate the relative impact of partitioning on a user’s runtime overhead by applying each partitioning technique to a collection of benchmarks that would otherwise incur significant instrumentation overhead.
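
For concreteness, the following sketch expresses the well-known "hasNext" property as a tracematch, using the tracematch syntax of the abc AspectJ compiler; the aspect name and error message are illustrative. The free variable i binds the iterator being monitored, each sym declaration names an event of interest as a pointcut, and the regular expression "next next" matches whenever next() is called twice on the same iterator with no intervening hasNext(), at which point the body runs.

    import java.util.Iterator;

    public aspect HasNextChecker {
        tracematch(Iterator i) {
            // Events of interest, all bound to the same iterator i
            sym hasNext before:
                call(* java.util.Iterator.hasNext()) && target(i);
            sym next before:
                call(* java.util.Iterator.next()) && target(i);

            // Pattern over the filtered trace: two next() calls in a row
            next next
            {
                System.err.println("next() called twice on " + i
                                   + " without an intervening hasNext()");
            }
        }
    }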

Our results show that spatial partitioning almost completely eliminates runtime overhead (for any particular benchmark copy) on many of our test cases, and that temporal partitioning scales well and provides runtime verification on a “pay as you go” basis.
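
The abstract does not describe the mechanism used to switch monitoring on and off, so the following is only a hypothetical illustration of the temporal idea, not the paper's implementation: each instrumentation point consults a flag that a background timer toggles, so tracematch updates are only paid for while the flag is set. The class and method names are invented for this sketch.

    import java.util.Timer;
    import java.util.TimerTask;

    // Hypothetical sketch of temporal partitioning: probes are guarded by a flag
    // that a timer toggles, so monitoring cost is incurred only while the flag is
    // on ("pay as you go").
    public class MonitorSwitch {
        static volatile boolean enabled = false;  // volatile so probes see toggles promptly

        public static void startToggling(long periodMillis) {
            Timer timer = new Timer("monitor-switch", true);  // daemon thread
            timer.scheduleAtFixedRate(new TimerTask() {
                @Override public void run() { enabled = !enabled; }
            }, 0, periodMillis);
        }

        // An instrumented event would be wrapped like this:
        public static void probe(Object boundObject) {
            if (!enabled) return;  // skip all monitoring work while switched off
            // ... advance the tracematch state for boundObject here ...
        }
    }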

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Eric Bodden (1)
  • Laurie Hendren (1)
  • Patrick Lam (1)
  • Ondřej Lhoták (2)
  • Nomair A. Naeem (2)
  1. McGill University, Montréal, Québec, Canada
  2. University of Waterloo, Waterloo, Ontario, Canada
