
Rule-based runtime verification revisited

  • Klaus Havelund

Abstract

Runtime verification (RV) consists in part of checking execution traces against user-provided formalized specifications. Over the last decade many new systems have emerged, most of which support specification notations based on state machines, regular expressions, temporal logic, or grammars. The field of artificial intelligence (AI) has for an even longer period studied rule-based production systems, which on closer inspection appear relevant for RV, although they have tended to target slightly different application domains, such as business processes and expert systems. The core algorithm in many of these systems is the Rete algorithm. We have implemented a rule-based system for runtime verification, named LogFire, founded on the Rete algorithm and realized as an internal DSL in the Scala programming language (in essence a library). Scala’s support for defining DSLs makes it possible to write rules elegantly as part of Scala programs, a combination that appears attractive from a practical point of view. Our contribution is in part conceptual: we argue that such rule-based frameworks originating in AI are well suited for RV. It is also technical: we implement an internal rule DSL in Scala, illustrate how specification patterns that generate rules can easily be encoded, and adapt and optimize the Rete algorithm for RV purposes. An experimental evaluation compares LogFire with six other trace analysis systems. LogFire is currently being used to process telemetry from the Mars Curiosity rover at NASA’s Jet Propulsion Laboratory.
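
To give a flavor of what "rules as part of Scala programs" can look like, the following self-contained sketch implements a toy rule-based monitor over a fact memory. It is not the actual LogFire API: the Event, Fact, Rule, and RuleMonitor types, the rule names, and the COMMAND/SUCCESS property are hypothetical and only illustrate how rules can be written and composed as ordinary Scala values.

    // A minimal sketch (NOT the LogFire API) of rules over a fact memory in Scala.

    // Events arriving from the trace, and facts stored in the monitor's memory.
    case class Event(kind: String, args: Map[String, Any])
    case class Fact(kind: String, args: Map[String, Any])

    // A rule fires on events of a given kind; it updates the fact memory and
    // optionally reports an error message.
    case class Rule(name: String, trigger: String,
                    action: (Event, Set[Fact]) => (Set[Fact], Option[String]))

    class RuleMonitor(rules: List[Rule]) {
      private var facts: Set[Fact] = Set.empty
      private var errors: List[String] = Nil

      // Feed one event to the monitor, firing all rules triggered by it.
      def submit(event: Event): Unit =
        for (rule <- rules if rule.trigger == event.kind) {
          val (newFacts, error) = rule.action(event, facts)
          facts = newFacts
          error.foreach(msg => errors = errors :+ s"[${rule.name}] $msg")
        }

      def report(): List[String] = errors
    }

    // Hypothetical property: a SUCCESS event must refer to a previously issued COMMAND.
    object Example extends App {
      val recordCommand = Rule("record command", "COMMAND",
        (e, fs) => (fs + Fact("Commanded", e.args), None))

      val checkSuccess = Rule("success requires command", "SUCCESS",
        (e, fs) =>
          if (fs.contains(Fact("Commanded", e.args))) (fs, None)
          else (fs, Some(s"SUCCESS ${e.args} without a matching COMMAND")))

      val monitor = new RuleMonitor(List(recordCommand, checkSuccess))
      monitor.submit(Event("COMMAND", Map("name" -> "TAKE_PICTURE")))
      monitor.submit(Event("SUCCESS", Map("name" -> "TAKE_PICTURE")))  // ok
      monitor.submit(Event("SUCCESS", Map("name" -> "TURN_ANTENNA")))  // flagged
      monitor.report().foreach(println)
    }

Because the rules are plain Scala values, they can be generated, parameterized, and combined with ordinary Scala code, which is the practical attraction of the internal-DSL approach; a Rete-based engine such as LogFire additionally makes the matching of rule conditions against the fact memory efficient.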

Keywords

Runtime verification · Rule-based systems · Rete algorithm · Internal DSL · Scala

Acknowledgments

We thank Howard Barringer (University of Manchester, UK) for numerous fruitful discussions. Also thanks to Giles Reger (University of Manchester, UK) and Patrick Meredith (Mop project, University of Illinois at Urbana-Champaign, IL, USA) for their input on how to write specifications optimally in Mop, and to Patrick Meredith for his general support in using the Mop system. Thanks to Mark Proctor, Davide Sottara, and Edson Tirelli (the Drools project) for their support in using Drools. Thanks to Rajeev Joshi (Jet Propulsion Laboratory, California, USA) for his help in parsing and analyzing MSL logs, including help with formulating properties over these logs. Rajeev Joshi also suggested that abstraction over logs could be useful; as it turns out, LogFire (more generally: any Rete-based system) offers this functionality. The work was carried out at Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. The work was furthermore supported by NSF Grant CCF-0926190.

Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  1. Laboratory for Reliable Software, Jet Propulsion Laboratory, California Institute of Technology, California, USA