
A Taxonomy for Classifying Runtime Verification Tools

  • Yliès Falcone
  • Srđan Krstić
  • Giles Reger
  • Dmitriy Traytel
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11237)

Abstract

Over the last 15 years, Runtime Verification (RV) has grown into a diverse and active field, stimulating the development of numerous theoretical frameworks and tools. At first sight, many of these tools are very different and challenging to compare; yet they share similarities. In this work, we classify RV tools within a high-level taxonomy of concepts. We first present this taxonomy and discuss its different dimensions. Then, we survey RV tools and classify them according to the taxonomy. This paper constitutes a snapshot of the current state of the art and enables a comparison of existing tools.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Univ. Grenoble Alpes, CNRS, Inria, Grenoble INP, LIG, Grenoble, France
  2. Institute of Information Security, Department of Computer Science, ETH Zürich, Zurich, Switzerland
  3. University of Manchester, Manchester, UK
