Third International Competition on Runtime Verification

CRV 2016
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10012)


We report on the Third International Competition on Runtime Verification (CRV-2016), held as a satellite event of the 16th International Conference on Runtime Verification (RV'16). The competition consisted of two tracks: offline monitoring of traces and online monitoring of Java programs. A third track, on online monitoring of C programs, was planned but attracted too few participants to proceed. This report describes the format of the competition, the participating teams, the submitted benchmarks and the results. We also describe our experience of transforming trace formats from other tools into the standard format required by the competition, and we report on feedback gathered from current and past participants, using it to make suggestions for the future of the competition.
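As an illustration of the trace-format transformation mentioned above, the sketch below converts a flat CSV event trace (one event per row: event name followed by parameter values) into a JSON list of event objects. Both layouts here are illustrative assumptions, not the competition's official schemas.

```python
import csv
import io
import json

def csv_trace_to_json(csv_text):
    """Convert a CSV event trace into a JSON trace.

    Assumed (hypothetical) layouts:
      input  - one event per CSV row: name, arg1, arg2, ...
      output - JSON array of {"name": ..., "args": [...]} objects.
    """
    events = []
    for row in csv.reader(io.StringIO(csv_text)):
        if not row:        # skip blank lines in the trace file
            continue
        events.append({"name": row[0], "args": row[1:]})
    return json.dumps(events)

trace = "open,f1\nwrite,f1,42\nclose,f1\n"
print(csv_trace_to_json(trace))
```

In practice each participating tool produced traces in its own format, so a small adapter of this kind had to be written per tool; keeping the target schema minimal (name plus an ordered argument list) makes such adapters easy to write.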


Keywords: Specification language · Linear temporal logic · Trace file · Trace format · Memory utilisation



Thanks to Klaus Havelund, Julien Signoles, Torben Scheffel, Domenico Bianculli, Daniel Thoma and Felix Klaedtke for providing the feedback discussed in Sect. 7. The Laboratoire d’informatique formelle from Université du Québec à Chicoutimi lent the server for hosting the wiki and running the benchmarks. This article is based upon work from COST Action ARVI IC1402, supported by COST (European Cooperation in Science and Technology).



Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  1. University of Manchester, Manchester, UK
  2. Université du Québec à Chicoutimi, Saguenay, Canada
  3. Univ. Grenoble Alpes, Inria, LIG, Grenoble, France
