Rigorous examination of reactive systems

The RERS challenges 2012 and 2013
  • Falk Howar
  • Malte Isberner
  • Maik Merten
  • Bernhard Steffen
  • Dirk Beyer
  • Corina S. Păsăreanu
Introduction

Abstract

The goal of the RERS challenge is to evaluate the effectiveness of various verification and validation approaches on reactive systems, a class of systems that is highly relevant for critical industrial applications. The RERS challenge brings together researchers from different areas of software verification and validation, including static analysis, model checking, theorem proving, symbolic execution, and testing, and provides a forum for the experimental comparison of these techniques on specifically designed verification tasks. The benchmarks are automatically synthesized to exhibit chosen properties and are then enhanced along dedicated dimensions of difficulty, such as the conceptual complexity of the properties (e.g., reachability, safety, liveness), the size of the reactive systems (from a few hundred lines of code to millions), and the complexity of the language features used (e.g., arrays and pointer arithmetic). The STTT special section on RERS describes the results of the evaluations and the different analysis techniques that were used in the RERS challenges 2012 and 2013.
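
To give an impression of what such a benchmark looks like, the following is a minimal, illustrative C sketch of an event-condition-action (ECA) reactive loop in the spirit of the RERS tasks: the program repeatedly reads an input symbol, fires the first rule whose condition matches the current state, emits an output, and exposes a reachability property as an error location. The rule set, identifiers, and error label below are hypothetical and are not taken from any actual RERS benchmark.

/* Illustrative ECA-style reactive system, loosely modelled on the
   structure of RERS benchmark programs; all identifiers are hypothetical. */
#include <stdio.h>
#include <stdlib.h>

static int state = 1;                          /* internal state, updated by fired rules */

static void calculate_output(int input)       /* one reaction step: condition -> action */
{
    if (state == 1 && input == 2) {            /* rule 1: update state, emit output */
        state = 3;
        printf("W\n");
    } else if (state == 3 && input == 1) {     /* rule 2 */
        state = 5;
        printf("X\n");
    } else if (state == 5 && input == 4) {     /* reachability target (error location) */
        fprintf(stderr, "error_42 reached\n");
        exit(1);
    } else {
        printf("INVALID INPUT\n");             /* no rule matches */
    }
}

int main(void)
{
    int input;
    while (scanf("%d", &input) == 1) {         /* reactive loop over the input alphabet 1..5 */
        if (input < 1 || input > 5)
            return 0;
        calculate_output(input);
    }
    return 0;
}

A verification task over such a program would then, for instance, ask whether the error location is reachable for some input sequence (reachability), whether output "X" is never emitted before output "W" (safety), or whether every input 2 is eventually followed by output "W" (liveness).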

Keywords

Program analysis · Model checking · Verification · Model-based testing · Competition · Reactive system · Event-condition-action system

Acknowledgments

We would like to thank Rustan Leino and Jaco van de Pol for their helpful comments, and Maren Geske for her assistance in implementing the challenge infrastructure.

Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  • Falk Howar (1)
  • Malte Isberner (2)
  • Maik Merten (2)
  • Bernhard Steffen (2)
  • Dirk Beyer (3)
  • Corina S. Păsăreanu (1)
  1. Carnegie Mellon Silicon Valley / NASA Ames, Mountain View, USA
  2. TU Dortmund, Dortmund, Germany
  3. University of Passau, Passau, Germany