A Model Checker Collection for the Model Checking Contest Using Docker and Machine Learning

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10877)


This paper introduces mcc4mcc, the Model Checker Collection for the Model Checking Contest: a tool that wraps multiple model checking solutions and applies the most appropriate one based on the characteristics of the model it is given. It leverages machine learning algorithms to carry out this selection, trained on results gathered from the 2017 edition of the Model Checking Contest, an annual event in which multiple tools compete to verify different properties on a large variety of models. Our approach brings two important contributions. First, our tool offers the opportunity to further investigate the relation between model characteristics and verification techniques. Second, it lays the groundwork for a unified way to distribute model checking software using virtual containers.
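The selection scheme described above can be sketched as follows. This is a hypothetical, simplified illustration only: mcc4mcc reportedly uses scikit-learn classifiers trained on contest data, whereas this toy version uses a plain nearest-neighbour rule over boolean model characteristics so that it stays self-contained. The feature names and the mapping from characteristics to tools are invented for illustration; only the tool names (LoLA, ITS-Tools, MARCIE) refer to real model checkers.

```python
# Hypothetical sketch of ML-based tool selection: given the characteristics
# of a Petri net model, pick the model checker that performed best on the
# most similar model seen in past contest results.

# "Training" data: (characteristics, best tool) pairs, as might be derived
# from past Model Checking Contest results. Entirely illustrative values.
KNOWN_MODELS = [
    ({"ordinary": True,  "safe": True,  "colored": False}, "LoLA"),
    ({"ordinary": True,  "safe": False, "colored": False}, "ITS-Tools"),
    ({"ordinary": False, "safe": False, "colored": True},  "MARCIE"),
]

def distance(a, b):
    """Hamming distance between two boolean feature dictionaries."""
    return sum(a[key] != b[key] for key in a)

def select_tool(characteristics):
    """Return the tool associated with the nearest known model."""
    _, tool = min(KNOWN_MODELS,
                  key=lambda pair: distance(pair[0], characteristics))
    return tool

print(select_tool({"ordinary": True, "safe": True, "colored": False}))
```

In the real tool, a classifier (e.g. a decision tree) would replace the nearest-neighbour rule, and the feature vectors would come from the structural properties computed for each contest model.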



Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Software Modeling and Verification (SMV) Group, Faculty of Science, University of Geneva, Geneva, Switzerland