Automated Backend Selection for ProB Using Deep Learning

  • Jannik Dunkelau
  • Sebastian Krings
  • Joshua Schmidt
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11460)

Abstract

Employing formal methods for software development usually involves using a multitude of tools such as model checkers and provers. Most of these tools in turn feature different backends and configuration options, so selecting an appropriate configuration for successful use becomes increasingly hard. In this article, we use machine learning methods to automate backend selection for the ProB model checker. In particular, we explore different approaches to deep learning and outline how we apply them to find a suitable backend for given input constraints.
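The abstract frames backend selection as a classification problem: given features describing an input constraint, predict which backend is most likely to solve it. The snippet below is a minimal sketch of that idea, not the paper's actual pipeline; the hand-crafted feature set, the backend label encoding, and the use of scikit-learn's MLPClassifier as a stand-in for the deep-learning models explored in the paper are all illustrative assumptions.

    # Illustrative sketch only: feature names, backend labels, and
    # hyper-parameters below are assumptions, not the paper's setup.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Hypothetical hand-crafted features extracted from a B constraint,
    # e.g. counts of quantifiers, set operations, arithmetic operators, ...
    X_train = np.array([
        [3, 0, 5, 1],   # constraint 1
        [0, 7, 1, 0],   # constraint 2
        [2, 2, 2, 4],   # constraint 3
    ])

    # Label per constraint: which backend handled it best
    # (0 = ProB's native solver, 1 = Kodkod, 2 = Z3 -- assumed encoding).
    y_train = np.array([0, 1, 2])

    # A small feed-forward network standing in for the deep models.
    model = MLPClassifier(hidden_layer_sizes=(32, 32),
                          max_iter=2000, random_state=0)
    model.fit(X_train, y_train)

    # Suggest a backend for an unseen constraint's feature vector.
    new_constraint = np.array([[1, 3, 0, 2]])
    print("suggested backend:", model.predict(new_constraint)[0])

In such a setting, the training labels could for instance record which backend answered each constraint fastest, and the classifier would be consulted before dispatching a new constraint to a backend.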

Keywords

Formal methods · Model checking · Automated configuration · Deep learning

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Heinrich-Heine-University, Düsseldorf, Germany
  2. Niederrhein University of Applied Sciences, Mönchengladbach, Germany