
Reflections on Bernhard Steffen’s Physics of Software Tools

  • Hubert Garavel
  • Radu Mateescu
Chapter
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11200)

Abstract

Many software tools have been developed to implement the concepts of formal methods, sometimes with great success, but also with a striking tool mortality and an apparent dispersion of efforts. So far, there has been little analysis of such tool development as a whole that would make it more coherent, efficient, and useful to society. Recently, however, Bernhard Steffen published a paper entitled “The Physics of Software Tools: SWOT Analysis and Vision” that proposes precisely such a global vision. We highlight the key ideas of this paper and review them in light of our own experience in designing and implementing the CADP toolbox for the specification and analysis of concurrent systems.

Notes

Acknowledgements

We are grateful to Lian Apostol and Wendelin Serwe, who proofread this manuscript, and to the anonymous reviewers for their helpful comments and suggestions.

References

  1. Arenas, A.E., Bicarregui, J., Margaria, T.: The FMICS view on the verified software repository. J. Integr. Des. Process Sci. (IDPT) 10(4), 47–54 (2006)
  2. Behrmann, G., David, A., Larsen, K.G., Pettersson, P., Yi, W.: Developing UPPAAL over 15 years. Softw. Pract. Experience 41(2), 133–142 (2011)
  3. Bergstra, J.A., Ponse, A., Smolka, S.A. (eds.): Handbook of Process Algebra. Elsevier, Amsterdam (2001)
  4. Bicarregui, J., Hoare, C.A.R., Woodcock, J.C.P.: The verified software repository: a step towards the verifying compiler. Formal Aspects Comput. 18(2), 143–151 (2006)
  5. Bourke, T., Brun, L., Dagand, P.E., Leroy, X., Pouzet, M., Rieg, L.: A formally verified compiler for Lustre. In: Cohen, A., Vechev, M.T. (eds.) Proceedings of the 38th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2017), Barcelona, Spain, pp. 586–601. ACM, June 2017
  6. Bouzafour, A., Renaudin, M., Garavel, H., Mateescu, R., Serwe, W.: Model-checking synthesizable SystemVerilog descriptions of asynchronous circuits. In: Krstic, M., Jones, I.W. (eds.) Proceedings of the 24th IEEE International Symposium on Asynchronous Circuits and Systems (ASYNC 2018), Vienna, Austria. IEEE, May 2018
  7. Braun, V., Kreileder, J., Margaria, T., Steffen, B.: The ETI online service in action. In: Cleaveland, R. (ed.) TACAS 1999. LNCS, vol. 1579, pp. 439–443. Springer, Heidelberg (1999). https://doi.org/10.1007/3-540-49059-0_31
  8. Braun, V., Margaria, T., Weise, C.: Integrating tools in the ETI platform. Int. J. Softw. Tools Technol. Transf. (STTT) 1–2(1), 31–48 (1997)
  9. Cleaveland, R., Roscoe, A.W., Smolka, S.A.: Process algebra and model checking. In: Handbook of Model Checking, pp. 1149–1195. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-10575-8_32
  10. Evrard, H., Lang, F.: Automatic distributed code generation from formal models of asynchronous processes interacting by multiway rendezvous. J. Log. Algebraic Meth. Program. 88, 121–153 (2017)
  11. Finney, K.: Mathematical notation in formal specification: too difficult for the masses? IEEE Trans. Softw. Eng. 22(2), 158–159 (1996)
  12. Garavel, H.: Compilation of LOTOS abstract data types. In: Vuong, S.T. (ed.) Proceedings of the 2nd International Conference on Formal Description Techniques (FORTE 1989), Vancouver B.C., Canada, pp. 147–162. North-Holland, December 1989
  13. Garavel, H.: OPEN/CÆSAR: an open software architecture for verification, simulation, and testing. In: Steffen, B. (ed.) TACAS 1998. LNCS, vol. 1384, pp. 68–84. Springer, Heidelberg (1998). https://doi.org/10.1007/BFb0054165
  14. Garavel, H., Graf, S.: Formal methods for safe and secure computer systems. BSI Study 875, Bundesamt für Sicherheit in der Informationstechnik, Bonn, Germany, December 2013
  15. Garavel, H., Lang, F.: SVL: a scripting language for compositional verification. In: Kim, M., Chin, B., Kang, S., Lee, D. (eds.) Proceedings of the 21st IFIP WG 6.1 International Conference on Formal Techniques for Networked and Distributed Systems (FORTE 2001), Cheju Island, Korea, pp. 377–392. Kluwer Academic Publishers, August 2001. Full version available as INRIA Research Report RR-4223
  16. Garavel, H., Lang, F., Mateescu, R.: Compiler construction using LOTOS NT. In: Horspool, R.N. (ed.) CC 2002. LNCS, vol. 2304, pp. 9–13. Springer, Heidelberg (2002). https://doi.org/10.1007/3-540-45937-5_3
  17. Garavel, H., Lang, F., Mateescu, R., Serwe, W.: CADP 2011: a toolbox for the construction and analysis of distributed processes. Int. J. Softw. Tools Technol. Transf. (STTT) 15(2), 89–107 (2013)
  18. Garavel, H., Lang, F., Mounier, L.: Compositional verification in action. In: Howar, F., Barnat, J. (eds.) FMICS 2018. LNCS, vol. 11119, pp. 189–210. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-00244-2_13
  19. Garavel, H., Lang, F., Serwe, W.: From LOTOS to LNT. In: Katoen, J.-P., Langerak, R., Rensink, A. (eds.) ModelEd, TestEd, TrustEd. LNCS, vol. 10500, pp. 3–26. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-68270-9_1
  20. Garavel, H., Serwe, W.: The unheralded value of the multiway rendezvous: illustration with the production cell benchmark. In: Hermanns, H., Höfner, P. (eds.) Proceedings of the 2nd Workshop on Models for Formal Analysis of Real Systems (MARS 2017), Uppsala, Sweden. Electronic Proceedings in Theoretical Computer Science, vol. 244, pp. 230–270, April 2017
  21. Garavel, H., Tabikh, M.-A., Arrada, I.-S.: Benchmarking implementations of term rewriting and pattern matching in algebraic, functional, and object-oriented languages – the 4th rewrite engines competition. In: Rusu, V. (ed.) WRLA 2018. LNCS, vol. 11152, pp. 1–25. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-99840-4_1
  22. Groote, J.F., Willemse, T.A.C.: Parameterised boolean equation systems. Theor. Comput. Sci. 343, 332–369 (2005)
  23. Hartmanns, A., Hermanns, H.: In the quantitative automata zoo. Sci. Comput. Program. 112, 3–23 (2015)
  24. Isberner, M., Howar, F., Steffen, B.: The open-source LearnLib – a framework for active automata learning. In: Kroening, D., Păsăreanu, C.S. (eds.) CAV 2015. LNCS, vol. 9206, pp. 487–495. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-21690-4_32
  25. ISO/IEC: LOTOS - A Formal Description Technique Based on the Temporal Ordering of Observational Behaviour. International Standard 8807, International Organization for Standardization - Information Processing Systems - Open Systems Interconnection, Geneva, September 1989
  26. Jackson, D., Wing, J.: Lightweight formal methods. IEEE Comput. 29, 21–22 (1996)
  27. Jourdan, J.H., Laporte, V., Blazy, S., Leroy, X., Pichardie, D.: A formally-verified C static analyzer. In: Rajamani, S.K., Walker, D. (eds.) Proceedings of the 42nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL 2015), Mumbai, India, pp. 247–259. ACM, January 2015
  28. Kant, G., Laarman, A., Meijer, J., van de Pol, J., Blom, S., van Dijk, T.: LTSmin: high-performance language-independent model checking. In: Baier, C., Tinelli, C. (eds.) TACAS 2015. LNCS, vol. 9035, pp. 692–707. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-46681-0_61
  29. Kordon, F., et al.: MCC’2017 – the seventh model checking contest. In: Koutny, M., Kristensen, L.M., Penczek, W. (eds.) Transactions on Petri Nets and Other Models of Concurrency XIII. LNCS, vol. 11090, pp. 181–209. Springer, Heidelberg (2018). https://doi.org/10.1007/978-3-662-58381-4_9
  30. Krishnamurthi, S.: Artifact evaluation for software conferences. SIGPLAN Not. 48(4S), 17–21 (2013)
  31. Kwiatkowska, M., Norman, G., Parker, D.: PRISM 4.0: verification of probabilistic real-time systems. In: Gopalakrishnan, G., Qadeer, S. (eds.) CAV 2011. LNCS, vol. 6806, pp. 585–591. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-22110-1_47
  32. Loveland, D.W.: Automated theorem proving: a quarter century review. In: Bledsoe, W.W., Loveland, D.W. (eds.) Automated Theorem Proving - After 25 Years. Contemporary Mathematics, vol. 29, pp. 1–45. American Mathematical Society (1984)
  33. Mader, A.: Verification of Modal Properties Using Boolean Equation Systems. VERSAL 8, Bertz Verlag, Berlin (1997)
  34. Margaria, T., Braun, V., Kreileder, J.: Interacting with ETI: a user session. Int. J. Softw. Tools Technol. Transf. (STTT) 1–2(1), 49–63 (1997)
  35. Margaria, T., Nagel, R., Steffen, B.: jETI: a tool for remote tool integration. In: Halbwachs, N., Zuck, L.D. (eds.) TACAS 2005. LNCS, vol. 3440, pp. 557–562. Springer, Heidelberg (2005). https://doi.org/10.1007/978-3-540-31980-1_38
  36. Margaria, T., Steffen, B.: LTL guided planning: revisiting automatic tool composition in ETI. In: Proceedings of the 31st IEEE/NASA Software Engineering Workshop (SEW 2007), Columbia, USA, pp. 214–226. IEEE Computer Society Press, March 2007
  37. Marsso, L., Mateescu, R., Serwe, W.: TESTOR: a modular tool for on-the-fly conformance test case generation. In: Beyer, D., Huisman, M. (eds.) TACAS 2018. LNCS, vol. 10806, pp. 211–228. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-89963-3_13
  38. Mateescu, R.: Local model-checking of an alternation-free value-based modal mu-calculus. In: Bossi, A., Cortesi, A., Levi, F. (eds.) Proceedings of the 2nd International Workshop on Verification, Model Checking and Abstract Interpretation (VMCAI 1998), Pisa, Italy. University Ca’ Foscari of Venice, September 1998
  39. Mateescu, R., Garavel, H.: XTL: a meta-language and tool for temporal logic model-checking. In: Margaria, T. (ed.) Proceedings of the International Workshop on Software Tools for Technology Transfer (STTT 1998), Aalborg, Denmark, pp. 33–42. BRICS, July 1998
  40. Mazzanti, F., Ferrari, A.: Ten diverse formal models for a CBTC automatic train supervision system. In: Gallagher, J.P., van Glabbeek, R., Serwe, W. (eds.) Proceedings of the 3rd Workshop on Models for Formal Analysis of Real Systems and the 6th International Workshop on Verification and Program Transformation (MARS/VPT 2018), Thessaloniki, Greece. Electronic Proceedings in Theoretical Computer Science, vol. 268, pp. 104–149, April 2018
  41. Mazzanti, F., Ferrari, A., Spagnolo, G.O.: Towards formal methods diversity in railways: an experience report with seven frameworks. Int. J. Softw. Tools Technol. Transf. (STTT) 20(3), 263–288 (2018)
  42. Merten, M., Steffen, B., Howar, F., Margaria, T.: Next generation LearnLib. In: Abdulla, P.A., Leino, K.R.M. (eds.) TACAS 2011. LNCS, vol. 6605, pp. 220–223. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-19835-9_18
  43. Naujokat, S., Lybecait, M., Kopetzki, D., Steffen, B.: CINCO: a simplicity-driven approach to full generation of domain-specific graphical modeling tools. Int. J. Softw. Tools Technol. Transf. (STTT) 20(3), 327–354 (2018)
  44. Raffelt, H., Steffen, B., Berg, T., Margaria, T.: LearnLib: a framework for extrapolating behavioral models. Int. J. Softw. Tools Technol. Transf. (STTT) 11(5), 393–407 (2009)
  45. Ruan, W., Huang, X., Kwiatkowska, M.: Reachability analysis of deep neural networks with provable guarantees. In: Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI 2018), Stockholm, Sweden, pp. 2651–2659, July 2018
  46. Rudin, H., West, C.H., Zafiropulo, P.: Automated protocol validation: one chain of development. Comput. Netw. 2, 373–380 (1978)
  47. Rushby, J.: Disappearing formal methods. In: Proceedings of the 5th IEEE International Symposium on High-Assurance Systems Engineering (HASE 2000), Albuquerque, NM, USA, pp. 95–96. IEEE Computer Society, November 2000
  48. Sifakis, J.: System design in the era of IoT - meeting the autonomy challenge. In: Bliudze, S., Bensalem, S. (eds.) Proceedings of the 1st International Workshop on Methods and Tools for Rigorous System Design (MeTRiD 2018), Thessaloniki, Greece. Electronic Proceedings in Theoretical Computer Science, vol. 272, pp. 1–22, April 2018
  49. Steffen, B.: The physics of software tools: SWOT analysis and vision. Int. J. Softw. Tools Technol. Transf. (STTT) 19(1), 1–7 (2017)
  50. Steffen, B., Margaria, T., Braun, V.: The electronic tool integration platform: concepts and design. Int. J. Softw. Tools Technol. Transf. (STTT) 1–2(1), 9–30 (1997)
  51. van Weerdenburg, M.: An account of implementing applicative term rewriting. Electron. Not. Theor. Comput. Sci. 174(10), 139–155 (2007)
  52. West, C.H.: General technique for communications protocol validation. IBM J. Res. Dev. 22(4), 393–404 (1978)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Univ. Grenoble Alpes, Inria, CNRS, LIG, Grenoble, France
