The physics of software tools: SWOT analysis and vision

Position Paper

Abstract

This paper reviews the seemingly inevitable trend that software tools are no longer just a means of supporting the design, construction, and analysis of (large-scale) systems, but have become so complex that each of them turns into a reality of its own, with its own “physics”, which needs to be studied in its own right. The true effects of combining methodologies as diverse as classical static analysis, model checking, SAT and SMT solving, and dynamic methods such as simulation, runtime verification, testing, and learning, together with their dedicated optimizations in terms of, e.g., BDD encoding, parallelization, and various forms of abstraction and reduction, depend strongly on the particular tools and are typically hard to predict. Corresponding experimental investigations, today often supported by diverse and frequent tool challenges, provide interesting indications about the applied technology, but typically fail to provide sufficient evidence for transferring results to other settings and tools. Moreover, implementation-specific details often dominate the observed effects, which therefore become unsuitable for drawing conceptual conclusions. On the other hand, requiring a rigorous in-depth analysis of every experimental observation in order to pinpoint its underlying conceptual causes before publication would slow down scientific exchange and thereby hinder scientific progress. This paper analyzes the situation of today’s software tools from a global perspective in terms of a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis, identifies challenges, and establishes a global vision for overcoming current weaknesses.
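To illustrate the point that encoding- and implementation-specific choices can dominate observed effects, consider the following minimal sketch (not from the paper; it assumes the Z3 Python bindings, installable via `pip install z3-solver`). It poses the same logically unsatisfiable pigeonhole constraint twice, once over unbounded integers and once over fixed-width bit-vectors; the two encodings are equivalent in meaning, yet a solver may handle them very differently, and neither timing transfers reliably to another solver.

```python
# Minimal sketch (not from the paper): one constraint, two encodings.
# Assumes the Z3 Python bindings (pip install z3-solver).
import time
from z3 import And, BitVec, Distinct, Int, Solver, ULT

def pigeonhole_int(n):
    # n+1 pigeons into n holes, encoded over unbounded integers: unsatisfiable.
    xs = [Int(f"x{i}") for i in range(n + 1)]
    s = Solver()
    s.add(And([And(0 <= x, x < n) for x in xs]))
    s.add(Distinct(*xs))
    return s.check()

def pigeonhole_bv(n, width=8):
    # The same constraint, encoded over fixed-width bit-vectors.
    xs = [BitVec(f"x{i}", width) for i in range(n + 1)]
    s = Solver()
    s.add(And([ULT(x, n) for x in xs]))  # unsigned less-than
    s.add(Distinct(*xs))
    return s.check()

# Both calls must report "unsat", but the runtimes can diverge sharply
# depending on encoding, solver version, and internal heuristics.
for solve in (pigeonhole_int, pigeonhole_bv):
    start = time.time()
    print(solve.__name__, solve(10), f"{time.time() - start:.2f}s")
```

Which encoding wins is exactly the kind of tool-specific “physics” the paper argues must be studied empirically rather than assumed to generalize.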

Keywords

Validation tools · Tool profiles · Conceptual design · SWOT analysis · Open source · Internet · Exchange platform · Tool competitions


Copyright information

© Springer-Verlag Berlin Heidelberg 2017

Authors and Affiliations

Programming Systems, TU Dortmund University, Dortmund, Germany