Visualization and Abstractions for Execution Paths in Model-Based Software Testing

  • Rui Wang (corresponding author)
  • Cyrille Artho
  • Lars Michael Kristensen
  • Volker Stolz
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11918)

Abstract

This paper presents a technique to measure and visualize execution-path coverage of test cases in model-based software testing. Our technique provides visual feedback on the tests, their coverage, and their diversity. We provide two types of visualizations for path coverage, based on so-called state-based graphs and path-based graphs. Our approach is implemented as an extension of the Modbat tool for model-based testing and is experimentally evaluated on a collection of examples, including the ZooKeeper distributed coordination service. Our experimental results show that the state-based visualization is good at relating the tests to the model structure, while the path-based visualization shows distinct paths well, in particular linearly independent paths. Furthermore, our graph abstractions retain the characteristics of distinct execution paths while removing some of the complexity of the graph.
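To give a flavor of the state-based visualization, the sketch below aggregates the transitions executed by a set of test runs onto the edges of the model and emits a Graphviz DOT graph whose edge labels carry execution counts. This is a minimal illustration in Scala, the language Modbat models are written in; the names Transition, CoverageGraph, and toDot are hypothetical and do not reflect Modbat's actual API or implementation.

    // Hypothetical sketch: aggregating executed transitions into a
    // state-based coverage graph in Graphviz DOT format.
    object CoverageGraph {
      // A model transition: source state, action label, target state.
      final case class Transition(from: String, label: String, to: String)

      // Render executed transitions as a DOT digraph in which each
      // transition becomes an edge annotated with its execution count.
      def toDot(executed: Seq[Transition]): String = {
        val counts = executed.groupBy(identity).view.mapValues(_.size).toMap
        val edges = counts.map { case (t, n) =>
          s"  \"${t.from}\" -> \"${t.to}\" [label=\"${t.label} (${n}x)\"];"
        }
        edges.mkString("digraph coverage {\n", "\n", "\n}")
      }

      def main(args: Array[String]): Unit = {
        // Two test runs over a toy connect/close model.
        val run1 = Seq(Transition("init", "connect", "connected"),
                       Transition("connected", "close", "closed"))
        val run2 = Seq(Transition("init", "connect", "connected"),
                       Transition("connected", "reconnect", "connected"),
                       Transition("connected", "close", "closed"))
        println(toDot(run1 ++ run2))
      }
    }

Rendering the resulting DOT text with Graphviz (e.g., dot -Tpdf) makes frequently exercised and never-exercised transitions immediately visible, which is the kind of feedback a state-based coverage visualization aims to provide.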

References

  1. Ammann, P., Offutt, J.: Introduction to Software Testing. Cambridge University Press, Cambridge (2016)
  2. Artho, C.V., et al.: Modbat: a model-based API tester for event-driven systems. In: Bertacco, V., Legay, A. (eds.) HVC 2013. LNCS, vol. 8244, pp. 112–128. Springer, Cham (2013). https://doi.org/10.1007/978-3-319-03077-7_8
  3. Artho, C., Rousset, G., Gros, Q.: Precondition coverage in software testing. In: Proceedings of the 1st International Workshop on Validating Software Tests (VST 2016), Osaka, Japan. IEEE (2016)
  4. AT&T Labs Research: Graphviz - Graph Visualization Software. https://www.graphviz.org
  5. Brass, P.: Advanced Data Structures. Cambridge University Press, Cambridge (2008)
  6. Cheng, K., Krishnakumar, A.: Automatic functional test generation using the extended finite state machine model. In: Proceedings of the 30th International Design Automation Conference (DAC), pp. 86–91, Dallas, USA. ACM (1993)
  7. Chilenski, J.J., Miller, S.P.: Applicability of modified condition/decision coverage to software testing. Softw. Eng. J. 9(5), 193–200 (1994)
  8. CWI and INRIA: The VLTS benchmark suite (2019). https://cadp.inria.fr/resources/vlts/. Accessed 20 May 2019
  9. Gansner, E., Koutsofios, E., North, S.: Drawing graphs with dot (2006). http://www.graphviz.org/pdf/dotguide.pdf
  10. Groote, J.F., van Ham, F.: Interactive visualization of large state spaces. Int. J. Softw. Tools Technol. Transf. 8(1), 77–91 (2006)
  11. Hunt, P., Konar, M., Junqueira, F.P., Reed, B.: ZooKeeper: wait-free coordination for internet-scale systems. In: Barham, P., Roscoe, T. (eds.) 2010 USENIX Annual Technical Conference. USENIX Association (2010)
  12. Jorgensen, P.C.: Software Testing: A Craftsman's Approach. Auerbach Publications, Boca Raton (2013)
  13. Koochakzadeh, N., Garousi, V.: TeCReVis: a tool for test coverage and test redundancy visualization. In: Bottaci, L., Fraser, G. (eds.) TAIC PART 2010. LNCS, vol. 6303, pp. 129–136. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-15585-7_12
  14. Ladenberger, L., Leuschel, M.: Mastering the visualization of larger state spaces with projection diagrams. In: Butler, M., Conchon, S., Zaïdi, F. (eds.) ICFEM 2015. LNCS, vol. 9407, pp. 153–169. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-25423-4_10
  15. Lawrence, J., Clarke, S., Burnett, M., Rothermel, G.: How well do professional developers test with code coverage visualizations? An empirical study. In: IEEE Symposium on Visual Languages and Human-Centric Computing, pp. 53–60. IEEE (2005)
  16. Lu, S., Zhou, P., Liu, W., Zhou, Y., Torrellas, J.: PathExpander: architectural support for increasing the path coverage of dynamic bug detection. In: Proceedings of the 39th Annual IEEE/ACM International Symposium on Microarchitecture, pp. 38–52. IEEE Computer Society (2006)
  17. Myers, G.J., Badgett, T., Thomas, T.M., Sandler, C.: The Art of Software Testing, 2nd edn. Wiley, Hoboken (2004)
  18. Programming Methods Laboratory of École Polytechnique Fédérale de Lausanne: The Scala Programming Language. https://www.scala-lang.org
  19. Rountev, A., Kagan, S., Sawin, J.: Coverage criteria for testing of object interactions in sequence diagrams. In: Cerioli, M. (ed.) FASE 2005. LNCS, vol. 3442, pp. 289–304. Springer, Heidelberg (2005). https://doi.org/10.1007/978-3-540-31984-9_22
  20. Utting, M., Legeard, B.: Practical Model-Based Testing: A Tools Approach. Morgan Kaufmann Publishers Inc., San Francisco (2007)
  21. Utting, M., Pretschner, A., Legeard, B.: A taxonomy of model-based testing approaches. Softw. Test. Verif. Reliab. 22, 297–312 (2012)
  22. Visser, W., Havelund, K., Brat, G., Park, S., Lerda, F.: Model checking programs. Autom. Softw. Eng. J. 10(2), 203–232 (2003)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Rui Wang (1), corresponding author
  • Cyrille Artho (2)
  • Lars Michael Kristensen (1)
  • Volker Stolz (1)

  1. Department of Computing, Mathematics, and Physics, Western Norway University of Applied Sciences, Bergen, Norway
  2. School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
