Abstract
One of the largest communities for learning programming and sharing code is built around the Scratch programming language, which fosters visual, block-based programming. An essential requirement for building learning environments that support learners and educators is automated program analysis. Although the code written by learners is often simple, analyzing it to show its correctness or to provide support is challenging, since Scratch programs are graphical, game-like programs controlled by the user with mouse and keyboard. While model checking offers an effective means to analyze such programs, the output of a model checker is difficult to interpret for users, in particular for novices. In this work, we introduce the notion of Scratch error witnesses, which help to explain the presence of a specification violation. A Scratch error witness describes a sequence of timed inputs to a Scratch program that leads to a program state violating the specification. We present an approach for automatically extracting error witnesses from counterexamples produced by a model checking procedure. The resulting error witnesses can be exchanged with a testing framework, where they can be automatically replayed in order to reproduce the specification violations. Error witnesses not only aid the user in understanding the misbehavior of a program, but also enable interaction between different verification tools, and therefore open up new possibilities for combining static and dynamic analysis.
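The core idea of the abstract, an error witness as a sequence of timed user inputs that can be replayed to reproduce a violation, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the `TimedInput`, `ErrorWitness`, and `replay` names, and the harness callbacks (`send`, `step`, `violates`), are hypothetical placeholders for whatever interface a testing framework such as the one referenced in the paper would expose.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass(frozen=True)
class TimedInput:
    """One user action, with the time (ms since program start) at which to send it."""
    time_ms: int
    action: str                       # e.g. "KeyDown", "MouseMove", "MouseDown"
    data: Dict[str, object] = field(default_factory=dict)  # payload, e.g. {"key": "ArrowRight"}

@dataclass
class ErrorWitness:
    """A sequence of timed inputs expected to drive the program into a violating state."""
    violated_property: str            # human-readable description of the specification
    inputs: List[TimedInput]

def replay(witness: ErrorWitness,
           send: Callable[[TimedInput], None],
           step: Callable[[int], None],
           violates: Callable[[], bool]) -> bool:
    """Replay the witness against a program under test: advance the program's
    simulated clock to each input's timestamp, send the input, and finally
    check whether the specification violation was reproduced."""
    clock = 0
    for ti in sorted(witness.inputs, key=lambda t: t.time_ms):
        step(ti.time_ms - clock)      # let the program run until the next input is due
        clock = ti.time_ms
        send(ti)
    return violates()
```

A model checker's counterexample would be translated into such an `ErrorWitness`; the testing side only needs to honor the timestamps, which is what makes the exchange between static verification and dynamic replay possible.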
© 2021 Springer Nature Switzerland AG
Cite this paper
Diner, D., Fraser, G., Schweikl, S., Stahlbauer, A. (2021). Generating Timed UI Tests from Counterexamples. In: Loulergue, F., Wotawa, F. (eds) Tests and Proofs. TAP 2021. Lecture Notes in Computer Science(), vol 12740. Springer, Cham. https://doi.org/10.1007/978-3-030-79379-1_4
DOI: https://doi.org/10.1007/978-3-030-79379-1_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-79378-4
Online ISBN: 978-3-030-79379-1