Abstract
Model-based testing relies on models of the system under test to automatically generate test cases. Consequently, the effectiveness of the generated test cases depends on these models. In general, the models are created manually, and as such, they are subject to errors such as the omission of certain system usage behavior. Omitted behaviors are, in turn, missing from the generated test cases. In practice, the resulting faults are usually detected with exploratory testing. However, exploratory testing mainly relies on the knowledge and manual activities of experienced test engineers. In this paper, we introduce an approach and a toolset, ARME, for automatically refining system models based on the recorded testing activities of these engineers. ARME compares the recorded execution traces against the possible execution paths in the test models. The models are then automatically refined to incorporate any omitted system behavior and to update model parameters so that they focus on the most frequently executed scenarios. The refined models can be used for generating more effective test cases. We applied our approach in the context of three industrial case studies to improve the models for model-based testing of a digital TV system. In all of these case studies, several critical faults were detected after generating test cases based on the refined models. These faults were not detected by the initial set of test cases, and they were also missed during the exploratory testing activities.
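The refinement step the abstract describes — merging recorded execution traces into a usage model, adding omitted transitions, and reweighting transition probabilities toward the scenarios engineers actually exercised — can be sketched as follows. This is a minimal illustration, not ARME's actual implementation; the function and state names (refine_model, "Menu", "Teletext") are hypothetical, and the model is assumed to be a Markov-chain-style usage model stored as per-state transition counts.

```python
# Minimal sketch of trace-driven usage-model refinement.
# All names are illustrative, not ARME's real API.
from collections import defaultdict

def refine_model(model, traces):
    """Merge recorded execution traces into a usage model.

    model:  {state: {next_state: transition_count}}
    traces: iterable of state sequences recorded during exploratory testing
    Returns {state: {next_state: probability}} reflecting how often
    each transition was actually exercised.
    """
    counts = defaultdict(lambda: defaultdict(int))
    # Start from the existing model's transition counts.
    for state, successors in model.items():
        for nxt, c in successors.items():
            counts[state][nxt] += c
    # Add every observed transition; previously unseen transitions
    # extend the model (the "omitted system behavior").
    for trace in traces:
        for state, nxt in zip(trace, trace[1:]):
            counts[state][nxt] += 1
    # Normalize counts to transition probabilities per state, so that
    # test generation favors the most frequently executed scenarios.
    refined = {}
    for state, successors in counts.items():
        total = sum(successors.values())
        refined[state] = {nxt: c / total for nxt, c in successors.items()}
    return refined

# Example: the initial model omits the transition Menu -> Teletext,
# which an engineer exercised during exploratory testing.
initial = {"Home": {"Menu": 3, "EPG": 1}, "Menu": {"Home": 4}}
traces = [["Home", "Menu", "Teletext"], ["Home", "Menu", "Home"]]
refined = refine_model(initial, traces)
```

After refinement, "Menu" gains a low-probability transition to "Teletext" while its remaining probability mass stays on the frequently taken path back to "Home", so generated test cases can now cover the previously omitted behavior.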
Notes
We discuss our observations based on the industrial case studies in Sect. 4.3.4.
Due to confidentiality, we do not disclose the real function names used in the implementation.
HBBTV (http://www.hbbtv.org/) stands for Hybrid Broadcast Broadband TV. It is an initiative for harmonizing the broadcast/broadband delivery of entertainment services for TVs and set-top boxes.
The ARME toolset is available at: http://srl.ozyegin.edu.tr/projects/armor/.
Acknowledgments
This work is supported by a joint grant of Vestel Electronics and the Turkish Ministry of Science, Industry and Technology (909.STZ.2015). The contents of this article reflect the ideas and positions of the authors and do not necessarily reflect the ideas or positions of Vestel Electronics or the Turkish Ministry of Science, Industry and Technology. We would like to thank the software developers and test engineers at Vestel Electronics for supporting our case studies.
Cite this article
Gebizli, C.Ş., Sözer, H. Automated refinement of models for model-based testing using exploratory testing. Software Qual J 25, 979–1005 (2017). https://doi.org/10.1007/s11219-016-9338-2