Can My Test Case Run on Your Test Plant? A Logic-Based Compliance Check and Its Evaluation on Real Data

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10364)


Test automation is adopted by the majority of software and hardware producers since it speeds up the testing phase and makes it possible to design and run large suites of tests that would be hard to manage manually. When testing hardware instruments, different physical environments have to be created so that the instruments under test can be analyzed in different scenarios, involving disparate components and software configurations.

Creating a test case is a time-consuming activity, so test cases should be reused as much as possible. Unfortunately, when a physical test plant changes or a new one is created, understanding whether existing test cases can be executed on the updated or new test plant is extremely difficult.

In this paper we present our approach for checking the compliance of a test case with respect to a physical test plant characterized by its devices and their current configuration. The compliance check, which is fully automated and exploits a logic-based approach, answers the query “Can test case A run on the configured physical test plant B?”
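
To give a flavor of what such a logic-based check looks like, here is a minimal Prolog sketch of the query above. It is purely illustrative and not the paper's actual encoding: the predicates device/3, requires/3, and can_run/2, as well as the plant, device, and configuration names, are all made up for this example.

    % Plant B: the devices it offers, with their current configuration
    % (illustrative facts, not taken from the paper).
    device(plant_b, power_supply, config(voltage, 24)).
    device(plant_b, sensor,       config(protocol, modbus)).

    % Test case A: the devices and configurations it requires.
    requires(test_a, power_supply, config(voltage, 24)).
    requires(test_a, sensor,       config(protocol, modbus)).

    % Test case TC can run on plant Plant if no requirement of TC
    % is unmet on Plant (negation as failure).
    can_run(TC, Plant) :-
        \+ ( requires(TC, Device, Conf),
             \+ device(Plant, Device, Conf) ).

    % Example query:
    % ?- can_run(test_a, plant_b).
    % true.

Under this reading, compliance is simply the absence of an unsatisfied requirement; a real check must of course cope with richer device descriptions and configuration constraints.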


Keywords: Test automation · Test case compliance · Logic programming



We thank Vladimir Zaikin, who contributed to the realization of some parts of this work, and the engineers from BlindedCompany’s test automation team involved in the project. We also thank the reviewers for their valuable comments.



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. DISCO Department, Università degli Studi di Milano-Bicocca, Milan, Italy
  2. DIBRIS Department, Università degli Studi di Genova, Genoa, Italy
