Applying Surveys and Interviews in Software Test Tool Evaluation

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11915)


Despite the multitude of available software testing tools, the literature lists the lack of suitable tools and costs as barriers to tool adoption. We conducted a case study to analyze how a group of practitioners, familiar with Robot Framework (an open-source, generic test automation framework), evaluate the tool. We based the case and the unit of analysis on our academia-industry relations, i.e., availability. We used a survey (n = 68) and interviews (n = 6) with convenience sampling to develop a comprehensive view of the phenomenon. The study reveals the importance of understanding the interconnection of different criteria and the influence of context on them. Our results show that unconfirmed or unfocused opinions about criteria, e.g., about Costs or Programming Skills, can lead to misinterpretations, or can hamper strategic decisions if the required technical competence is overlooked. We conclude that surveys can serve as a useful instrument for collecting empirical knowledge about tool evaluation, but that experiential reasoning, collected with a complementary method, is required to develop a comprehensive understanding of it.


Keywords: Test automation · Software testing tool · Tool support · Tool evaluation · Case study · Survey · Interviewing



The work was partially supported by research grant No. 3192/31/2017 from Business Finland for the EUREKA ITEA3 TESTOMAT project (16032), and by grant No. 286386-CPDSS from the Academy of Finland for the CPDSS project.



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. ITEE, M3S, University of Oulu, Oulu, Finland
  2. ITEE, UBICOMP, University of Oulu, Oulu, Finland
