Applying Surveys and Interviews in Software Test Tool Evaluation

  • Päivi Raulamo-Jurvanen
  • Simo Hosio
  • Mika V. Mäntylä
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11915)

Abstract

Despite the multitude of available software testing tools, the literature cites the lack of suitable tools and the associated costs as obstacles to tool adoption. We conducted a case study to analyze how a group of practitioners familiar with Robot Framework (an open-source, generic test automation framework) evaluate the tool. The case and the unit of analysis were based on our academia-industry relations, i.e., on availability. We used a survey (n = 68) and interviews (n = 6) with convenience sampling to develop a comprehensive view of the phenomenon. The study reveals the importance of understanding how different evaluation criteria are interconnected and how strongly context influences them. Our results show that unconfirmed or unfocused opinions about criteria, e.g., Costs or Programming Skills, can lead to misinterpretations or hamper strategic decisions if the required technical competence is overlooked. We conclude that surveys can serve as a useful instrument for collecting empirical knowledge about tool evaluation, but that experiential reasoning, collected with a complementary method, is needed to develop a comprehensive understanding of it.

Keywords

Test automation · Software testing tool · Tool support · Tool evaluation · Case study · Survey · Interviewing

Acknowledgments

This work was partially supported by research grants No. 3192/31/2017 from Business Finland for the EUREKA ITEA3 TESTOMAT project (16032) and No. 286386-CPDSS from the Academy of Finland for the CPDSS project.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Päivi Raulamo-Jurvanen (1)
  • Simo Hosio (2)
  • Mika V. Mäntylä (1)
  1. ITEE, M3S, University of Oulu, Oulu, Finland
  2. ITEE, UBICOMP, University of Oulu, Oulu, Finland
