
SERP-test: a taxonomy for supporting industry–academia communication

Abstract

This paper presents the construction and evaluation of SERP-test, a taxonomy aimed at improving communication between researchers and practitioners in the area of software testing. SERP-test can be utilized for direct communication in industry–academia collaborations. It may also facilitate indirect communication between practitioners adopting software engineering research and researchers who are striving for industry relevance. SERP-test was constructed through a systematic and goal-oriented approach which included literature reviews and interviews with practitioners and researchers. SERP-test was evaluated through an online survey and by utilizing it in an industry–academia collaboration project. SERP-test comprises four facets along which both research contributions and practical challenges may be classified: Intervention, Scope, Effect target and Context constraints. This paper explains the available categories for each of these facets (i.e., their definitions and rationales) and presents examples of categorized entities. Several tasks may benefit from SERP-test, such as formulating research goals from a problem perspective, describing practical challenges in a researchable fashion, analyzing primary studies in a literature review, or identifying relevant points of comparison and generalization of research.
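To make the four-facet classification concrete, the sketch below models a SERP-test-style classification as a simple data structure. The facet names (Intervention, Scope, Effect target, Context constraints) come from the abstract; the specific category values and the example entities are illustrative assumptions, not categories taken from the taxonomy itself.

```python
# Hypothetical sketch of the four SERP-test facets as a data structure.
# Facet names follow the abstract; example values are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class SerpTestClassification:
    intervention: str             # what is proposed (or needed)
    scope: str                    # which testing activity it concerns
    effect_target: str            # the goal or quality it aims to improve
    context_constraints: tuple    # conditions limiting applicability

# A research contribution and a practical challenge classified along the
# same four facets, which is what enables matching between the two:
contribution = SerpTestClassification(
    intervention="search-based test case generation technique",
    scope="unit testing",
    effect_target="fault detection",
    context_constraints=("requires source code access",),
)
challenge = SerpTestClassification(
    intervention="need: better regression test selection",
    scope="regression testing",
    effect_target="test cost reduction",
    context_constraints=("large legacy code base",),
)
```

Classifying both contributions and challenges along the same facets is the key design idea: a practitioner's challenge and a researcher's contribution become directly comparable records.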



Notes

  1. http://ease.cs.lth.se/.
  2. Follow the progress at: http://serpconnect.cs.lth.se.
  3. http://serp.cs.lth.se.
  4. Follow the progress at: http://serpconnect.cs.lth.se.
  5. Follow the progress at: http://serpconnect.cs.lth.se.


Acknowledgments

This work has been supported by ELLIIT, the Strategic Area for ICT research, funded by the Swedish Government. Support has also been received from the Gyllenstierna Krapperup’s Foundation and EASE, the Industrial Excellence Centre for Embedded Applications Software Engineering.

Author information

Correspondence to Emelie Engström.

Appendix: Example entities collected during the evaluation of SERP-test


See Table 11.

Table 11 Example of entities collected during the evaluation of SERP-test


Cite this article

Engström, E., Petersen, K., Ali, N. et al. SERP-test: a taxonomy for supporting industry–academia communication. Software Qual J 25, 1269–1305 (2017). https://doi.org/10.1007/s11219-016-9322-x


Keywords

  • Software testing
  • Classification
  • SERP-test
  • Taxonomy
  • Methodology
  • Industry relevance
  • Intervention
  • Context
  • Effect target
  • Scope