On the pragmatic design of literature studies in software engineering: an experience-based guideline

  • Marco Kuhrmann
  • Daniel Méndez Fernández
  • Maya Daneva

Abstract

Systematic literature studies have received much attention in empirical software engineering in recent years. They have become a powerful tool to collect and structure reported knowledge in a systematic and reproducible way. We distinguish between systematic literature reviews, which analyze reported evidence in depth, and systematic mapping studies, which structure a field of interest in a broader, usually quantified manner. Due to the rapidly increasing body of knowledge in software engineering, researchers who want to capture the published work in a domain often face an extensive number of publications, which need to be screened, rated for relevance, classified, and eventually analyzed. Although several guidelines for conducting literature studies exist, they do not yet help researchers cope with the specific difficulties encountered when applying these guidelines in practice. In this article, we present an experience-based guideline to aid researchers in designing systematic literature studies, with special emphasis on the data collection and selection procedures. Our guideline aims to provide a blueprint for a practical and pragmatic path through the plethora of currently available practices and deliverables, capturing the dependencies among the individual steps. The guideline emerged from various mapping studies and literature reviews conducted by the authors and provides recommendations for the general study design, data collection, and study selection procedures. Finally, we share our experiences and lessons learned in applying the different practices of the proposed guideline.
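
To make the selection step concrete: in such studies, two researchers typically rate each candidate publication for relevance independently, and their agreement beyond chance is then quantified, for example with Cohen's kappa. The following minimal Python sketch is illustrative only; the votes, labels, and function name are assumptions for the example, not taken from the paper.

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa: agreement between two raters, corrected for chance."""
        n = len(rater_a)
        # Observed agreement: fraction of items both raters labeled identically.
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement: derived from each rater's marginal label frequencies.
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        expected = sum((freq_a[label] / n) * (freq_b[label] / n)
                       for label in set(freq_a) | set(freq_b))
        return (observed - expected) / (1 - expected)

    # Hypothetical include/exclude votes of two researchers on ten candidates.
    votes_a = ["in", "out", "in", "in", "out", "in", "out", "out", "in", "in"]
    votes_b = ["in", "out", "out", "in", "out", "in", "out", "in", "in", "in"]
    print(f"kappa = {cohens_kappa(votes_a, votes_b):.2f}")  # -> kappa = 0.58

Values near 1 indicate strong agreement; a low value commonly triggers a discussion and re-rating round before the final study set is fixed. This is a widely used convention in literature-study selection, not a prescription quoted from the paper.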

Keywords

Systematic literature review · Systematic mapping study · Empirical software engineering · Guideline proposal · Lessons learned


Copyright information

© Springer Science+Business Media New York 2017

Authors and Affiliations

  1. Mærsk Mc-Kinney Møller Institute, Section Software Engineering, University of Southern Denmark, Odense M, Denmark
  2. Institute for Informatics, Software and Systems Engineering, Technical University of Munich, Garching, Germany
  3. University of Twente, Enschede, The Netherlands
