
On the pragmatic design of literature studies in software engineering: an experience-based guideline

Empirical Software Engineering

Abstract

Systematic literature studies have received much attention in empirical software engineering in recent years. They have become a powerful tool to collect and structure reported knowledge in a systematic and reproducible way. We distinguish between systematic literature reviews, which analyze reported evidence in depth, and systematic mapping studies, which structure a field of interest in a broader, usually quantified manner. Due to the rapidly increasing body of knowledge in software engineering, researchers who want to capture the published work in a domain often face an extensive amount of publications, which need to be screened, rated for relevance, classified, and eventually analyzed. Although there are several guidelines for conducting literature studies, they do not yet help researchers cope with the specific difficulties encountered in the practical application of these guidelines. In this article, we present an experience-based guideline to aid researchers in designing systematic literature studies, with special emphasis on the data collection and selection procedures. Our guideline aims at providing a blueprint for a practical and pragmatic path through the plethora of currently available practices and deliverables, capturing the dependencies among the individual steps. The guideline emerges from various mapping studies and literature reviews conducted by the authors and provides recommendations for the general study design, data collection, and study selection procedures. Finally, we share our experiences and lessons learned in applying the different practices of the proposed guideline.


Notes

  1. Note that finding the “right” research question is a challenge and highly depends on the actual study type. For instance, Kitchenham et al. (2015) mention that (standard) research questions for systematic reviews usually address the evaluation of the impact and/or effectiveness of certain paradigms, while mapping studies usually address more high-level questions with the purpose of providing some sort of categorization. The questions presented in Table 1 address more the latter aspect, as they cover information available from all sorts of studies. Nonetheless, to plan and implement a literature study efficiently, Staples and Niazi (2007) make clear that narrowly defined research questions are key. We therefore recommend using a combination of generic research questions (e.g., those in Table 1, to “get a feeling” for the result set) and specific, narrow research questions—even for mapping studies.

  2. Note that the construction of search strings also depends on the planned search strategy (see Section 2.2), since search strings for automated database searches have a different “layout” than those used for a curiosity-driven or trial-and-error search, e.g., using Google Scholar. Regardless of the search strategy, finding the proper keywords is crucial. The most straightforward approach to developing appropriate search strings is either to do a trial-and-error search or to call in domain experts. Alternatively, a preliminary study can be conducted to “test” the field of interest.

  3. This is also criticized by Staples and Niazi (2007). In Kuhrmann et al. (2015), however, we accepted this challenge. It took us about a year just to clean the data and perform the selection procedures. We do not recommend this for replication.

  4. Such as the Senior Scholars Basket, cf. http://home.aisnet.org/displaycommon.cfm?an=1&subarticlenbr=346

  5. Note: Apart from serving as a backup search, meta-search engines can also be a useful instrument in studies that include (continuous) updates, e.g., to monitor the development of a field over time (Kuhrmann et al. 2016).

  6. Note: Some of the tools have limitations regarding the amount of text they can process. Furthermore, the tools offer different features, such as thresholds, visualization and export mechanisms. Those points need to be evaluated prior to usage.

  7. Both tools are available at: http://www.wordle.net and http://tagcrowd.com/.

  8. Please note that a reviewer can be an internal reviewer (e.g., a co-author) as well as, in the case of unfamiliar domains, an external researcher or expert not involved in the design at all.

  9. So far, we have not applied this method to a complete study, but we have partially applied it during sample-based result set testing and evaluation (cf. Section 3). As this approach is considerably more complex than the majority vote, it requires sufficient tool support.

  10. Please note that inter-rater reliability calculations also depend on the scales applied, e.g., weighted κ values when using ordinal data (cf. Kitchenham et al. 2015; Wohlin et al. 2012); a small computational sketch is given after these notes.

  11. This approach needs to be considered with care, as, for instance, newer publications may provide a high-quality contribution but do not (yet) have a high citation count (e.g., compared to a 10-year-old publication). Therefore, citation networks only deliver initial indications, and trends should not be taken for granted.
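As a supplement to note 10, the following Python sketch illustrates how such an agreement value can be computed for two reviewers' relevance ratings. It is our illustration, not part of the original article; the rating vectors are invented examples on an ordinal scale.

```python
# Minimal sketch (our illustration): computing Cohen's kappa and a linearly
# weighted kappa for two reviewers' relevance ratings on an ordinal scale
# (0 = exclude, 1 = undecided, 2 = include). The vectors are invented examples.
from sklearn.metrics import cohen_kappa_score

reviewer_a = [2, 2, 0, 1, 2, 0, 0, 1, 2, 0]
reviewer_b = [2, 1, 0, 1, 2, 0, 1, 1, 2, 0]

# Unweighted kappa treats every disagreement alike (suitable for nominal data).
print("kappa:", cohen_kappa_score(reviewer_a, reviewer_b))

# Weighted kappa penalizes larger disagreements on the ordinal scale more
# strongly (cf. Cohen 1968).
print("weighted kappa:", cohen_kappa_score(reviewer_a, reviewer_b, weights="linear"))
```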

References

  • Ali NB, Petersen K (2014) Evaluating strategies for study selection in systematic literature studies. In: Proceedings of the International Symposium on Empirical Software Engineering and Measurement, ESEM. doi:10.1145/2652524.2652557. ACM, New York, pp 45:1–45:4

  • Badampudi D, Wohlin C, Petersen K (2015) Experiences from using snowballing and database searches in systematic literature studies. In: Proceedings of the International Conference on Evaluation and Assessment in Software Engineering, EASE. doi:10.1145/2745802.2745818. ACM, New York, pp 17:1–17:10

  • Bowes D, Hall T, Beecham S (2012) SLuRp: a tool to help large complex systematic literature reviews deliver valid and rigorous results. In: Proceedings of the International Workshop on Evidential Assessment of Software Technologies. ACM, New York, pp 33–36

  • Brereton P, Kitchenham BA, Budgen D, Turner M, Khalil M (2007) Lessons from applying the systematic literature review process within the software engineering domain. J Syst Softw 80(4):571–583. doi:10.1016/j.jss.2006.07.009

  • Carver JC, Hassler E, Hernandes E, Kraft NA (2013) Identifying barriers to the systematic literature review process. In: Proceedings of the International Symposium on Empirical Software Engineering and Measurement, ESEM. doi:10.1109/ESEM.2013.28. IEEE, Washington, DC, pp 203–212

  • Cohen J (1968) Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit. Psychol Bull 70(4):213–220

  • Condori-Fernandez N, Daneva M, Sikkel K, Wieringa R, Dieste O, Pastor O (2009) A systematic mapping study on empirical evaluation of software requirements specifications techniques. In: Proceedings of the International Symposium on Empirical Software Engineering and Measurement, ESEM. doi:10.1109/ESEM.2009.5314232. IEEE, Washington, DC, pp 502–505

  • Dybå T, Dingsøyr T, Hanssen GK (2007) Applying systematic reviews to diverse study types: An experience report. In: Proceedings of the International Symposium on Empirical Software Engineering and Measurement, ESEM. doi:10.1109/ESEM.2007.21. IEEE, Washington, pp 225–234

  • Fabbri S, Silva C, Hernandes E, Octaviano F, Di Thommazo A, Belgamo A (2016) Improvements in the StArt tool to better support the systematic review process. In: Proceedings of the International Conference on Evaluation and Assessment in Software Engineering, EASE. doi:10.1145/2915970.2916013. ACM, New York, pp 21:1–21:5

  • Fabbri SCPF, Felizardo KR, Ferrari FC, Hernandes ECM, Octaviano FR, Nakagawa EY, Maldonado JC (2013) Externalising tacit knowledge of the systematic review process. IET Softw 7(6):298–307. doi:10.1049/iet-sen.2013.0029

  • Fleiss JL (1971) Measuring nominal scale agreement among many raters. Psychol Bull 76(5):378–382

  • Hanneman A, Riddle M (2005) Introduction to social network methods. Online: http://faculty.ucr.edu/~hanneman/

  • Hassler E, Carver JC, Hale D, Al-Zubidy A (2016) Identification of SLR tool needs – results of a community workshop. Inf Softw Technol 70:122–129. doi:10.1016/j.infsof.2015.10.011

  • Inayat I, Salim SS, Marczak S, Daneva M, Shamshirband S (2015) A systematic literature review on agile requirements engineering practices and challenges. Comput Hum Behav 51, Part B:915–929. doi:10.1016/j.chb.2014.10.046

  • Ingibergsson J, Schultz U, Kuhrmann M (2015) On the use of safety certification practices in autonomous field robot software development: a systematic mapping study. In: Proceedings of the International Conference on Product Focused Software Development and Process Improvement, Lecture Notes in Computer Science, vol 9459. Springer, Berlin Heidelberg, pp 335–352

  • Ivarsson M, Gorschek T (2011) A method for evaluating rigor and industrial relevance of technology evaluations. Empir Softw Eng 16(3):365–395. doi:10.1007/s10664-010-9146-4

  • Jacobson JW, Kuhrmann M, Münch J, Diebold P, Felderer M (2016) On the role of software quality management in software process improvement. In: Proceedings of the International Conference on Product-Focused Software Process Improvement, Lecture Notes in Computer Science, vol 10027. Springer, Berlin, Heidelberg, pp 327–343

  • Kalus G, Kuhrmann M (2013) Criteria for software process tailoring: a systematic review. In: Proceedings of the International Conference on Software and System Process, ICSSP. ACM Press, New York, pp 171–180

  • Kitchenham B (2004) Procedures for performing systematic reviews. Technical Report TR/SE-0401, Keele University

  • Kitchenham B, Brereton P (2013) A systematic review of systematic review process research in software engineering. Inf Softw Technol 55(12):2049–2075. doi:10.1016/j.infsof.2013.07.010

  • Kitchenham B, Charters S (2007) Guidelines for performing systematic literature reviews in software engineering. Technical Report EBSE-2007-01, Keele University

  • Kitchenham BA, Budgen D, Brereton P (2015) Evidence-Based Software engineering and systematic reviews. CRC Press

  • Kuhrmann M, Diebold P, Münch J (2016) Software process improvement: A systematic mapping study on the state of the art. PeerJ Comput Sci 2:e62

  • Kuhrmann M, Diebold P, Münch J, Tell P (2016) How does software process improvement address global software engineering?. In: International Conference on Global Software Engineering, ICGSE. IEEE, Washington, DC, pp 89–98

  • Kuhrmann M, Fernández DM, Gröber M (2013) Towards artifact models as process interfaces in distributed software projects. In: Proceedings of the International Conference on Global Software Engineering, ICGSE. IEEE, Washington, DC, pp 11–20

  • Kuhrmann M, Fernández DM, Steenweg R (2013) Systematic software process development: Where do we stand today?. In: Proceedings of the International Conference on Software and System Process, ICSSP. ACM Press, New York, pp 166–170

  • Kuhrmann M, Fernández DM, Tiessler M (2014) A mapping study on the feasibility of method engineering. J Softw: Evol Process 26(12):1053–1073

  • Kuhrmann M, Konopka C, Nellemann P, Diebold P, Münch J (2015) Software process improvement: Where is the evidence?. In: Proceedings of the International Conference on Software and Systems Process, ICSSP. ACM, New York, pp 107–116

  • Kuo BYL, Hentrich T, Good BM, Wilkinson MD (2007) Tag clouds for summarizing web search results. In: Proceedings of the International Conference on World Wide Web, WWW. doi:10.1145/1242572.1242766. ACM, New York, pp 1203–1204

  • Marshall C, Brereton P (2013) Tools to support systematic literature reviews in software engineering: A mapping study. In: Proceedings of the International Symposium on Empirical Software Engineering and Measurement, ESEM. doi:10.1109/ESEM.2013.32. IEEE, Washington, DC, pp 296–299

  • Marshall C, Brereton P (2015) Systematic review toolbox: a catalogue of tools to support systematic reviews. In: Proceedings of the International Conference on Evaluation and Assessment in Software Engineering, EASE. ACM, New York, pp 23:1–23:6

  • Marshall C, Brereton P, Kitchenham B (2014) Tools to support systematic reviews in software engineering: a feature analysis. In: Proceedings of the International Conference on Evaluation and Assessment in Software Engineering, EASE. ACM, New York, pp 13:1–13:10

  • Marshall C, Brereton P, Kitchenham B (2015) Tools to support systematic reviews in software engineering: a cross-domain survey using semi-structured interviews. In: Proceedings of the International Conference on Evaluation and Assessment in Software Engineering, EASE. ACM, New York, pp 26:1–26:6

  • Méndez Fernández D, Ognawala S, Wagner S, Daneva M (2014) Where do we stand in requirements engineering improvement today? First results from a mapping study. In: Proceedings of the International Symposium on Empirical Software Engineering and Measurement, ESEM. ACM, New York, pp 58:1–58:4

  • Molléri JS, Benitti FBV (2015) SESRA: a web-based automated tool to support the systematic literature review process. In: Proceedings of the International Conference on Evaluation and Assessment in Software Engineering, EASE. ACM, New York, pp 24:1–24:6

  • Oosterman J, Cockburn A (2010) An empirical comparison of tag clouds and tables. In: Proceedings of the Conference of the Computer-Human Interaction Special Interest Group of Australia on Computer-Human Interaction, OZCHI. doi:10.1145/1952222.1952284. ACM, New York, pp 288–295

  • Paternoster N, Giardino C, Unterkalmsteiner M, Gorschek T, Abrahamsson P (2014) Software development in startup companies: A systematic mapping study. Inf Softw Technol 56(10):1200–1218. doi:10.1016/j.infsof.2014.04.014

  • Penzenstadler B, Raturi A, Richardson D, Calero C, Femmer H, Franch X (2014) Systematic mapping study on software engineering for sustainability (SE4S). In: Proceedings of the International Conference on Evaluation and Assessment in Software Engineering, EASE. doi:10.1145/2601248.2601256. ACM, New York, pp 14:1–14:14

  • Petersen K, Ali NB (2011) Identifying strategies for study selection in systematic reviews and maps. In: Proceedings of the International Symposium on Empirical Software Engineering and Measurement, ESEM. doi:10.1109/ESEM.2011.46. IEEE, Washington DC, pp 351–354

  • Petersen K, Feldt R, Mujtaba S, Mattson M (2008) Systematic mapping studies in software engineering. In: Proceedings of the International Conference on Evaluation and Assessment in Software Engineering, EASE. ACM, New York, pp 68–77

  • Petersen K, Vakkalanka S, Kuzniarz L (2015) Guidelines for conducting systematic mapping studies in software engineering: an update. Inf Softw Technol 64:1–18

  • Portillo-Rodríguez J, Vizcaíno A, Piattini M, Beecham S (2012) Tools used in global software engineering: A systematic mapping review. Inf Softw Technol 54(7):663–685. doi:10.1016/j.infsof.2012.02.006

  • Racheva Z, Daneva M, Sikkel K (2009) Value creation by agile projects: Methodology or mystery?. In: Product-Focused Software Process Improvement, Lecture Notes in Business Information Processing. doi:10.1007/978-3-642-02152-7_12, vol 32. Springer, Berlin Heidelberg, pp 141–155

  • Ramage D, Dumais S, Liebling D (2010) Characterizing microblogs with topic models. In: Proceedings of the International AAAI Conference on Weblogs and Social Media. Association for the Advancement of Artificial Intelligence, pp 130–137

  • Riaz M, Sulayman M, Salleh N, Mendes E (2010) Experiences conducting systematic reviews from novices’ perspective. In: Proceedings of the International Conference on Evaluation and Assessment in Software Engineering, EASE. British Computer Society, Swinton, UK, pp 44–53

  • Rivadeneira AW, Gruen DM, Muller MJ, Millen DR (2007) Getting our head in the clouds: Toward evaluation studies of tagclouds. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI. doi:10.1145/1240624.1240775. ACM, New York, pp 995–998

  • Schramm J, Dohrmann P, Rausch A, Ternité T (2014) Process model engineering lifecycle: Holistic concept proposal and systematic literature review. In: Proceedings of the Euromicro Conference on Software Engineering and Advanced Applications, SEAA. IEEE, Washington, DC, pp 127–130

  • Schrammel J, Leitner M, Tscheligi M (2009) Semantically structured tag clouds: An empirical evaluation of clustered presentation approaches. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI. doi:10.1145/1518701.1519010. ACM, New York, pp 2037–2040

  • Scott J (2000) Social network analysis: A handbook, 2nd edn. ISBN-13: 978-0761963394. SAGE Publications

  • Shaw M (2003) Writing good software engineering research papers: Minitutorial. In: International Conference on Software Engineering, ICSE. IEEE, Washington, DC, pp 726–736

  • Staples M, Niazi M (2007) Experiences using systematic review guidelines. J Syst Softw 80(9):1425–1437. doi:10.1016/j.jss.2006.09.046

  • Tell P, Cholewa J, Nellemann P, Kuhrmann M (2016) Beyond the spreadsheet: Reflections on tool support for literature studies. In: Proceedings of the International Conference on Evaluation and Assessment in Software Engineering, EASE. ACM, New York, pp 22:1–22:5

  • Theocharis G, Kuhrmann M, Münch J, Diebold P (2015) Is Water-Scrum-Fall reality? On the use of agile and traditional development practices. Lecture Notes in Computer Science, vol 9459. Springer, Berlin, Heidelberg

  • Wasserman S, Faust K (1994) Social network analysis: Methods and applications. Cambridge University Press, Cambridge

  • Wieringa R, Maiden N, Mead N, Rolland C (2005) Requirements engineering paper classification and evaluation criteria: A proposal and a discussion. Requir Eng 11(1):102–107. doi:10.1007/s00766-005-0021-6

  • Wohlin C, Runeson P, Höst M, Ohlsson MC, Regnell B, Wesslén A (2012) Experimentation in software engineering. Springer

  • Wohlin C, Runeson P, Da Mota Silveira Neto PA, Engström E, Do Carmo Machado I, De Almeida ES (2013) On the reliability of mapping studies in software engineering. J Syst Softw 86(10):2594–2610. doi:10.1016/j.jss.2013.04.076

  • Zhang H, Babar MA, Tell P (2011) Identifying relevant studies in software engineering. Inf Softw Technol 53(6):625–637. doi:10.1016/j.infsof.2010.12.010

Acknowledgments

We want to thank Roel Wieringa for fruitful discussions on previous versions of this article, and all our students, especially those who contributed to our previously conducted literature studies over the past years. Finally, we are grateful for the constructive feedback provided by the anonymous reviewers of this article, who helped improve it substantially.

Author information

Corresponding author

Correspondence to Marco Kuhrmann.

Additional information

Communicated by: Jeffrey C. Carver

Appendices

Appendix A: Study Workflow Templates

In this appendix, we provide selected workflow templates, which we inferred from our experiences (Table 5), for simple reuse in the research method descriptions of scientific papers. The templates can be used to inspire or shorten the description of research methods, which, especially in conference papers, consumes much precious space. For each model in the subsequent sections, we provide a brief context description, an exemplary workflow, and a textual description.

A.1 Template 1: 2 Researcher Workshop Model with Snowballing

This model addresses smaller literature studies in which just two researchers collaborate and thus have no option to implement more comprehensive study selection procedures, such as majority votes. Our experience shows this model to be well applicable in settings with up to approximately 50 papers, with two senior researchers or one senior and one junior researcher, and in distributed settings. Apart from an initial research objective and/or a set of research questions and a (small) set of reference publications, no extra entry conditions need to be fulfilled.

Figure 8 illustrates the basic workflow for this model including some notes emphasizing the most relevant points to be considered.

Fig. 8 Exemplary workflow for the 2 researcher workshop model with a snowballing-based preliminary study

The 2 Researcher Workshop Model with Snowballing is implemented as follows: Right at the beginning of the study, a snowballing-based preliminary study is conducted. For this pre-study, a set of reference papers is selected to lay the foundation for an (incremental) snowballing search. When the snowballing is done, the obtained papers are analyzed for keywords, which are used to construct the search queries for an automated database search; a small sketch of such a keyword analysis is given below. As the last preparation steps, the data sources of interest are selected and the inclusion and exclusion criteria are defined.
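The keyword analysis mentioned above can be supported by a simple frequency count over the titles (or abstracts) of the snowballed papers. The following Python sketch is our illustration of this step; the titles and the stop word list are hypothetical examples.

```python
# Minimal sketch (our illustration): deriving candidate search keywords from
# the titles of the papers obtained in the snowballing-based pre-study.
# Titles and stop words are hypothetical examples.
import re
from collections import Counter

titles = [
    "A systematic mapping study on software process improvement",
    "Evidence-based software process tailoring: a literature review",
    "Software process improvement in small and medium enterprises",
]
stop_words = {"a", "on", "in", "and", "the", "of"}

keyword_counts = Counter(
    token
    for title in titles
    for token in re.findall(r"[a-z]+", title.lower())
    if token not in stop_words
)

# The most frequent terms are candidates for the database search query.
print(keyword_counts.most_common(5))
```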

The data collection is performed according to the search strategy (Section 2.2.1). After the search, the dataset is cleaned (Section 2.2.2), e.g., by a stepwise integration of individual datasets. The kick-off meeting closes the data collection and cleaning phase and, at the same time, starts the study selection phase. In the kick-off meeting, both researchers reflect on all the criteria, inspect and prepare the dataset for the rating, and agree on a schedule. According to the procedure illustrated in Fig. 5, each researcher gets a copy of the dataset and carries out the individual rating. When the rating is done, both datasets are integrated and checked for consensus (a sketch of this integration is given below). In a rating workshop (or multiple workshops), both researchers iterate through the dataset and discuss all items that are not yet decided in order to reach an agreement. When the concluding integration is done, the study selection phase is closed and the result set is transferred to the main study (Section 2.4). For handing over the result set, a copy of the fully rated result set is created for archiving, and the actual result set is reduced, i.e., those dataset items that were rated as irrelevant for the main study are removed from the dataset so that only relevant data finds its way into the analysis.
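The integration and consensus check between the two researchers can be scripted. The following pandas sketch is our illustration; the file and column names are assumptions, not part of the guideline.

```python
# Minimal sketch (our illustration): merging two researchers' individual
# ratings and flagging all items without consensus for the rating workshop.
# File and column names are assumptions.
import pandas as pd

a = pd.read_csv("rating_researcher_a.csv")  # columns: id, title, vote ("in"/"out")
b = pd.read_csv("rating_researcher_b.csv")  # same structure

merged = a.merge(b[["id", "vote"]], on="id", suffixes=("_a", "_b"))
merged["agreed"] = merged["vote_a"] == merged["vote_b"]

# Items without agreement form the agenda for the rating workshop(s).
workshop_items = merged[~merged["agreed"]]
workshop_items.to_csv("workshop_agenda.csv", index=False)
print(f"{len(workshop_items)} of {len(merged)} items still need a decision")
```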

A.2 Template 2: 3 Researcher Voting-only Model

This model addresses literature studies in which three researchers collaborate and implement a voting-based study selection procedure. Our experience shows this model to be well applicable in the majority of all literature study settings. The model supports mixed and distributed teams, although at least one senior researcher has to be involved to guide the study project. Our standard implementation of the 3 Researcher Voting-only Model follows the 2+1 approach (Fig. 5, p. 15), i.e., the voting procedure to select relevant papers is organized such that two researchers carry out the full voting independently and call in a third researcher to make the final decisions. In order to set up a study following this model, research objectives and questions, keyword lists, and accordingly derived search queries have to be in place; optionally, a (small) set of reference publications is available.

Figure 9 illustrates the basic workflow for this model including some notes emphasizing the most relevant points to be considered.

Fig. 9 Exemplary workflow for a data collection and study selection approach for 3 reviewers using a voting-only approach

The 3 Researcher Voting-only Model is implemented as follows: After defining the search queries, data sources of interest, and the required inclusion and exclusion criteria, the actual data collection is performed (Section 2.2.1). After the data collection, the datasets are cleaned (Section 2.2.2), e.g., via a stepwise integration of individual datasets.

In the kick-off meeting, the team of researchers nominates two researchers who will conduct the initial rating. According to the procedure illustrated in Fig. 5, each of the two selected researchers gets a copy of the integrated dataset for carrying out the individual rating. When both researchers have rated the dataset, one of them integrates both ratings and analyzes the integrated result set for agreement. Those dataset items that are not yet decided are selected and exported into a reduced dataset, which is given to the third reviewer. The third reviewer then performs a rating on the reduced dataset and, eventually, integrates the outcome with the full dataset. After this third rating, the dataset is fully decided and can be prepared for transfer to the main analysis (Section 2.4). If a tool-supported approach is used, as, for instance, shown in Fig. 10, the different stages can be supported by simple calculations, scripts, and conditional formatting (color coding); a sketch of the underlying 2+1 decision logic is given below.
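The 2+1 decision logic underlying such a spreadsheet can be expressed in a few lines of code. The following Python sketch is our illustration; the paper identifiers and votes are invented.

```python
# Minimal sketch (our illustration) of the 2+1 voting logic: items on which
# the two primary reviewers agree are decided directly; the remaining items
# form the reduced dataset that is rated by the third reviewer.
votes_a = {"P1": "in", "P2": "out", "P3": "in", "P4": "out"}
votes_b = {"P1": "in", "P2": "in",  "P3": "in", "P4": "out"}

decided, undecided = {}, []
for paper, vote in votes_a.items():
    if vote == votes_b[paper]:
        decided[paper] = vote        # consensus between the two reviewers
    else:
        undecided.append(paper)      # goes into the reduced dataset

votes_c = {"P2": "out"}              # the third reviewer rates only the reduced dataset
for paper in undecided:
    decided[paper] = votes_c[paper]  # final decision by the third reviewer

print(decided)  # {'P1': 'in', 'P2': 'out', 'P3': 'in', 'P4': 'out'}
```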

Fig. 10 Example of a color-coded voting spreadsheet. The sheet shows different combinations of a 3-person majority vote (2 reviewers + 1 extra reviewer for final decisions)

Appendix B: Recommended Data Structure

In this section, we present a recommendation for a data structure to store data obtained by a manual or automated literature search. Table 8 presents this recommended data structure, which emerges from several literature studies (Table 5), and explains the meaning of the different fields. Note: We consider the presented data structure to be minimal, i.e., specific studies will require further data fields. However, due to the absence of comprehensive and mature tools to support mapping studies, the usual approach is to set up a simple spreadsheet. Examples of such spreadsheets (Fig. 10) can be obtained from http://goo.gl/PBylsn.

The data structure as presented in Table 8 contains only a minimal set of data, which needs to be extended according to the study’s scope. For systematic mapping studies, the following extra data should be included:

  • Generic/reused classification schemas, such as research/contribution type facet (Wieringa et al. 2005, Petersen et al. 2008)

  • Study-specific classification schemas, such as focus type facets (Paternoster et al. 2014) or rigor/relevance models (Ivarsson and Gorschek 2011)

  • In-/exclusion criteria to document why a paper was in-/excluded (cf. Table 2)

Table 8 Recommended minimal data structure

Furthermore, grounded in our experience from Kuhrmann et al. (2015), we also recommend adding “dynamic metadata” to the data structure (as already mentioned in Table 8). Such metadata can be added on the fly and can support the enhancement of the dataset. From our experience (Kuhrmann et al. 2016), we recommend collecting metadata at least for the dimensions Study and Context.

The dimension Study covers the overall research approach followed in a particular paper, e.g., whether a particular paper is a primary study, a replication, or even a secondary study, and it can also capture the research methods used, such as interview research or grounded theory analyses. Metadata from this category supports a more detailed classification and analysis of papers regarding the research and contribution type facets. The dimension Context aims at collecting as much context information from the selected papers as possible, such as the software engineering lifecycle phase addressed by a paper (e.g., design, coding, test), the organizational context in which the research was conducted (e.g., SMEs, global players, etc.), and the application domain of a paper (e.g., automotive software or software for the healthcare domain). A sketch of one possible realization of such a record is given below.
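Since Table 8 itself is not reproduced here, the following Python sketch illustrates one possible realization of such a record, including a slot for the dynamic metadata discussed above. All field names are our assumptions and must be aligned with the actual structure in Table 8.

```python
# Minimal sketch (our illustration) of a data record for a literature study.
# Field names are assumptions and need to be aligned with Table 8.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class StudyRecord:
    key: str                                   # unique identifier, e.g., a BibTeX key
    title: str
    authors: List[str]
    year: int
    venue: str
    source: str                                # data source the entry was retrieved from
    votes: Dict[str, str] = field(default_factory=dict)    # reviewer -> "in"/"out"
    criteria: List[str] = field(default_factory=list)      # in-/exclusion criteria applied
    metadata: Dict[str, str] = field(default_factory=dict) # dynamic metadata (Study, Context, ...)

record = StudyRecord(
    key="Kuhrmann2015",
    title="Software process improvement: Where is the evidence?",
    authors=["Kuhrmann, M."], year=2015, venue="ICSSP", source="ACM Digital Library",
)
record.metadata["Study"] = "secondary study"
record.metadata["Context"] = "SMEs, global software engineering"
```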

Cite this article

Kuhrmann, M., Fernández, D.M. & Daneva, M. On the pragmatic design of literature studies in software engineering: an experience-based guideline. Empir Software Eng 22, 2852–2891 (2017). https://doi.org/10.1007/s10664-016-9492-y
