The who, what, how of software engineering research: a socio-technical framework


Abstract

Software engineering is a socio-technical endeavor, and while many of our contributions focus on technical aspects, human stakeholders such as software developers are directly affected by and can benefit from our research and tool innovations. In this paper, we question how much of our research addresses human and social issues, and explore how much we study human and social aspects in our research designs. To answer these questions, we developed a socio-technical research framework to capture the main beneficiary of a research study (the who), the main type of research contribution produced (the what), and the research strategies used in the study (how we methodologically approach delivering relevant results given the who and what of our studies). We used this Who-What-How framework to analyze 151 papers from two well-cited publishing venues—the main technical track at the International Conference on Software Engineering, and the Empirical Software Engineering Journal by Springer—to assess how much this published research explicitly considers human aspects. We find that although a majority of these papers claim the contained research should benefit human stakeholders, most focus predominantly on technical contributions. Although our analysis is scoped to two venues, our results suggest a need for more diversification and triangulation of research strategies. In particular, there is a need for strategies that aim at a deeper understanding of human and social aspects of software development practice to balance the design and evaluation of technical innovations. We recommend that the framework should be used in the design of future studies in order to steer software engineering research towards explicitly including human and social concerns in their designs, and to improve the relevance of our research for human stakeholders.


Notes

  1. Cooperative and Human Aspects of Software Engineering, co-located with ICSE since 2008 http://www.chaseresearch.org/

  2. We shorten this to “Humans” in the rest of the paper.

  3. We recognize that most technical systems are studied or improved with the final goal of benefiting a human stakeholder. However, we found that in many papers these human stakeholders are not discussed and the research is aimed at understanding or improving the technical system itself.

  4. By in silico, we mean performed on a computer or via computer simulation.

  5. For example, one EMSE paper we read reported a user study but did not indicate how many participants were involved, nor who the participants were.

  6. http://www.gousios.gr/blog/Scaling-qualitative-research.html

  7. Visual Languages and Human-Centric Computing, http://conferences.computer.org/VLHCC/

  8. ACM Conference on Computer Supported Cooperative Work https://cscw.acm.org

  9. Stol and Fitzgerald interpret and extend this model quite differently from us, as they are not concerned with using their framework to distinguish which strategies directly involve human actors. Runkel and McGrath developed their model to capture behavioral aspects, and we retain that behavioral focus in our extension of their model.

References

  • Aranda J, Venolia G (2009) The secret life of bugs: Going past the errors and omissions in software repositories. In: Proceedings of the international conference on software engineering (ICSE), pp 298–308. https://doi.org/10.1109/ICSE.2009.5070530

  • Baecker RM, Grudin J, Buxton WAS, Greenberg S (eds) (1995) Human-computer interaction: Toward the year 2000. Morgan Kaufmann Publishers Inc., San Francisco


  • Bird C, Rigby PC, Barr ET, Hamilton DJ, German DM, Devanbu P (2009) The promises and perils of mining git. In: Proceedings of the international working conference on mining software repositories. https://doi.org/10.1109/msr.2009.5069475

  • Brooks FP Jr (1995) The mythical man-month (anniversary ed.). Addison-Wesley Longman Publishing Co., Inc., Boston


  • Colavizza G, Hrynaszkiewicz I, Staden I, Whitaker K, McGillivray B (2019) The citation advantage of linking publications to research data. Tech. Rep. arXiv:1907.02565

  • Cruz A, Correia A, Paredes H, Fonseca B, Morgado L, Martins P (2012) Towards an overarching classification model of CSCW and groupware: a socio-technical perspective. In: International conference on collaboration and technology, Springer, pp 41–56

  • DeMarco T, Lister T (1987) Peopleware: productive projects and teams. Dorset House Publishing Co., Inc., New York


  • Denzin NK (1973) The research act: A theoretical introduction to sociological methods. Transaction Publishers, New Jersey


  • Dittrich Y, John M, Singer J, Tessem B (eds) (2007) Special issue on qualitative software engineering research. Information and Software Technology 49(6). https://www.sciencedirect.com/journal/information-and-software-technology/vol/49/issue/6

  • Dybå T, Prikladnicki R, Rönkkö K, Seaman C, Sillito J (eds) (2011) Special Issue on Qualitative Research in Software Engineering, vol 16. Springer, Berlin. https://link.springer.com/journal/10664/16/4


  • Easterbrook S, Singer J, Storey MA, Damian D (2008) Selecting empirical methods for software engineering research. Springer, London, pp 285–311. https://doi.org/10.1007/978-1-84800-044-5_11


  • Engström E, Storey MA, Runeson P, Höst M, Baldassarre MT (2019) A review of software engineering research from a design science perspective. arXiv:1904.12742

  • Felderer M, Travassos GH (2019) The evolution of empirical methods in software engineering

  • Feldt R, Torkar R, Angelis L, Samuelsson M (2008) Towards individualized software engineering: empirical studies should collect psychometrics. In: Proceedings of the 2008 international workshop on Cooperative and human aspects of software engineering, ACM, pp 49–52

  • Guéhéneuc YG, Khomh F (2019) Empirical software engineering. In: Handbook of software engineering, Springer, pp 285–320

  • Hassan AE (2008) The road ahead for mining software repositories. In: 2008 Frontiers of software maintenance, pp 48–57. https://doi.org/10.1109/FOSM.2008.4659248

  • Kalliamvakou E, Gousios G, Blincoe K, Singer L, German DM, Damian D (2014) The promises and perils of mining GitHub. In: International working conference on mining software repositories. https://doi.org/10.1145/2597073.2597074

  • Kirk J, Miller ML (1986) Reliability and validity in qualitative research. Sage Publications. https://doi.org/10.4135/9781412985659

  • Kitchenham BA, Pfleeger SL (2008) Personal opinion surveys. Springer, London, pp 63–92. https://doi.org/10.1007/978-1-84800-044-5_3


  • Kontio J, Bragge J, Lehtola L (2008) The focus group method as an empirical tool in software engineering. Springer, London, pp 93–116. https://doi.org/10.1007/978-1-84800-044-5_4


  • Lanza M, Mocci A, Ponzanelli L (2016) The tragedy of defect prediction, prince of empirical software engineering research. IEEE Softw 33(6):102–105


  • Lenberg P, Feldt R, Tengberg LGW, Tidefors I, Graziotin D (2017) Behavioral software engineering – guidelines for qualitative studies. arXiv:1712.08341

  • Lenberg P, Feldt R, Wallgren LG (2014) Towards a behavioral software engineering. In: Proceedings of the 7th international workshop on cooperative and human aspects of software engineering, pp 48–55

  • Lex A, Gehlenborg N, Strobelt H, Vuillemot R, Pfister H (2014) UpSet: visualization of intersecting sets. IEEE Trans Vis Comput Graph 20(12):1983–1992. https://doi.org/10.1109/tvcg.2014.2346248


  • McGrath JE (1995) Methodology matters: Doing research in the behavioral and social sciences. In: Baecker RM, Grudin J, Buxton W, Greenberg S (eds) Readings in Human-Computer Interaction: Toward the Year 2000. Morgan Kaufmann Publishers Inc, pp 152–169

  • Miles MB, Huberman AM, Saldana J (2013) Qualitative data analysis: a methods sourcebook. SAGE Publications Incorporated

  • Onwuegbuzie AJ, Leech NL (2007) Validity and qualitative research: an oxymoron? Quality & Quantity 41:233–249


  • Roller MR, Lavrakas PJ (2015) Applied Qualitative Research Design: A Total Quality Framework Approach. Guilford Press. https://www.amazon.com/Applied-Qualitative-Research-Design-Framework/dp/1462515754

  • Runeson P, Höst M (2008) Guidelines for conducting and reporting case study research in software engineering. Empir Softw Eng 14(2):131. https://doi.org/10.1007/s10664-008-9102-8


  • Runkel PJ, McGrath JE (1972) Research on human behavior. Holt, Rinehart, and Winston Inc

  • Seaman CB (1999) Qualitative methods in empirical studies of software engineering. IEEE Transactions on Software Engineering 25(4):557–572


  • Seaman CB (2008) Qualitative methods. Springer, London, pp 35–62. https://doi.org/10.1007/978-1-84800-044-5_2


  • Sharp H, Dittrich Y, de Souza CRB (2016) The role of ethnographic studies in empirical software engineering. IEEE Trans Softw Eng 42(8):786–804. https://doi.org/10.1109/TSE.2016.2519887


  • Shaw M (2003) Writing good software engineering research papers: Minitutorial. In: Proceedings of the 25th international conference on software engineering, ICSE ’03. IEEE Computer Society, Washington, pp 726–736

  • Shneiderman B (1980) Software psychology: human factors in computer and information systems (Winthrop Computer Systems Series). Winthrop Publishers

  • Singer J, Sim SE, Lethbridge TC (2008) Software engineering data collection for field studies. Springer, London, pp 9–34. https://doi.org/10.1007/978-1-84800-044-5_1


  • Singer J, Vinson NG (2002) Ethical issues in empirical studies of software engineering. IEEE Trans Softw Eng 28(12):1171–1180


  • Stol KJ, Fitzgerald B (2015) A holistic overview of software engineering research strategies. In: Proceedings of the Third international workshop on conducting empirical studies in industry, CESI ’15. IEEE Press, Piscataway, pp 47–54

  • Stol KJ, Fitzgerald B (2018) The ABC of software engineering research. ACM Trans Softw Eng Methodol 27(3):11:1–11:51. https://doi.org/10.1145/3241743


  • Theisen C, Dunaiski M, Williams L, Visser W (2017) Writing good software engineering research papers: Revisited. In: 2017 IEEE/ACM 39th international conference on software engineering companion (ICSE-C), pp 402–402. https://doi.org/10.1109/ICSE-C.2017.51

  • Weinberg GM (1985) The psychology of computer programming. Wiley, New York


  • Whitworth B (2009) The social requirements of technical systems. In: Whitworth B, de Moor A (eds) Handbook of research on socio-technical design and social networking systems. https://doi.org/10.4018/978-1-60566-264-0. IGI Global, pp 2–22

  • Williams C (2019) Methodology matters: mapping software engineering research through a sociotechnical lens. Master’s thesis, University of Victoria. https://dspace.library.uvic.ca//handle/1828/9997

  • Zelkowitz MV (2007) Techniques for empirical validation. Springer, Berlin, pp 4–9


  • Barik T, Smith J, Lubick K, Holmes E, Feng J, Murphy-Hill E, Parnin C (2017) Do developers read compiler error messages? In: Proceedings of the ACM/IEEE international conference on software engineering, IEEE. https://doi.org/10.1109/icse.2017.59

  • Bezemer CP, McIntosh S, Adams B, German DM, Hassan AE (2017) An empirical study of unspecified dependencies in make-based build systems. Empir Softw Eng 22(6):3117–3148. https://doi.org/10.1007/s10664-017-9510-8


  • Charpentier A, Falleri JR, Morandat F, Yahia EBH, Réveillère L (2017) Raters’ reliability in clone benchmarks construction. Empir Softw Eng 22(1):235–258. https://doi.org/10.1007/s10664-015-9419-z


  • Christakis M, Emmisberger P, Godefroid P, Müller P (2017) A general framework for dynamic stub injection. In: Proceedings of the ACM/IEEE international conference on software engineering, pp 586–596. https://doi.org/10.1109/ICSE.2017.60

  • Faitelson D, Tyszberowicz S (2017) UML diagram refinement (focusing on class- and use case diagrams). In: Proceedings of the ACM/IEEE international conference on software engineering, pp 735–745. https://doi.org/10.1109/ICSE.2017.73

  • Fernández DM, Wagner S, Kalinowski M, Felderer M, Mafra P, Vetrò A, Conte T, Christiansson MT, Greer D, Lassenius C, Männistö T, Nayabi M, Oivo M, Penzenstadler B, Pfahl D, Prikladnicki R, Ruhe G, Schekelmann A, Sen S, Spinola R, Tuzcu A, de la Vara JL, Wieringa R (2017) Naming the pain in requirements engineering. Empir Softw Eng 22(5):2298–2338. https://doi.org/10.1007/s10664-016-9451-7


  • Heikkilä VT, Paasivaara M, Lassenius C, Damian D, Engblom C (2017) Managing the requirements flow from strategy to release in large-scale agile development: a case study at Ericsson. Empir Softw Eng 22(6):2892–2936. https://doi.org/10.1007/s10664-016-9491-z


  • Hoda R, Noble J (2017) Becoming agile: a grounded theory of agile transitions in practice. In: Proceedings of the ACM/IEEE international conference on software engineering, IEEE. https://doi.org/10.1109/icse.2017.21

  • Jiang H, Li X, Yang Z, Xuan J (2017) What causes my test alarm? Automatic cause analysis for test alarms in system and integration testing. In: 2017 IEEE/ACM 39th international conference on software engineering (ICSE), IEEE. https://doi.org/10.1109/icse.2017.71

  • Joblin M, Apel S, Hunsen C, Mauerer W (2017) Classifying developers into core and peripheral: an empirical study on count and network metrics. In: Proceedings of the ACM/IEEE international conference on software engineering, pp 164–174. https://doi.org/10.1109/ICSE.2017.23

  • Kafali O, Jones J, Petruso M, Williams L, Singh MP (2017) How good is a security policy against real breaches? A HIPAA case study. In: Proceedings of the ACM/IEEE international conference on software engineering, IEEE. https://doi.org/10.1109/icse.2017.55

  • Kitchenham B, Madeyski L, Budgen D, Keung J, Brereton P, Charters S, Gibbs S, Pohthong A (2016) Robust statistical methods for empirical software engineering. Empir Softw Eng 22(2):579–630. https://doi.org/10.1007/s10664-016-9437-5


  • Lenberg P, Tengberg LGW, Feldt R (2016) An initial analysis of software engineers’ attitudes towards organizational change. Empir Softw Eng 22(4):2179–2205. https://doi.org/10.1007/s10664-016-9482-0


  • Li M, Wang W, Wang P, Wang S, Wu D, Liu J, Xue R, Huo W (2017) LibD: scalable and precise third-party library detection in Android markets. In: Proceedings of the ACM/IEEE international conference on software engineering. https://doi.org/10.1109/icse.2017.38

  • Lin Y, Sun J, Xue Y, Liu Y, Dong J (2017) Feedback-based debugging. In: Proceedings of the ACM/IEEE international conference on software engineering. https://doi.org/10.1109/icse.2017.43

  • Mkaouer MW, Kessentini M, Cinnéide MÓ, Hayashi S, Deb K (2016) A robust multi-objective approach to balance severity and importance of refactoring opportunities. Empir Softw Eng 22(2):894–927. https://doi.org/10.1007/s10664-016-9426-8


  • Rojas JM, White TD, Clegg BS, Fraser G (2017) Code defenders: Crowdsourcing effective tests and subtle mutants with a mutation testing game. In: Proceedings of the ACM/IEEE international conference on software engineering. https://doi.org/10.1109/icse.2017.68

  • Stol KJ, Ralph P, Fitzgerald B (2016) Grounded theory in software engineering research: a critical review and guidelines. In: Proceedings of the ACM/IEEE international conference on software engineering, pp 120–131. https://doi.org/10.1145/2884781.2884833


Acknowledgements

We would like to thank Cassandra Petrachenko, Alexey Zagalsky and Soroush Yousefi for their invaluable help with this paper and research. We also thank Marian Petre and the anonymous reviewers for their insightful suggestions to improve our paper. We also acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC).

Author information


Correspondence to Margaret-Anne Storey.

Additional information

Communicated by: Burak Turhan


Appendices

Appendix A: The Circumplex of Runkel and McGrath

Figure 6 shows a sketch of the research strategy circumplex designed by Runkel and McGrath (1972) for categorizing behavioral research strategies; we adapted their model for the How part of our research framework. Because their model was developed to categorize human behavioral research, it provides a good basis for examining socio-technical factors in software engineering.

Fig. 6: Runkel and McGrath’s research strategy circumplex

The McGrath model has been used by other software engineering researchers to reflect on research strategy choice and its implications for research design (Easterbrook et al. 2008), and most recently by Stol and Fitzgerald (2018) to provide consistent terminology for research strategies (Footnote 9). It has also been used in the fields of Human-Computer Interaction (Baecker et al. 1995) and CSCW (Cruz et al. 2012) to guide research design on human aspects.

Three of our quadrants (Respondent, Lab, Field) mirror three of the quadrants in Runkel and McGrath’s book (although we refer to Experimental Strategies as Lab Strategies, which we find less confusing). Their fourth quadrant captures non-empirical research methods, which they call Theoretical Strategies. We consider two types of non-empirical strategies in our framework: Meta (e.g., systematic literature review) and Formal Theory, and we show them separately from the four quadrants of empirical strategies. Our own fourth quadrant includes Computer Simulations (which we consider empirical), as well as other data strategies that rely solely on previously collected or simulated data. We call this fourth quadrant “Data Strategies”.
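To make this taxonomy concrete, the following minimal Python sketch encodes the strategy categories just described. It is illustrative only; the enum names and annotations are our rendering of the prose above, not taken from the paper or its replication package.

```python
from enum import Enum

class EmpiricalStrategy(Enum):
    """The four quadrants of empirical strategies in the framework."""
    RESPONDENT = "Respondent"  # e.g., surveys and interviews
    LAB = "Lab"                # controlled settings (Runkel and McGrath's "Experimental")
    FIELD = "Field"            # real-world settings
    DATA = "Data"              # previously collected or simulated data, incl. computer simulations

class NonEmpiricalStrategy(Enum):
    """Non-empirical strategies, shown separately from the four quadrants."""
    META = "Meta"                    # e.g., systematic literature reviews
    FORMAL_THEORY = "Formal Theory"
```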

One of the core contributions of the Runkel and McGrath research strategy model is to highlight the trade-offs inherent in choosing a research strategy: each strategy has strengths and weaknesses in terms of achieving higher levels of generalizability, realism, and control. Runkel and McGrath refer to these as “quality criteria”, since achieving higher levels of them is desirable. Generalizability captures how well the findings may generalize to a population beyond the specific actors under study. Realism captures how closely the context under which evidence is gathered matches real life. Control refers to control over the measurement of variables that may be relevant when human behaviors are studied. Field strategies typically exhibit low generalizability but have higher potential for realism. Lab studies have high control over human variables but lower realism. Respondent strategies show higher potential for generalizability but lower realism and control over human variables.

We added a fourth research quality criterion to our model: data precision. Data strategies have higher potential than other strategies for collecting precise measurements of system data. Some authors report data studies as ‘controlled’ when they really mean precision over the data collected; we therefore reserve the term control in this paper for control over variables in the data generation process (e.g., applying a treatment to one of two groups and observing effects on a dependent variable). McGrath himself debated the distinction between precision and control in his later work. We note that McGrath’s observations were based on work in sociology, which was less likely to involve large data studies than software engineering is. The Who-What-How framework (bottom of Fig. 1) denotes these criteria in italics outside the quadrants; the closer a quadrant is to a criterion, the greater the quadrant’s potential to maximize that criterion.
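These trade-offs can be summarized as a small lookup table. The sketch below is a hypothetical encoding of only the levels stated in the paragraph above; None marks combinations the text does not rate.

```python
# Quality-criteria potentials per empirical quadrant, as stated above.
# None means the appendix text does not rate that combination.
QUALITY_PROFILE = {
    "Field":      {"generalizability": "low",  "realism": "high", "control": None,   "data_precision": None},
    "Lab":        {"generalizability": None,   "realism": "low",  "control": "high", "data_precision": None},
    "Respondent": {"generalizability": "high", "realism": "low",  "control": "low",  "data_precision": None},
    "Data":       {"generalizability": None,   "realism": None,   "control": None,   "data_precision": "high"},
}

def strategies_maximizing(criterion: str) -> list[str]:
    """Return the quadrants with the highest stated potential for a criterion."""
    return [quadrant for quadrant, ratings in QUALITY_PROFILE.items()
            if ratings.get(criterion) == "high"]

print(strategies_maximizing("realism"))  # ['Field']
```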

We recommend that the interested reader refer to Runkel and McGrath’s landmark book (Runkel and McGrath 1972) for additional insights on methodology choice that we could not include in our paper.

Appendix B: Sample Paper Classification

Table 3 shows a 15-paper sample classified using our Who-What-How framework. Full data is available at https://doi.org/10.5281/zenodo.3813878.

Table 3 Examples of our paper classification and coding. FS: Field Study, D: Data Study, LE: Lab Experiment, JS: Judgment Study, FT: Formal Theory, SS: Sample Study
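To illustrate how a classified paper could be recorded, here is a hypothetical Python sketch using the strategy abbreviations from the table caption. The record type and the example entry are invented for illustration and are not drawn from the actual dataset (available at the Zenodo link above).

```python
from dataclasses import dataclass

# Strategy abbreviations from the Table 3 caption.
STRATEGY_CODES = {
    "FS": "Field Study",
    "D":  "Data Study",
    "LE": "Lab Experiment",
    "JS": "Judgment Study",
    "FT": "Formal Theory",
    "SS": "Sample Study",
}

@dataclass
class PaperClassification:
    title: str
    who: str        # main beneficiary, e.g. "Humans" or the technical system
    what: str       # main type of research contribution
    how: list[str]  # one or more strategy codes from STRATEGY_CODES

# Invented example entry (not from the real dataset):
example = PaperClassification(
    title="(illustrative paper)",
    who="Humans",
    what="Technical contribution",
    how=["D", "LE"],
)
assert all(code in STRATEGY_CODES for code in example.how)
```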


Cite this article

Storey, M.A., Ernst, N.A., Williams, C. et al. The who, what, how of software engineering research: a socio-technical framework. Empir Software Eng 25, 4097–4129 (2020). https://doi.org/10.1007/s10664-020-09858-z

