Empirical Software Engineering, Volume 15, Issue 1, pp 35–59

A checklist for integrating student empirical studies with research and teaching goals

  • Jeffrey C. Carver
  • Letizia Jaccheri
  • Sandro Morasca
  • Forrest Shull

Abstract

Empirical studies with students in software engineering help researchers gain insight into new and existing techniques and methods. However, mainly due to concerns about external validity, questions have been raised about the value of such studies. The authors draw on their experience conducting a large number of empirical studies in university courses in three countries (Italy, Norway, and the United States) to address this issue. The paper first identifies the requirements that research and pedagogy place on a valid empirical study with students. This information then serves as the basis for a checklist that guides researchers and educators in planning and conducting studies in university courses. The goal of the checklist is to ensure that these studies have as much research and pedagogical value as possible. Finally, an example application of the checklist illustrates its use.

Keywords

Software engineering education · Empirical studies


Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  • Jeffrey C. Carver (1)
  • Letizia Jaccheri (2)
  • Sandro Morasca (3)
  • Forrest Shull (4)

  1. Department of Computer Science, University of Alabama, Tuscaloosa, USA
  2. Department of Computer and Information Science, Norwegian University of Science and Technology, Trondheim, Norway
  3. Dipartimento di Scienze della Cultura, Politiche e dell’Informazione, Università degli Studi dell’Insubria, Varese, Italy
  4. Fraunhofer Center for Experimental Software Engineering, College Park, USA
