Reporting experiments to satisfy professionals’ information needs

Abstract

Although the aim of empirical software engineering is to provide evidence for selecting appropriate technologies, this work appears to receive little recognition in industry: results from empirical research only rarely seem to reach company decision makers. If experiment reports provided the information that software managers find relevant, such reports could serve as a source of information when managers face decisions about selecting software engineering technologies. To bridge this communication gap between researchers and professionals, we propose characterizing the information needs of software managers in order to show empirical software engineering researchers which information is relevant for decision-making, thus enabling them to make this information available. We empirically investigated decision makers' information needs to identify which information they need to judge the appropriateness and impact of a software technology, and we empirically developed a model that characterizes these needs. To ensure that researchers provide relevant information when reporting results from experiments, we extended existing reporting guidelines accordingly. We then performed an experiment to evaluate the model's effectiveness: software managers who read an experiment report structured according to the proposed model judged the technology's appropriateness significantly better than those who read a report on the same experiment that did not explicitly address their information needs. Our research shows that information regarding a technology, the context in which it is supposed to work, and, most importantly, its impact on development costs and schedule as well as on product quality is crucial for decision makers.



Author information

Correspondence to Andreas Jedlitschka.

Additional information

Communicated by: Per Runeson


Cite this article

Jedlitschka, A., Juristo, N. & Rombach, D. Reporting experiments to satisfy professionals’ information needs. Empir Software Eng 19, 1921–1955 (2014). https://doi.org/10.1007/s10664-013-9268-6


Keywords

  • Software manager
  • Information needs
  • Technology selection
  • Experiment
  • Reporting