
The Relationship Between Software Process, Context and Outcome

  • Dag I. K. Sjøberg
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10027)

Abstract

Most practitioners and researchers agree that in software development, process affects product, and that the usefulness of a given process depends on the context. However, which processes are most useful for a specific company or project is generally unknown. One challenge when studying the relation between context, process and product is that experiments often lack realism, which makes it difficult to transfer their results to industry. In case studies, by contrast, most of the important factors vary beyond the researcher's control, which makes it difficult to identify cause-effect relationships. This paper reports a study in which realism was combined with control over certain context and process factors: four companies developed the same system, and the price varied by a factor of six. Certain patterns of relationships were as expected (expensive company: low cost and schedule overruns); others were unexpected (cheap company: maintainable system because of its small code base). The community needs to identify the most important relationships among process, context and outcome.

Keywords

Software process improvement · Controlled multiple-case study · Software industry · Theory · Software engineering folklore · Measurement


Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  1. Department of Informatics, University of Oslo, Oslo, Norway
  2. SINTEF ICT, Trondheim, Norway
