
Opportunity Costs in Free Open-Source Software

  • Siim Karus
Conference paper
Part of the IFIP Advances in Information and Communication Technology book series (IFIPAICT, volume 556)

Abstract

Opportunity cost is a key concept in economics expressing the value one misses out on when choosing one alternative over another. The concept is used to explain rational decision making when multiple mutually exclusive choices are available. In this paper, we explore the concept in the realm of open-source software. We look at different ways of measuring this cost and at how these measures can be used to support decisions involving open-source software. We review the literature on the use of opportunity cost in decision support for the software development process. We explain how opportunity cost analysis in the realm of open-source software can support architectural decisions within software projects. We demonstrate that different cost measures can be used to mitigate problems (and maintenance complexity) arising from the use of open-source software, allowing for better planning of closed-source commercial and open-source community projects alike.
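The core idea can be made concrete with a minimal sketch. The function and the net values below are hypothetical illustrations, not figures from the paper: the opportunity cost of a chosen option is the value of the best alternative that is forgone by choosing it.

```python
def opportunity_cost(chosen, alternatives):
    """Return the opportunity cost of `chosen`: the highest net value
    among the mutually exclusive alternatives that are given up."""
    forgone = [value for name, value in alternatives.items() if name != chosen]
    return max(forgone)

# Hypothetical net values (e.g. person-months of effort saved) for three
# mutually exclusive architectural options in a software project:
options = {
    "reuse_oss_library": 10.0,  # adopt an open-source component
    "build_in_house": 7.5,      # implement the functionality internally
    "buy_commercial": 6.0,      # license a closed-source product
}

for name in options:
    print(name, opportunity_cost(name, options))
```

Under these assumed numbers, reusing the open-source library carries the lowest opportunity cost (7.5, the value of building in-house), which is why it would be the rational choice in this scenario.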

Keywords

Code churn · What-if analysis · Scenario analysis · Coding effort · Alternative cost · Opportunity cost · Software cost


Acknowledgement

This work was supported by the Estonian Research Council (grant IUT20-55).


Copyright information

© IFIP International Federation for Information Processing 2019

Authors and Affiliations

  1. University of Tartu, Tartu, Estonia
