On the Security Cost of Using a Free and Open Source Component in a Proprietary Product

  • Stanislav Dashevskyi
  • Achim D. Brucker
  • Fabio Massacci
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9639)


The work presented in this paper is motivated by the need to estimate the security effort of consuming Free and Open Source Software (FOSS) components within the proprietary software supply chain of a large European software vendor. To this end, we have identified three different cost models: centralized (the company checks each component and propagates changes to the different product groups), distributed (each product group is in charge of evaluating and fixing its consumed FOSS components), and hybrid (only the least used components are checked individually by each development team). We investigated publicly available factors (e.g., development activity such as commits, code size, or the fraction of code written in different programming languages) to identify which ones have the greatest impact on the security effort of using a FOSS component in a larger software product.
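The three cost models can be sketched as a back-of-the-envelope effort comparison. This is an illustrative model only, not the paper's actual estimation method: the per-component analysis cost, the propagation fraction, the usage threshold, and the example portfolio are all hypothetical.

```python
# Hypothetical sketch (not the paper's model) comparing total security
# effort under the three cost structures described in the abstract.
# Each FOSS component is assumed to have a fixed analysis cost and a
# number of consuming product groups; propagating a centrally produced
# fix to one consumer is assumed to cost a fraction of a full analysis.

def centralized(components, propagation_cost=0.2):
    # One central analysis per component, plus propagation to every consumer.
    return sum(c["analysis_cost"] + propagation_cost * c["consumers"]
               for c in components)

def distributed(components):
    # Every product group independently analyses every component it consumes.
    return sum(c["analysis_cost"] * c["consumers"] for c in components)

def hybrid(components, threshold=3, propagation_cost=0.2):
    # Widely used components are handled centrally; the least used
    # components are left to the individual development teams.
    total = 0.0
    for c in components:
        if c["consumers"] >= threshold:
            total += c["analysis_cost"] + propagation_cost * c["consumers"]
        else:
            total += c["analysis_cost"] * c["consumers"]
    return total

if __name__ == "__main__":
    portfolio = [
        {"analysis_cost": 10.0, "consumers": 8},  # widely consumed component
        {"analysis_cost": 10.0, "consumers": 1},  # niche component
        {"analysis_cost": 10.0, "consumers": 2},
    ]
    for model in (centralized, distributed, hybrid):
        print(model.__name__, model(portfolio))
```

Under these toy numbers the distributed model is the most expensive as soon as components have many consumers, while the hybrid model sits between the two extremes, which matches the intuition behind checking only the least used components individually.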


Keywords: Free and open source software usage · Free and open source software vulnerabilities · Security maintenance costs



This work has been partly supported by the European Union under agreement no. 285223 SECONOMICS, no. 317387 SECENTIS (FP7-PEOPLE-2012-ITN), the Italian Project MIUR-PRIN-TENACE, and PON - Distretto Cyber Security attività RI.4.



Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Stanislav Dashevskyi (1, 3)
  • Achim D. Brucker (2, 3)
  • Fabio Massacci (1)

  1. University of Trento, Trento, Italy
  2. Department of Computer Science, The University of Sheffield, Sheffield, UK
  3. SAP SE, Walldorf, Germany
