
A Double-Edged Sword? Software Reuse and Potential Security Vulnerabilities

  • Antonios Gkortzis
  • Daniel Feitosa
  • Diomidis Spinellis
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11602)

Abstract

Reuse is a common and often-advocated software development practice. Significant efforts have been invested into facilitating it, leading to advancements such as software forges, package managers, and the widespread integration of open source components into proprietary software systems. Reused software can make a system more secure through its maturity and extended vetting, or increase its vulnerabilities through a larger attack surface or insecure coding practices. To shed more light on this issue, we investigate the relationship between software reuse and potential security vulnerabilities, as assessed through static analysis. We empirically investigated 301 open source projects in a holistic multiple-case study. In particular, we examined the distribution of potential vulnerabilities between the native code created by a project’s development team and external code reused through dependencies, as well as the correlation between the ratio of reuse and the density of vulnerabilities. The results suggest that the number of potential vulnerabilities in both native and reused code increases with larger project sizes. We also found a weak-to-moderate correlation between a higher reuse ratio and a lower density of vulnerabilities. Based on these findings, it appears that code reuse is neither a frightening werewolf introducing an excessive number of vulnerabilities nor a silver bullet for avoiding them.
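The abstract refers to two project-level measures: the ratio of reuse and the density of potential vulnerabilities. The sketch below illustrates how such measures could be computed from line counts and static-analysis warnings. The specific definitions (reused LOC over total LOC, warnings per KLOC) and the example figures are illustrative assumptions, not necessarily those used in the study.

```java
// Illustrative only: reuse ratio and vulnerability density as simple ratios.
// The exact operationalization in the paper may differ.
public class ReuseMetrics {

    /** Fraction of the system's code that is reused: reusedLoc / (nativeLoc + reusedLoc). */
    static double reuseRatio(long nativeLoc, long reusedLoc) {
        long total = nativeLoc + reusedLoc;
        return total == 0 ? 0.0 : (double) reusedLoc / total;
    }

    /** Potential vulnerabilities (static-analysis warnings) per 1,000 lines of code. */
    static double vulnerabilityDensity(long potentialVulnerabilities, long loc) {
        return loc == 0 ? 0.0 : potentialVulnerabilities / (loc / 1000.0);
    }

    public static void main(String[] args) {
        // Hypothetical project: 40 KLOC native code, 160 KLOC reused via dependencies,
        // and 90 warnings flagged as potential vulnerabilities across the whole system.
        System.out.printf("reuse ratio: %.2f%n", reuseRatio(40_000, 160_000));                 // 0.80
        System.out.printf("density: %.2f per KLOC%n", vulnerabilityDensity(90, 200_000));     // 0.45
    }
}
```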

Keywords

Software reuse · Security vulnerabilities · Case study

Acknowledgments

We express our appreciation to Paris Avgeriou for reviewing the manuscript and providing us with feedback that improved its quality. The research described has been carried out as part of the CROSSMINER Project, which has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No. 732223.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Management Science and Technology, Athens University of Economics and Business, Athens, Greece
  2. Data Research Centre, University of Groningen, Groningen, The Netherlands
