
Empirical Software Engineering, Volume 23, Issue 3, pp 1383–1421

Are vulnerabilities discovered and resolved like other defects?

  • Patrick J. Morrison
  • Rahul Pandita
  • Xusheng Xiao
  • Ram Chillarege
  • Laurie Williams

Abstract

Software defect data has long been used to drive software development process improvement. If security defects (vulnerabilities) are discovered and resolved by different software development practices than non-security defects, the knowledge of that distinction could be applied to drive process improvement. The goal of this research is to support technical leaders in making security-specific software development process improvements by analyzing the differences between the discovery and resolution of defects versus that of vulnerabilities. We extend Orthogonal Defect Classification (ODC), a scheme for classifying software defects to support software development process improvement, to study process-related differences between vulnerabilities and defects, creating ODC + Vulnerabilities (ODC + V). We applied ODC + V to classify 583 vulnerabilities and 583 defects across 133 releases of three open-source projects (Firefox, phpMyAdmin, and Chrome). Compared with defects, vulnerabilities are found later in the development cycle and are more likely to be resolved through changes to conditional logic. In Firefox, vulnerabilities are resolved 33% more quickly than defects. From a process improvement perspective, these results indicate opportunities may exist for more efficient vulnerability detection and resolution.
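
As a rough illustration of the kind of resolution-time comparison summarized above, the following Python sketch computes the median time-to-fix for vulnerabilities versus other defects from issue-tracker records. This is a minimal sketch, not the authors' analysis scripts; the field names (`opened`, `closed`, `is_vulnerability`) and the example records are hypothetical placeholders.

```python
# Minimal sketch: compare median time-to-fix for vulnerabilities vs. other
# defects, the style of comparison behind the Firefox finding above.
# Records and field names are hypothetical; real data would come from an
# issue-tracker export (e.g., Bugzilla) plus a vulnerability label.
from datetime import datetime
from statistics import median

issues = [
    {"opened": "2016-01-04", "closed": "2016-01-20", "is_vulnerability": True},
    {"opened": "2016-01-05", "closed": "2016-02-15", "is_vulnerability": False},
    {"opened": "2016-02-01", "closed": "2016-02-10", "is_vulnerability": True},
    {"opened": "2016-02-03", "closed": "2016-03-20", "is_vulnerability": False},
]

def days_to_fix(issue):
    fmt = "%Y-%m-%d"
    opened = datetime.strptime(issue["opened"], fmt)
    closed = datetime.strptime(issue["closed"], fmt)
    return (closed - opened).days

vuln_days = [days_to_fix(i) for i in issues if i["is_vulnerability"]]
defect_days = [days_to_fix(i) for i in issues if not i["is_vulnerability"]]

vuln_median = median(vuln_days)
defect_median = median(defect_days)
print(f"Median days to fix: vulnerabilities={vuln_median}, defects={defect_median}")
print(f"Vulnerabilities resolved {100 * (1 - vuln_median / defect_median):.0f}% faster")
```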

Keywords

Software development · Measurement · Process improvement · Security · Orthogonal Defect Classification (ODC)


Acknowledgments

This work is supported, in part, by IBM and by the U.S. National Security Agency (NSA) Science of Security Lablet at NCSU. Any opinions expressed in this report are those of the author(s) and do not necessarily reflect the views of IBM or the NSA. We thank Marc Delisle of the phpMyAdmin project for providing us with the snapshot of defect repositories for this study, and for kindly answering many questions and offering his perspective. We also thank Dr. Alyson Wilson for providing helpful feedback on designing the classification assignments for the raters. We are grateful to Dr. Andy Meneely for providing the Chrome database snapshot, and to Dr. Fabio Massacci and the University of Trento for granting access to their curated Chrome vulnerability list. Finally, we thank the RealSearch research group for providing helpful feedback on this work.


Copyright information

© Springer Science+Business Media, LLC 2017

Authors and Affiliations

  1. Department of Computer Science, North Carolina State University, Raleigh, USA
  2. Phase Change Software LLC, Denver, USA
  3. Chillarege Inc., Raleigh, USA
