
A systematic classification of security regression testing approaches

  • Michael Felderer
  • Elizabeta Fourneret
ESE

Abstract

The openness of modern IT systems and their permanent change make it challenging to keep these systems secure. A combination of regression and security testing called security regression testing, which ensures that changes made to a system do not harm its security, is therefore of high significance, and interest in such approaches has steadily increased. In this article we present a systematic classification of available security regression testing approaches, based on a solid study of background and related work, to sketch which parts of the research area seem to be well understood and evaluated, and which require further research. For this purpose we extract approaches relevant to security regression testing from computer science digital libraries based on a rigorous search and selection strategy. We then classify them according to security regression approach criteria (abstraction level, security issue, regression testing techniques, and tool support) as well as evaluation criteria (for instance, evaluated system, maturity of the system, and evaluation measures). From the resulting classification we derive observations with regard to the abstraction level, regression testing techniques, tool support, and evaluation, and finally identify several potential directions for future research.
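The regression testing techniques the classification covers build on a common core idea: after a change, re-run only the tests affected by it. As a minimal illustrative sketch (not taken from the article; the component names and the coverage mapping are hypothetical), a change-based selection step for security tests could look like:

```python
# Illustrative sketch: minimal change-based regression test selection.
# Each test is mapped to the system components it exercises; after a
# change, only tests touching a modified component are selected.

def select_regression_tests(coverage, changed_components):
    """Return, sorted, the tests whose covered components intersect the change set."""
    changed = set(changed_components)
    return sorted(
        test for test, components in coverage.items()
        if changed & set(components)
    )

# Hypothetical example: after modifying the authentication module, only
# the tests that exercise it are selected for re-execution.
coverage = {
    "test_login": ["auth", "session"],
    "test_checkout": ["cart", "payment"],
    "test_password_reset": ["auth", "mail"],
}
selected = select_regression_tests(coverage, ["auth"])
# selected == ["test_login", "test_password_reset"]
```

Selection is only one of the technique families surveyed; minimization and prioritization approaches instead shrink or reorder the test suite under similar change information.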

Keywords

Regression testing · Security testing · Security regression testing · Software evolution · Security engineering · Software testing · Classification · Survey


Acknowledgments

This research was partially funded by the research projects QE LaB—Living Models for Open Systems (FFG 822740) and MOBSTECO (FWF P 26194-N15). In addition, financial support for this research was also provided by the project ANR Astrid Maturation MBT_Sec (ANR-13-ASMA-0003). Finally, we are grateful to Fabien Peureux for his review and suggested improvements.


Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  1. University of Innsbruck, Innsbruck, Austria
  2. DISC, FEMTO-ST, Université de Franche-Comté, Besançon, France
