Integrating Manual and Automatic Risk Assessment for Risk-Based Testing

  • Michael Felderer
  • Christian Haisjackl
  • Ruth Breu
  • Johannes Motz
Part of the Lecture Notes in Business Information Processing book series (LNBIP, volume 94)

Abstract

In this paper we define a model-based risk assessment procedure that integrates automatic risk assessment based on static analysis, semi-automatic risk assessment, and guided manual risk assessment. In this procedure, probability and impact criteria are determined by metrics that are combined to estimate the risk of specific system development artifacts. The risk values are propagated to the assigned test cases, yielding a prioritization of test cases. This supports the optimal allocation of limited testing time and budget in a risk-based testing methodology; we therefore embed our risk assessment process into a generic risk-based testing methodology. The calculation of probability and impact metrics is based on system and requirements artifacts that are formalized as model elements. Additional time metrics capture the temporal development of the system under test, taking, for instance, its bug and version history into account. The risk assessment procedure integrates several stakeholders and is explained by a running example.
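
The abstract outlines a computable procedure: metrics yield probability and impact values per artifact, an aggregation function combines them into a risk value, and that value is propagated to the assigned test cases for prioritization. The following Python sketch illustrates one way this could look; the metric names, the equal-weight mean as aggregation function, and the product formula risk = probability × impact are illustrative assumptions, not the authors' exact definitions.

```python
# Minimal sketch of metric aggregation and risk propagation to test cases.
# All metric names, weights, and formulas are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Artifact:
    """A system development artifact with risk-relevant metrics and linked tests."""
    name: str
    probability_metrics: dict   # normalized values in [0, 1], e.g. code complexity
    impact_metrics: dict        # normalized values in [0, 1], e.g. monetary damage
    test_cases: list = field(default_factory=list)

def aggregate(metrics: dict, weights=None) -> float:
    """Weighted mean of normalized metric values (one possible aggregation function)."""
    weights = weights or {}
    total = sum(weights.get(m, 1.0) for m in metrics)
    return sum(v * weights.get(m, 1.0) for m, v in metrics.items()) / total

def risk(artifact: Artifact) -> float:
    """Risk as aggregated probability times aggregated impact."""
    return aggregate(artifact.probability_metrics) * aggregate(artifact.impact_metrics)

def prioritize(artifacts: list) -> list:
    """Propagate each artifact's risk to its assigned test cases, highest risk first."""
    ranked = [(tc, risk(a)) for a in artifacts for tc in a.test_cases]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

# Usage: "bug_history" stands in for a time metric based on the bug/version history.
billing = Artifact("billing", {"code_complexity": 0.8, "bug_history": 0.7},
                   {"monetary_damage": 0.9}, ["TC-1", "TC-2"])
report = Artifact("reporting", {"code_complexity": 0.3, "bug_history": 0.2},
                  {"monetary_damage": 0.4}, ["TC-3"])
print(prioritize([billing, report]))  # TC-1 and TC-2 are ranked above TC-3
```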

Keywords

Aggregation Function, Risk Assessment Model, Product Risk, Code Complexity, Misuse Case

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Michael Felderer (1)
  • Christian Haisjackl (1)
  • Ruth Breu (1)
  • Johannes Motz (2)
  1. Institute of Computer Science, University of Innsbruck, Austria
  2. Kapsch CarrierCom AG, Vienna, Austria
