Software Quality Journal, Volume 25, Issue 3, pp 601–640

Toward automatically quantifying the impact of a change in systems

  • Nada Almasri
  • Luay Tahat
  • Bogdan Korel


Software maintenance is becoming more challenging as software grows in complexity and changes are applied more frequently. Performing impact analysis before actually implementing a change is a crucial task during system maintenance. While many tools and techniques are available to measure the impact of a change at the code level, little research has addressed measuring the impact of a change at an earlier stage of the development process. Measuring the impact of a change at the model level speeds up the maintenance process by allowing early discovery of critical components of the system before the actual change is applied at the code level. In this paper, we present a model-based impact analysis approach for state-based systems such as telecommunication or embedded systems. The proposed approach uses model dependencies to automatically measure the expected impact of a requested change instead of relying on the expertise of system maintainers, and it generates two impact sets representing the lower bound and the upper bound of the impact. Although it can be extended to other behavioral models, the presented approach mainly addresses extended finite-state machine (EFSM) models. An empirical study was conducted on six EFSM models to investigate the usefulness of the proposed approach. The results show that, on average, the size of the impact after a single modification (a change to one EFSM transition) ranges between 14 and 38% of the total size of the model. For a modification involving multiple transitions, the average size of the impact ranges between 30 and 64% of the total size of the model. Additionally, we investigated the relationship (correlation) between the structure of an EFSM model and the size of its impact sets. Based on a preliminary analysis of this correlation, the concepts of model density and data density were defined, and they were found to be likely the major factors influencing the sizes of impact sets for models.
As a result, these factors can be used to determine the types of models for which the proposed approach is most appropriate.
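The core idea described above can be illustrated with a small sketch. This is not the paper's algorithm, only a minimal illustration under assumed structures: a hypothetical dependency graph maps each EFSM transition to the transitions that directly depend on it, the lower-bound impact set is taken as the direct dependents of a modified transition, and the upper-bound set as everything transitively reachable through dependencies.

```python
# Illustrative sketch (not the authors' method): lower- and upper-bound
# impact sets for a modified EFSM transition, given a hypothetical
# dependency graph (transition -> transitions that directly depend on it).
from collections import deque

DEPS = {
    "t1": ["t2", "t3"],
    "t2": ["t4"],
    "t3": ["t4", "t5"],
    "t4": [],
    "t5": ["t6"],
    "t6": [],
}

def lower_bound_impact(changed):
    """Direct dependents only: transitions immediately affected."""
    return set(DEPS.get(changed, []))

def upper_bound_impact(changed):
    """All transitions transitively reachable via dependencies (BFS)."""
    seen, queue = set(), deque([changed])
    while queue:
        for dep in DEPS.get(queue.popleft(), []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(sorted(lower_bound_impact("t1")))  # direct dependents of t1
print(sorted(upper_bound_impact("t1")))  # transitive closure from t1
```

Relating the size of either set to the total number of transitions gives an impact ratio like the 14–38% figures reported in the abstract.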


Keywords: Impact analysis · EFSM · Maintenance · Model-based analysis



This work was supported by the Kuwait Foundation for Advancement of Science (KFAS) under grant number P114-18EO-03.



Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  1. Management Information Systems, Gulf University for Science and Technology, West Mishref, Kuwait
  2. Computer Science Department, Illinois Institute of Technology, Chicago, USA
