
High-confidence software evolution

Abstract

Software continually evolves under changing requirements, platforms, and other environmental pressures. Modern software depends on frameworks, so when a framework evolves, its client software must evolve with it; independently, the client may also change to meet new requirements. High-confidence software evolution must therefore address both framework evolution and client evolution, each of which can introduce faults and degrade software quality. In this article, we present a set of approaches to problems in high-confidence software evolution. To support framework evolution, we propose a history-based matching approach that identifies transformation rules between different APIs, together with a transformation language that applies these rules automatically. To support client evolution, we propose a path-exploration-based approach that generates tests efficiently by pruning paths irrelevant to the changes between versions, several coverage-based approaches that optimize test execution, and approaches that locate faults and fix memory leaks automatically. Together, these approaches facilitate high-confidence software evolution from multiple angles.
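To make the framework-evolution idea concrete, the sketch below pairs methods removed in a new API version with textually similar methods added in that version, yielding candidate transformation rules. This is a deliberately simplified illustration, not the article's history-based matching algorithm: the `match_apis` function, the similarity threshold, and the example signatures are all hypothetical.

```python
from difflib import SequenceMatcher

def match_apis(old_api, new_api, threshold=0.6):
    """Toy matcher: pair each method removed from old_api with the
    most textually similar method added in new_api, emitting
    candidate rename/transformation rules (old, new, score)."""
    removed = [m for m in old_api if m not in new_api]
    added = [m for m in new_api if m not in old_api]
    rules = []
    for old in removed:
        best, score = None, 0.0
        for new in added:
            s = SequenceMatcher(None, old, new).ratio()
            if s > score:
                best, score = new, s
        if best is not None and score >= threshold:
            rules.append((old, best, round(score, 2)))
    return rules

# Hypothetical API versions: two methods renamed between releases.
old_v = ["List.getSize()", "List.addElement(Object)"]
new_v = ["List.size()", "List.add(Object)"]
for old, new, score in match_apis(old_v, new_v):
    print(f"{old} -> {new} (similarity {score})")
```

A real history-based matcher would additionally mine the framework's revision history for evidence of refactorings, rather than relying on name similarity alone; the resulting rules would then feed a transformation language that rewrites client code automatically.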



Author information

Correspondence to Yingfei Xiong.


About this article


Cite this article

Gao, Q., Li, J., Xiong, Y. et al. High-confidence software evolution. Sci. China Inf. Sci. 59, 071101 (2016). https://doi.org/10.1007/s11432-016-5572-2


Keywords

  • software evolution
  • high confidence
  • software quality
  • software development
  • program analysis