Journal of Electronic Testing, Volume 17, Issue 2, pp 175–183

Constraint Based Criteria: An Approach for Test Case Selection in the Structural Testing

  • Silvia Regina Vergilio
  • José Carlos Maldonado
  • Mario Jino

Abstract

The selection of test cases to satisfy a structural testing criterion is an important task because it determines the quality of the generated test cases. The question is: "How should a test case or path be selected to cover an element required by a structural criterion?" The Constraint Based Criteria are proposed to answer this question and to improve efficacy, that is, the number of faults revealed. These criteria permit the use of different testing strategies by associating constraints with the required elements; the constraints describe faults generally not revealed by the structural technique. The Constraint Based Criteria can also be used to assess the adequacy of test data sets.
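The idea of pairing a structurally required element with a fault-describing constraint can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the branch predicate, the off-by-one mutation, and all function names are hypothetical examples chosen for the sketch.

```python
# Hypothetical sketch: a required element (the true branch of a predicate)
# is paired with a constraint describing a fault that branch coverage alone
# might not reveal -- here, an off-by-one mutation of the predicate.

def branch_taken(x):
    """Structural required element: the true branch of `x >= 10`."""
    return x >= 10

def mutated_branch_taken(x):
    """Fault described by the constraint: predicate mutated to `x > 10`."""
    return x > 10

def satisfies_constraint_based_criterion(x):
    """Select a test input only if it covers the required branch AND makes
    the original and mutated predicates differ, thus revealing the fault."""
    return branch_taken(x) and branch_taken(x) != mutated_branch_taken(x)

# x = 15 covers the branch but cannot reveal the off-by-one fault;
# x = 10 both covers the branch and distinguishes the mutated predicate.
candidates = [5, 10, 15]
selected = [x for x in candidates if satisfies_constraint_based_criterion(x)]
print(selected)  # only the boundary value survives the constraint
```

The point of the sketch is that a plain branch-coverage criterion would accept x = 15, while the added constraint forces the selection of the boundary value that also exposes the seeded fault.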

Keywords: structural testing criteria; constraint based testing; mutation analysis



Copyright information

© Kluwer Academic Publishers 2001

Authors and Affiliations

  • Silvia Regina Vergilio (1)
  • José Carlos Maldonado (2)
  • Mario Jino (3)

  1. UFPR, Curitiba, Brazil
  2. ICMSC-USP, São Carlos, Brazil
  3. DCA-FEEC-UNICAMP, Campinas, Brazil
