
Test Data Variance as a Test Quality Measure: Exemplified for TTCN-3

  • Diana Vega
  • Ina Schieferdecker
  • George Din
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4581)

Abstract

Test effectiveness is a central quality aspect of a test specification: it reflects the specification's ability to demonstrate system quality levels and to discover system faults. A well-known approach to its estimation is to determine coverage metrics for the system code or the system model. Often, however, neither is available as such; only the system interface is, which basically defines the structural aspects of the stimuli to and responses from the system.

Therefore, this paper focuses on the idea of using test data variance analysis as another analytical approach to determining test quality. It presents a method for the quantitative evaluation of the structural and semantic variance of test data, where test variance is defined as the distribution of the test data over the data domain of the system interface. The expectation is that the more the test data varies, the better a given test suite exercises the system. The paper instantiates this method for black-box test specifications written in TTCN-3, focusing on the structural analysis of send templates; distance metrics and similarity relations are used to determine the data variance.
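
To make the notion of test data variance concrete, the following is a minimal, hypothetical sketch rather than the authors' method: test data records are assumed to be flat dictionaries of field values, a normalized per-field distance is averaged into a record-pair distance, and the variance score is the mean pairwise distance over the suite. The names field_distance, record_distance, and variance_score, the distance formulas, and the SIP-like example records are illustrative assumptions; the paper's analysis operates on structured TTCN-3 send templates.

    # Illustrative sketch only (assumed representation): test data as flat
    # dictionaries; real TTCN-3 send templates are structured values.
    from itertools import combinations

    def field_distance(a, b):
        """Normalized distance between two field values, in [0, 1]."""
        if isinstance(a, (int, float)) and isinstance(b, (int, float)):
            denom = max(abs(a), abs(b), 1)  # scale so the result stays in [0, 1]
            return abs(a - b) / denom
        return 0.0 if a == b else 1.0      # categorical fields: match or mismatch

    def record_distance(r1, r2):
        """Average field distance over the union of field names."""
        keys = set(r1) | set(r2)
        return sum(field_distance(r1.get(k), r2.get(k)) for k in keys) / len(keys)

    def variance_score(records):
        """Mean pairwise distance: higher values mean the data varies more."""
        pairs = list(combinations(records, 2))
        if not pairs:
            return 0.0
        return sum(record_distance(a, b) for a, b in pairs) / len(pairs)

    # Hypothetical send-template instances for a SIP-like protocol message.
    templates = [
        {"method": "INVITE", "maxForwards": 70},
        {"method": "INVITE", "maxForwards": 69},
        {"method": "BYE",    "maxForwards": 70},
    ]
    print(variance_score(templates))  # near 0: near-duplicates; near 1: wide variance

Under this reading, a suite of near-duplicate records scores close to 0 and a suite whose records differ in many fields scores close to 1, matching the intuition that wider data variance means the system interface is exercised more thoroughly.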

Keywords

Test Suite · Session Initiation Protocol · System Interface · System Under Test · Eclipse Modelling Framework


Copyright information

© IFIP International Federation for Information Processing 2007

Authors and Affiliations

  • Diana Vega (1)
  • Ina Schieferdecker (1, 2)
  • George Din (2)

  1. Technical University Berlin, Franklinstr. 28/29, D-10623 Berlin, Germany
  2. Fraunhofer FOKUS, Kaiserin-Augusta-Allee 31, D-10589 Berlin, Germany
