Test Case Description Language (TCDL): Test Case Metadata for Conformance Evaluation

  • Christophe Strobbe
  • Sandor Herramhof
  • Evangelos Vlachogiannis
  • Carlos A. Velasco
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4061)


Automatic benchmarking of evaluation and repair tools (ERT) has recently been the subject of several studies, driven by growing legal and commercial interest in Web compliance with various criteria and standards. This paper describes the development of a description language designed to formally represent test case metadata. The language was used to develop a WCAG 2.0 test suite that will support the benchmarking of ERT against the aforementioned W3C recommendation.
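To make the idea of machine-readable test case metadata concrete, the following is a minimal sketch of the kind of record such a description language captures: a test case identifier, a title, the success criterion it exercises, and the expected evaluation outcome. The element and attribute names below are hypothetical illustrations only; they do not reproduce the actual TCDL schema, which is defined in the BenToWeb deliverables.

```python
# Illustrative sketch only: element and attribute names here are
# hypothetical and do NOT reproduce the actual TCDL schema.
import xml.etree.ElementTree as ET

def build_test_case(tc_id, title, criterion, expected):
    """Build a small XML record of the kind a test case
    description language might capture for one test file."""
    tc = ET.Element("testCase", id=tc_id)
    ET.SubElement(tc, "title").text = title
    rule = ET.SubElement(tc, "rule")
    ET.SubElement(rule, "successCriterion").text = criterion
    # The outcome an evaluation tool is expected to report for this file.
    ET.SubElement(tc, "expectedResult").text = expected
    return tc

tc = build_test_case("sc1.1.1_l1_001",
                     "Image without text alternative",
                     "WCAG 2.0 SC 1.1.1",
                     "fail")
xml_str = ET.tostring(tc, encoding="unicode")
print(xml_str)
```

A benchmarking harness can then compare each tool's report for a test file against the `expectedResult` recorded in its metadata, which is the role the TCDL-based test suite plays for ERT.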


Keywords: Test Suite, Success Criterion, User Agent, Screen Reader, Test File





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Christophe Strobbe (1)
  • Sandor Herramhof (2)
  • Evangelos Vlachogiannis (3)
  • Carlos A. Velasco (4)
  1. Katholieke Universiteit Leuven, Heverlee-Leuven, Belgium
  2. University of Linz, “integriert studieren – integrated study” (i3s3), Linz, Austria
  3. University of the Aegean, Exarchia, Athens, Greece
  4. Fraunhofer-Institut für Angewandte Informationstechnik (FIT), Sankt Augustin, Germany
