Is It Possible to Predict the Manual Web Accessibility Result Using the Automatic Result?

  • Carlos Casado Martínez
  • Loïc Martínez-Normand
  • Morten Goodwin Olsen
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5616)


The most reliable approach to benchmarking web accessibility is manual expert evaluation supplemented by automatic analysis tools. However, manual evaluation has a high cost and is impractical for large web sites. In reality, there is no choice but to rely on automated tools when reviewing large web sites for accessibility. The question is: to what extent can the results of automatically evaluating a web site and its individual web pages be used as an approximation of the manual results? This paper presents the initial results of an investigation aimed at answering this question. We performed both manual and automatic accessibility evaluations of web pages from two sites and compared the results. In our data set, the automatically retrieved results could indeed be used as an approximation of the manual evaluation results.
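The comparison described above can be illustrated with a minimal sketch (not the authors' code): given per-page accessibility scores from an automatic tool and from manual expert review, expressed here as hypothetical UWEM-style failure rates, one way to quantify how well one approximates the other is a Pearson correlation. The scores below are invented for illustration only.

```python
# Sketch of comparing automatic vs. manual per-page accessibility scores.
# Data and score scale are assumptions, not the paper's actual results.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical failure rates per page (0 = no barriers found, 1 = fails all checks)
automatic = [0.10, 0.25, 0.40, 0.05, 0.30]
manual    = [0.20, 0.35, 0.50, 0.10, 0.45]

r = pearson(automatic, manual)  # close to 1.0 when the two rankings agree
```

A high correlation on such data would suggest automatic scores track manual ones at page level; the paper's actual analysis is over real evaluations of two web sites.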


Keywords: Manual Evaluation · Automatic Evaluation · Accessibility Evaluation · Automatic Result · Page Level





Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Carlos Casado Martínez (1)
  • Loïc Martínez-Normand (2)
  • Morten Goodwin Olsen (3)
  1. Universitat Oberta de Catalunya, Spain
  2. DLSIIS, Universidad Politécnica de Madrid, Spain
  3. Tingtun AS, Lillesand, Norway
