Capturing the Ineffable: Collecting, Analysing, and Automating Web Document Quality Assessments

  • Davide Ceolin
  • Julia Noordegraaf
  • Lora Aroyo
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10024)


Automatic estimation of the quality of Web documents is a challenging task, especially because the definition of quality depends heavily on the individuals who define it, on the context in which it applies, and on the nature of the tasks at hand. Our long-term goal is to enable automatic assessment of Web document quality tailored to specific user requirements and contexts. This process relies on identifying document characteristics that indicate quality. In this paper, we investigate these characteristics as follows: (1) we define features of Web documents that may serve as indicators of quality; (2) we design a procedure for automatically extracting those features; (3) we develop a Web application that presents these results to niche users, in order to check the relevance of these features as quality indicators and to collect quality assessments; and (4) we analyse the users' qualitative assessments of Web documents to refine our definition of the features that determine quality, and to establish their relative weight in the overall quality, i.e., in the summarising score users attribute to a document when deciding whether it meets their standards. Hence, our contribution is threefold: a Web application for nichesourcing quality assessments; a curated dataset of Web document assessments; and a thorough analysis of the quality assessments collected in two case studies involving experts (journalists and media scholars). The resulting dataset is limited in size but highly valuable because of the expertise of the contributors who provided it. Our analyses show that: (1) the process of Web document quality estimation can be automated with high accuracy; (2) document features shown in isolation are poorly informative to users; and (3) for the task we propose (choosing Web documents to use as sources for an article on the vaccination debate), the most important quality dimensions are accuracy, trustworthiness, and precision.
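The feature-extraction step described above can be sketched as follows. This is a minimal illustration only: the features shown (word count, average sentence length, hyperlink count) are hypothetical stand-ins for quality indicators, not the feature set the paper actually defines, and `extract_features` is an assumed helper name.

```python
import re

def extract_features(html_text):
    """Extract simple, hypothetical quality-indicator features from a
    Web document. Illustrative only; not the paper's actual features."""
    text = re.sub(r"<[^>]+>", " ", html_text)  # strip HTML tags
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    links = re.findall(r'href="[^"]+"', html_text)  # count hyperlinks
    return {
        "word_count": len(words),
        "sentence_count": len(sentences),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "link_count": len(links),
    }

doc = ('<p>Vaccines are tested extensively. '
       'See <a href="https://example.org">the study</a>.</p>')
features = extract_features(doc)
```

Features like these would then be shown to niche users alongside the documents, and the collected assessments used to estimate each feature's relative weight in the overall quality score.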


Keywords: Quality Assessment · Quality Dimension · Document Quality · Media Scholar · Digital Humanities



This work was supported by the Amsterdam Academic Alliance Data Science (AAA-DS) Program Award to the UvA and VU Universities. We thank the students of the UvA journalism course and the RMeS summer school participants for participating in our user studies.



Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  1. VU University Amsterdam, Amsterdam, The Netherlands
  2. University of Amsterdam, Amsterdam, The Netherlands
