Information Retrieval, Volume 8, Issue 3, pp 417–447

A General Evaluation Framework for Topical Crawlers

Authors

  • P. Srinivasan
    • School of Library & Information Science and Department of Management Sciences, The University of Iowa
  • F. Menczer
    • School of Informatics and Department of Computer Science, Indiana University
  • G. Pant
    • School of Accounting and Information Systems, University of Utah

DOI: 10.1007/s10791-005-6993-5

Cite this article as:
Srinivasan, P., Menczer, F. & Pant, G. Inf Retrieval (2005) 8: 417. doi:10.1007/s10791-005-6993-5

Abstract

Topical crawlers are becoming important tools to support applications such as specialized Web portals, online searching, and competitive intelligence. As the Web mining field matures, the disparate crawling strategies proposed in the literature will have to be evaluated and compared on common tasks through well-defined performance measures. This paper presents a general framework to evaluate topical crawlers. We identify a class of tasks that model crawling applications of different nature and difficulty. We then introduce a set of performance measures for fair comparative evaluations of crawlers along several dimensions including generalized notions of precision, recall, and efficiency that are appropriate and practical for the Web. The framework relies on independent relevance judgements compiled by human editors and available from public directories. Two sources of evidence are proposed to assess crawled pages, capturing different relevance criteria. Finally we introduce a set of topic characterizations to analyze the variability in crawling effectiveness across topics. The proposed evaluation framework synthesizes a number of methodologies in the topical crawlers literature and many lessons learned from several studies conducted by our group. The general framework is described in detail and then illustrated in practice by a case study that evaluates four public crawling algorithms. We found that the proposed framework is effective at evaluating, comparing, differentiating and interpreting the performance of the four crawlers. For example, we found the IS crawler to be most sensitive to the popularity of topics.
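The precision- and recall-style measures the abstract refers to can be illustrated with a minimal sketch. The function names, the use of a static set of relevance judgments, and the toy data below are assumptions for illustration, not the paper's exact definitions:

```python
def harvest_rate(crawled, relevant):
    """Precision-style measure: fraction of crawled pages judged relevant.

    `relevant` plays the role of independent relevance judgments,
    e.g. pages listed under a topic in a public directory (an assumption,
    not the paper's exact assessment procedure).
    """
    if not crawled:
        return 0.0
    return sum(1 for url in crawled if url in relevant) / len(crawled)


def target_recall(crawled, relevant):
    """Recall-style measure: fraction of known relevant pages retrieved."""
    if not relevant:
        return 0.0
    return len(set(crawled) & relevant) / len(relevant)


# Hypothetical crawl trace and editor-compiled judgments
crawled = ["a", "b", "c", "d"]
relevant = {"b", "d", "e"}

print(harvest_rate(crawled, relevant))   # 0.5
print(target_recall(crawled, relevant))  # 0.666...
```

Tracking both measures over the length of a crawl, rather than only at the end, is what lets such a framework compare crawlers on efficiency as well as quality.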

Keywords

Web crawlers · evaluation · tasks · topics · precision · recall · efficiency

Copyright information

© Springer Science + Business Media, Inc. 2005