Web Crawling Agents

  • George Chang
  • Marcus J. Healey
  • James A. M. McHugh
  • Jason T. L. Wang
Part of The Information Retrieval Series book series (INRE, volume 10)

Abstract

An essential component of information mining and pattern discovery on the Web is the Web Crawling Agent (WCA). General-purpose Web Crawling Agents, briefly described in Chapter 1, are intended for building generic portals. The diverse and voluminous nature of Web documents poses formidable challenges to the design of high-performance WCAs: they require both powerful processors and a tremendous amount of storage, and even then can cover only restricted portions of the Web. Despite their fundamental importance in providing Web services, the design of WCAs is not well documented in the literature. This chapter describes the conceptual design and implementation of Web crawling agents.
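
The full chapter is not reproduced here, but as a rough sketch of the kind of component the abstract describes, a minimal breadth-first Web crawler in Python might look like the following. The seed URL, the page limit, and all names below are illustrative assumptions, not the authors' implementation.

# Minimal breadth-first crawler sketch (illustrative only; not the chapter's design).
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urldefrag
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collect href targets from anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed, max_pages=10):
    """Breadth-first crawl starting from `seed`, fetching at most `max_pages` pages."""
    frontier = deque([seed])   # URLs waiting to be fetched
    visited = set()            # URLs already fetched, to avoid revisits
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        try:
            with urlopen(url, timeout=10) as response:
                html = response.read().decode("utf-8", errors="replace")
        except OSError:
            continue           # skip unreachable or malformed pages
        visited.add(url)
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            # Resolve relative links against the current page and drop fragments.
            absolute, _ = urldefrag(urljoin(url, href))
            if absolute.startswith("http") and absolute not in visited:
                frontier.append(absolute)
    return visited


if __name__ == "__main__":
    # Hypothetical seed; a real WCA would add politeness delays, robots.txt
    # handling, and persistent storage for the pages it fetches.
    for page in crawl("https://example.com", max_pages=5):
        print(page)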

Keywords

Search Engine · Pattern Discovery · Importance Measure · Front Page · Inverse Document Frequency

Copyright information

© Springer Science+Business Media New York 2001

Authors and Affiliations

  • George Chang, Kean University, Union, USA
  • Marcus J. Healey, Mobilocity, New York, USA
  • James A. M. McHugh, New Jersey Institute of Technology, Newark, USA
  • Jason T. L. Wang, New Jersey Institute of Technology, Newark, USA
