
A New Approach for Verifying URL Uniqueness in Web Crawlers

  • Wallace Favoreto Henrique
  • Nivio Ziviani
  • Marco Antônio Cristo
  • Edleno Silva de Moura
  • Altigran Soares da Silva
  • Cristiano Carvalho
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7024)

Abstract

The Web has become a huge repository of pages, and search engines allow users to find relevant information in it. Web crawlers are a key component of search engines: they find, download, and parse pages and store their content in a repository. In this paper, we present a new algorithm for verifying URL uniqueness in a large-scale web crawler. The uniqueness verifier must check whether a URL is already present in the repository of unique URLs and whether the corresponding page has already been collected. The algorithm is based on a novel policy that organizes the set of unique URLs according to the server they belong to, exploiting a locality-of-reference property. This property is inherent in Web traversals: the distribution of links within a web page is skewed toward other pages on the same server. We select the URLs to be crawled taking into account the servers they belong to, which allows the crawler to use our algorithm without the extra cost of pre-organizing the entries. We compare our algorithm with a state-of-the-art algorithm from the literature, presenting a model for both and comparing their performance. Experiments with a crawling simulation over a representative subset of the Web show that the adopted policy yields a significant improvement in the time spent on URL uniqueness verification.
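
To make the policy concrete, the following is a minimal in-memory sketch of a uniqueness verifier that partitions the set of seen URLs by the server they belong to, so that a batch of checks for one server touches only that server's partition. This is not the authors' implementation: the class name ServerGroupedURLVerifier and the dict-of-sets partitions are hypothetical stand-ins for the on-disk structures a real crawler would use.

```python
from collections import defaultdict
from urllib.parse import urlsplit


class ServerGroupedURLVerifier:
    """Sketch of a URL-uniqueness verifier whose seen-URL set is
    partitioned by server (host). In a disk-based crawler each
    partition would be a separate on-disk structure; here a dict
    of sets stands in for it (hypothetical, for illustration only).
    """

    def __init__(self):
        # host -> set of URLs already known for that server
        self._seen_by_host = defaultdict(set)

    def filter_new(self, urls):
        """Return the URLs not seen before, registering them as seen.

        URLs are grouped by host first, so all lookups against a
        given server's partition happen together. This is the
        locality of reference the grouping policy exploits.
        """
        by_host = defaultdict(list)
        for url in urls:
            by_host[urlsplit(url).netloc].append(url)

        new_urls = []
        for host, host_urls in by_host.items():
            # one partition lookup per server, not per URL
            seen = self._seen_by_host[host]
            for url in host_urls:
                if url not in seen:
                    seen.add(url)
                    new_urls.append(url)
        return new_urls


# Hypothetical usage: a batch of links extracted from crawled pages
verifier = ServerGroupedURLVerifier()
batch = [
    "http://example.org/a", "http://example.org/b",
    "http://example.org/a", "http://other.net/x",
]
print(verifier.filter_new(batch))
# ['http://example.org/a', 'http://example.org/b', 'http://other.net/x']
```

Because most links on a page point to other pages on the same server, grouping by host means each partition is accessed once per batch rather than once per URL; in the on-disk setting this batching is where the savings in disk accesses would come from.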

Keywords

Disk Access, Baseline Algorithm, Central Repository, Reference Property, Update Management



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Wallace Favoreto Henrique (1)
  • Nivio Ziviani (1)
  • Marco Antônio Cristo (2)
  • Edleno Silva de Moura (2)
  • Altigran Soares da Silva (2)
  • Cristiano Carvalho (1)
  1. Department of Computer Science, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
  2. Department of Computer Science, Universidade Federal do Amazonas, Manaus, Brazil
