A Novel Approach for Evaluating Web Crawler Performance Using Content-relevant Metrics

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 336)

Abstract

Most search engines rely on a Web crawler as a core component that indexes Web pages so that more relevant results can be returned. Web crawlers are programs that download and index documents from the Internet. A focused crawler is a specialized crawler that searches for and indexes Web pages on a particular topic, thereby reducing network traffic and the volume of downloads. In this paper, we present a novel approach for a focused Web crawler that downloads Web pages related to a particular topic. The paper also experiments with a set of factors for computing the relevancy of Web documents and utilizes the contextual metadata framework (CMF) to summarize the captured relevancy data, which can be used to categorize and sort results and, in essence, improve the quality of the result set presented to the end user. A baseline comparison has been made with a classical crawler, and appreciable results have been achieved using our approach.
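As an illustration of the focused crawling idea described above, the following minimal Python sketch downloads pages from seed URLs, scores each page's topical relevance, and only follows links from pages whose score exceeds a threshold. This is a hedged sketch under simple assumptions, not the paper's implementation: the keyword-frequency relevance function, the threshold value, and the topic_keywords list are illustrative stand-ins for the content-relevant metrics and CMF summarization the paper proposes.

    # Minimal focused-crawler sketch (illustrative only, not the paper's method).
    # Assumption: topic relevance is approximated by keyword occurrence in page
    # text, and the frontier is a priority queue ordered by the linking page's score.
    import heapq
    import urllib.request
    from html.parser import HTMLParser
    from urllib.parse import urljoin

    class PageParser(HTMLParser):
        """Collects visible text and outgoing links from an HTML page."""
        def __init__(self, base_url):
            super().__init__()
            self.base_url = base_url
            self.text_parts = []
            self.links = []

        def handle_data(self, data):
            self.text_parts.append(data)

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(urljoin(self.base_url, value))

    def relevance(text, topic_keywords):
        """Fraction of topic keywords found in the page text (a stand-in
        for the content-relevant metrics discussed in the paper)."""
        lowered = text.lower()
        hits = sum(1 for kw in topic_keywords if kw in lowered)
        return hits / max(len(topic_keywords), 1)

    def focused_crawl(seed_urls, topic_keywords, threshold=0.3, max_pages=50):
        # Max-heap (via negated scores) keyed on the relevance of the linking page.
        frontier = [(-1.0, url) for url in seed_urls]
        heapq.heapify(frontier)
        seen, results = set(seed_urls), {}

        while frontier and len(results) < max_pages:
            _neg, url = heapq.heappop(frontier)
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    html = resp.read().decode("utf-8", errors="replace")
            except Exception:
                continue  # skip unreachable or malformed pages

            parser = PageParser(url)
            parser.feed(html)
            score = relevance(" ".join(parser.text_parts), topic_keywords)
            results[url] = score

            # Only expand links from pages that look on-topic.
            if score >= threshold:
                for link in parser.links:
                    if link not in seen:
                        seen.add(link)
                        heapq.heappush(frontier, (-score, link))
        return results

    if __name__ == "__main__":
        pages = focused_crawl(["https://example.com/"], ["crawler", "search", "index"])
        for url, score in sorted(pages.items(), key=lambda kv: -kv[1]):
            print(f"{score:.2f}  {url}")

A classical crawler, the baseline the abstract compares against, corresponds to dropping the threshold test and the priority ordering and expanding every discovered link breadth-first.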

Keywords

Focused crawler · Crawling techniques · Content metrics · Base URLs

Copyright information

© Springer India 2015

Authors and Affiliations

  1. Department of CSA, SCSVMV University, Enathur, Kanchipuram, India
  2. Department of MCA, St. Joseph’s College of Engineering, Chennai, India
