Sampling the National Deep Web

  • Denis Shestakov
Conference paper

DOI: 10.1007/978-3-642-23088-2_24

Part of the Lecture Notes in Computer Science book series (LNCS, volume 6860)
Cite this paper as:
Shestakov D. (2011) Sampling the National Deep Web. In: Hameurlain A., Liddle S.W., Schewe KD., Zhou X. (eds) Database and Expert Systems Applications. DEXA 2011. Lecture Notes in Computer Science, vol 6860. Springer, Berlin, Heidelberg

Abstract

A huge portion of today's Web consists of web pages filled with information from myriads of online databases. This part of the Web, known as the deep Web, is to date relatively unexplored, and even major characteristics such as the number of searchable databases on the Web or their subject distribution remain somewhat disputable. In this paper, we revisit a problem of deep Web characterization: how to estimate the total number of online databases on the Web? We propose the Host-IP clustering sampling method to address the drawbacks of existing approaches for deep Web characterization and report our findings based on a survey of the Russian Web. The obtained estimates, together with the proposed sampling technique, could be useful for further studies that handle data in the deep Web.
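The paper's full method is not reproduced on this page, but the core idea named in the abstract can be sketched: hostnames are grouped into clusters by the IP address they resolve to (so that virtual hosting and DNS load balancing do not distort the sample), clusters are sampled at random, and the per-cluster findings are scaled up to the whole population. The snippet below is a minimal illustrative sketch under these assumptions; the function names and the `has_database` oracle are hypothetical, not from the paper.

```python
import random
from collections import defaultdict

def host_ip_clusters(host_to_ip):
    """Group hostnames by the IP address they resolve to.

    Virtual hosting maps many hostnames to one IP; clustering by IP
    keeps such hosts together in a single sampling unit.
    """
    clusters = defaultdict(set)
    for host, ip in host_to_ip.items():
        clusters[ip].add(host)
    return clusters

def estimate_total_databases(host_to_ip, has_database, sample_size, seed=0):
    """Estimate the total number of web databases via IP-cluster sampling.

    `has_database(host)` is a hypothetical oracle standing in for the
    actual check (in practice, crawling the host for search interfaces).
    """
    clusters = host_ip_clusters(host_to_ip)
    ips = list(clusters)
    rng = random.Random(seed)
    sampled = rng.sample(ips, min(sample_size, len(ips)))
    # Count databases found in the sampled clusters only.
    found = sum(1 for ip in sampled
                for host in clusters[ip] if has_database(host))
    # Scale the sample count up to the full IP population.
    return found * len(ips) / len(sampled)
```

For example, with a toy mapping where two of four hosts share one IP, sampling every cluster recovers the exact count, while smaller samples yield an unbiased estimate across random seeds.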

Keywords

deep Web, web databases, web characterization, DNS load balancing, virtual hosting, Host-IP clustering, random sampling, national web domain


Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Denis Shestakov (1)
  1. Department of Media Technology, Aalto University, Espoo, Finland
