Information Extraction from Web Sources Based on Multi-aspect Content Analysis
- Cite this paper as:
- Milicka M., Burget R. (2015) Information Extraction from Web Sources Based on Multi-aspect Content Analysis. In: Gandon F., Cabrio E., Stankovic M., Zimmermann A. (eds) Semantic Web Evaluation Challenges. Communications in Computer and Information Science, vol 548. Springer, Cham
Information extraction from web pages is often recognized as a difficult task, mainly due to the loose structure and insufficient semantic annotation of their HTML code. Since web pages are primarily created to be viewed by human readers, their authors usually do not pay much attention to the structure or even the validity of the HTML code itself. The CEUR Workshop Proceedings pages are a good illustration of this. Their code ranges from invalid HTML markup to fully valid and semantically annotated documents, while preserving a kind of unified visual presentation of the contents. In this paper, as a contribution to the ESWC 2015 Semantic Publishing Challenge, we present an information extraction approach based on analyzing the rendered pages rather than their code. The documents are represented by an RDF-based model that allows combining the results of different page analysis methods such as layout analysis and visual and textual feature classification. This makes it possible to specify a set of generic rules for extracting particular information from a page independently of its code.