© 2013

Multi-source, Multilingual Information Extraction and Summarization

  • Thierry Poibeau
  • Horacio Saggion
  • Jakub Piskorski
  • Roman Yangarber
Book

Table of contents

  1. Front Matter
    Pages i-xx
  2. Background and Fundamentals

    1. Front Matter
      Pages 1-1
    2. Horacio Saggion, Thierry Poibeau
      Pages 3-21
    3. Jakub Piskorski, Roman Yangarber
      Pages 23-49
  3. Named Entity in a Multilingual Context

  4. Information Extraction

    1. Front Matter
      Pages 135-135
    2. Günter Neumann, Sven Schmeier
      Pages 137-161
    3. Silja Huttunen, Arto Vihavainen, Mian Du, Roman Yangarber
      Pages 163-176
    4. Heng Ji, Benoit Favre, Wen-Pin Lin, Dan Gillick, Dilek Hakkani-Tur, Ralph Grishman
      Pages 177-201
  5. Multi-Document Summarization

    1. Front Matter
      Pages 203-203
    2. Mijail Kabadjov, Josef Steinberger, Ralf Steinberger
      Pages 229-252
    3. Danushka Bollegala, Naoaki Okazaki, Mitsuru Ishizuka
      Pages 253-276
    4. Ricardo Ribeiro, David Martins de Matos
      Pages 277-297
    5. Ahmet Aker, Laura Plaza, Elena Lloret, Robert Gaizauskas
      Pages 299-320
  6. Back Matter
    Pages 321-323

About this book

Introduction

Information extraction (IE) and text summarization (TS) are powerful technologies for finding relevant pieces of information in text and presenting them to the user in condensed form. The ongoing information explosion makes IE and TS critical for successful functioning within the information society.


These technologies face particular challenges due to the inherently multi-source nature of the information explosion. They must now handle not isolated texts or individual narratives, but large-scale repositories and streams, in general in multiple languages, containing a multiplicity of perspectives, opinions, and commentaries on particular topics, entities, or events. There is thus a need to adapt existing techniques, and to develop new ones, to deal with these challenges.


This volume contains a selection of papers that present a variety of methodologies for content identification and extraction, as well as for content fusion and regeneration. The chapters cover various aspects of these challenges, depending on the nature of the information sought (names vs. events) and the nature of the sources (news streams, image captions, scientific research papers, etc.). This volume aims to offer a broad and representative sample of studies from this very active research field.

Keywords

Content analysis · Information extraction · Multilinguality · Text mining · Text summarization

Editors and affiliations

  • Thierry Poibeau (1)
  • Horacio Saggion (2)
  • Jakub Piskorski (3)
  • Roman Yangarber (4)
  1. LaTTiCe-CNRS, École Normale Supérieure and Université Sorbonne Nouvelle, Paris, France
  2. Department of Information & Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
  3. Institute of Computer Science, Polish Academy of Sciences, Warsaw, Poland
  4. Department of Computer Science, University of Helsinki, Helsinki, Finland

About the editors

*Thierry Poibeau* holds a PhD and an Habilitation in Computer Science from the University Paris 13. From 1998 to 2003, he worked for Thales Research and Technology, where he was responsible for research activities in information extraction. Since 2003, he has been a CNRS research fellow, working first at the Laboratoire d'Informatique de Paris-Nord (LIPN) and now at the LaTTiCe laboratory. He is also an affiliated lecturer at the Research Centre for English and Applied Linguistics (RCEAL) of the University of Cambridge (UK). Thierry Poibeau has managed and/or participated in several national and European projects related to his research areas. He has published one book on information extraction, three international patents, and more than 50 papers in books, international journals, and conferences. He has organised several international workshops and acted as a programme committee member for over 20 international conferences (e.g. IJCAI, COGSCI, COLING) and associated workshops.

*Horacio Saggion* is a Research Professor at the Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain. He obtained his PhD in Computer Science from the University of Montreal in 2000. He works in the areas of information extraction, text summarization, and semantic analysis. He has published over 50 works in journals, international conferences, workshops, and books. He has been principal researcher and manager for a number of national and international projects, and has organized a series of workshops in the areas of information extraction and summarization. He has also served on scientific committees for international conferences in Human Language Technology.

*Jakub Piskorski* received his M.Sc. in Computer Science from the University of Saarbrücken, Germany, in 1994 and his PhD from the Polish Academy of Sciences in Warsaw, Poland, in 2002. Jakub is a Research Associate at the Polish Academy of Sciences, and he also manages projects related to NLP in the R&D Unit of Frontex, the Warsaw-based EU Border Security Agency. Previously, he held the post of Research Fellow at the Joint Research Centre of the European Commission in Ispra, Italy, and worked as a Senior Software Engineer and Researcher at the German Research Centre for Artificial Intelligence in Saarbrücken and at the Department of Information Systems at Poznan University of Economics. He has also consulted for several companies on information extraction technology. His main areas of interest center on information extraction, finite-state methods in NLP, shallow text processing, and efficient multilingual application-oriented NLP solutions. Jakub is author or co-author of around 80 peer-reviewed international conference and workshop papers, journal articles, and book chapters in Computer Science and Computational Linguistics. He has co-organized several scientific events and served as a program committee member for a number of international scientific events.

*Roman Yangarber* obtained his MS and PhD in Computer Science, with a concentration in Computational Linguistics, at New York University (NYU), USA, in 2000. Prior to moving to Finland in 2004, he held the post of Assistant Research Professor at the Courant Institute of Mathematical Sciences at NYU, where he specialized in Natural Language Processing. His main research area has been machine learning for automatic acquisition of semantic knowledge from plain text, in particular from large news streams. He has been an organizer, editorial board member, and program committee member for a number of international scientific events, conferences, organizations, and journals. He has authored or co-authored over 40 papers in Computational Linguistics. At the University of Helsinki, he has held the post of Acting Professor, and he currently leads two research projects and participates in two others (nationally and EU-funded) in text mining and linguistic analysis, where he also supervises PhD and MS students.

Reviews

From the reviews:

“This book is a compilation of chapters selected from a series of papers presented at Multi-source, Multilingual Information Extraction and Summarization (MMIES), a workshop series on these two topics. … This book could be useful for researchers and technicians interested in advances in these fields.” (Mercedes Martínez González, ACM Computing Reviews, March, 2013)