Using Word Sequences for Text Summarization

  • Esaú Villatoro-Tello
  • Luis Villaseñor-Pineda
  • Manuel Montes-y-Gómez
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4188)

Abstract

Traditional approaches to extractive summarization score or classify sentences based on features such as position in the text, word frequency, and cue phrases. These features tend to produce satisfactory summaries, but they have the drawback of being domain dependent. In this paper, we propose to tackle this problem by representing sentences as word sequences (n-grams), a representation widely used in text categorization. The experiments demonstrate that this simple representation not only diminishes the domain and language dependency but also enhances the summarization performance.
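
As a rough illustration of this kind of representation (not the authors' exact method, whose details are in the full paper), the sketch below represents each sentence as a bag of word n-grams and ranks sentences by how well their n-grams match the n-gram profile of the whole document. The function names (extract_ngrams, summarize) and the averaging score are illustrative assumptions, not taken from the paper.

    # Minimal sketch, assuming a bag-of-n-grams sentence representation and a
    # simple frequency-based ranking; the paper's actual selection procedure
    # is described in the full text.
    from collections import Counter

    def extract_ngrams(tokens, n_max=2):
        """Return all word n-grams of length 1..n_max from a token list."""
        grams = []
        for n in range(1, n_max + 1):
            grams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
        return grams

    def summarize(sentences, n_max=2, k=2):
        """Select the k sentences whose n-grams best match the document profile."""
        tokenized = [s.lower().split() for s in sentences]
        # n-gram frequency profile of the whole document
        doc_profile = Counter(g for toks in tokenized for g in extract_ngrams(toks, n_max))

        def score(toks):
            grams = extract_ngrams(toks, n_max)
            return sum(doc_profile[g] for g in grams) / (len(grams) or 1)

        ranked = sorted(range(len(sentences)), key=lambda i: score(tokenized[i]), reverse=True)
        return [sentences[i] for i in sorted(ranked[:k])]  # keep original sentence order

    if __name__ == "__main__":
        doc = [
            "Extractive summarization selects whole sentences from the source text.",
            "Word n-grams capture short word sequences instead of hand-crafted features.",
            "Such a representation is less tied to a particular domain or language.",
        ]
        print(summarize(doc, n_max=2, k=2))

Because the representation relies only on surface word sequences, nothing in the sketch depends on cue phrases, sentence position, or other domain- or language-specific features.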

Keywords

Text Categorization · Word Sequence · Text Summarization · Relevant Sentence · Bilateral Shortfall

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Esaú Villatoro-Tello¹
  • Luis Villaseñor-Pineda¹
  • Manuel Montes-y-Gómez¹

  1. Language Technologies Group, Computer Science Department, National Institute of Astrophysics, Optics and Electronics (INAOE), Mexico
