Using and Evaluating User Directed Summaries to Improve Information Access

  • Manuel J. Maña López
  • Manuel de Buenaga Rodríguez
  • José María Gómez Hidalgo
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1696)

Abstract

The amount of textual information available has grown so much that new techniques are needed to assist users in information access (IA). In this paper, we propose using a user-directed summarization system in an IA setting to help users judge document relevance. The summaries are generated by a sentence extraction method that scores sentences with heuristics employed successfully in previous work (keywords, title and location). User modeling is carried out by exploiting the user's query to the IA system and expanding the query terms with WordNet. We also present an objective and systematic evaluation method designed to measure summary effectiveness in two significant IA tasks: ad hoc retrieval and relevance feedback. The results support our initial hypothesis, i.e., that user-adapted summaries are a useful tool for assisting users in an IA context.
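
As a rough illustration of the scoring scheme described in the abstract, the sketch below combines the three classical heuristics (keywords, title and location) with a query-overlap component standing in for the user model. This is a minimal sketch under stated assumptions: the weights, helper names, tokenization and the omission of an explicit WordNet expansion step are illustrative choices, not the authors' implementation.

```python
# Illustrative query-biased sentence extractor. Weights, names and
# tokenization are assumptions for illustration, not the paper's method.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "in", "to", "and", "is", "for", "on", "that"}

def tokenize(text):
    """Lowercase word tokens with a tiny stopword filter."""
    return [w for w in re.findall(r"[a-záéíóúñ]+", text.lower()) if w not in STOPWORDS]

def score_sentence(tokens, position, n_sentences, doc_freqs, title_terms, query_terms):
    if not tokens:
        return 0.0
    # Keyword heuristic: average (normalized) document frequency of the sentence's terms.
    keyword = sum(doc_freqs[t] for t in tokens) / (len(tokens) * max(doc_freqs.values()))
    # Title heuristic: fraction of title terms present in the sentence.
    title = len(set(tokens) & title_terms) / max(len(title_terms), 1)
    # Location heuristic: earlier sentences score higher.
    location = 1.0 - position / n_sentences
    # User-directed component: overlap with the (possibly expanded) query terms.
    query = len(set(tokens) & query_terms) / max(len(query_terms), 1)
    # Linear combination; these weights are arbitrary placeholders.
    return 0.3 * keyword + 0.2 * title + 0.2 * location + 0.3 * query

def summarize(sentences, title, query_terms, n=3):
    """Return the n top-scoring sentences in their original order."""
    doc_freqs = Counter(t for s in sentences for t in tokenize(s))
    title_terms = set(tokenize(title))
    scored = []
    for i, s in enumerate(sentences):
        scored.append((score_sentence(tokenize(s), i, len(sentences),
                                      doc_freqs, title_terms, query_terms), i, s))
    top = sorted(scored, reverse=True)[:n]
    return [s for _, _, s in sorted(top, key=lambda t: t[1])]
```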

Copyright information

© Springer-Verlag Berlin Heidelberg 1999

Authors and Affiliations

  • Manuel J. Maña López (1)
  • Manuel de Buenaga Rodríguez (2)
  • José María Gómez Hidalgo (2)
  1. Departamento de Lenguajes y Sistemas Informáticos, Universidad de Vigo, Orense, Spain
  2. Departamento de Inteligencia Artificial, Universidad Europea de Madrid – CEES, Madrid, Spain
