Using String Comparison in Context for Improved Relevance Feedback in Different Text Media

  • Adenike M. Lam-Adesina
  • Gareth J. F. Jones
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4209)

Abstract

Query expansion is a long-standing relevance feedback technique for improving the effectiveness of information retrieval systems. Previous investigations have shown it to be generally effective for electronic text, to give proportionally greater improvement for automatic transcriptions of spoken documents, and to be of at best questionable utility for scanned text documents produced by optical character recognition. We introduce two corpus-based methods that use a string-edit distance measure in context to automatically detect and correct transcription errors. One method operates at query time and requires no modification of the document index file; the other operates at index time and uses the standard query-time expansion process. Experimental investigations show that these methods improve relevance feedback for all three media types and, most significantly, that relevance feedback can now be applied successfully to scanned text documents.
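The core machinery the abstract refers to, a string-edit distance used to map likely transcription errors onto in-vocabulary words, can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the plain Levenshtein distance, and the fixed edit threshold are illustrative assumptions only; the paper's methods additionally exploit the surrounding context when choosing corrections.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def correct(word: str, vocabulary: list[str], max_dist: int = 2) -> str:
    """Map an (assumed) transcription error to the closest in-vocabulary
    word, if one lies within max_dist edits; otherwise keep it as-is."""
    best, best_d = word, max_dist + 1
    for cand in vocabulary:
        d = edit_distance(word, cand)
        if d < best_d:
            best, best_d = cand, d
    return best
```

For example, `correct("retreival", ["retrieval", "relevance"])` returns `"retrieval"`, since the OCR-style transposition is two character substitutions away from the vocabulary entry. In the paper's query-time variant such matching would be applied to candidate expansion terms, and in the index-time variant to document terms before indexing.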


Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Adenike M. Lam-Adesina (1)
  • Gareth J. F. Jones (1)
  1. Centre for Digital Video Processing & School of Computing, Dublin City University, Dublin 9, Ireland