
Adapting the Lesk Algorithm for Calculating Term Similarity in the Context of Requirements Engineering

  • Jürgen Vöhringer
  • Günther Fliedl
Conference paper

Abstract

Calculating similarity measures between schema concept terms has proven to be an essential task in our requirements engineering research: the automatic integration of source texts and terms in this domain presupposes decisions about the similarity and the conflict potential of linguistic units, typically words. After briefly discussing several WordNet-based similarity measures, we propose the Lesk algorithm as the most suitable choice for the integration task. Our test results show that both the Lesk algorithm and the underlying lexicon (WordNet) can be optimized for engineering purposes. We describe in detail how Lesk and WordNet can be applied to term conflict recognition and resolution during the engineering workflow.
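
The following is a minimal sketch of a Lesk-style gloss-overlap similarity between two terms, using WordNet through the NLTK library. It only illustrates the general idea of scoring term pairs by overlapping gloss words; it is not the adapted algorithm described in the paper, and the example terms and usage are illustrative assumptions.

```python
# Minimal sketch of a Lesk-style gloss-overlap similarity between two terms,
# using WordNet via NLTK (requires the 'wordnet' corpus: nltk.download('wordnet')).
# This is NOT the authors' adapted Lesk algorithm, only the basic idea.
from nltk.corpus import wordnet as wn


def gloss_tokens(synset):
    """Return the lower-cased word set of a synset's gloss (definition plus examples)."""
    text = synset.definition() + " " + " ".join(synset.examples())
    return set(text.lower().split())


def lesk_overlap(term_a, term_b):
    """Score two terms by the largest gloss overlap over all pairs of their synsets."""
    best = 0
    for sa in wn.synsets(term_a):
        for sb in wn.synsets(term_b):
            overlap = len(gloss_tokens(sa) & gloss_tokens(sb))
            best = max(best, overlap)
    return best


if __name__ == "__main__":
    # Hypothetical schema terms; in a schema-integration setting a threshold on
    # this score could flag potential term conflicts for manual resolution.
    print(lesk_overlap("customer", "client"))
```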

References

  1. Bellström P, Vöhringer J (2009) Towards the automation of modeling language independent schema integration. In: International conference on information, process, and knowledge management (eKNOW 2009), Cancun, Mexico, pp 110–115
  2. Budanitsky A, Hirst G (2001) Semantic distance in WordNet: an experimental, application-oriented evaluation of five measures. In: Workshop on WordNet and other lexical resources, 2nd meeting of the North American chapter of the Association for Computational Linguistics, Pittsburgh
  3. Banerjee S, Pedersen T (2002) An adapted Lesk algorithm for word sense disambiguation using WordNet. In: Proceedings of the 3rd international conference on intelligent text processing and computational linguistics, Tokyo, Japan, pp 136–145
  4. Fliedl G, Kop C, Vöhringer J (2010) Guideline based evaluation and verbalization of OWL class and property labels. Data Knowl Eng 69(4):331–342
  5. Hirst G, St-Onge D (1998) Lexical chains as representations of context for the detection and correction of malapropisms. In: Fellbaum C (ed) WordNet: an electronic lexical database (Language, Speech, and Communication). MIT Press, Cambridge
  6. Leacock C, Chodorow M (1998) Combining local context and WordNet similarity for word sense identification. In: Fellbaum C (ed) WordNet: an electronic lexical database. MIT Press, Cambridge, pp 265–283
  7. Miller GA (1995) WordNet: a lexical database for English. Commun ACM 38(11):39–41
  8. Pedersen T, Patwardhan S, Michelizzi J (2004) WordNet::Similarity – measuring the relatedness of concepts. In: Proceedings of the 19th national conference on artificial intelligence, San Jose, pp 1024–1025
  9. Resnik P (1995) Using information content to evaluate semantic similarity in a taxonomy. In: Proceedings of the international joint conference on artificial intelligence (IJCAI-95), Montreal, pp 448–453
  10. Warin M, Oxhammar H, Volk M (2004) Enriching an ontology with WordNet based on similarity measures. In: MEANING-2005 workshop, Trento
  11. Willett P (2006) The Porter stemming algorithm: then and now. Program: Electron Libr Inf Syst 40(3):219–223
  12. Wu Z, Palmer M (1994) Verb semantics and lexical selection. In: Proceedings of the 32nd annual meeting of the Association for Computational Linguistics, Las Cruces
  13. Vöhringer J, Gälle D, Fliedl G, Kop C, Bazhenov M (2010) Using linguistic knowledge for fine-tuning ontologies in the context of requirements engineering. Int J Comput Linguist Appl 1(1–2), Bahri Publications

Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  1. Institute of Applied Informatics, Alpen-Adria-Universität Klagenfurt, Klagenfurt, Austria