
VENSES – A Linguistically-Based System for Semantic Evaluation

  • Rodolfo Delmonte
  • Sara Tonelli
  • Marco Aldo Piccolino Boniforti
  • Antonella Bristot
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3944)

Abstract

The system for semantic evaluation VENSES (Venice Semantic Evaluation System) is organized as a pipeline of two subsystems: the first is a reduced version of GETARUN, our system for text understanding. Its output is a flat list of augmented head-dependent structures labelled with grammatical relations and semantic roles. The evaluation subsystem comprises two main modules: the first applies a sequence of linguistic rules; the second performs a quantitatively based measurement of input structures and predicates. VENSES measures semantic similarity at graded levels: linguistic items may be identical, synonymous, lexically similar, or merely morphologically derivable. Both modules then undergo general consistency checks targeting high-level semantic attributes such as modality, negation, and opacity operators, as well as temporal and spatial location. Results in terms of CWS, recall, and precision are consistent across both the training and the test corpus, and all exceed 60%.
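The graded similarity scale described above (identical, synonymous, lexically similar, morphologically derivable) can be sketched as a simple word-level matcher. This is an illustrative Python sketch only, not the authors' implementation: the `match_level` function, the toy `SYNONYMS` lexicon, and the shared-prefix test for morphological derivability are all assumptions standing in for the system's actual lexical resources.

```python
from typing import Optional

# Toy stand-in for a WordNet-style synonym lookup (hypothetical data).
SYNONYMS = {
    "buy": {"purchase", "acquire"},
    "big": {"large"},
}

def match_level(w1: str, w2: str) -> Optional[str]:
    """Return the strongest similarity level linking two word forms,
    or None if no level applies."""
    if w1 == w2:
        return "identical"
    if w2 in SYNONYMS.get(w1, set()) or w1 in SYNONYMS.get(w2, set()):
        return "synonymous"
    # Crude proxy for morphological derivability: both forms are at
    # least 4 characters long and share a 4-character stem prefix.
    if min(len(w1), len(w2)) >= 4 and w1[:4] == w2[:4]:
        return "morphologically derivable"
    return None

print(match_level("buy", "buy"))             # identical
print(match_level("buy", "purchase"))        # synonymous
print(match_level("evaluate", "evaluation")) # morphologically derivable
print(match_level("cat", "dog"))             # None
```

In a full system each level would carry a different weight in the quantitative module, so that an identical match contributes more to the overall similarity score than a merely derivable one.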

Keywords

Semantic Similarity · Semantic Role · Main Predicate · Semantic Evaluation · Discourse Marker
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Rodolfo Delmonte (1)
  • Sara Tonelli (1)
  • Marco Aldo Piccolino Boniforti (1)
  • Antonella Bristot (1)

  1. Department of Language Sciences, Laboratory of Computational Linguistics, University Ca’ Foscari, Venice, Italy
