Language Resources and Evaluation, Volume 47, Issue 3, pp 639–659

Parser evaluation using textual entailments

Original Paper

DOI: 10.1007/s10579-012-9200-5

Cite this article as:
Yuret, D., Rimell, L., & Han, A. (2013). Parser evaluation using textual entailments. Language Resources and Evaluation, 47(3), 639–659. doi:10.1007/s10579-012-9200-5

Abstract

Parser Evaluation using Textual Entailments (PETE) is a shared task in the SemEval-2010 Evaluation Exercises on Semantic Evaluation. The task involves recognizing textual entailments based on syntactic information alone. PETE introduces a new parser evaluation scheme that is formalism independent, less prone to annotation error, and focused on semantically relevant distinctions. This paper describes the PETE task, gives an error analysis of the top-performing Cambridge system, and introduces a standard entailment module that can be used with any parser that outputs Stanford typed dependencies.
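As a rough illustration of how an entailment decision can be made from Stanford typed dependencies, the Python sketch below checks whether the hypothesis's content dependencies are contained in those of the text. The triple representation, the list of ignored relations, and the function names are assumptions for illustration only, not the authors' actual entailment module.

# A minimal sketch of a dependency-containment entailment check, assuming
# both sentences have already been parsed into Stanford typed dependencies
# represented as (relation, head, dependent) triples.

def content_dependencies(deps):
    # Keep only dependencies likely to carry the syntactic decision,
    # dropping punctuation and determiner links (an assumed heuristic).
    ignored = {"punct", "det", "root"}
    return {(rel, head.lower(), dep.lower())
            for rel, head, dep in deps if rel not in ignored}

def entails(text_deps, hypothesis_deps):
    # True if every content dependency of the hypothesis also appears
    # in the dependency parse of the text.
    return content_dependencies(hypothesis_deps) <= content_dependencies(text_deps)

# Example with hand-written triples (not real parser output):
text = [("nsubj", "saw", "John"), ("dobj", "saw", "Mary"),
        ("det", "Mary", "the"), ("punct", "saw", ".")]
hypothesis = [("nsubj", "saw", "John"), ("dobj", "saw", "Mary")]
print(entails(text, hypothesis))  # True

In practice such a module would also need to handle relation mismatches across parsers and passive/active alternations; the subset test above is only the simplest possible baseline under the stated assumptions.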

Keywords

Parsing · Textual entailments

Copyright information

© Springer Science+Business Media Dordrecht 2012

Authors and Affiliations

  1. Koç University, Istanbul, Turkey
  2. Computer Laboratory, Cambridge, UK