Information Extraction with Active Learning: A Case Study in Legal Text

  • Cristian Cardellino
  • Serena Villata
  • Laura Alonso Alemany
  • Elena Cabrio
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9042)

Abstract

Active learning has been successfully applied to a number of NLP tasks. In this paper, we present a study on Information Extraction for natural language licenses that need to be translated into RDF. The final purpose of our work is to automatically extract, from a natural language document specifying a certain license, a machine-readable description of the terms of use and reuse identified in that license. This task presents some peculiarities that make it especially interesting to study: highly repetitive text, few annotated or even unannotated examples available, and very high precision required.
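
To make the target representation concrete, here is a minimal sketch of what such a machine-readable description could look like as RDF, built with the rdflib Python library. The vocabulary and resource names (the EX namespace, permits, requires, prohibits) are hypothetical placeholders, not the vocabulary used in the paper; a real deployment would typically rely on a standard rights vocabulary such as ODRL.

```python
# Minimal sketch: turning extracted license terms into RDF triples.
# The EX vocabulary and all resource names are illustrative placeholders.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/license-terms/")

g = Graph()
g.bind("ex", EX)

lic = EX["some-license"]                      # hypothetical license resource
g.add((lic, EX.permits, EX.Distribution))     # e.g. "you may redistribute"
g.add((lic, EX.requires, EX.Attribution))     # e.g. "you must give credit"
g.add((lic, EX.prohibits, EX.CommercialUse))  # e.g. "no commercial use"

print(g.serialize(format="turtle"))           # emit the license as Turtle
```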

In this paper we compare different active learning settings for this particular application. We show that the most straightforward approach to instance selection, uncertainty sampling, does not perform well in this setting, doing even worse than passive learning. Density-based methods are the usual alternative to uncertainty sampling in contexts with very few labelled instances. We show that a similar effect to that of density-based methods can be obtained with uncertainty sampling by simply reversing the ranking criterion, choosing the most certain instead of the most uncertain instances.
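
The contrast between the two criteria can be made concrete with a short sketch. The function below is a hypothetical illustration, not the paper's implementation: given the current model's class-probability estimates for the unlabelled pool, classic uncertainty sampling queries the instances with the lowest top-class probability, while the reversed criterion queries those with the highest.

```python
# Minimal sketch of the two instance-selection criteria discussed above:
# classic uncertainty sampling vs. the reversed ("most certain first") ranking.
# Function and variable names are hypothetical, not taken from the paper.
import numpy as np

def select_queries(class_probs: np.ndarray, k: int, most_certain: bool = False) -> np.ndarray:
    """Return indices of the k pool instances to hand to the annotator.

    class_probs: shape (n_instances, n_classes), the current model's
    posterior estimates for every unlabelled instance in the pool.
    """
    confidence = class_probs.max(axis=1)   # probability of the predicted class
    order = np.argsort(confidence)         # ascending: least confident first
    if most_certain:
        order = order[::-1]                # reversed criterion: most confident first
    return order[:k]

# Toy pool of 5 instances over 3 classes.
pool = np.array([[0.34, 0.33, 0.33],
                 [0.90, 0.05, 0.05],
                 [0.50, 0.30, 0.20],
                 [0.60, 0.25, 0.15],
                 [0.98, 0.01, 0.01]])
print(select_queries(pool, k=2))                     # uncertainty sampling -> [0 2]
print(select_queries(pool, k=2, most_certain=True))  # reversed criterion  -> [4 1]
```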

Keywords

Active Learning · Ontology-based Information Extraction

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Cristian Cardellino (1)
  • Serena Villata (2)
  • Laura Alonso Alemany (1)
  • Elena Cabrio (2)

  1. Universidad Nacional de Córdoba, Córdoba, Argentina
  2. INRIA, Sophia Antipolis, France
