
Weakly Supervised Discriminative Training of Linear Models for Natural Language Processing

  • Conference paper
Statistical Language and Speech Processing (SLSP 2015)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9449)


Abstract

This work explores weakly supervised training of discriminative linear classifiers. Such feature-rich classifiers have been widely adopted by the Natural Language Processing (NLP) community because of their powerful modeling capacity and their support for correlated features, which allows the expert task of designing features to be separated from the core learning method. However, unsupervised training of discriminative models is more challenging than of generative models. We adapt a recently proposed approximation of the classifier risk and derive a closed-form solution that greatly speeds up its convergence. This method is appealing because it provably converges towards the minimum risk without any labeled corpus, thanks to only two reasonable assumptions about the rank of the class marginals and the Gaussianity of the class-conditional linear scores. We also show that the method is a viable and interesting alternative for weakly supervised training of linear classifiers in two NLP tasks: predicate and entity recognition.
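The core idea described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' exact estimator: it assumes the class priors are known and the class-conditional linear scores are Gaussian, fits the two Gaussians to unlabeled scores by EM with the mixture weights held fixed, and reads an estimate of the classifier risk off the fitted model. The function name, the zero-threshold decision rule, and the EM initialisation are all assumptions made for this example.

```python
import numpy as np
from scipy.stats import norm

def estimate_risk(scores, p0=0.5, n_iter=200):
    """Estimate the risk of a linear classifier from unlabeled scores.

    Illustrative sketch: assumes the class marginal p0 is known and the
    class-conditional score distributions are Gaussian. The two Gaussians
    are fitted by EM with the mixture weights held fixed at (p0, 1 - p0),
    then the error of the sign-of-score decision rule is integrated
    analytically under the fitted model.
    """
    p1 = 1.0 - p0
    scores = np.asarray(scores, dtype=float)
    # Crude initialisation: place the two means in the lower/upper quartiles
    mu0, mu1 = np.percentile(scores, 25), np.percentile(scores, 75)
    s0 = s1 = scores.std() + 1e-8
    for _ in range(n_iter):
        # E-step: posterior responsibility of class 1 for each score
        w0 = p0 * norm.pdf(scores, mu0, s0)
        w1 = p1 * norm.pdf(scores, mu1, s1)
        r = w1 / (w0 + w1 + 1e-300)
        # M-step: update means and stds, priors stay fixed (assumed known)
        mu0 = np.sum((1 - r) * scores) / np.sum(1 - r)
        mu1 = np.sum(r * scores) / np.sum(r)
        s0 = np.sqrt(np.sum((1 - r) * (scores - mu0) ** 2) / np.sum(1 - r)) + 1e-8
        s1 = np.sqrt(np.sum(r * (scores - mu1) ** 2) / np.sum(r)) + 1e-8
    # Risk of thresholding at 0: class-0 mass above 0 plus class-1 mass below 0
    return p0 * norm.sf(0.0, mu0, s0) + p1 * norm.cdf(0.0, mu1, s1)
```

For instance, scores drawn from a balanced mixture of N(-2, 1) and N(2, 1) have a true sign-rule risk of about 0.023, and the estimator recovers a value close to that without ever seeing a label; note that, as in the paper's setting, no labeled corpus enters the computation.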


Notes

  1. This work has been partly funded by the ANR ContNomina project.

  2. http://nlp.stanford.edu/nlp.


Author information

Correspondence to Christophe Cerisara.



Copyright information

© 2015 Springer International Publishing Switzerland

Cite this paper

Rojas-Barahona, L.M., Cerisara, C. (2015). Weakly Supervised Discriminative Training of Linear Models for Natural Language Processing. In: Dediu, AH., Martín-Vide, C., Vicsi, K. (eds) Statistical Language and Speech Processing. SLSP 2015. Lecture Notes in Computer Science, vol 9449. Springer, Cham. https://doi.org/10.1007/978-3-319-25789-1_23

  • DOI: https://doi.org/10.1007/978-3-319-25789-1_23

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-25788-4

  • Online ISBN: 978-3-319-25789-1

  • eBook Packages: Computer Science (R0)
