A Comparison of Character and Word Embeddings in Bidirectional LSTMs for POS Tagging in Italian

Conference paper
Part of the Smart Innovation, Systems and Technologies book series (SIST, volume 98)

Abstract

Word representations are mathematical objects that capture a word's meaning and its grammatical properties in a machine-readable way. They map words into equivalence classes, grouping together words that share similar properties. Word representations can be obtained automatically by unsupervised learning algorithms that rely on the distributional hypothesis, which states that the meaning of a word is strictly connected to its context, i.e., its surrounding words. This established notion of context has recently been reconsidered so as to include both the distributional and the morphological features of a word, in terms of character co-occurrences. This approach has shown very promising results, especially in NLP tasks such as POS tagging, where the representation of so-called Out-of-Vocabulary (OOV) words remains an only partially solved issue. This work addresses the problem of representing OOV words in a POS tagging task for the Italian language. The potential benefits and drawbacks of a Bidirectional Long Short-Term Memory (bi-LSTM) network, fed with a joint character and word embedding representation and used to perform POS tagging of text that may contain OOV words, are investigated. Furthermore, experiments are performed and discussed in terms of qualitative and quantitative indicators, suggesting some possible future directions for the investigation.
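To make the joint representation concrete, the following is a minimal sketch of such a tagger written in PyTorch. It is an illustration under stated assumptions rather than the authors' implementation: every class name, dimension, and the tag-set size below is hypothetical. Each word vector is built by concatenating a word embedding with the final states of a character-level bi-LSTM, so an OOV word, which has no learned word embedding of its own, still receives an informative vector composed from its characters.

    import torch
    import torch.nn as nn

    class CharWordBiLSTMTagger(nn.Module):
        """bi-LSTM POS tagger over joint character- and word-level embeddings."""
        def __init__(self, word_vocab, char_vocab, n_tags,
                     word_dim=100, char_dim=30, char_hidden=25, hidden=128):
            super().__init__()
            self.word_emb = nn.Embedding(word_vocab, word_dim)
            self.char_emb = nn.Embedding(char_vocab, char_dim)
            # Character-level bi-LSTM: composes a word vector from its characters,
            # so an OOV word still receives a morphology-aware representation.
            self.char_lstm = nn.LSTM(char_dim, char_hidden,
                                     bidirectional=True, batch_first=True)
            # Sentence-level bi-LSTM over the concatenated (word + char) vectors.
            self.word_lstm = nn.LSTM(word_dim + 2 * char_hidden, hidden,
                                     bidirectional=True, batch_first=True)
            self.out = nn.Linear(2 * hidden, n_tags)

        def forward(self, word_ids, char_ids):
            # word_ids: (sent_len,); char_ids: (sent_len, max_word_len), padded.
            w = self.word_emb(word_ids)                     # (sent_len, word_dim)
            _, (h, _) = self.char_lstm(self.char_emb(char_ids))
            c = torch.cat([h[0], h[1]], dim=-1)             # final fwd/bwd states
            joint = torch.cat([w, c], dim=-1).unsqueeze(0)  # batch of one sentence
            states, _ = self.word_lstm(joint)
            return self.out(states.squeeze(0))              # (sent_len, n_tags)

For example, CharWordBiLSTMTagger(word_vocab=50000, char_vocab=200, n_tags=17) would score each token of a sentence against a 17-tag set such as the universal POS tags; mapping every OOV token to a shared unknown-word index leaves the character-level path to supply its morphological information.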

Keywords

Deep neural network · Natural Language Processing · POS tagging · Character and word embeddings


Copyright information

© Springer International Publishing AG, part of Springer Nature 2019

Authors and Affiliations

  • Fiammetta Marulli (1)
  • Marco Pota (1)
  • Massimo Esposito (1)

  1. Institute for High Performance Computing and Networking, National Research Council of Italy, Naples, Italy