
Abstract

In this book, we have aimed to provide a high-level introduction to the various types of embeddings used in NLP, covering both early work on word embeddings and more recent contextualized embeddings based on large pre-trained language models. The currently celebrated contextualized embeddings are the product of a long path of evolution. Since early work on lexical semantics, the distributional hypothesis has been the dominant basis for the field of semantic representation, and it remains so even for recent models; the way representations are constructed, however, has undergone considerable change. The initial stage of this path is characterized by models that explicitly collected co-occurrence statistics, an approach that often required a subsequent dimensionality reduction step (Chapter 3). With the revival of neural networks and deep learning, the field of semantic representation experienced a massive boost: neural networks provided an efficient way to process large amounts of text and to directly compute dense, compact representations. Since then, the term representation has been almost entirely superseded by its dense counterpart, embedding. This development has also transformed related fields of research, such as graph embedding (Chapter 4), and has given rise to new ones, such as sense embedding (Chapter 5) and sentence embedding (Chapter 7).
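As a purely illustrative aside (not taken from the book itself), the count-then-reduce approach mentioned above can be sketched in a few lines: build an explicit word-word co-occurrence matrix from a toy corpus and reduce it with SVD to obtain compact vectors. The corpus, context window, and target dimensionality below are arbitrary assumptions for the sake of the sketch.

```python
# Minimal sketch (illustrative only): explicit co-occurrence counts
# followed by dimensionality reduction, the pre-neural approach that
# the abstract contrasts with learned dense embeddings.
from collections import Counter
from itertools import combinations

import numpy as np

# Toy corpus (assumed for illustration).
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
]

# Vocabulary and a symmetric word-word co-occurrence matrix, using a
# whole-sentence context window (an arbitrary choice for this sketch).
vocab = sorted({w for sent in corpus for w in sent})
index = {w: i for i, w in enumerate(vocab)}
counts = Counter()
for sent in corpus:
    for w1, w2 in combinations(sent, 2):
        counts[(index[w1], index[w2])] += 1
        counts[(index[w2], index[w1])] += 1

M = np.zeros((len(vocab), len(vocab)))
for (i, j), c in counts.items():
    M[i, j] = c

# Dimensionality reduction of the sparse count matrix via SVD.
U, S, _ = np.linalg.svd(M, full_matrices=False)
k = 2  # target dimensionality (arbitrary for this toy example)
embeddings = U[:, :k] * S[:k]

for w in ("cat", "dog", "mat"):
    print(w, np.round(embeddings[index[w]], 3))
```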



Copyright information

© 2021 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Pilehvar, M.T., Camacho-Collados, J. (2021). Conclusions. In: Embeddings in Natural Language Processing. Synthesis Lectures on Human Language Technologies. Springer, Cham. https://doi.org/10.1007/978-3-031-02177-0_9
