Representation Learning - Implementing Word Embeddings

Machine learning is a field grounded mainly in statistics and linear algebra, and most machine learning and deep learning architectures rely heavily on matrix operations, not least because of backpropagation. This is the main reason deep learning, and machine learning in general, accepts only real-valued quantities as input. That requirement poses a problem for many applications, such as machine translation and sentiment analysis, whose input is text. So, in order to use deep learning for these applications, we first need to convert the text into a form that deep learning accepts!
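To make this concrete, here is a minimal sketch of the most naive such conversion, one-hot encoding. The four-word vocabulary and the sentence are hypothetical, used purely to show how text becomes numeric input:

import numpy as np

# Hypothetical toy vocabulary (illustrative only)
vocabulary = ["i", "love", "deep", "learning"]
word_to_index = {word: i for i, word in enumerate(vocabulary)}

def one_hot(word):
    # Encode a word as a one-hot vector over the toy vocabulary
    vec = np.zeros(len(vocabulary))
    vec[word_to_index[word]] = 1.0
    return vec

sentence = ["i", "love", "deep", "learning"]
encoded = np.stack([one_hot(w) for w in sentence])  # shape: (4, 4)
print(encoded)

One-hot vectors do make text numeric, but every pair of distinct words ends up equally distant from every other, so no notion of similarity is captured. This is exactly the limitation that representation learning addresses.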

In this chapter, we are going to introduce the field of representation learning, which is a way to learn real-valued representations of text while preserving the semantics of the actual text. For example, the representation of love should be very close to the representation of adore, because the two words are used in very similar contexts.
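As an illustrative sketch of that idea, the snippet below measures closeness with cosine similarity between hand-picked toy vectors. Real embeddings such as those produced by Word2Vec are learned from data; these particular three-dimensional values are assumptions for demonstration only:

import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means same direction
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy "embeddings" with hand-picked, illustrative values
love = np.array([0.9, 0.1, 0.4])
adore = np.array([0.85, 0.15, 0.45])
table = np.array([0.1, 0.8, -0.3])

print(cosine_similarity(love, adore))  # high: words used in similar contexts
print(cosine_similarity(love, table))  # low: words used in unrelated contexts

A good embedding space places love and adore close together while keeping unrelated words such as table far apart; learning such a space from raw text is what Word2Vec, covered next, is designed to do.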

So, the following topics will be covered in this chapter:

  • Introduction to representation learning
  • Word2Vec
  • A practical example of the skip-gram architecture
  • Skip-gram Word2Vec implementation