Summary

In this chapter, we discussed Word2vec and its variants, then walked through the code for developing a skip-gram model that learns word relationships. We used TensorBoard to visualize the resulting word embeddings, and saw how its different projections can provide very useful visualizations. Next, we discussed a logical extension of Word2vec that produces a document representation by aggregating word vectors, and improved it by weighting each word's vector with its tf-idf score. Finally, we discussed doc2vec and its variants for building document-level vector representations, and used TensorBoard to see how document embeddings can reveal the topics present in a collection of documents.
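As a refresher, the following is a minimal sketch of the tf-idf-weighted document representation described above: each document vector is the average of its word vectors, with each word weighted by its tf-idf score. This is not the chapter's exact code; the embeddings dictionary (mapping words to trained skip-gram vectors), the helper name tfidf_weighted_doc_vectors, and the use of scikit-learn's TfidfVectorizer are assumptions made for illustration.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def tfidf_weighted_doc_vectors(docs, embeddings, dim):
    """Return one vector per document: the tf-idf-weighted mean of its word vectors.

    docs: list of raw text documents
    embeddings: dict mapping words to trained word vectors (assumed, e.g. from skip-gram)
    dim: dimensionality of the word vectors
    """
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform(docs)          # sparse matrix, shape (n_docs, n_vocab)
    vocab = vectorizer.get_feature_names_out()
    doc_vectors = np.zeros((len(docs), dim))
    for i in range(len(docs)):
        row = tfidf.getrow(i).tocoo()               # nonzero tf-idf entries for document i
        total_weight = 0.0
        for j, weight in zip(row.col, row.data):
            word = vocab[j]
            if word in embeddings:                  # skip words without a learned vector
                doc_vectors[i] += weight * embeddings[word]
                total_weight += weight
        if total_weight > 0:
            doc_vectors[i] /= total_weight          # normalize by total tf-idf mass
    return doc_vectors

Compared to a plain average of word vectors, the tf-idf weighting downplays frequent, uninformative words and lets the words that are distinctive for a document dominate its representation.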

In the next chapter, we will turn to deep neural networks for text classification. We will examine the different network architectures that can be used to build a text classifier, and discuss the pros and cons of each.
