Summary

In this chapter, we described the commonly used machine translation methods, with a particular focus on neural machine translation. We briefly covered classic statistical machine translation in the context of lexical alignment models, and walked through a simple example of building an SMT alignment model using NLTK. SMT is a viable choice when a large corpus of bilingual (parallel) data is available.

The main shortcoming of such models is that they do not generalize well to domains (or contexts) other than the one on which they were trained. More recently, deep neural networks have become the most popular approach to machine translation, mainly because of their effectiveness in producing translations that approach human quality. We described in detail how to build an NMT model using RNNs with attention, and trained it on a real-world dataset of TED Talks to translate French phrases into English. You can apply a similar model to language pairs other than French-English, and you can improve on the model we developed by using a different attention mechanism or a deeper encoder/decoder network.
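To recap the core idea behind the attention step, here is a minimal NumPy sketch of one common scoring choice, dot-product (Luong-style) attention; the random hidden states are placeholders for the encoder and decoder outputs a trained model would produce:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def dot_product_attention(decoder_state, encoder_states):
    """One attention step: score each encoder state against the current
    decoder state, normalize the scores to weights, and return the
    weighted sum of encoder states as the context vector."""
    scores = encoder_states @ decoder_state   # shape: (seq_len,)
    weights = softmax(scores)                 # attention distribution
    context = weights @ encoder_states        # shape: (hidden_dim,)
    return context, weights

# Toy example: 3 encoder time steps, hidden size 4 (random, for illustration)
rng = np.random.default_rng(0)
encoder_states = rng.normal(size=(3, 4))
decoder_state = rng.normal(size=4)

context, weights = dot_product_attention(decoder_state, encoder_states)
```

Swapping in a different score function (e.g. an additive, Bahdanau-style feed-forward score) changes only the `scores` line; the softmax normalization and weighted sum stay the same.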
