Today's project was to build a deep learning computational linguistics model using Word2Vec to classify text in a sentiment analysis setting. Our hypothetical use case was a restaurant chain whose management wants to understand the general sentiment of the text responses customers send after dining, in reply to a text-message question about their experience. Our specific task was to build the natural language processing model that turns the data from this simple (hypothetical) application into business intelligence.
What are the implications of this accuracy? In our hypothetical example, it means we can take a body of text that is difficult to summarize without a deep learning NLP model and distill it into actionable insights for restaurant management. With summary counts of positive and negative sentiment in response to the text-message question, the chain can track performance over time, make adjustments, and perhaps even reward staff for improvements.
In this chapter's project, we learned how to build Word2Vec models and analyzed what characteristics we could learn about the provided corpus. We also learned how to build a text classification model using a CNN on top of the trained word embeddings.
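To make that architecture concrete, here is a minimal NumPy sketch of the pipeline just described: an embedding lookup followed by a 1D convolution over the word sequence, global max pooling, and a sigmoid output for positive/negative sentiment. The vocabulary, weights, and function names here are illustrative stand-ins, not the chapter's actual code; in practice the embedding matrix would be loaded from a trained Word2Vec model and the filters would be learned during training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary; real models use the full corpus vocabulary.
vocab = {"the": 0, "food": 1, "was": 2, "great": 3, "terrible": 4}
embed_dim = 8

# Stand-in for trained Word2Vec vectors (random here for illustration).
embeddings = rng.normal(size=(len(vocab), embed_dim))

def embed(tokens):
    """Look up each token's vector, giving a (seq_len, embed_dim) matrix."""
    return embeddings[[vocab[t] for t in tokens]]

def conv1d(x, filters, width=2):
    """Valid 1D convolution over the sequence axis, followed by ReLU."""
    seq_len = x.shape[0] - width + 1
    out = np.stack([
        np.tensordot(x[i:i + width], filters, axes=([0, 1], [1, 2]))
        for i in range(seq_len)
    ])
    return np.maximum(out, 0.0)

n_filters = 4
filters = rng.normal(size=(n_filters, 2, embed_dim))  # (filters, width, dim)
w_out = rng.normal(size=n_filters)                    # output weights

def sentiment_score(sentence):
    """Embed -> convolve -> max-pool -> sigmoid, as in the chapter's design."""
    x = embed(sentence.split())
    h = conv1d(x, filters)          # (seq_len - 1, n_filters)
    pooled = h.max(axis=0)          # global max pooling over the sequence
    logit = pooled @ w_out
    return 1.0 / (1.0 + np.exp(-logit))

score = sentiment_score("the food was great")
print(round(float(score), 3))  # a probability in (0, 1); weights are untrained
```

With untrained random weights the score is meaningless; the point is the data flow from tokens to a single sentiment probability, which frameworks like Keras implement with `Embedding`, `Conv1D`, and `GlobalMaxPooling1D` layers.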
Finally, we looked at the model's performance in testing and determined whether we achieved our goals. In the next chapter's project, we'll leverage even more of our computational linguistics skills to create a natural language pipeline that powers a chatbot for open-domain question answering. This is exciting work; let's see what's next!