Summary

In this chapter, we started by using regression for rating predictions. We saw a couple of different ways of doing so, and then combined them into a single prediction by learning a set of weights. Ensemble learning, and in particular stacked learning, is a general technique that can be used in many situations, not just regression. It allows you to combine methods whose internal mechanics are completely different, because only their final outputs are combined.
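As a reminder of the mechanics, here is a minimal sketch of stacking. The base predictors (a Ridge regressor and a k-nearest-neighbors regressor on synthetic data) are stand-ins for the chapter's rating predictors, not the actual models we built:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: rows are feature vectors, y are ratings.
rng = np.random.RandomState(0)
X = rng.rand(500, 10)
y = X @ rng.rand(10) + 0.1 * rng.randn(500)

# Hold out part of the data to learn the stacking weights on.
X_base, X_stack, y_base, y_stack = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Two base predictors with very different internal mechanics.
predictors = [Ridge(alpha=1.0), KNeighborsRegressor(n_neighbors=5)]
for p in predictors:
    p.fit(X_base, y_base)

# Their outputs become the features of a second-level model,
# which learns one weight per base predictor.
stack_features = np.column_stack(
    [p.predict(X_stack) for p in predictors])
stacker = LinearRegression()
stacker.fit(stack_features, y_stack)
print("learned weights:", stacker.coef_)
```

The key point is that the second-level model never looks at the original features, only at the base predictors' outputs, which is why the approach works regardless of how those predictors are built internally.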

In the second half of the chapter, we switched gears and looked at another mode of producing recommendations: shopping basket analysis, or association rule mining. In this mode, we try to discover (probabilistic) association rules of the form customers who bought X are also likely to be interested in Y. This takes advantage of data generated by sales alone, without requiring users to rate items numerically. This is not available in scikit-learn at the moment, so we wrote our own code.
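The counting behind such rules can be sketched in a few lines. The toy baskets and the minimum-support threshold below are illustrative assumptions, not the chapter's dataset or code:

```python
from collections import Counter
from itertools import combinations

# Toy transaction data: each basket is the set of items in one purchase.
baskets = [
    {"bread", "milk"},
    {"bread", "diapers", "beer"},
    {"milk", "diapers", "beer"},
    {"bread", "milk", "diapers"},
    {"bread", "milk", "beer"},
]

min_support = 2  # only keep pairs bought together at least twice

# Count how often each item appears and how often each pair co-occurs.
item_counts = Counter(item for basket in baskets for item in basket)
pair_counts = Counter(
    pair for basket in baskets
    for pair in combinations(sorted(basket), 2))

# A rule X -> Y is kept if the pair (X, Y) meets the support threshold;
# its confidence is the fraction of X-baskets that also contain Y.
rules = []
for (x, y), together in pair_counts.items():
    if together >= min_support:
        rules.append((x, y, together / item_counts[x]))
        rules.append((y, x, together / item_counts[y]))

for antecedent, consequent, confidence in sorted(rules, key=lambda r: -r[2]):
    print(f"{antecedent} -> {consequent}  (confidence {confidence:.2f})")
```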

If you are using association rule mining, you need to be careful not to simply recommend bestsellers to every user (otherwise, what is the point of personalization?). To avoid this, we learned to measure the value of a rule relative to the baseline, using a measure called the lift of the rule.
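For reference, the lift of a rule X -> Y is the probability of Y among baskets that contain X, divided by the overall probability of Y; a lift close to 1 means the rule adds nothing over recommending the bestseller. The helper function and the counts below are made up for illustration:

```python
def lift(n_baskets, n_with_x, n_with_y, n_with_both):
    """Lift of the rule X -> Y: how much more likely Y is among
    baskets containing X than among all baskets."""
    p_y = n_with_y / n_baskets             # baseline: P(Y)
    p_y_given_x = n_with_both / n_with_x   # confidence: P(Y | X)
    return p_y_given_x / p_y

# Illustrative numbers: (50/80) / (400/1000) = 1.5625
print(lift(n_baskets=1000, n_with_x=80, n_with_y=400, n_with_both=50))
```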

In Chapter 8, Artificial Neural Networks and Deep Learning, we will finally dive into deep learning with TensorFlow. We will learn about its API, then move on to convolutional networks (and how they revolutionized image processing), and finally recurrent networks.
