A naive Bayes classification algorithm assigns to an element of a set the class that is most probable according to Bayes' theorem.
Let A and B be probabilistic events, let P(A) be the probability of A being true, and let P(A|B) be the conditional probability of A being true given that B is true. Then Bayes' theorem states the following:
P(A|B) = (P(B|A) * P(A)) / P(B)
P(A) is the prior probability of A being true, that is, without knowledge of P(B) or P(B|A). P(A|B) is the posterior probability of A being true, taking into account the additional knowledge that B is true.
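The formula above can be sketched directly in code. The following is a minimal illustration, not an example from this chapter; the disease-test numbers (1% prevalence, 95% sensitivity, 5% false positive rate) are assumed values chosen for demonstration:

```python
def posterior(prior_a, likelihood_b_given_a, prob_b):
    """Return P(A|B) computed via Bayes' theorem:
    P(A|B) = P(B|A) * P(A) / P(B)."""
    return likelihood_b_given_a * prior_a / prob_b

# Hypothetical medical test (assumed numbers, for illustration only):
p_disease = 0.01              # prior P(disease)
p_pos_given_disease = 0.95    # likelihood P(positive | disease)
p_pos_given_healthy = 0.05    # false positive rate P(positive | healthy)

# Total probability of a positive result:
# P(positive) = P(pos|disease)*P(disease) + P(pos|healthy)*P(healthy)
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

# Posterior P(disease | positive) is only about 0.16, despite the
# high sensitivity, because the disease is rare.
print(posterior(p_disease, p_pos_given_disease, p_pos))
```

Note how the small prior dominates the result: even an accurate test yields a modest posterior when the condition is rare, which is exactly the kind of reasoning Bayes' theorem makes explicit.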
In this chapter, you will learn the following:
- How to apply Bayes' theorem in a basic way to compute the probability that a medical test is correct, in the simple example Medical test
- How to grasp Bayes' theorem by proving its statement above, as well as its extension
- How to apply Bayes' theorem differently for independent and for dependent variables, in the Playing chess examples
- How to apply Bayes' theorem to discrete random variables in the Medical test and Playing chess examples, and to continuous random variables, using their probability distributions, in the Gender classification example
- How to implement, in Python, an algorithm that calculates the posterior probabilities using Bayes' theorem, in the section Implementation of naive Bayes classifier
- How to discern in which situations Bayes' theorem is an appropriate method of analysis, and when it is not, by verifying your understanding through solving the problems at the end of the chapter