Multivariate Bernoulli classification

The previous example uses the Gaussian distribution for features that are essentially binary, {UP=1, DOWN=0}, to represent the change in value. The mean value is computed as the ratio of the number of observations for which x_i = UP to the total number of observations.
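Under this binary encoding, the mean reduces to the frequency of the UP outcome. A minimal sketch (the obs data is purely illustrative):

```scala
// Binary encoding of a feature: UP = 1, DOWN = 0
val UP = 1
val DOWN = 0

// Illustrative observations of a single binary feature x_i
val obs = Vector(UP, UP, DOWN, UP, DOWN)

// Mean = number of UP observations / total number of observations
val mean = obs.count(_ == UP).toDouble / obs.size   // 0.6
```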

As stated in the first section, the Gaussian distribution is more appropriate for continuous features, or for binary features drawn from very large labeled datasets. This example is therefore a natural candidate for the Bernoulli model.

Model

The Bernoulli model differs from the Naïve Bayes classifier in that it penalizes features x that are not observed, whereas the Naïve Bayes classifier simply ignores them [5:10].

Note

The Bernoulli mixture model

For a feature function f_i, with f_i = 1 if the feature is observed and f_i = 0 if it is not, the conditional probability of observing the feature vector x for a class C_j is:

p(x|C_j) = Π_i [f_i·p_ij + (1 - f_i)·(1 - p_ij)]

where p_ij is the probability that the feature i is observed for the class C_j.

Implementation

The implementation of the Bernoulli model consists of modifying the Likelihood.score scoring function by using the Bernoulli density defined in the Stats object:

object Stats {
  // Bernoulli density for a binary feature p in {0, 1} with the given mean
  def bernoulli(mean: Double, p: Int): Double = mean*p + (1 - mean)*(1 - p)
  // Variant conforming to the Density signature (Double*) => Double
  def bernoulli(x: Double*): Double = bernoulli(x(0), x(1).toInt)
…

The first version of the bernoulli method is a direct implementation of the mathematical formula. The second version conforms to the signature of the Density type, (Double*) => Double.

The mean value is the same as in the Gaussian version. The binary feature is implemented as an Int type with the value UP = 1 (respectively, DOWN = 0) for the upward (respectively, downward) direction of the financial technical indicator.
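As a quick sanity check, the density can be evaluated directly for both binary outcomes. The Stats object is reproduced from the listing above; the mean value of 0.7 is purely illustrative:

```scala
object Stats {
  // Bernoulli density: mean*p + (1 - mean)*(1 - p) for p in {0, 1}
  def bernoulli(mean: Double, p: Int): Double = mean*p + (1 - mean)*(1 - p)
  // Variant conforming to the Density signature (Double*) => Double
  def bernoulli(x: Double*): Double = bernoulli(x(0), x(1).toInt)
}

// Illustrative mean: the UP frequency estimated from the training set
val pUp = Stats.bernoulli(0.7, 1)     // 0.7: likelihood of an UP move
val pDown = Stats.bernoulli(0.7, 0)   // ~0.3: likelihood of a DOWN move
val pVar = Stats.bernoulli(0.7, 1.0)  // variadic version, same result as pUp
```

Note that the call with an Int second argument resolves to the first overload, while a Double second argument resolves to the variadic version.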
