Sequences of models – AdaBoost

AdaBoost is a boosting algorithm that can be interpreted as a form of gradient descent on the loss function. It fits a sequence of weak learners (originally stumps, that is, single-level decision trees) on repeatedly re-weighted versions of the data. At each round, the weights are updated according to how difficult each case is to predict: misclassified cases receive larger weights. The idea is that the trees first learn the easy examples and then concentrate more and more on the difficult ones. In the end, the predictions of the weak learners in the sequence are combined by a weighted vote so as to maximize the overall performance:

In: import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import cross_val_score
    # a sequence of 300 weak learners, combined by weighted vote
    hypothesis = AdaBoostClassifier(n_estimators=300, random_state=101)
    # 3-fold cross-validation on the Covertype data loaded earlier
    scores = cross_val_score(hypothesis, covertype_X, covertype_y,
                             cv=3, scoring='accuracy', n_jobs=-1)
    print("Adaboost -> cross validation accuracy: "
          "mean = %0.3f std = %0.3f" % (np.mean(scores), np.std(scores)))
Out: Adaboost -> cross validation accuracy: mean = 0.610 std = 0.014
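
To make the re-weighting mechanism concrete, here is a minimal sketch of the discrete AdaBoost loop written by hand (this is an illustration, not scikit-learn's actual implementation). It assumes X is a feature matrix and y a NumPy array of binary labels encoded as -1/+1; the names adaboost_sketch and n_rounds are illustrative:

In: import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def adaboost_sketch(X, y, n_rounds=10):
        # start with uniform case weights summing to 1
        w = np.full(len(y), 1.0 / len(y))
        stumps, alphas = [], []
        for _ in range(n_rounds):
            # fit a weak learner (a stump) on the weighted data
            stump = DecisionTreeClassifier(max_depth=1)
            stump.fit(X, y, sample_weight=w)
            pred = stump.predict(X)
            # weighted error of this round's stump
            err = w[pred != y].sum()
            # the stump's vote weight: lower error, larger vote
            alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-10))
            # up-weight misclassified cases, down-weight correct ones
            w = w * np.exp(-alpha * y * pred)
            w = w / w.sum()
            stumps.append(stump)
            alphas.append(alpha)
        # the final model is the weighted vote of all weak learners
        def predict(X_new):
            votes = sum(a * s.predict(X_new)
                        for a, s in zip(alphas, stumps))
            return np.sign(votes)
        return predict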
