How it works...

In Step 1, we imported the required libraries to build our decision tree classifier model using the bagging classifier. In Step 2, we read our dataset, winedata.csv. In Step 3, we separated our feature set and the target variable, and split our data into training and testing subsets. In Step 4, we created a decision tree classifier model and passed it to BaggingClassifier(). In DecisionTreeClassifier(), the default value for the criterion parameter is gini, but we changed it to entropy. We then passed our decision tree model to BaggingClassifier(). BaggingClassifier() has parameters including n_estimators and bootstrap. n_estimators is the number of base estimators in the ensemble and has a default value of 10. The bootstrap parameter indicates whether samples are drawn with replacement and is set to True by default. A sketch of these steps follows below.
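
The following is a minimal sketch of Steps 1 to 4. The target column name in winedata.csv is assumed to be 'Class' here and may differ in the actual dataset; the base estimator is passed positionally because the keyword name differs across scikit-learn versions (base_estimator in older releases, estimator in newer ones):

```python
# A minimal sketch of Steps 1-4, assuming winedata.csv has a target
# column named 'Class' (the real column name may differ).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier

df = pd.read_csv('winedata.csv')

# Separate the feature set and the target variable
X = df.drop('Class', axis=1)   # 'Class' is an assumed target column name
y = df['Class']

# Split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1)

# Base estimator: a decision tree using entropy instead of the default gini
dt_model = DecisionTreeClassifier(criterion='entropy', random_state=1)

# Bagging ensemble of decision trees; n_estimators defaults to 10 and
# bootstrap defaults to True (samples drawn with replacement).
# The base estimator is passed positionally to work across sklearn versions.
bag_model = BaggingClassifier(dt_model, n_estimators=10,
                              bootstrap=True, random_state=1)
```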

In Step 5 and Step 6, we fitted our model to the training data and checked its accuracy score on the test set. In Step 7, we called the predict() method and passed the test feature set. In Step 8, we added the code for the plot_confusion_matrix() function from http://scikit-learn.org, which takes the confusion matrix as one of its input parameters and plots it. In Step 9, we called the plot_confusion_matrix() function, passing it the confusion matrix to generate the confusion matrix plot. A sketch of these steps follows below.
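
The following sketch continues from the previous snippet and covers Steps 5 to 9. The plot_confusion_matrix() helper shown here is a simplified stand-in for the version published on http://scikit-learn.org; the original adds options such as normalization, but the overall flow is the same:

```python
# A minimal sketch of Steps 5-9, continuing from the previous snippet.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix

def plot_confusion_matrix(cm, classes, title='Confusion matrix'):
    """Plot a confusion matrix passed in as cm (simplified helper)."""
    plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)
    # Write the raw counts inside each cell of the matrix
    for i in range(cm.shape[0]):
        for j in range(cm.shape[1]):
            plt.text(j, i, format(cm[i, j], 'd'),
                     ha='center', va='center')
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
    plt.tight_layout()
    plt.show()

# Step 5 and Step 6: fit on the training data and score on the test set
bag_model.fit(X_train, y_train)
print('Test accuracy:', bag_model.score(X_test, y_test))

# Step 7: predict the labels of the test feature set
y_pred = bag_model.predict(X_test)

# Step 8 and Step 9: build the confusion matrix and plot it
cm = confusion_matrix(y_test, y_pred)
plot_confusion_matrix(cm, classes=sorted(y.unique()))
```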
