In Step 1, we trained a gradient boosting classifier. In Step 2, we used the predict() method to generate predictions on our test data.
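Steps 1 and 2 can be sketched as follows. Since the original data isn't shown here, a synthetic binary-classification dataset stands in for it; the dataset, split, and variable names are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Illustrative stand-in for the recipe's own data
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Step 1: train a gradient boosting classifier
gbc = GradientBoostingClassifier(random_state=42)
gbc.fit(X_train, y_train)

# Step 2: predict class labels for the test set
y_pred = gbc.predict(X_test)
print(y_pred[:10])
```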
In Step 3, we used classification_report() to see various metrics, such as precision, recall, and F1-score, for each class, as well as several averages of each metric: the micro average (computed globally from the total true positives, false negatives, and false positives), the macro average (the unweighted mean per label), and the weighted average (the support-weighted mean per label). For multilabel classification, it also reports a samples average.
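Step 3 might look like the following sketch, again using an illustrative synthetic dataset in place of the recipe's own:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Illustrative stand-in data and model from the earlier steps
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)
gbc = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)
y_pred = gbc.predict(X_test)

# Step 3: per-class precision, recall, and F1, plus the averaged rows
print(classification_report(y_test, y_pred))
```

The report's bottom rows are labeled `macro avg` and `weighted avg` (and `micro avg` or `accuracy`, depending on the task), matching the averages described above.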
In Step 4, we used confusion_matrix() to generate the confusion matrix to see the true positives, true negatives, false positives, and false negatives.
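A sketch of Step 4, with the same illustrative stand-in data, shows how the four cell counts are laid out for a binary problem:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Illustrative stand-in data and model from the earlier steps
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)
gbc = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)
y_pred = gbc.predict(X_test)

# Step 4: rows are true classes, columns are predicted classes:
# [[TN, FP],
#  [FN, TP]]
cm = confusion_matrix(y_test, y_pred)
tn, fp, fn, tp = cm.ravel()
print(cm)
```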
In Step 5, we looked at the accuracy and the AUC values of our test data using the accuracy_score() and roc_auc_score() functions.
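Step 5 can be sketched as follows (illustrative data again). One detail worth noting: roc_auc_score() is best given probability scores from predict_proba() rather than hard class labels, so that the ranking of predictions is preserved:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Illustrative stand-in data and model from the earlier steps
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)
gbc = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)

# Step 5: accuracy uses hard labels; AUC uses positive-class probabilities
acc = accuracy_score(y_test, gbc.predict(X_test))
auc = roc_auc_score(y_test, gbc.predict_proba(X_test)[:, 1])
print(f"Accuracy: {acc:.3f}, AUC: {auc:.3f}")
```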
In the next section, we will tune our hyperparameters using a grid search to find the optimal model.