How it works...

In step 3, actuals holds the actual (labeled) output for the test input, and predictions holds the output predicted by the neural network for the same test input.

The evaluation metrics are based on the difference between actuals and predictions, and we used ROC evaluation metrics to measure this difference. An ROC evaluation is ideal for binary classification problems whose datasets have a uniform distribution of the output classes, and predicting patient mortality is just such a binary classification problem.

The thresholdSteps argument in the parameterized ROC constructor is the number of threshold steps to be used for the ROC calculation. When we decrease the classification threshold, more samples are classified as positive. This increases the sensitivity, but it also means that the neural network will be less confident when uniquely classifying a sample under a class.
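
The exact code depends on the earlier steps of this recipe, but as a minimal sketch (assuming a trained MultiLayerNetwork named model and a test DataSet named testData, which are hypothetical names here, and that the ROC class is imported from org.nd4j.evaluation.classification, its package in recent DL4J versions), the ROC evaluation might look like this:

INDArray actuals = testData.getLabels();                      // actual (labeled) output for the test input
INDArray predictions = model.output(testData.getFeatures());  // output predicted by the network

ROC evaluation = new ROC(100);          // 100 threshold steps for the ROC calculation
evaluation.eval(actuals, predictions);  // compare the actuals against the predictions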

In step 4, we printed the ROC evaluation metrics by calling calculateAUC():

evaluation.calculateAUC();

The calculateAUC() method calculates the area under the ROC curve plotted from the test data. If you print the result, you should see a probability value between 0 and 1. We can also call the stats() method to display the complete set of ROC evaluation metrics.
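
Continuing the sketch from the previous code, step 4 might print both the AUC and the full metrics like this (a sketch, not the recipe's exact code):

System.out.println("AUC: " + evaluation.calculateAUC());  // area under the ROC curve, between 0 and 1
System.out.println(evaluation.stats());                   // full ROC evaluation metrics, including AUC and AUPRC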

The stats() method displays the AUC score along with the AUPRC (short for area under the precision/recall curve) metric. AUPRC is another performance metric whose curve represents the trade-off between precision and recall; a model with a good AUPRC score can find positive samples with fewer false positive results.
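
If you only need the AUPRC value by itself, DL4J's ROC class also exposes a calculateAUCPR() method; a brief sketch, again using the evaluation object from the earlier code:

double auprc = evaluation.calculateAUCPR();  // area under the precision/recall curve
System.out.println("AUPRC: " + auprc);       // closer to 1.0 means fewer false positives among predicted positives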
