How it works...

Model performance can be assessed using many metrics, such as accuracy, AUC, misclassification error (%), misclassification error count, F1-score, precision, recall, specificity, and so on. In this chapter, however, model performance is assessed using AUC.
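To make the AUC metric concrete: AUC equals the probability that a randomly chosen positive example receives a higher predicted score than a randomly chosen negative one (the Mann-Whitney U formulation). The following is an illustrative sketch of that computation (not part of the book's H2O workflow; the function name and data are invented for illustration):

```python
def auc(labels, scores):
    """Compute AUC as the fraction of positive/negative pairs
    in which the positive example is scored higher (ties count 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 1]
scores = [0.9, 0.4, 0.3, 0.6, 0.7]
print(auc(labels, scores))  # 5 of 6 pos/neg pairs correctly ordered -> 0.8333...
```

An AUC of 1.0 means every positive is ranked above every negative, while 0.5 corresponds to random ranking; the values near 0.98 reported below therefore indicate a strong classifier.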

The following code retrieves the AUC of the trained model on the training and cross-validation data; the values are 0.984 and 0.982, respectively:

# Get the training AUC
> train_performance <- h2o.performance(occupancy.deepmodel, train = T)
> train_performance@metrics$AUC
[1] 0.9848667

# Get the cross-validation AUC
> xval_performance <- h2o.performance(occupancy.deepmodel, xval = T)
> xval_performance@metrics$AUC
[1] 0.9821723

Since we already supplied the test data to the model (as the validation dataset), its performance is also available; the AUC on the test data is 0.991:

# Get the test AUC
> test_performance <- h2o.performance(occupancy.deepmodel, valid = T)
> test_performance@metrics$AUC
[1] 0.9905056