Step 9 - Evaluating the model

Well done! We have finished the training. Now let's evaluate the trained model on the test set:

      // inside the training loop: evaluate on the test set every displayIter steps
      val (testLoss, testAcc) = test(testDataCount, batchSize, testData, testLabels, model)
      println(s"TEST SET DISPLAY STEP:  Batch Loss = ${"%.6f".format(testLoss)}, Accuracy = $testAcc")
      testAccuracies += testAcc
      testLosses += testLoss
    }
    step += 1
  }
  // after training completes: one final evaluation over the whole test set
  val (finalLoss, accuracy) = test(testDataCount, batchSize, testData, testLabels, model)
  println(s"FINAL RESULT: Batch Loss= $finalLoss, Accuracy= $accuracy")
TEST SET DISPLAY STEP: Batch Loss = 0.065859, Accuracy = 0.9138107
TEST SET DISPLAY STEP: Batch Loss = 0.077047, Accuracy = 0.912114
TEST SET DISPLAY STEP: Batch Loss = 0.069186, Accuracy = 0.90566677
TEST SET DISPLAY STEP: Batch Loss = 0.059815, Accuracy = 0.93043774
TEST SET DISPLAY STEP: Batch Loss = 0.064162, Accuracy = 0.9192399
TEST SET DISPLAY STEP: Batch Loss = 0.063574, Accuracy = 0.9307771
TEST SET DISPLAY STEP: Batch Loss = 0.060209, Accuracy = 0.9229725
TEST SET DISPLAY STEP: Batch Loss = 0.062598, Accuracy = 0.9290804
TEST SET DISPLAY STEP: Batch Loss = 0.062686, Accuracy = 0.9311164
TEST SET DISPLAY STEP: Batch Loss = 0.059543, Accuracy = 0.9250085
TEST SET DISPLAY STEP: Batch Loss = 0.059646, Accuracy = 0.9263658
TEST SET DISPLAY STEP: Batch Loss = 0.062546, Accuracy = 0.92941976
TEST SET DISPLAY STEP: Batch Loss = 0.061765, Accuracy = 0.9263658
TEST SET DISPLAY STEP: Batch Loss = 0.063814, Accuracy = 0.9307771
TEST SET DISPLAY STEP: Batch Loss = 0.062560, Accuracy = 0.9324737
TEST SET DISPLAY STEP: Batch Loss = 0.061307, Accuracy = 0.93518835
TEST SET DISPLAY STEP: Batch Loss = 0.061102, Accuracy = 0.93281305
TEST SET DISPLAY STEP: Batch Loss = 0.054946, Accuracy = 0.9375636
TEST SET DISPLAY STEP: Batch Loss = 0.054461, Accuracy = 0.9365456
TEST SET DISPLAY STEP: Batch Loss = 0.050856, Accuracy = 0.9290804
TEST SET DISPLAY STEP: Batch Loss = 0.050600, Accuracy = 0.9334917
TEST SET DISPLAY STEP: Batch Loss = 0.057579, Accuracy = 0.9277231
TEST SET DISPLAY STEP: Batch Loss = 0.062409, Accuracy = 0.9324737
TEST SET DISPLAY STEP: Batch Loss = 0.050926, Accuracy = 0.9409569
TEST SET DISPLAY STEP: Batch Loss = 0.054567, Accuracy = 0.94027823
FINAL RESULT: Batch Loss= 0.0545671, Accuracy= 0.94027823

Yahoo! We have managed to achieve 94% accuracy, which is really outstanding. In the preceding code, test() is the method used to evaluate the performance of the model. The signature of this method is given in the following code:

def test(testDataCount: Int, batchSize: Int, testDatas: Array[Array[Array[Float]]],
    testLabels: Array[Float], model: LSTMModel): (Float, Float) = {
  var testLoss, testAcc = 0f
  for (begin <- 0 until testDataCount by batchSize) {
    val (testData, testLabel, dropNum) = {
      if (begin + batchSize <= testDataCount) {
        // a full batch is available
        val datas = testDatas.drop(begin).take(batchSize)
        val labels = testLabels.drop(begin).take(batchSize)
        (datas, labels, 0)
      } else {
        // the last batch is short: pad it by wrapping around to the
        // beginning of the test set; dropNum counts the padded samples
        val right = (begin + batchSize) - testDataCount
        val left = testDataCount - begin
        val datas = testDatas.drop(begin).take(left) ++ testDatas.take(right)
        val labels = testLabels.drop(begin).take(left) ++ testLabels.take(right)
        (datas, labels, right)
      }
    }
    // feed the test data to the deep NN
    model.data.set(testData.flatten.flatten)
    model.label.set(testLabel)

    model.exec.forward(isTrain = false)
    val (acc, loss) = getAccAndLoss(model.exec.outputs(0), testLabel)
    testLoss += loss
    testAcc += acc
  }
  // average the accumulated per-sample loss and accuracy
  (testLoss / testDataCount, testAcc / testDataCount)
}
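The trickiest part of test() is the else branch: when fewer than batchSize samples remain, the final batch is padded by wrapping around to the start of the test set, and dropNum records how many padded samples were appended. The following is a minimal standalone sketch of that slicing in plain Scala; wrapBatch is a hypothetical helper name, and a toy Int array stands in for the real sample windows:

```scala
// Hypothetical helper reproducing the slicing logic of test():
// returns a full batch starting at `begin`, wrapping to the front of
// `data` when fewer than `batchSize` elements remain, plus the number
// of wrapped (padded) elements.
def wrapBatch(data: Array[Int], begin: Int, batchSize: Int): (Array[Int], Int) = {
  val n = data.length
  if (begin + batchSize <= n) {
    (data.drop(begin).take(batchSize), 0)
  } else {
    val right = (begin + batchSize) - n // elements wrapped from the front
    val left  = n - begin               // real elements left at the tail
    (data.drop(begin).take(left) ++ data.take(right), right)
  }
}

val samples = (0 until 10).toArray
println(wrapBatch(samples, 4, 4)._1.mkString(",")) // full batch: 4,5,6,7
val (batch, dropNum) = wrapBatch(samples, 8, 4)
println(batch.mkString(","))                       // wrapped batch: 8,9,0,1
println(dropNum)                                   // 2 padded samples
```

This keeps every forward pass at exactly batchSize samples, which the fixed-shape input NDArray of the network requires.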

When done, it's good practice to destroy the model to release resources:

model.exec.dispose() 
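Note that test() also relies on getAccAndLoss, which is not listed in this step. Assuming the network's output holds one row of softmax probabilities per sample, the helper can be sketched in plain Scala as follows; the name accAndLoss, the array-based signature, and the probability layout are illustrative assumptions (the real method reads an MXNet NDArray):

```scala
// Hypothetical sketch of a getAccAndLoss-style helper: `probs` holds one
// softmax row per sample and `labels` the true class indices. Returns
// (number of correct predictions, summed cross-entropy loss); the caller
// divides both by the sample count to get averages.
def accAndLoss(probs: Array[Array[Float]], labels: Array[Float]): (Float, Float) = {
  var correct = 0f
  var loss = 0f
  for ((row, i) <- probs.zipWithIndex) {
    val label = labels(i).toInt
    val pred = row.indices.maxBy(row(_)) // argmax = predicted class
    if (pred == label) correct += 1f
    // cross-entropy of the true class, clamped to avoid log(0)
    loss += -math.log(math.max(row(label), 1e-10f)).toFloat
  }
  (correct, loss)
}
```

The per-batch sums returned here are exactly what test() accumulates before dividing by testDataCount at the end.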

We saw earlier that we achieved about 94% accuracy on the test set. Now let's plot the preceding accuracies and losses in a graph:

    // visualize
    val xTrain = (0 until trainLosses.length * batchSize by batchSize).toArray.map(_.toDouble)
    val yTrainL = trainLosses.toArray.map(_.toDouble)
    val yTrainA = trainAccuracies.toArray.map(_.toDouble)
    val xTest = (0 until testLosses.length * displayIter by displayIter).toArray.map(_.toDouble)
    val yTestL = testLosses.toArray.map(_.toDouble)
    val yTestA = testAccuracies.toArray.map(_.toDouble)

    var series = new MemXYSeries(xTrain, yTrainL, "Train losses")
    val data = new XYData(series)
    series = new MemXYSeries(xTrain, yTrainA, "Train accuracies")
    data += series
    series = new MemXYSeries(xTest, yTestL, "Test losses")
    data += series
    series = new MemXYSeries(xTest, yTestA, "Test accuracies")
    data += series

    val chart = new XYChart("Training session's progress over iterations!", data)
    chart.showLegend = true
    val plotter = new JFGraphPlotter(chart)
    plotter.gui()
Figure 17: Training and test losses and accuracies per iteration

From the preceding graph, it is clear that with only a few iterations, our LSTM converged well and produced very good classification accuracy.
