How it works...

We mentioned mean square error (MSE) as the error function associated with the output layer. In autoencoder architectures, the loss function is MSE in most cases, because MSE is well suited to measuring how close the reconstructed input is to the original input. ND4J provides an implementation of MSE: LossFunction.MSE.
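To make the metric concrete, here is a minimal sketch of MSE computed between an original input vector and its reconstruction (the class and method names are illustrative, not part of the recipe's code):

```java
// Illustrative sketch: mean square error between an original input and
// its reconstruction, averaged over all dimensions.
public class MseExample {
    static double mse(double[] original, double[] reconstructed) {
        double sum = 0.0;
        for (int i = 0; i < original.length; i++) {
            double diff = original[i] - reconstructed[i];
            sum += diff * diff;       // squared per-dimension error
        }
        return sum / original.length; // mean over all dimensions
    }

    public static void main(String[] args) {
        double[] input = {1.0, 0.0, 1.0, 0.0};
        double[] recon = {0.9, 0.1, 0.8, 0.2};
        System.out.println(mse(input, recon)); // prints 0.025
    }
}
```

A perfect reconstruction yields an MSE of zero; the worse the reconstruction, the larger the value, which is exactly what we want for ranking examples by reconstruction quality.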

In the output layer, we get the reconstructed input in its original dimensions. We then use an error function to calculate the reconstruction error. In step 1, we construct an output layer that calculates the reconstruction error for anomaly detection. It is important that the number of outgoing connections at the output layer matches the number of incoming connections at the input layer, so that the reconstruction has the same dimensions as the original input. Once the output layer definition is created, we add it to the stack of layer configurations that is maintained to create the neural network configuration. In step 2, we added the output layer to the previously maintained neural network configuration builder. To keep the approach intuitive, we created the configuration builders first, unlike the more straightforward approach shown here: https://github.com/PacktPublishing/Java-Deep-Learning-Cookbook/blob/master/08_Performing_Anomaly_detection_on_unsupervised%20data/sourceCode/cookbook-app/src/main/java/MnistAnomalyDetectionExample.java.
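Once the network can reconstruct inputs, anomaly detection reduces to scoring each example by its reconstruction error and flagging the ones above a threshold. The following is a hedged plain-Java sketch of that scoring step; the names and the threshold value are illustrative assumptions, not the recipe's code:

```java
// Illustrative sketch: flag an example as anomalous when its
// reconstruction error (MSE across dimensions) exceeds a threshold.
public class AnomalyScore {
    // Per-example reconstruction error.
    static double reconstructionError(double[] original, double[] reconstructed) {
        double sum = 0.0;
        for (int i = 0; i < original.length; i++) {
            double d = original[i] - reconstructed[i];
            sum += d * d;
        }
        return sum / original.length;
    }

    // The threshold is chosen empirically, e.g. from the error
    // distribution on normal (training) data.
    static boolean isAnomaly(double[] original, double[] reconstructed, double threshold) {
        return reconstructionError(original, reconstructed) > threshold;
    }

    public static void main(String[] args) {
        double[] normal    = {0.2, 0.8};
        double[] goodRecon = {0.21, 0.79}; // small error: not an anomaly
        double[] badRecon  = {0.9, 0.1};   // large error: anomaly
        System.out.println(isAnomaly(normal, goodRecon, 0.05)); // false
        System.out.println(isAnomaly(normal, badRecon, 0.05));  // true
    }
}
```

The intuition is that an autoencoder trained mostly on normal data reconstructs normal examples well, so unusually high reconstruction error signals an outlier.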

You can obtain a configuration instance by calling the build() method on the Builder instance. 
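As a hedged sketch of what such a configuration might look like when assembled with DL4J builders (the layer sizes 784-250-10-250-784 and activation choices are assumptions for illustration, not the recipe's exact code), note how the final build() call produces the configuration instance:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class AutoencoderConfigSketch {
    public static MultiLayerConfiguration buildConfig() {
        return new NeuralNetConfiguration.Builder()
                .list()
                // Encoder: compress the input step by step.
                .layer(new DenseLayer.Builder().nIn(784).nOut(250)
                        .activation(Activation.RELU).build())
                .layer(new DenseLayer.Builder().nIn(250).nOut(10)
                        .activation(Activation.RELU).build())
                // Decoder: expand back toward the input dimensions.
                .layer(new DenseLayer.Builder().nIn(10).nOut(250)
                        .activation(Activation.RELU).build())
                // Output layer reconstructs the input in its original 784
                // dimensions, with MSE as the reconstruction-error function.
                .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
                        .nIn(250).nOut(784)
                        .activation(Activation.SIGMOID).build())
                .build(); // build() returns the configuration instance
    }
}
```

The outgoing size of the output layer (nOut(784)) matches the incoming size of the input layer, which is the symmetry requirement discussed above.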
