Performing Anomaly Detection on Unsupervised Data

In this chapter, we will perform anomaly detection on the Modified National Institute of Standards and Technology (MNIST) dataset using a simple autoencoder without any pretraining. We will identify the outliers in the given MNIST data: outlier digits are the most atypical samples, the ones that look least like normal handwritten digits. The autoencoder will encode the MNIST data into a compressed representation and then decode it back in the output layer. We will then calculate the reconstruction error for each MNIST sample.

MNIST samples that closely resemble a typical digit will have a low reconstruction error, while outliers will have a high one. We will then sort the samples by their reconstruction errors and display the best samples and the worst samples (the outliers) in a JFrame window. The autoencoder is constructed as a feed-forward network, and we do not perform any pretraining. Because an autoencoder learns from the feature inputs alone, the MNIST labels are not required at any stage.
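To make the overall flow concrete before we start the recipes, here is a minimal sketch of the idea in DL4J, assuming the DL4J and ND4J dependencies are on the classpath. The layer sizes, updater, epoch count, and class name are illustrative choices for this sketch, not the chapter's exact configuration, and the JFrame visualization step is omitted.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

import org.deeplearning4j.datasets.iterator.impl.MnistDataSetIterator;
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.dataset.DataSet;
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class MnistAutoencoderSketch {

    public static void main(String[] args) throws Exception {
        // A plain feed-forward autoencoder: 784 -> 250 -> 10 -> 250 -> 784, no pretraining.
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(12345)
                .updater(new Adam(1e-3))
                .list()
                .layer(0, new DenseLayer.Builder().nIn(784).nOut(250).activation(Activation.RELU).build())
                .layer(1, new DenseLayer.Builder().nIn(250).nOut(10).activation(Activation.RELU).build())
                .layer(2, new DenseLayer.Builder().nIn(10).nOut(250).activation(Activation.RELU).build())
                .layer(3, new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
                        .nIn(250).nOut(784).activation(Activation.SIGMOID).build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();

        // Train on features only: the input doubles as the reconstruction target, so no labels are used.
        MnistDataSetIterator trainIter = new MnistDataSetIterator(100, 50000, false);
        for (int epoch = 0; epoch < 3; epoch++) {
            trainIter.reset();
            while (trainIter.hasNext()) {
                INDArray features = trainIter.next().getFeatures();
                net.fit(features, features);
            }
        }

        // Score each sample by its reconstruction error (MSE) and sort ascending:
        // low scores are typical digits, high scores are candidate outliers.
        MnistDataSetIterator scoreIter = new MnistDataSetIterator(100, 10000, false);
        List<double[]> scored = new ArrayList<>();   // each entry is {score, sample index}
        int index = 0;
        while (scoreIter.hasNext()) {
            INDArray batch = scoreIter.next().getFeatures();
            for (int i = 0; i < batch.rows(); i++) {
                INDArray row = batch.getRow(i).reshape(1, 784);
                double score = net.score(new DataSet(row, row));
                scored.add(new double[]{score, index++});
            }
        }
        scored.sort(Comparator.comparingDouble(a -> a[0]));
        System.out.println("Best (lowest error) sample index:   " + (int) scored.get(0)[1]);
        System.out.println("Worst (highest error) sample index: " + (int) scored.get(scored.size() - 1)[1]);
    }
}
```

The recipes that follow break this pipeline into separate, reusable steps and add the JFrame-based display of the best and worst reconstructions.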

In this chapter, we will cover the following recipes:

  • Extracting and preparing MNIST data
  • Constructing dense layers for input
  • Constructing output layers
  • Training with MNIST images
  • Evaluating and sorting the results based on the anomaly score
  • Saving the resultant model

Let's begin.
