How it works...

In step 1, we generated random Gaussian noise with a mean of 0.5 and a standard deviation of 0.5. The shape of the noise array must match the shape of the data to which we add it.
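The noise-generation step can be sketched as follows. This is a minimal illustration, not the book's exact code; the array `x_train` and its shape are stand-ins for the actual training images:

```python
import numpy as np

# Stand-in for training images scaled to [0, 1]; shape (n_samples, 28, 28, 1)
x_train = np.random.rand(8, 28, 28, 1).astype("float32")

# Gaussian noise with mean 0.5 and standard deviation 0.5,
# generated with the same shape as the data it is added to
noise = np.random.normal(loc=0.5, scale=0.5, size=x_train.shape)
x_train_noisy = x_train + noise
```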

We want our pixel values to be in the range of 0 to 1, but after introducing noise, some values may fall outside this range. To avoid this, in step 2, we clipped the values in the corrupted input data to the range of 0 to 1. Clipping converted all negative values into 0 and all values greater than 1 into 1, while the remaining values stayed as they were. In step 3, we created the encoder part of the autoencoder model. In our example, the encoder was a stack of two convolutional layers.
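The clipping step maps out-of-range values back into [0, 1]; a small sketch with illustrative values:

```python
import numpy as np

# Noisy values may fall outside [0, 1] after adding Gaussian noise
x_noisy = np.array([-0.3, 0.2, 0.8, 1.4])

# Clip: negatives become 0, values above 1 become 1, the rest are unchanged
x_clipped = np.clip(x_noisy, 0.0, 1.0)
# → [0.0, 0.2, 0.8, 1.0]
```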

The first convolutional layer had 32 filters of size 3x3, followed by a second convolutional layer with 64 filters of size 3x3. The activation function used was relu. In the next step, we built the decoder part of the autoencoder model. Note that the decoder's layer configuration is the reverse of the encoder's. The input to the decoder is the compressed representation of the data produced by the encoder, and the decoder's output has the same dimensions as the original input. In step 5, we combined the encoder and decoder into an autoencoder model. In the next step, we compiled and trained the autoencoder, using mean squared error as the loss function and adam as the optimizer. The overall objective is to train a model that recovers the clean data from its noisy version:
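The architecture described above can be sketched in Keras as follows. This is an assumed reconstruction, not the book's exact code: the 28x28x1 input shape, the pooling/upsampling layers, and the sigmoid output are illustrative choices; only the two 32- and 64-filter 3x3 relu convolutions, the mirrored decoder, and the mse/adam compilation come from the text:

```python
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(28, 28, 1))

# Encoder: two convolutional layers (32 then 64 filters, 3x3, relu)
x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(inputs)
x = layers.MaxPooling2D((2, 2), padding="same")(x)
x = layers.Conv2D(64, (3, 3), activation="relu", padding="same")(x)
encoded = layers.MaxPooling2D((2, 2), padding="same")(x)

# Decoder: mirror of the encoder, restoring the input dimensions
x = layers.Conv2D(64, (3, 3), activation="relu", padding="same")(encoded)
x = layers.UpSampling2D((2, 2))(x)
x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(x)
x = layers.UpSampling2D((2, 2))(x)
decoded = layers.Conv2D(1, (3, 3), activation="sigmoid", padding="same")(x)

# Combine encoder and decoder, then compile with mse loss and adam
autoencoder = models.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mean_squared_error")
```

Training would then fit the noisy inputs against the clean originals, so the network learns to undo the corruption.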

Noise + Data → Denoising Autoencoder → Data

In the last step, we generated predictions for the test data and visualized the reconstructed images.
