Chapter 10 - Reconstructing Inputs Using Autoencoders

  1. Autoencoders are unsupervised learning algorithms. Unlike supervised algorithms, which map inputs to labels, an autoencoder learns to reconstruct the input; that is, it takes the input and learns to reproduce that same input as its output.
  2. We can define our loss function as the difference between the actual input and the reconstructed input, as follows:

    $$L = \frac{1}{n} \sum_{i=1}^{n} (x_i - \hat{x}_i)^2$$

    Here, $n$ is the number of training samples, $x_i$ is the actual input, and $\hat{x}_i$ is the reconstructed input. (A minimal code sketch of this setup appears after this list.)

  3. A Convolutional Autoencoder (CAE) uses a convolutional network instead of a vanilla neural network. In vanilla autoencoders, the encoder and decoder are basically feedforward networks, but in CAEs they are convolutional networks: the encoder consists of convolutional layers and the decoder consists of transposed convolutional layers, instead of raw feedforward layers. (See the convolutional sketch after this list.)
  4. Denoising Autoencoders (DAE) are another small variant of the autoencoder. They are mainly used to remove noise from images, audio, and other inputs: we feed a corrupted input to the autoencoder, and it learns to reconstruct the original, uncorrupted input. (A denoising sketch follows the list.)
  5. In sparse autoencoders, the average activation of the $j^{th}$ neuron in the hidden layer, $\hat{\rho}_j$, over the whole training set can be calculated as follows (see the NumPy sketch after this list):

    $$\hat{\rho}_j = \frac{1}{n} \sum_{i=1}^{n} a_j(x^{(i)})$$

    Here, $a_j(x^{(i)})$ is the activation of the $j^{th}$ hidden neuron for the $i^{th}$ training sample.
  6. The loss function of contractive autoencoders can be mathematically represented as follows:

    $$L = \mathcal{L}(x, \hat{x}) + \lambda \left\| J_f(x) \right\|_F^2$$

    The first term, $\mathcal{L}(x, \hat{x})$, represents the reconstruction error, and the second term represents the penalty term, or regularizer: it is basically the Frobenius norm of the Jacobian matrix $J_f(x)$ of the encoder's hidden representation with respect to the input, weighted by a coefficient $\lambda$. (A sketch of this penalty appears after this list.)

  7. The Frobenius norm, also called the Hilbert-Schmidt norm, of a matrix is defined as the square root of the sum of the absolute squares of its elements, $\left\| A \right\|_F = \sqrt{\sum_{i,j} |a_{ij}|^2}$. A matrix comprising the partial derivatives of a vector-valued function is called the Jacobian matrix.
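
To make points 1 and 2 concrete, here is a minimal Keras-style sketch of a vanilla autoencoder. The 784-dimensional input (a flattened 28 x 28 image), the layer sizes, and the variable names are illustrative assumptions, not the chapter's exact model:

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    # Encoder and decoder are plain feedforward (Dense) layers.
    inputs = layers.Input(shape=(784,))
    code = layers.Dense(32, activation='relu')(inputs)       # compress to a 32-unit code
    outputs = layers.Dense(784, activation='sigmoid')(code)  # reconstruct the 784-dim input
    autoencoder = Model(inputs, outputs)

    # 'mse' implements the reconstruction loss above: the mean of (x - x_hat)^2.
    autoencoder.compile(optimizer='adam', loss='mse')
    # autoencoder.fit(x_train, x_train, ...)  # the input is also the target

Note that training passes the same array as both input and target, which is exactly what "learning to reproduce the input as an output" means in practice.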
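
For point 3, a sketch of a convolutional autoencoder under the same assumptions (28 x 28 grayscale inputs, illustrative filter counts): strided convolutions in the encoder, transposed convolutions in the decoder.

    from tensorflow.keras import layers, Model

    inputs = layers.Input(shape=(28, 28, 1))
    # Encoder: strided convolutions halve the spatial size at each step.
    x = layers.Conv2D(16, 3, strides=2, padding='same', activation='relu')(inputs)  # -> 14x14x16
    x = layers.Conv2D(8, 3, strides=2, padding='same', activation='relu')(x)        # -> 7x7x8
    # Decoder: transposed convolutions upsample back to the input size.
    x = layers.Conv2DTranspose(16, 3, strides=2, padding='same', activation='relu')(x)  # -> 14x14x16
    outputs = layers.Conv2DTranspose(1, 3, strides=2, padding='same',
                                     activation='sigmoid')(x)                           # -> 28x28x1
    cae = Model(inputs, outputs)
    cae.compile(optimizer='adam', loss='mse')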
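
For point 4, the only change from the vanilla setup is the training data: the input is a corrupted copy while the target stays clean. A sketch reusing the `autoencoder` model from the first block; the noise level of 0.3 and the placeholder data are assumptions:

    import numpy as np

    # Hypothetical clean training data in [0, 1], matching the 784-dim input above.
    x_train = np.random.rand(1000, 784).astype('float32')

    # Corrupt the inputs with additive Gaussian noise, clipped back to [0, 1].
    noise = 0.3 * np.random.normal(size=x_train.shape)
    x_noisy = np.clip(x_train + noise, 0.0, 1.0)

    # Input: corrupted samples. Target: the original, uncorrupted samples.
    autoencoder.fit(x_noisy, x_train, epochs=10, batch_size=128)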
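
Point 5's average activation is just a column mean over the hidden activations of the whole training set. A NumPy sketch with illustrative shapes (1,000 samples, 32 hidden neurons); the KL-divergence penalty at the end is the standard way sparse autoencoders use $\hat{\rho}_j$, with an assumed sparsity target of 0.05:

    import numpy as np

    # activations[i, j] = a_j(x^(i)): the j-th hidden activation for sample i.
    activations = np.random.rand(1000, 32)    # placeholder values

    rho_hat = activations.mean(axis=0)         # average activation per hidden neuron

    # Sparsity penalty: KL divergence between the target rho and rho_hat.
    rho = 0.05
    kl_penalty = np.sum(rho * np.log(rho / rho_hat)
                        + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))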
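
For points 6 and 7, when the encoder is a single sigmoid layer $h = \sigma(Wx + b)$, the squared Frobenius norm of the Jacobian has a simple closed form, because $\partial h_j / \partial x_i = h_j (1 - h_j) W_{ji}$. A NumPy sketch with illustrative shapes (bias omitted for brevity):

    import numpy as np

    W = np.random.randn(32, 784)          # hypothetical encoder weights (hidden x input)
    x = np.random.rand(784)               # one input sample
    h = 1.0 / (1.0 + np.exp(-(W @ x)))    # sigmoid hidden activations

    # ||J||_F^2 = sum_j (h_j * (1 - h_j))^2 * sum_i W[j, i]^2
    frobenius_sq = np.sum((h * (1 - h)) ** 2 * np.sum(W ** 2, axis=1))

    lam = 1e-4                                  # assumed penalty weight (lambda)
    contractive_penalty = lam * frobenius_sq    # added to the reconstruction error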