Reconstructing Inputs Using Autoencoders

Autoencoders are unsupervised learning algorithms. Unlike other algorithms, an autoencoder learns to reconstruct its input; that is, it takes the input and learns to reproduce it as the output. We start the chapter by understanding what autoencoders are and how exactly they reconstruct the input. Then, we will learn how autoencoders reconstruct MNIST images.
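
To make the idea concrete, here is a minimal sketch of an autoencoder, assuming TensorFlow's Keras API; the layer sizes and the x_train variable (flattened MNIST images scaled to [0, 1]) are illustrative placeholders rather than the exact code used later in the chapter:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Encoder: compress the 784-dimensional input into a smaller code
inputs = tf.keras.Input(shape=(784,))
code = layers.Dense(32, activation='relu')(inputs)

# Decoder: reconstruct the original input from the code
outputs = layers.Dense(784, activation='sigmoid')(code)

autoencoder = models.Model(inputs, outputs)

# The target is the input itself, so the loss is the reconstruction error
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

# x_train would hold flattened MNIST images scaled to [0, 1]
# autoencoder.fit(x_train, x_train, epochs=10, batch_size=256)
```

Because the input and the target are the same, no labels are needed, which is why autoencoders are considered unsupervised.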

Going ahead, we will learn about the different variants of autoencoders. First, we will learn about convolutional autoencoders (CAEs), which use convolutional layers; then, we will learn how denoising autoencoders (DAEs) learn to remove noise from the input. After this, we will understand sparse autoencoders, which impose a sparsity constraint on the hidden layer, and contractive autoencoders. At the end of the chapter, we will learn about an interesting generative type of autoencoder called the variational autoencoder. We will understand how variational autoencoders learn to generate new inputs and how they differ from other autoencoders.
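
As a small illustration of the denoising idea, the sketch below corrupts the training images with Gaussian noise and trains the model to reproduce the clean originals; it assumes the autoencoder model and x_train array from the previous sketch and is not the chapter's actual implementation:

```python
import numpy as np

# Corrupt the inputs with Gaussian noise; the targets stay clean, so the
# network must learn to remove the noise in order to reconstruct the input.
noise_factor = 0.5
x_train_noisy = x_train + noise_factor * np.random.normal(size=x_train.shape)
x_train_noisy = np.clip(x_train_noisy, 0.0, 1.0)

# Train on (noisy input, clean target) pairs
# autoencoder.fit(x_train_noisy, x_train, epochs=10, batch_size=256)
```

The only change from an ordinary autoencoder is the input/target pairing: noisy images go in, clean images come out.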

In this chapter, we will cover the following topics:

  • Autoencoders and their architecture
  • Reconstructing MNIST images using autoencoders
  • Convolutional autoencoders
  • Building convolutional autoencoders
  • Denoising autoencoders
  • Removing noise in the image using denoising autoencoders
  • Sparse autoencoders
  • Contractive autoencoders
  • Variational autoencoders