Denoising autoencoders

Autoencoders are widely used for feature selection and extraction. They apply a series of transformations to the input data and try to reconstruct that input as accurately as possible. When the number of nodes in the hidden layer is equal to or greater than the number of nodes in the input layer, an autoencoder risks learning the identity function, where the output simply equals the input, making the autoencoder useless. In a denoising autoencoder, random noise is intentionally added to the raw input before it is fed to the network. This mitigates the identity-function risk and forces the encoder to learn significant features of the data, producing a robust representation of the input. When working with denoising autoencoders, it is essential to note that the loss function is computed by comparing the output with the original, uncorrupted input, not with the corrupted input.
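To make this concrete, the following is a minimal sketch of a denoising autoencoder in Keras. The choice of MNIST as the dataset, the Gaussian noise level, and the layer sizes are illustrative assumptions, not a prescribed setup. The key detail is in the call to fit: the noisy data is the input, but the loss is computed against the clean originals.

```python
import numpy as np
from tensorflow.keras import layers, models, datasets

# Load and flatten the images, scaling pixel values to [0, 1].
(x_train, _), (x_test, _) = datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Intentionally corrupt the inputs with Gaussian noise (noise level is an assumption).
noise_factor = 0.3
x_train_noisy = np.clip(x_train + noise_factor * np.random.normal(size=x_train.shape), 0.0, 1.0)
x_test_noisy = np.clip(x_test + noise_factor * np.random.normal(size=x_test.shape), 0.0, 1.0)

# Encoder-decoder with a bottleneck smaller than the input, so the
# network cannot simply learn the identity function.
autoencoder = models.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(32, activation="relu"),   # bottleneck representation
    layers.Dense(128, activation="relu"),
    layers.Dense(784, activation="sigmoid"),
])
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# The network sees the noisy inputs, but the reconstruction loss is
# measured against the clean, original data.
autoencoder.fit(x_train_noisy, x_train,
                epochs=10, batch_size=256,
                validation_data=(x_test_noisy, x_test))
```

After training, calling autoencoder.predict on noisy samples should return reconstructions that are close to the clean inputs, which is exactly the behavior the loss encourages.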

Here is a sample representation of a denoising autoencoder:

In this recipe, we will implement a denoising autoencoder.
