How it works...

As mentioned earlier, a regularized autoencoder extends the standard autoencoder by adding a regularization term to the cost function, shown as follows:

L(x, \hat{x}) + \lambda \sum_{i,j} W_{ij}^2
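As a minimal sketch of this cost, the following assumes a mean-squared reconstruction error for L(x, x̂) and a single hidden-layer weight matrix W; the function name and signature are illustrative, not from the text:

```python
import numpy as np

def regularized_ae_loss(x, x_hat, W, lam=1e-3):
    """Reconstruction loss plus an L2 penalty on the hidden-layer weights.

    x, x_hat : input and its reconstruction (same shape)
    W        : hidden-layer weight matrix of the autoencoder
    lam      : the regularization parameter (lambda in the equation)
    """
    reconstruction = np.mean((x - x_hat) ** 2)   # L(x, x_hat), assumed to be MSE
    penalty = lam * np.sum(W ** 2)               # lambda * sum over i, j of W_ij^2
    return reconstruction + penalty
```

During training, this scalar would replace the plain reconstruction loss, so gradient descent trades reconstruction accuracy against weight magnitude.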
Here, λ is the regularization parameter, i and j are the node indexes, and W represents the hidden-layer weights of the autoencoder. The regularized autoencoder aims to produce a more robust encoding by preferring an encoding function h with small weights. This concept is further developed in the contractive autoencoder, which penalizes the Frobenius norm of the Jacobian of the encoding with respect to the input, represented as follows:

L(x, \hat{x}) + \lambda \left\| J(x) \right\|_F^2
where J(x) is the Jacobian matrix of the hidden representation h with respect to the input x, and its squared Frobenius norm is evaluated as follows:

\left\| J(x) \right\|_F^2 = \sum_{i,j} \left( \frac{\partial h_j(x)}{\partial x_i} \right)^2
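The penalty above can be computed in closed form for a simple sigmoid encoder h(x) = sigmoid(Wx + b), since each Jacobian entry is then h_j(1 - h_j) W_ji. The following sketch assumes that encoder form; the function names are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def contractive_penalty(x, W, b):
    """Squared Frobenius norm of the Jacobian of a sigmoid encoder.

    For h(x) = sigmoid(W @ x + b), the Jacobian entries are
    dh_j/dx_i = h_j * (1 - h_j) * W_ji, so the squared Frobenius
    norm factors as sum_j dh_j^2 * sum_i W_ji^2.
    """
    h = sigmoid(W @ x + b)
    dh = h * (1.0 - h)                          # sigmoid derivative per hidden unit
    return np.sum(dh ** 2 * np.sum(W ** 2, axis=1))
```

In practice this scalar is scaled by λ and added to the reconstruction loss, exactly as in the contractive cost function above.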
For a linear encoder, the contractive penalty and the regularized encoder's weight penalty coincide, so both reduce to L2 weight decay. The regularization makes the autoencoder less sensitive to small perturbations of the input, while minimizing the reconstruction loss keeps the model sensitive to variations along high-density manifolds. Because the penalty encourages the encoding to contract the neighborhood of each input point, these autoencoders are referred to as contractive autoencoders.
