Autoencoder neural networks

Autoencoders are typically used to reduce the dimensionality of data, and they have also been applied successfully to anomaly detection and novelty detection problems. Autoencoder neural networks fall under the unsupervised learning category: the target values are set equal to the inputs, so the network learns (an approximation of) the identity function. By doing so, it learns a compact representation of the data.
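To make this concrete, the following is a minimal sketch (the data values and the function name reconstruction_error are illustrative, not from the text) of the per-sample reconstruction error an autoencoder minimizes during training, and of how that same error can serve as an anomaly score: inputs the trained network reconstructs poorly are flagged as unusual.

```python
# Illustrative only: the reconstruction error an autoencoder minimizes,
# and its reuse as an anomaly score. The reconstructions below stand in
# for the output of a trained encoder-decoder network.
import numpy as np

def reconstruction_error(x, x_hat):
    # Mean squared difference between input and reconstruction, per sample
    return np.mean((x - x_hat) ** 2, axis=-1)

x = np.array([[0.1, 0.9, 0.5]])          # a "normal" input
x_hat = np.array([[0.12, 0.88, 0.51]])   # good reconstruction -> low error
x_odd = np.array([[0.9, 0.1, 0.0]])      # an unusual input
x_odd_hat = np.array([[0.4, 0.5, 0.4]])  # poor reconstruction -> high error

print(reconstruction_error(x, x_hat))         # small: looks normal
print(reconstruction_error(x_odd, x_odd_hat)) # large: flag as an anomaly
```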

The network is trained by minimizing the difference between its input and output. A typical autoencoder architecture is a slight variant of the DNN architecture: the number of units per hidden layer is progressively reduced up to a certain point and then progressively increased, with the final layer's dimension equal to the input dimension. The key idea is to introduce a bottleneck in the network and force it to learn a meaningful compact representation. The middle layer of hidden units (the bottleneck) is the dimension-reduced encoding of the input. The first half of the hidden layers is called the encoder, and the second half is called the decoder. The following figure depicts a simple autoencoder architecture; the layer named z is the representation layer:

Source: https://cloud4scieng.org/manifold-learning-and-deep-autoencoders-in-science/
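The following is a minimal sketch of such an architecture using Keras (tensorflow.keras). The input dimension, the layer sizes, and the bottleneck size are illustrative choices, not values from the text; the essential points are that the encoder layers shrink toward the bottleneck z, the decoder mirrors them, the final layer matches the input dimension, and the inputs themselves are used as the training targets.

```python
# A sketch of the autoencoder architecture described above, using Keras.
# All sizes are illustrative assumptions.
import numpy as np
from tensorflow.keras import layers, models

input_dim = 784       # e.g., flattened 28x28 images
bottleneck_dim = 32   # dimension of the compact representation z

inputs = layers.Input(shape=(input_dim,))
# Encoder: progressively reduce the number of units per hidden layer
h = layers.Dense(256, activation="relu")(inputs)
h = layers.Dense(64, activation="relu")(h)
z = layers.Dense(bottleneck_dim, activation="relu", name="z")(h)  # bottleneck
# Decoder: mirror the encoder, progressively increasing the units
h = layers.Dense(64, activation="relu")(z)
h = layers.Dense(256, activation="relu")(h)
# Final layer dimension equals the input dimension
outputs = layers.Dense(input_dim, activation="sigmoid")(h)

autoencoder = models.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

# Targets are the inputs themselves: the network learns to reproduce them
x_train = np.random.rand(1000, input_dim).astype("float32")  # placeholder data
autoencoder.fit(x_train, x_train, epochs=5, batch_size=128)
```

After training, the activations of the layer named z give the dimension-reduced encoding of each input.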