Sparse autoencoders

Sparse autoencoders are, in a way, the opposite of autoencoders. Instead of teaching a network to represent information in less space or fewer nodes, we try to encode information in more space: instead of the network converging in the middle and then expanding back to the input size, we blow up the middle. These types of networks can be used to extract many small features from a dataset. If you were to train a sparse autoencoder the same way as an autoencoder, you would in almost all cases end up with a fairly useless identity network (what comes in is what comes out, without any transformation or decomposition). To prevent this, we feed back the input plus what is known as a sparsity driver. This driver can take the form of a threshold filter, where only a certain portion of the error is passed back and trained; the rest of the error is deemed irrelevant for that pass and set to zero. In a way, this resembles spiking neural networks, where not all neurons fire all the time.
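
To make this concrete, here is a minimal sketch of a sparse autoencoder in PyTorch (the text names no framework, and the layer sizes and top-k threshold below are illustrative assumptions). It realizes the sparsity driver as a top-k threshold filter on the hidden activations: only the strongest units keep their error signal on a given pass, while the rest are zeroed out.

import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    # Overcomplete autoencoder: the hidden layer is LARGER than the input
    # ("blow up the middle"). Sparsity comes from a top-k mask, one common
    # realization of the threshold filter described above.
    def __init__(self, input_dim=64, hidden_dim=256, k=16):
        super().__init__()
        self.encoder = nn.Linear(input_dim, hidden_dim)  # expand
        self.decoder = nn.Linear(hidden_dim, input_dim)  # project back
        self.k = k                                       # active units per pass

    def forward(self, x):
        h = torch.relu(self.encoder(x))
        # Keep only the k strongest activations and zero the rest.
        # Gradients through the zeroed units vanish for this pass, so only
        # the "surviving" error is trained, much like the selective firing
        # of spiking neurons.
        topk = torch.topk(h, self.k, dim=-1)
        mask = torch.zeros_like(h).scatter_(-1, topk.indices, 1.0)
        return self.decoder(h * mask)

# Training minimizes plain reconstruction error; the mask supplies sparsity.
model = SparseAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 64)  # dummy batch of inputs
loss = nn.functional.mse_loss(model(x), x)
loss.backward()
opt.step()

Because the mask removes the gradient through the zeroed units, each pass trains only a small subset of the hidden layer, which is what pushes the overcomplete representation toward many small, specialized features.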
