Variational autoencoders

Variational autoencoders have the same architecture as autoencoders, but they are taught something different: an approximated probability distribution of the input samples. In that sense they are somewhat more closely related to Boltzmann machines and restricted Boltzmann machines. They do, however, rely on Bayesian mathematics, as well as a re-parametrization trick, to achieve this different representation. The basics come down to this: take influence into account. If one thing happens in one place and something else happens somewhere else, they are not necessarily related. If they are not related, then error propagation should take that into account. This is a useful approach because neural networks are (in a way) large graphs, so it helps to be able to rule out the influence some nodes have on other nodes as you dive into deeper layers.
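The sketch below is a minimal variational autoencoder in PyTorch, included only to make the re-parametrization trick concrete: the encoder outputs a mean and log-variance, a latent sample is drawn as mu + sigma * eps, and the loss adds a KL term to the reconstruction error. The layer sizes (784 -> 400 -> 20) and the use of PyTorch are illustrative assumptions, not details from the text.

```
# Minimal VAE sketch (assumed PyTorch; layer sizes are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        # Encoder: maps the input to the parameters of the approximate posterior q(z|x).
        self.fc_enc = nn.Linear(input_dim, hidden_dim)
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)       # mean of q(z|x)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)   # log-variance of q(z|x)
        # Decoder: maps a latent sample back to the input space.
        self.fc_dec = nn.Linear(latent_dim, hidden_dim)
        self.fc_out = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.fc_enc(x))
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        # Re-parametrization trick: z = mu + sigma * eps with eps ~ N(0, I),
        # so gradients can flow through mu and logvar despite the sampling step.
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + eps * std

    def decode(self, z):
        h = F.relu(self.fc_dec(z))
        return torch.sigmoid(self.fc_out(h))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(recon_x, x, mu, logvar):
    # Reconstruction error plus the KL divergence between q(z|x) and the unit Gaussian prior.
    recon = F.binary_cross_entropy(recon_x, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```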
