De-noising autoencoders

De-noising autoencoders reconstruct the input from a corrupted version of itself. This accomplishes two things: first, the network tries to encode the input while preserving as much information as possible; second, it tries to undo the effect of the corruption process. The corruption can, for example, randomly remove a fraction of the input values or add noise (such as making an image grainier), which forces the network to learn robust features that tend to generalize better. The error is computed the same way as for a standard autoencoder, except that the output of the network is compared to the original, uncorrupted input. This encourages the network to learn not fine details but broader features, which are usually more reliable, as they are not affected by the constantly changing noise.
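
The following is a minimal sketch of this idea, not the book's own implementation: it trains a small fully connected autoencoder in Keras on MNIST, where the inputs are corrupted with additive Gaussian noise but the loss is computed against the clean images. The noise level (0.3) and the 128-unit bottleneck are arbitrary illustrative choices.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Load and normalize MNIST; flatten 28x28 images to 784-dimensional vectors
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Corruption process: add Gaussian noise to the inputs (illustrative noise level)
noise_factor = 0.3
x_train_noisy = np.clip(x_train + noise_factor * np.random.normal(size=x_train.shape), 0.0, 1.0)
x_test_noisy = np.clip(x_test + noise_factor * np.random.normal(size=x_test.shape), 0.0, 1.0)

# A small autoencoder: 784 -> 128 (encoder) -> 784 (decoder)
model = keras.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(128, activation="relu"),    # encoder / bottleneck
    layers.Dense(784, activation="sigmoid")  # decoder / reconstruction
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Key point: the network sees the noisy inputs, but the reconstruction
# error is measured against the original, uncorrupted images
model.fit(x_train_noisy, x_train,
          epochs=10, batch_size=256,
          validation_data=(x_test_noisy, x_test))
```

Because the targets are the clean images, the network cannot simply copy its input; it must learn features that survive the corruption, which is exactly what makes the learned representation more robust.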
