How it works...

In step 1, we initialized a variable, encoded_dim, to set the dimensionality of the encoded representation of the input. Since we implemented an under-complete autoencoder, which compresses the input feature space to a lower dimension, encoded_dim is smaller than the input dimension. Next, we defined the input layer of the autoencoder, which takes an array of size 784 as input.
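The following is a minimal sketch of this step, assuming the Keras functional API and 28x28 images flattened into 784-dimensional vectors; the names encoded_dim and input_img are illustrative:

```python
# Step 1 (sketch): set the size of the encoded representation and define
# the input layer. The 784 input dimension assumes flattened 28x28 images.
from tensorflow.keras.layers import Input

encoded_dim = 32                  # compressed representation is 32-dimensional
input_img = Input(shape=(784,))   # input layer taking a 784-element array
```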

In the next step, we built the autoencoder model. We first defined an encoder and a decoder network and then combined them to create the autoencoder. Note that the number of units in the encoder layer is equal to encoded_dim because we wanted to compress the input feature space of 784 dimensions down to 32 dimensions. The number of units in the decoder layer is the same as the input dimension because the decoder tries to reconstruct the input. After building the autoencoder, we visualized the summary of the model. In step 3, we configured the model to minimize binary cross-entropy loss with the Adadelta optimizer and then trained it. We set both the input and the target to x_train, since the autoencoder learns to reconstruct its own input.
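A sketch of steps 2 and 3 follows, assuming a single Dense layer for the encoder and one for the decoder; the variable names (autoencoder, x_train) and the training hyperparameters (epochs, batch size) are assumptions, not the book's exact values:

```python
# Steps 2 and 3 (sketch): build the encoder/decoder, combine them into an
# autoencoder, compile, and train.
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model

encoded = Dense(encoded_dim, activation='relu')(input_img)   # 784 -> 32 units
decoded = Dense(784, activation='sigmoid')(encoded)          # 32 -> 784 units

autoencoder = Model(input_img, decoded)
autoencoder.summary()   # visualize the model summary

# Minimize binary cross-entropy with the Adadelta optimizer.
autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')

# Input and target are both x_train because the model reconstructs its input.
autoencoder.fit(x_train, x_train,
                epochs=50,        # assumed value
                batch_size=256,   # assumed value
                shuffle=True)
```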

In the last step, we visualized the predicted images for a few sample images from the test dataset.
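A sketch of this final step is shown below, assuming x_test holds flattened 28x28 test images; matplotlib and the number of displayed samples are illustrative choices:

```python
# Final step (sketch): reconstruct a few test images and plot the originals
# next to the autoencoder's predictions.
import matplotlib.pyplot as plt

decoded_imgs = autoencoder.predict(x_test)

n = 5  # number of sample images to display
plt.figure(figsize=(10, 4))
for i in range(n):
    # original image (top row)
    ax = plt.subplot(2, n, i + 1)
    ax.imshow(x_test[i].reshape(28, 28), cmap='gray')
    ax.axis('off')
    # reconstructed image (bottom row)
    ax = plt.subplot(2, n, i + 1 + n)
    ax.imshow(decoded_imgs[i].reshape(28, 28), cmap='gray')
    ax.axis('off')
plt.show()
```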
