Reconstructed images

To obtain reconstructed images, we use predict_on_batch to predict the output using the autoencoder model. We do this with the following code:

# Reconstruct and plot images - train data
rc <- ae_model %>% keras::predict_on_batch(x = trainx)
par(mfrow = c(2,5), mar = rep(0, 4))
for (i in 1:5) plot(as.raster(trainx[i,,,]))
for (i in 1:5) plot(as.raster(rc[i,,,]))

The first five fashion images from the training data (first row) and the corresponding reconstructed images (second row) are as follows:

Here, as expected, the reconstructed images capture the key features of the training images. However, they ignore certain finer details. For example, logos that are clearly visible in the original training images appear blurred in the reconstructed images.
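The loss of detail can also be quantified numerically rather than judged by eye, for example as the mean squared error (MSE) between each original image and its reconstruction. The following is a minimal sketch in base R; stand-in random arrays are used here so the snippet runs on its own, whereas with the real data, trainx and rc would be the arrays produced above:

# Quantify reconstruction quality as per-image mean squared error (MSE)
# Stand-in arrays so the snippet is self-contained; with the real data,
# trainx and rc come from the autoencoder as shown above
set.seed(123)
trainx <- array(runif(5 * 28 * 28), dim = c(5, 28, 28, 1))
rc <- trainx + array(rnorm(5 * 28 * 28, sd = 0.05), dim = c(5, 28, 28, 1))

# MSE for each of the five images: average squared pixel difference
mse <- apply((trainx - rc)^2, 1, mean)
round(mse, 4)

A larger MSE for a given image indicates that more of its detail, such as a logo, was lost in the reconstruction.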

We can also take a look at the plot of the original and reconstructed images using images from the test data. For this, we can use the following code:

# Reconstruct and plot images - test data
rc <- ae_model %>% keras::predict_on_batch(x = testx)
par(mfrow = c(2,5), mar = rep(0, 4))
for (i in 1:5) plot(as.raster(testx[i,,,]))
for (i in 1:5) plot(as.raster(rc[i,,,]))

The following image shows the original images (first row) and reconstructed images (second row) using the test data:

Here, the reconstructed images show the same behavior that we observed for the training data: the main features are retained while the finer details are lost.
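One way to make this comparison concrete is to check that the average reconstruction error on the test data is of a similar magnitude to that on the training data; a large gap would suggest overfitting. A sketch, again using stand-in random arrays in place of the real model output so it runs on its own:

# Compare average reconstruction error on train vs. test data
# Stand-in arrays simulate model output; with the real model,
# rc_train and rc_test would come from predict_on_batch as above
set.seed(42)
make_batch <- function(n) array(runif(n * 28 * 28), dim = c(n, 28, 28, 1))
trainx <- make_batch(100)
testx  <- make_batch(100)
rc_train <- trainx + array(rnorm(length(trainx), sd = 0.05), dim = dim(trainx))
rc_test  <- testx  + array(rnorm(length(testx),  sd = 0.05), dim = dim(testx))

mse_train <- mean((trainx - rc_train)^2)
mse_test  <- mean((testx - rc_test)^2)
c(train = mse_train, test = mse_test)

Similar values for the two errors indicate that the autoencoder generalizes from the training images to unseen test images.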

In this example, we used the Fashion-MNIST data to build an autoencoder network that reduces the dimensionality of the images by keeping the main features and discarding those that involve finer details. Next, we will look at another variant of the autoencoder model that helps remove noise from images.
