Now, we move on to building our model:
- We first define a variable whose value will equal the dimension of the compressed, encoded representation of the input, and then set up the model's input layer:
encoding_dim = 32
input_img = layer_input(shape = c(784), name = "input")
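As a quick sanity check (plain R, no Keras needed), a 32-unit bottleneck compresses each flattened 784-pixel MNIST image by a factor of 24.5:

```r
# Each flattened 28x28 MNIST image has 784 pixels;
# the bottleneck keeps only 32 values per image.
input_dim <- 28 * 28   # 784
encoding_dim <- 32
input_dim / encoding_dim  # compression factor: 24.5
```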
- Let's build an encoder and decoder and combine them to build an autoencoder:
encoded = input_img %>% layer_dense(units = encoding_dim, activation = 'relu', name = "encoder")
decoded = encoded %>% layer_dense(units = 784, activation = 'sigmoid', name = "decoder")
# this model maps an input to its reconstruction
autoencoder = keras_model(input_img, decoded)
Now, we visualize the summary of the autoencoder model:
summary(autoencoder)
The summary of the model is as follows:
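The parameter counts reported in the summary can also be verified by hand. For a dense layer, the count is inputs × units + units (weights plus biases); a quick check in plain R, using the layer sizes from the model above:

```r
# Dense layer parameters = inputs * units + units (weights + biases)
encoder_params <- 784 * 32 + 32    # 25120
decoder_params <- 32 * 784 + 784   # 25872
encoder_params + decoder_params    # total trainable parameters: 50992
```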
- We then compile and train our model:
# compiling the model
autoencoder %>% compile(optimizer = 'adadelta', loss = 'binary_crossentropy')
# training the model
autoencoder %>% fit(
  x_train, x_train,
  epochs = 50,
  batch_size = 256,
  shuffle = TRUE,
  validation_data = list(x_test, x_test)
)
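The binary_crossentropy loss treats each of the 784 output pixels as an independent Bernoulli target. As a small base-R illustration of the per-element computation (toy values here, not actual model output):

```r
# Binary cross-entropy for targets y in [0, 1] and
# predictions p in (0, 1), averaged over all elements
bce <- function(y, p) {
  -mean(y * log(p) + (1 - y) * log(1 - p))
}

y <- c(1, 0, 1, 0)          # toy pixel targets
p <- c(0.9, 0.1, 0.8, 0.2)  # toy reconstructed values
bce(y, p)
```

The sigmoid activation on the decoder's output layer keeps predictions in (0, 1), which is what this loss expects.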
- Now, we use the trained model to reconstruct the test set and plot a sample of the original and reconstructed images:
# predict reconstructions for the test set
predicted <- autoencoder %>% predict(x_test)

# Original images from the test data
library(abind)  # for abind()
library(grid)   # for grid.raster()
# start with test image 20, then append test images 1 to 5
grid = array_reshape(x_test[20, ], dim = c(28, 28))
for (i in seq(1, 5)) {
  grid = abind(grid, array_reshape(x_test[i, ], dim = c(28, 28)), along = 2)
}
grid.raster(grid, interpolate = FALSE)
# Reconstructed versions of the same test images
grid1 = array_reshape(predicted[20, ], dim = c(28, 28))
for (i in seq(1, 5)) {
  grid1 = abind(grid1, array_reshape(predicted[i, ], dim = c(28, 28)), along = 2)
}
grid.raster(grid1, interpolate = FALSE)
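The side-by-side strip is built by concatenating 28x28 matrices along their columns. A minimal base-R sketch of the same idea, using cbind in place of abind and random pixel values standing in for MNIST images:

```r
# Stand-in for six 28x28 images (random values in [0, 1])
imgs <- lapply(1:6, function(i) matrix(runif(28 * 28), nrow = 28, ncol = 28))

# Concatenate along columns, as abind(..., along = 2) does above
strip <- do.call(cbind, imgs)
dim(strip)  # 28 rows, 6 * 28 = 168 columns
```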
Here are some sample images from the test data:
The following screenshot shows the predicted images for the sample test images displayed previously:
We can see that the model reconstructs the test images quite faithfully, even though each image passes through a 32-dimensional bottleneck.