How to do it...

We will now build our deep autoencoder. A deep autoencoder has multiple layers in its encoder and decoder networks:

  1. Let's build the autoencoder:
encoded_dim <- 32

# input layer
input_img <- layer_input(shape = c(784), name = "input")

# encoder
encoded <- input_img %>%
  layer_dense(128, activation = 'relu', name = "encoder_1") %>%
  layer_dense(64, activation = 'relu', name = "encoder_2") %>%
  layer_dense(encoded_dim, activation = 'relu', name = "encoder_3")

# decoder
decoded <- encoded %>%
  layer_dense(64, activation = 'relu', name = "decoder_1") %>%
  layer_dense(128, activation = 'relu', name = "decoder_2") %>%
  layer_dense(784, activation = 'sigmoid', name = "decoder_3")

# autoencoder
autoencoder <- keras_model(input_img, decoded)
summary(autoencoder)

The summary of the autoencoder model is shown in the following screenshot:

  2. Let's create a separate encoder model; this model maps an input to its encoded representation:
encoder <- keras_model(input_img, encoded)
summary(encoder)

The following screenshot shows the summary of the encoder network:

  3. Let's also create the decoder model:
# input layer for the decoder
encoded_input <- layer_input(shape = c(32), name = "encoded_input")

# retrieve the decoder layers from the trained autoencoder model
decoder_layer1 <- get_layer(autoencoder, name = "decoder_1")
decoder_layer2 <- get_layer(autoencoder, name = "decoder_2")
decoder_layer3 <- get_layer(autoencoder, name = "decoder_3")

# create the decoder model from the retrieved layers
decoder <- keras_model(encoded_input, decoder_layer3(decoder_layer2(decoder_layer1(encoded_input))))

summary(decoder)

The following screenshot shows the summary of the decoder network:
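Note that in step 3, the trained layers are reused by calling them like functions and nesting the calls: `decoder_layer3(decoder_layer2(decoder_layer1(encoded_input)))`. As an illustrative aside in plain base R (no Keras; these toy functions merely stand in for the layers), the same composition pattern looks like this:

```r
# Illustrative only: composing functions the way the decoder composes layers.
# Each "layer" here is a plain R function; the names are hypothetical.
layer1 <- function(x) x + 1   # stands in for decoder_layer1
layer2 <- function(x) x * 2   # stands in for decoder_layer2
layer3 <- function(x) x - 3   # stands in for decoder_layer3

output <- layer3(layer2(layer1(5)))  # ((5 + 1) * 2) - 3 = 9
output
```

In Keras, each layer call returns a tensor that is fed to the next layer, so the innermost call runs first, exactly as in this sketch.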

  4. We then compile and train our autoencoder model:
# compiling the model
autoencoder %>% compile(optimizer = 'adadelta', loss = 'binary_crossentropy')

# training the model
autoencoder %>% fit(x_train, x_train,
                    epochs = 50,
                    batch_size = 256,
                    shuffle = TRUE,
                    validation_data = list(x_test, x_test))
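Here, `x_train` and `x_test` are assumed to have been prepared earlier in the recipe: each 28x28 MNIST image flattened to a 784-dimensional vector and scaled to [0, 1], which is what the 784-unit input layer and the sigmoid output expect. A toy base-R sketch of that assumed preprocessing, using a hypothetical 2x2 "image" in place of a real digit:

```r
# Illustrative only: the flatten-and-scale preprocessing this recipe assumes.
# A toy 2x2 "image" stands in for a 28x28 MNIST digit; values are 0-255.
img <- matrix(c(0, 64, 128, 255), nrow = 2)

x <- as.numeric(t(img)) / 255  # row-major flatten, then scale to [0, 1]
x
```

The actual recipe presumably uses Keras helpers such as `array_reshape` for the flattening; this sketch only shows the effect on the data.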
  5. Let's now encode the test images:
encoded_imgs <- encoder %>% predict(x_test)
  6. After encoding our test images, we reconstruct the original input test images from the encoded representation using the decoder network and calculate the reconstruction error:
# reconstructing images
decoded_imgs <- decoder %>% predict(encoded_imgs)

# calculating the reconstruction error
reconstruction_error <- metric_mean_squared_error(x_test, decoded_imgs)
paste("reconstruction error:", k_get_value(k_mean(reconstruction_error)))

We can see that we have achieved a satisfactory reconstruction error of 0.228.
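The reconstruction error reported above is the squared pixel-wise difference between the original and reconstructed images, averaged over all pixels and samples. A minimal base-R sketch of what that metric computes, on tiny hypothetical vectors rather than real MNIST data:

```r
# Illustrative only: mean squared error in base R, mirroring what
# metric_mean_squared_error computes on a pair of pixel vectors.
mse <- function(original, reconstructed) {
  mean((original - reconstructed)^2)
}

x_true <- c(0.0, 0.5, 1.0)   # hypothetical original pixel intensities
x_hat  <- c(0.1, 0.4, 0.9)   # hypothetical reconstructed intensities
mse(x_true, x_hat)           # 0.01
```

A lower value means the decoder's output is closer to the original input, pixel by pixel.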

  7. Let's now encode the training images. We will use this encoded data to train a digit classifier:
encoded_train_imgs <- encoder %>% predict(x_train)
  8. We now build a digit classifier network and compile it:
# building the model
model <- keras_model_sequential()
model %>%
  layer_dense(units = 256, activation = 'relu', input_shape = c(encoded_dim)) %>%
  layer_dropout(rate = 0.4) %>%
  layer_dense(units = 128, activation = 'relu') %>%
  layer_dropout(rate = 0.3) %>%
  layer_dense(units = 10, activation = 'softmax')

# compiling the model
model %>% compile(
  loss = 'categorical_crossentropy',
  optimizer = optimizer_rmsprop(),
  metrics = c('accuracy')
)
  9. Next, we process the training labels and then train the network:
# extracting the class labels
y_train <- mnist$train$y
y_test <- mnist$test$y

# converting the class vectors (integers) to binary class matrices
y_train <- to_categorical(y_train, 10)
y_test <- to_categorical(y_test, 10)

# training the model
history <- model %>% fit(
  encoded_train_imgs, y_train,
  epochs = 30, batch_size = 128,
  validation_split = 0.2
)
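The `to_categorical` call converts 0-based integer labels into one-hot rows. A minimal base-R sketch of this conversion (the `one_hot` helper name is ours, not a Keras function):

```r
# Illustrative only: one-hot encoding in base R, mirroring keras::to_categorical.
one_hot <- function(labels, num_classes) {
  m <- matrix(0, nrow = length(labels), ncol = num_classes)
  m[cbind(seq_along(labels), labels + 1)] <- 1  # MNIST labels are 0-based
  m
}

one_hot(c(0, 3), 4)
# row 1: 1 0 0 0   (label 0)
# row 2: 0 0 0 1   (label 3)
```

This row-per-sample format is what `categorical_crossentropy` expects as the target.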
  10. Let's evaluate the model's performance on the encoded test images:
model %>% evaluate(encoded_imgs, y_test, batch_size = 128)

The following screenshot shows the model's accuracy and loss:

From the previous screenshot, it is clear that our autoencoder did a good job of learning an encoded representation of the data. Using these encoded features, we trained a classifier that achieved 79% accuracy.
