Decoder model

For the decoder network, we keep the same structure, except that we replace the pooling layers with upsampling layers. We can use the following code to do this:

# Decoder
decoder <- encoder %>%
  layer_conv_2d(filters = 32,
                kernel_size = c(3, 3),
                activation = 'relu',
                padding = 'same') %>%
  layer_upsampling_2d(c(2, 2)) %>%
  layer_conv_2d(filters = 32,
                kernel_size = c(3, 3),
                activation = 'relu',
                padding = 'same') %>%
  layer_upsampling_2d(c(2, 2)) %>%
  layer_conv_2d(filters = 1,
                kernel_size = c(3, 3),
                activation = 'sigmoid',
                padding = 'same')
summary(decoder)
Output
Tensor("conv2d_15/Sigmoid:0", shape=(?, 28, 28, 1), dtype=float32)

In the preceding code, the first upsampling layer increases the height and width to 14 x 14, and the second upsampling layer restores them to the original size of 28 x 28. In the last layer, we use a sigmoid activation function, which ensures that the output values remain between 0 and 1.
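If you want to see how an upsampling layer changes a tensor's shape on its own, the short sketch below passes a placeholder tensor through layer_upsampling_2d and prints the result. The 14 x 14 x 32 input shape and the toy_input/toy_output names are purely illustrative, and the sketch assumes the keras package is loaded:

library(keras)

# Illustrative placeholder with a 14 x 14 spatial size and 32 channels
toy_input <- layer_input(shape = c(14, 14, 32))

# A 2 x 2 upsampling layer repeats each row and column,
# doubling the height and width
toy_output <- toy_input %>%
  layer_upsampling_2d(size = c(2, 2))

# Printing the tensor should report a shape of (?, 28, 28, 32)
toy_output

This mirrors what happens in the decoder above, where the two upsampling layers undo the two 2 x 2 pooling operations applied by the encoder.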
