Changes in the generator and discriminator networks

The changes in the generator network are shown in the following code:

# Generator network
gi <- layer_input(shape = l)
go <- gi %>%
  layer_dense(units = 32 * 14 * 14) %>%
  layer_activation_leaky_relu() %>%
  layer_reshape(target_shape = c(14, 14, 32)) %>%
  layer_conv_2d(filters = 32,
                kernel_size = 5,
                padding = "same") %>%
  layer_activation_leaky_relu() %>%
  # Upsample from 14 x 14 to 28 x 28
  layer_conv_2d_transpose(filters = 32,
                          kernel_size = 4,
                          strides = 2,
                          padding = "same") %>%
  layer_activation_leaky_relu() %>%
  # New convolutional block, added just before the output layer
  layer_conv_2d(filters = 64,
                kernel_size = 5,
                padding = "same") %>%
  layer_activation_leaky_relu() %>%
  layer_conv_2d(filters = 1,
                kernel_size = 5,
                activation = "tanh",
                padding = "same")
g <- keras_model(gi, go)

Here, we can see that, in the generator network, we have added a layer_conv_2d and a layer_activation_leaky_relu layer just before the last layer. As a result, the total number of parameters in the generator network has increased to 276,801.
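As a quick sanity check, we can print the model summary to confirm the new parameter total and the layer output shapes; this is a minimal sketch, assuming the latent input length l is defined as earlier in the chapter:

# Print layer output shapes and confirm the parameter total of 276,801
summary(g)

The summary also shows that the stride-2 transposed convolution doubles the feature maps from 14 x 14 to 28 x 28, so the generator's output shape matches the h x w x c images that the discriminator expects.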

The changes in the discriminator network are shown in the following code:

# Discriminator network
di <- layer_input(shape = c(h, w, c))
do <- di %>%
  layer_conv_2d(filters = 64, kernel_size = 4) %>%
  layer_activation_leaky_relu() %>%
  # New convolutional block, added before flattening
  layer_conv_2d(filters = 64, kernel_size = 4, strides = 2) %>%
  layer_activation_leaky_relu() %>%
  layer_flatten() %>%
  layer_dropout(rate = 0.3) %>%
  layer_dense(units = 1, activation = "sigmoid")
d <- keras_model(di, do)

Here, we have added a layer_conv_2d and a layer_activation_leaky_relu layer before the flattening layer in the discriminator network, which increases the number of parameters in the discriminator network to 148,866. We have kept everything else the same and trained the network again for 100 iterations.
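For context, a common way to wire the two networks together before running those training iterations is shown in the following sketch; the optimizer choice and the gan_input/gan_output names are illustrative assumptions, not taken from this chapter:

# Hedged sketch: compile the discriminator, then chain g and d into a
# combined model that provides the generator's training signal
# (the optimizer here is an illustrative choice)
d %>% compile(optimizer = "rmsprop", loss = "binary_crossentropy")
freeze_weights(d)   # keep d's weights fixed while the combined model updates g
gan_input <- layer_input(shape = l)
gan_output <- gan_input %>% g %>% d
gan <- keras_model(gan_input, gan_output)
gan %>% compile(optimizer = "rmsprop", loss = "binary_crossentropy")

Freezing the discriminator inside the combined model is the standard Keras pattern: d is still updated by its own compiled model on real and fake batches, while the gan model only adjusts the generator's weights.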

Now, we can assess the impact of these changes.
