Network architecture

Let's take a look at the code for developing the generator network architecture:

# Generator network
h <- 28; w <- 28; c <- 1; l <- 28  # height, width, channels, latent dimension
gi <- layer_input(shape = l)
go <- gi %>%
  layer_dense(units = 32 * 14 * 14) %>%
  layer_activation_leaky_relu() %>%
  layer_reshape(target_shape = c(14, 14, 32)) %>%
  layer_conv_2d(filters = 32,
                kernel_size = 5,
                padding = "same") %>%
  layer_activation_leaky_relu() %>%
  # strides = 2 upsamples the 14 x 14 feature maps to 28 x 28
  layer_conv_2d_transpose(filters = 32,
                          kernel_size = 4,
                          strides = 2,
                          padding = "same") %>%
  layer_activation_leaky_relu() %>%
  layer_conv_2d(filters = 1,
                kernel_size = 5,
                activation = "tanh",
                padding = "same")
g <- keras_model(gi, go)

In the preceding code, we can observe the following:

  • We have specified height (h), width (w), number of channels (c), and the latent dimension (l) as 28, 28, 1, and 28, respectively.
  • We have specified the input shape for the generator input (gi) as 28, the latent dimension (l). During training, the generator network is fed vectors of 28 random numbers drawn from a standard normal distribution; this input is simply noise (see the sketch after this list).
  • Next, we have specified the architecture for the generator network's output (go).
  • The last layer is a 2D convolutional layer with a tanh activation function. We set filters = 1 in this layer because we are working with grayscale rather than color images.
  • Note that layer_conv_2d_transpose, with strides = 2, upsamples the 14 x 14 feature maps to the required 28 x 28 size.
  • The output dimensions from the generator output will be 28 x 28 x 1.
  • The other values used here, such as the number of filters, kernel_size, and strides, can be experimented with later if you wish to improve the results.
  • gi and go are passed to keras_model() to define the generator network (g).
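
To make the noise input concrete, here is a minimal sketch (assuming the keras package is loaded and g has been built as shown above) that samples a batch of latent vectors from a standard normal distribution and passes them through the generator:

# Sample 5 latent vectors, each of length l = 28
n <- 5
noise <- matrix(rnorm(n * l), nrow = n, ncol = l)

# Generate fake images from the noise
fake_images <- predict(g, noise)
dim(fake_images)
# [1]  5 28 28  1

Because the final layer uses tanh, the generated pixel values lie in [-1, 1], which is why the real training images are typically rescaled to the same range before training.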

Now, let's look at the summary of this network.
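
The summary can be printed with a single call, assuming the model g defined above:

# Print the layer-by-layer summary; the final output
# shape should be (None, 28, 28, 1)
summary(g)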
