How it works...

In step 1, we set the values of the network parameters. We set the input dimension to 784, the dimension of a flattened Fashion-MNIST image. In step 2, we defined an input layer for the VAE and the first hidden layer, which has 256 neural units and the ReLU activation function. In step 3, we created two dense layers, z_mean and z_sigma, each with as many units as the latent distribution has dimensions. In our example, we compressed the 784-dimensional input space into a two-dimensional latent space. Note that these layers are individually connected to the hidden layer defined previously; they represent the mean (μ) and standard deviation (σ) of the latent representation. In step 4, we defined a sampling function that produces a random sample from a distribution whose mean and standard deviation are known. It takes a four-dimensional tensor as input (the concatenated z_mean and z_sigma outputs, two dimensions each), extracts the mean and standard deviation from the tensor, and generates a random point sample from the distribution. The new random sample is generated as z = μ + σ · ε, where epsilon (ε) is a point drawn from a standard normal distribution.
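
To make these steps concrete, here is a minimal sketch in Python with the Keras API. The recipe's own code may differ, and names such as original_dim, intermediate_dim, and sampling are illustrative assumptions, not the recipe's exact identifiers:

```python
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import backend as K

# Step 1: network parameters
original_dim = 784      # flattened 28x28 Fashion-MNIST image
intermediate_dim = 256  # hidden layer size
latent_dim = 2          # two-dimensional latent space

# Step 2: input layer and first hidden layer with ReLU activation
x = keras.Input(shape=(original_dim,))
h = layers.Dense(intermediate_dim, activation="relu")(x)

# Step 3: mean and standard deviation of the latent distribution,
# each connected individually to the hidden layer
z_mean = layers.Dense(latent_dim)(h)
z_sigma = layers.Dense(latent_dim)(h)

# Step 4: draw z = mu + sigma * epsilon from the concatenated
# (mean, sigma) tensor, where epsilon ~ N(0, I)
def sampling(args):
    mean, sigma = args[:, :latent_dim], args[:, latent_dim:]
    epsilon = K.random_normal(shape=(K.shape(mean)[0], latent_dim))
    return mean + sigma * epsilon
```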

In the next step, we created a layer that concatenates the output tensors of the z_mean and z_sigma layers, and then we stacked a lambda layer on top. A lambda layer in Keras is a wrapper that wraps an arbitrary expression as a layer; in our example, it wraps the sampling function we defined in the previous step. The output of this layer is the input to the decoder section of the VAE. In step 6, we built the decoder part of the VAE. We instantiated two layers, x_1 and x_2, with 256 and 784 units, respectively, and combined them to create the output layer. In step 7, we built the VAE model.
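
Continuing the sketch above, steps 5 through 7 might look as follows. This is again a hedged illustration rather than the recipe's exact code; in particular, the ReLU and sigmoid activations on the decoder layers are assumptions:

```python
# Step 5: concatenate z_mean and z_sigma, then stack a Lambda layer
# that wraps the sampling function; z is the input to the decoder
z = layers.Lambda(sampling)(layers.concatenate([z_mean, z_sigma]))

# Step 6: decoder layers x_1 (256 units) and x_2 (784 units),
# combined to produce the output layer
x_1 = layers.Dense(256, activation="relu")
x_2 = layers.Dense(784, activation="sigmoid")
output = x_2(x_1(z))

# Step 7: the end-to-end VAE model
vae = keras.Model(x, output)
```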

In steps 8 and 9, we built an encoder and a decoder model, respectively. In step 10, we defined the loss function of the VAE model. It is the sum of the reconstruction loss and the Kullback-Leibler divergence between the assumed true probability distribution of the latent variable and the conditional probability distribution of the latent variable given the input. In step 11, we compiled the VAE model and trained it for ten epochs to minimize the VAE loss using the rmsprop optimizer. In the last step, we generated a sample of new synthetic images.
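
A sketch of these final steps, continuing the code above. It assumes a Keras version that accepts symbolic graph tensors inside a custom loss (as in the classic Keras VAE example; newer versions usually route this through model.add_loss instead), and x_train stands in for the flattened, [0, 1]-scaled training images:

```python
import numpy as np

# Steps 8 and 9: standalone encoder and decoder models
encoder = keras.Model(x, z_mean)
decoder_input = keras.Input(shape=(latent_dim,))
decoder = keras.Model(decoder_input, x_2(x_1(decoder_input)))

# Step 10: VAE loss = reconstruction loss + KL divergence between
# N(z_mean, z_sigma^2) and the standard normal prior:
# KL = -0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2)
def vae_loss(y_true, y_pred):
    reconstruction = original_dim * keras.losses.binary_crossentropy(y_true, y_pred)
    kl = -0.5 * K.sum(1 + K.log(K.square(z_sigma) + 1e-8)
                      - K.square(z_mean) - K.square(z_sigma), axis=-1)
    return reconstruction + kl

# Step 11: compile with rmsprop and train for ten epochs
vae.compile(optimizer="rmsprop", loss=vae_loss)
# vae.fit(x_train, x_train, epochs=10, batch_size=128)

# Step 12: decode random standard-normal latent points into synthetic images
samples = decoder.predict(np.random.normal(size=(10, latent_dim)))
```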
