Converting photos to paintings using a CycleGAN

Now we will learn how to implement a CycleGAN in TensorFlow and use it to convert photos to paintings.

The dataset used in this section can be downloaded from https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/monet2photo.zip. Once you have downloaded the dataset, unzip the archive; it contains four folders, trainA, trainB, testA, and testB, with training and testing images.

The trainA folder consists of paintings (Monet) and the trainB folder consists of photos. Since we are mapping photos (x) to paintings (y), the trainB folder, which consists of photos, will be our source image, x, and the trainA folder, which consists of paintings, will be our target image, y.

The complete code for the CycleGAN with a step-by-step explanation is available as a Jupyter Notebook at https://github.com/PacktPublishing/Hands-On-Deep-Learning-Algorithms-with-Python.

Instead of looking at the whole code, we will see only how the CycleGAN is implemented in TensorFlow to map the source image to the target domain.

Define the CycleGAN class:

class CycleGAN:

    def __init__(self):

Define the placeholders for the input, X, and the output, Y:

        self.X = tf.placeholder("float", shape=[batchsize, image_height, image_width, 3])
        self.Y = tf.placeholder("float", shape=[batchsize, image_height, image_width, 3])

Define the generator, G, that maps x to y:

        G = generator("G")

Define the generator, F, that maps y to x:

        F = generator("F")

Define the discriminator, Dx, that discriminates between the real source image and the fake source image:

        self.Dx = discriminator("Dx")

Define the discriminator, Dy, that discriminates between the real target image and the fake target image:

        self.Dy = discriminator("Dy")

Generate the fake source image:

        self.fake_X = F(self.Y)

Generate the fake target image:

        self.fake_Y = G(self.X)        

Get the logits:

        #real source image logits
        self.Dx_logits_real = self.Dx(self.X)

        #fake source image logits
        self.Dx_logits_fake = self.Dx(self.fake_X, True)

        #real target image logits
        self.Dy_logits_real = self.Dy(self.Y)

        #fake target image logits
        self.Dy_logits_fake = self.Dy(self.fake_Y, True)

We know the cycle consistency loss is given as follows:

cycle loss = E[|F(G(x)) - x|] + E[|G(F(y)) - y|]

We can implement the cycle consistency loss as follows:

        self.cycle_loss = tf.reduce_mean(tf.abs(F(self.fake_Y, True) - self.X)) + \
                          tf.reduce_mean(tf.abs(G(self.fake_X, True) - self.Y))
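To make the arithmetic concrete, here is a small framework-free NumPy sketch of the cycle consistency loss: the mean absolute error between the original images and their reconstructions, summed over both directions. The function name and toy arrays are illustrative, not part of the chapter's code:

```python
import numpy as np

def cycle_consistency_loss(real_x, recon_x, real_y, recon_y):
    # mean absolute error between originals and their reconstructions,
    # summed over the x -> y -> x and y -> x -> y directions
    return (np.mean(np.abs(recon_x - real_x)) +
            np.mean(np.abs(recon_y - real_y)))

# toy images: batch of 2, 4 x 4 pixels, 3 channels
rng = np.random.default_rng(0)
x = rng.random((2, 4, 4, 3))
y = rng.random((2, 4, 4, 3))

# a perfect reconstruction gives zero loss
print(cycle_consistency_loss(x, x, y, y))  # 0.0
```

Note that the loss is zero only when both reconstructions exactly match the originals; any deviation in either direction increases it.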

Define the loss for both of our discriminators, Dx and Dy.

We can rewrite the loss function of the discriminators with the Wasserstein distance as:

Dx loss = -E[Dx(x)] + E[Dx(F(y))]
Dy loss = -E[Dy(y)] + E[Dy(G(x))]

Thus, the loss of both discriminators is implemented as follows:

        self.Dx_loss = -tf.reduce_mean(self.Dx_logits_real) + tf.reduce_mean(self.Dx_logits_fake)

        self.Dy_loss = -tf.reduce_mean(self.Dy_logits_real) + tf.reduce_mean(self.Dy_logits_fake)
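The sign convention here is easy to check by hand. A minimal NumPy sketch of the Wasserstein discriminator (critic) loss, with an illustrative helper name and toy logits not taken from the chapter's code:

```python
import numpy as np

def wasserstein_d_loss(real_logits, fake_logits):
    # minimizing this pushes real logits up and fake logits down
    return -np.mean(real_logits) + np.mean(fake_logits)

# if the critic already scores real images higher than fakes,
# the loss is negative
real = np.array([2.0, 3.0])   # critic scores for real images
fake = np.array([-1.0, 0.0])  # critic scores for generated images
print(wasserstein_d_loss(real, fake))  # -3.0
```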

Define the loss for both of the generators, G and F. We can rewrite the loss function of the generators with the Wasserstein distance as:

G loss = -E[Dy(G(x))]
F loss = -E[Dx(F(y))]

Thus, the loss of each generator, combined with the cycle consistency loss, cycle_loss, weighted by a factor of 10, is implemented as:

        self.G_loss = -tf.reduce_mean(self.Dy_logits_fake) + 10. * self.cycle_loss

        self.F_loss = -tf.reduce_mean(self.Dx_logits_fake) + 10. * self.cycle_loss
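The generator objective is the adversarial term plus the weighted cycle term. A NumPy sketch with the same structure, using an illustrative function name and toy values (the weight of 10 matches the code above):

```python
import numpy as np

def generator_loss(fake_logits, cycle_loss, lam=10.0):
    # adversarial term (fool the critic) plus weighted cycle consistency term
    return -np.mean(fake_logits) + lam * cycle_loss

fake_logits = np.array([1.0, 3.0])  # critic scores for generated images
print(generator_loss(fake_logits, cycle_loss=0.5))  # -2.0 + 5.0 = 3.0
```

The large weight on the cycle term explains the loss magnitudes in the training log below: early on, the cycle loss dominates both generator losses.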

Optimize the discriminators and generators using the Adam optimizer:

        #optimize the discriminators
        self.Dx_optimizer = tf.train.AdamOptimizer(2e-4, beta1=0., beta2=0.9).minimize(self.Dx_loss, var_list=[self.Dx.var])

        self.Dy_optimizer = tf.train.AdamOptimizer(2e-4, beta1=0., beta2=0.9).minimize(self.Dy_loss, var_list=[self.Dy.var])

        #optimize the generators
        self.G_optimizer = tf.train.AdamOptimizer(2e-4, beta1=0., beta2=0.9).minimize(self.G_loss, var_list=[G.var])

        self.F_optimizer = tf.train.AdamOptimizer(2e-4, beta1=0., beta2=0.9).minimize(self.F_loss, var_list=[F.var])

Once we start training the model, we can see how the losses of the discriminators and generators decrease over the iterations:

Epoch: 0, iteration: 0, Dx Loss: -0.6229429245, Dy Loss: -2.42867970467, G Loss: 1385.33557129, F Loss: 1383.81530762, Cycle Loss: 138.448059082


Epoch: 0, iteration: 50, Dx Loss: -6.46077537537, Dy Loss: -7.29514217377, G Loss: 629.768066406, F Loss: 615.080932617, Cycle Loss: 62.6807098389


Epoch: 1, iteration: 100, Dx Loss: -16.5891685486, Dy Loss: -16.0576553345, G Loss: 645.53137207, F Loss: 649.854919434, Cycle Loss: 63.9096908569