Now we start training. Every 100 iterations, we print the losses and plot the images produced by the generator:
onehot = np.eye(10)
for epoch in range(num_epochs):
    for i in range(0, data.train.num_examples // batch_size):
Sample a batch of real images:
        x_batch, _ = data.train.next_batch(batch_size)
        x_batch = np.reshape(x_batch, (batch_size, 28, 28, 1))
Sample random digit labels c and convert them to one-hot vectors:
        c_ = np.random.randint(low=0, high=10, size=(batch_size,))
        c_one_hot = onehot[c_]
Sample noise z:
        z_batch = np.random.uniform(low=-1.0, high=1.0, size=(batch_size, 64))
Update the discriminator and then the generator by minimizing their respective losses:
        feed_dict = {x: x_batch, c: c_one_hot, z: z_batch, is_train: True}
        _ = session.run(D_optimizer, feed_dict=feed_dict)
        _ = session.run(G_optimizer, feed_dict=feed_dict)
Every 100 iterations, print the losses and plot the generated images:
        if i % 100 == 0:
            discriminator_loss = D_loss.eval(feed_dict)
            generator_loss = G_loss.eval(feed_dict)
            _fake_x = fake_x.eval(feed_dict)
            print("Epoch: {}, iteration: {}, Discriminator Loss: {}, Generator Loss: {}"
                  .format(epoch, i, discriminator_loss, generator_loss))
            plt.imshow(plot(c_one_hot, _fake_x))
            plt.show()
We can see how the generator evolves over the iterations, generating better and better digits for the sampled labels:
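The plot helper called inside the loop is not defined in this snippet. Assuming it tiles the batch of 28 x 28 generated images into a single square grid for plt.imshow, a minimal sketch of such a helper might look like this (the name tile_images and the grid_size parameter are illustrative, not from the original code):

```python
import numpy as np

def tile_images(images, grid_size=4):
    """Tile a batch of (N, 28, 28, 1) images into one 2D grid image.

    Illustrative sketch: pads with blank images if the batch has
    fewer than grid_size * grid_size samples.
    """
    images = np.asarray(images).reshape(-1, 28, 28)
    n = grid_size * grid_size
    if images.shape[0] < n:
        pad = np.zeros((n - images.shape[0], 28, 28))
        images = np.concatenate([images, pad], axis=0)
    # Stack each row of images side by side, then stack the rows vertically
    rows = [np.hstack(images[r * grid_size:(r + 1) * grid_size])
            for r in range(grid_size)]
    return np.vstack(rows)  # shape: (grid_size * 28, grid_size * 28)
```

For example, `plt.imshow(tile_images(_fake_x), cmap='gray')` would display the first 16 generated digits in a 4 x 4 grid.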