Visualizing extracted features

Now that we have trained our CNN model, we can visualize the features it has extracted to recognize an image. As we learned, each convolutional layer extracts important features from the image. Here, we will look at the features the first convolutional layer has extracted to recognize the handwritten digits.

First, let's select one image from the training set, say, digit 1:

plt.imshow(mnist.train.images[7].reshape([28, 28]))
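MNIST stores each image as a flat vector of 784 pixel values, so the reshape to [28, 28] recovers the grid that imshow expects. A minimal sketch of just that step, using a synthetic NumPy vector in place of the real dataset:

```python
import numpy as np

# Hypothetical flat MNIST-style image: 784 pixel intensities in [0, 1]
flat_image = np.linspace(0.0, 1.0, 784)

# Reshape the 784-vector into the 28x28 grid that plt.imshow expects
grid = flat_image.reshape(28, 28)

print(grid.shape)  # (28, 28)
```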

The input image is shown here:

Feed this image to the first convolutional layer, that is, conv1, and get the feature maps:

image = mnist.train.images[7].reshape([-1, 784])
feature_map = sess.run([conv1], feed_dict={X_: image})[0]
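To see why each of conv1's feature maps can later be reshaped to [28, 28], note that a convolution with 'SAME' padding keeps the output the same size as the input. A minimal NumPy sketch of what one filter does, using a hypothetical hand-crafted vertical-edge kernel in place of a learned one (conv1 applies 32 learned filters at once):

```python
import numpy as np

# Hypothetical 28x28 input containing a sharp vertical edge
image = np.zeros((28, 28))
image[:, 14:] = 1.0

# A Sobel-like 3x3 vertical-edge kernel, standing in for one learned filter
kernel = np.array([[-1., 0., 1.],
                   [-2., 0., 2.],
                   [-1., 0., 1.]])

# Zero-pad by 1 on each side ('SAME' padding), so the output
# feature map has the same 28x28 shape as the input
padded = np.pad(image, 1)
fmap = np.zeros_like(image)
for r in range(28):
    for c in range(28):
        fmap[r, c] = np.sum(padded[r:r+3, c:c+3] * kernel)

print(fmap.shape)  # (28, 28) -- same size as the input
```

The map responds strongly near column 14, where the edge sits, and is zero in the flat regions, which is exactly the edge-extraction behaviour the plot below illustrates.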

Plot the feature map:

for i in range(32):
    feature = feature_map[:, :, :, i].reshape([28, 28])
    plt.subplot(4, 8, i + 1)
    plt.imshow(feature)
    plt.axis('off')
plt.show()
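The subplot loop lays the 32 maps out as 4 rows of 8. The same mosaic can be built as a single array with NumPy, which is handy if you want to save or inspect all maps at once; here a random tensor stands in for conv1's output on one image:

```python
import numpy as np

# Hypothetical conv1 output for one image: [batch, height, width, channels]
feature_map = np.random.rand(1, 28, 28, 32)

# Tile the 32 maps into the same 4-row x 8-column grid the subplot loop draws
rows = [np.hstack([feature_map[0, :, :, r * 8 + c] for c in range(8)])
        for r in range(4)]
mosaic = np.vstack(rows)

print(mosaic.shape)  # (112, 224): 4*28 rows by 8*28 columns
```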

As you can see in the following plot, the first convolutional layer has learned to extract edges from the given image:

Thus, this is how a CNN uses multiple convolutional layers to extract important features from the image and feeds these extracted features into a fully connected layer to classify it. Now that we have learned how CNNs work, in the next section, we will look at several interesting CNN architectures.
