Defining helper functions

Now we define the functions for initializing weights and bias, and for performing the convolution and pooling operations.

Initialize the weights by drawing from a truncated normal distribution. Remember, the weights are actually the filter matrix that we use while performing the convolution operation:

def initialize_weights(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1))
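To make the truncated normal concrete, here is a minimal NumPy sketch of the same idea: draw from a normal distribution and redraw any value that lands more than two standard deviations from the mean, which is what tf.truncated_normal does. The function name truncated_normal and the filter shape (5, 5, 1, 32) are illustrative, not from the book:

```python
import numpy as np

def truncated_normal(shape, stddev=0.1):
    """Rejection-sample a normal distribution, redrawing anything more
    than two standard deviations from the mean."""
    rng = np.random.default_rng(0)
    out = rng.normal(0.0, stddev, size=shape)
    mask = np.abs(out) > 2 * stddev
    while mask.any():
        out[mask] = rng.normal(0.0, stddev, size=int(mask.sum()))
        mask = np.abs(out) > 2 * stddev
    return out

# e.g. 32 filters of size 5x5 over a single input channel
weights = truncated_normal((5, 5, 1, 32))
print(weights.shape)
```

Because of the rejection step, no initial weight can exceed 0.2 in magnitude, which keeps the starting filters small but not identical.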

Initialize the bias with a constant value of, say, 0.1:

def initialize_bias(shape):
    return tf.Variable(tf.constant(0.1, shape=shape))

We define a function called convolution using tf.nn.conv2d(), which actually performs the convolution operation; that is, the element-wise multiplication of the input matrix (x) by the filter (W) with a stride of 1 and the same padding. We set strides = [1,1,1,1]. The first and last values of strides are set to 1, which implies that we don't want to move between training samples or between channels. The second and third values of strides are also set to 1, which implies that we move the filter by one pixel in both the height and the width direction:

def convolution(x, W):
    return tf.nn.conv2d(x, W, strides=[1,1,1,1], padding='SAME')
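To see why a stride of 1 with SAME padding preserves the input size, here is a minimal NumPy sketch of the element-wise multiply-and-sum for a single channel: the input is zero-padded so that the filter can be centered on every pixel. The names convolution_same, image, and kernel are illustrative, not from the book:

```python
import numpy as np

def convolution_same(x, w):
    """Naive single-channel 2D convolution with stride 1 and SAME
    (zero) padding, mirroring the sliding-window computation of
    tf.nn.conv2d on one channel."""
    kh, kw = w.shape
    ph, pw = kh // 2, kw // 2            # padding needed to keep the size
    padded = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            # element-wise multiply the window by the filter and sum
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * w)
    return out

image = np.arange(16.0).reshape(4, 4)
kernel = np.ones((3, 3)) / 9.0           # simple averaging filter
result = convolution_same(image, kernel)
print(result.shape)                      # SAME padding keeps the 4x4 shape
```

With SAME padding and stride 1, a 28 x 28 input stays 28 x 28 after the convolution; only the pooling layers shrink the feature maps.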

We define a function called max_pooling, using tf.nn.max_pool() to perform the pooling operation. We perform max pooling with a stride of 2 and the same padding; ksize specifies the shape of our pooling window:

def max_pooling(x):
    return tf.nn.max_pool(x, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME')
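A 2 x 2 pooling window moved with a stride of 2 halves each spatial dimension, so each pooling layer turns 28 x 28 into 14 x 14, then 14 x 14 into 7 x 7. The following NumPy sketch shows the same operation on a single square channel with an even side length; the name max_pooling_2x2 is illustrative, not from the book:

```python
import numpy as np

def max_pooling_2x2(x):
    """2x2 max pooling with stride 2 on one channel, mimicking the
    ksize=[1,2,2,1], strides=[1,2,2,1] setting above."""
    h, w = x.shape
    # group pixels into non-overlapping 2x2 windows and take each maximum
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

image = np.arange(16.0).reshape(4, 4)
pooled = max_pooling_2x2(image)
print(pooled.shape)   # (2, 2): each dimension is halved
```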

Define the placeholders for the input and output.

The placeholder for the input image is defined as follows:

X_ = tf.placeholder(tf.float32, [None, 784])

The placeholder for a reshaped input image is defined as follows:

X = tf.reshape(X_, [-1, 28, 28, 1])
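The reshape takes each flattened 784-pixel image back to its 28 x 28 grid: the -1 tells TensorFlow to infer the batch dimension, and the trailing 1 is the single grayscale channel that tf.nn.conv2d expects. The same shape arithmetic in NumPy, with an illustrative batch of two images:

```python
import numpy as np

# two flattened 784-pixel images, as fed to the placeholder X_
batch = np.zeros((2, 784), dtype=np.float32)

# -1 infers the batch size; the trailing 1 is the grayscale channel
reshaped = batch.reshape(-1, 28, 28, 1)
print(reshaped.shape)   # (2, 28, 28, 1)
```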

The placeholder for the output label is defined as follows:

y = tf.placeholder(tf.float32, [None, 10])