Defining placeholders

As we have learned, we first need to define the placeholders for input and output. Values for the placeholders will be fed in at runtime through feed_dict:

with tf.name_scope('input'):
    X = tf.placeholder("float", [None, num_input])

with tf.name_scope('output'):
    Y = tf.placeholder("float", [None, num_output])
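The placeholder shapes refer to layer-size variables that are assumed to be defined earlier in the program. As a minimal sketch, MNIST-like values might look like the following (the exact numbers are illustrative assumptions, not values from this section):

# Illustrative layer sizes -- assumed values, e.g. for 28 x 28 MNIST
# images (784 input features) classified into 10 classes
num_input = 784      # number of input features
num_hidden1 = 512    # neurons in hidden layer 1
num_hidden2 = 256    # neurons in hidden layer 2
num_hidden3 = 128    # neurons in hidden layer 3
num_output = 10      # number of output classes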

Since we have a four-layer network, we need four weight matrices and four bias vectors. We initialize the weights by drawing values from a truncated normal distribution with a standard deviation of 0.1. Remember, the dimensions of a weight matrix should be the number of neurons in the previous layer x the number of neurons in the current layer. For instance, the dimensions of the weight matrix w3 should be the number of neurons in hidden layer 2 x the number of neurons in hidden layer 3.

We often define all of the weights in a dictionary, as follows:

with tf.name_scope('weights'):
    weights = {
        'w1': tf.Variable(tf.truncated_normal([num_input, num_hidden1], stddev=0.1), name='weight_1'),
        'w2': tf.Variable(tf.truncated_normal([num_hidden1, num_hidden2], stddev=0.1), name='weight_2'),
        'w3': tf.Variable(tf.truncated_normal([num_hidden2, num_hidden3], stddev=0.1), name='weight_3'),
        'out': tf.Variable(tf.truncated_normal([num_hidden3, num_output], stddev=0.1), name='weight_4'),
    }
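As a quick sanity check of the previous-layer x current-layer rule, you can print the static shape of each weight variable (a small sketch; the printed numbers depend on the layer sizes you chose):

# Each weight matrix should have the shape
# (neurons in previous layer, neurons in current layer)
for name in ['w1', 'w2', 'w3', 'out']:
    print(name, weights[name].shape)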

The shape of a bias should be the number of neurons in the current layer. For instance, the dimension of the b2 bias is the number of neurons in hidden layer 2. We initialize the bias to a constant value of 0.1 in all of the layers:

with tf.name_scope('biases'):
    biases = {
        'b1': tf.Variable(tf.constant(0.1, shape=[num_hidden1]), name='bias_1'),
        'b2': tf.Variable(tf.constant(0.1, shape=[num_hidden2]), name='bias_2'),
        'b3': tf.Variable(tf.constant(0.1, shape=[num_hidden3]), name='bias_3'),
        'out': tf.Variable(tf.constant(0.1, shape=[num_output]), name='bias_4')
    }
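To see how feed_dict supplies placeholder values at runtime, here is a minimal sketch that initializes the variables and evaluates the pre-activation of the first hidden layer on a random dummy batch. The matmul here is purely illustrative; the full forward pass of the network is defined later:

import numpy as np

# Pre-activation of hidden layer 1, just to have something to evaluate
z1 = tf.matmul(X, weights['w1']) + biases['b1']

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    dummy_batch = np.random.rand(32, num_input).astype(np.float32)
    # Values for the X placeholder are fed in at runtime through feed_dict
    print(sess.run(z1, feed_dict={X: dummy_batch}).shape)  # (32, num_hidden1)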