Figure 2.5: Convolutional neural network. (Diagram: input image → convolution + ReLU → pooling → flatten → fully connected layers → output.)
filter produces an activation map, representing the responses of the filter at each region of the
input. As the network is trained, the filters learn to recognize useful features such as edges and
corners, or even more specific features such as faces or wheels at the later layers of the network.
Pooling layers reduce the spatial size of the previous layer's outputs to reduce the number of parameters in the network and the computation required. The most common pooling strategy is
max pooling, which applies a max filter to regions of the initial representation to provide an
abstracted form of the representation. The ReLU layers apply a nonlinear activation function,
similar to activation functions in other neural networks. Finally, the fully connected layer is simply a fully connected feedforward layer at the end of the network used to perform tasks such as
classification or regression [24, 53].
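The pipeline described above can be sketched with a minimal numpy forward pass. The filter values, input size, and number of output classes here are illustrative assumptions, not taken from the text, and a real network would learn the filter weights during training.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a filter over the image, producing an activation map
    (cross-correlation, as implemented in most CNN libraries)."""
    h, w = kernel.shape
    H, W = image.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

def relu(x):
    # Nonlinear activation: negative responses are zeroed out.
    return np.maximum(x, 0)

def max_pool(x, size=2):
    # Max filter over non-overlapping regions reduces spatial size.
    H2, W2 = x.shape[0] // size, x.shape[1] // size
    return x[:H2 * size, :W2 * size].reshape(H2, size, W2, size).max(axis=(1, 3))

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))                # toy 8x8 "input image"
edge_filter = np.array([[1., -1.], [1., -1.]])     # hypothetical vertical-edge filter

activation = relu(conv2d(image, edge_filter))      # 7x7 activation map
pooled = max_pool(activation)                      # 3x3 after 2x2 max pooling
features = pooled.flatten()                        # flatten for the dense layer
weights = rng.standard_normal((features.size, 10)) # fully connected layer, 10 classes
logits = features @ weights                        # classification scores
```

The shapes trace the figure: an 8×8 input convolved with a 2×2 filter gives a 7×7 activation map, 2×2 max pooling reduces it to 3×3, and flattening yields a 9-dimensional vector fed to the fully connected layer.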
2.2 SUPERVISED LEARNING
Having chosen a neural network architecture, the second choice when tackling a deep learning
problem is the choice of learning strategy. The learning strategy dictates how the neural network's
weights are updated in response to training examples, allowing the neural network to learn trends
in the data and identify rules that hopefully generalize to new data. There are multiple choices
for learning strategies. However, most deep learning algorithms discussed in later sections can be
broadly classified as supervised or reinforcement learning, therefore these two learning strategies
will be described here.
Supervised learning utilizes labeled data to learn rules about the problem domain. Labeled
data here means that the algorithm is given examples where the input as well as the correct output
for each input are known. For example, in an image classification problem, the labeled data would
be images labeled with the correct classification. Therefore, the neural network can make its own
prediction and compare it to the correct answer. The difference between the correct output and
the network output is referred to as the error, which is used to update the network weights. After
training, the network will learn to choose the correct output for input data without the labels.