Filters

One of the other unique features of a CNN is that many neurons can share the same vector of weights and biases, or, more formally, the same filter. Why is that important? Each neuron computes an output value by applying a function to the input values of the previous layer, and incremental adjustments to these weights and biases are what help the network learn. If the same filter can be reused across the input, far fewer parameters need to be stored and updated, so the memory footprint is greatly reduced. This becomes especially important as the image or receptive field gets larger.
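To make the savings concrete, here is a rough back-of-the-envelope sketch in Python; the layer sizes (a 224 x 224 RGB input, 64 output units, a 5 x 5 filter) are hypothetical, chosen only to illustrate the scale of the difference:

```python
# Hypothetical sizes: compare the parameter count of a fully connected
# layer against 64 shared 5x5 filters applied to the same input.

height, width, channels = 224, 224, 3      # example RGB image
inputs = height * width * channels          # 150,528 input values

# Fully connected: every one of 64 output neurons gets its own weight
# for every input value, plus a bias.
fc_params = inputs * 64 + 64                # ~9.6 million parameters

# Convolutional: 64 feature maps, each defined by one shared 5x5x3
# filter plus a bias, reused at every spatial position.
conv_params = (5 * 5 * channels + 1) * 64   # 4,864 parameters

print(f"fully connected: {fc_params:,} parameters")
print(f"convolutional:   {conv_params:,} parameters")
```

The fully connected layer needs millions of weights, while the convolutional layer gets by with a few thousand, precisely because the same filter is reused at every position.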

CNNs have the following distinguishing features:

  • Three-dimensional volumes of neurons: The layers of a CNN have neurons arranged in three dimensions: width, height, and depth. The neurons inside each layer are connected only to a small region of the preceding layer, called their receptive field. Different types of connected layers are stacked to form the actual convolutional architecture, as shown in the following diagram:
Figure: Convolving
  • Shared weights: In a convolutional neural network, each filter is replicated across the entire visual field, as the preceding image shows. The replicated units share the same weight vector and bias parameters, and together form what is commonly referred to as a feature map. This means that all the neurons in a given convolutional layer respond to the same feature within their specific receptive field. Replicating units in this way allows features to be detected regardless of their position in the visual field. The following diagram is a simple example of what this means, and a short code sketch after the diagram shows the same shared filter being slid across an input:

Figure: the same phrase, "This is a sample", appearing at several different positions in the visual field
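The following minimal Python sketch slides a single 3 x 3 filter (one shared weight vector plus one bias) across a toy single-channel input to produce one feature map; the sizes and random values are purely illustrative assumptions, not taken from the text. Stacking several such feature maps is what gives a convolutional layer its depth dimension in the three-dimensional volume described above.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8))          # toy single-channel input
filt = rng.random((3, 3))           # the shared weight vector
bias = 0.1                          # the shared bias

out_h, out_w = image.shape[0] - 2, image.shape[1] - 2
feature_map = np.zeros((out_h, out_w))

for i in range(out_h):
    for j in range(out_w):
        # Every receptive field uses the *same* filt and bias,
        # so the feature is detected wherever it appears.
        patch = image[i:i + 3, j:j + 3]
        feature_map[i, j] = np.sum(patch * filt) + bias

print(feature_map.shape)            # (6, 6) feature map
```

Because the same filter and bias are applied at every position, a pattern produces the same response wherever it occurs in the input, which is exactly the position-independent detection the diagram above depicts.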
