Inception v2 and v3

Inception v2 and v3 were introduced in the paper Rethinking the Inception Architecture for Computer Vision by Christian Szegedy et al., as mentioned in the Further reading section. The authors suggest the use of factorized convolutions; that is, a convolutional layer with a larger filter size can be broken down into a stack of convolutional layers with smaller filter sizes. So, in the inception block, a convolutional layer with a 5 x 5 filter can be broken down into two convolutional layers with 3 x 3 filters, as shown in the following diagram. Factorizing convolutions in this way covers the same receptive field with fewer parameters and less computation, which improves speed:
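The parameter saving from this factorization can be checked with a quick back-of-the-envelope calculation (the channel count of 64 below is an illustrative assumption, not a value from the text):

```python
# Sketch: compare the parameter count of one 5 x 5 convolution with
# that of the factorized stack of two 3 x 3 convolutions.

def conv_params(kernel_h, kernel_w, in_ch, out_ch):
    """Number of weights in one conv layer, ignoring biases."""
    return kernel_h * kernel_w * in_ch * out_ch

channels = 64  # hypothetical channel count

single_5x5 = conv_params(5, 5, channels, channels)
stacked_3x3 = conv_params(3, 3, channels, channels) * 2

print(single_5x5)   # 102400
print(stacked_3x3)  # 73728
```

The stacked version uses 18C² weights instead of 25C², about 28% fewer, while two stacked 3 x 3 layers still see the same 5 x 5 region of the input.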

The authors also suggest breaking down a convolutional layer of filter size n x n into a stack of convolutional layers with filter sizes 1 x n and n x 1. For example, the 3 x 3 convolution in the previous figure can be broken down into a 1 x 3 convolution followed by a 3 x 1 convolution, as shown in the following diagram:
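The same kind of parameter-count comparison applies to this asymmetric factorization (again, the channel count is an illustrative assumption):

```python
# Sketch: one 3 x 3 convolution versus the asymmetric stack of a
# 1 x 3 convolution followed by a 3 x 1 convolution.

def conv_params(kernel_h, kernel_w, in_ch, out_ch):
    """Number of weights in one conv layer, ignoring biases."""
    return kernel_h * kernel_w * in_ch * out_ch

channels = 64  # hypothetical channel count

square_3x3 = conv_params(3, 3, channels, channels)
asymmetric = (conv_params(1, 3, channels, channels)
              + conv_params(3, 1, channels, channels))

print(square_3x3)  # 36864
print(asymmetric)  # 24576
```

The asymmetric stack needs 6C² weights instead of 9C², a saving of one third for the same 3 x 3 receptive field.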

As you will notice in the previous diagram, this factorization keeps making the network deeper, and stacking too many layers in this way can create a representational bottleneck and cause us to lose information. So, instead of making the module deeper, the authors make it wider, as shown here:

In Inception v3, the 7 x 7 convolutions are factorized and the network is trained with the RMSProp optimizer. Batch normalization is also applied in the auxiliary classifiers.
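For the 7 x 7 case the saving from asymmetric factorization is even larger, since the cost of a square filter grows with n² while the 1 x n plus n x 1 stack grows only with 2n (the channel count below is again an illustrative assumption):

```python
# Sketch: one 7 x 7 convolution versus the factorized stack of a
# 1 x 7 convolution followed by a 7 x 1 convolution.

def conv_params(kernel_h, kernel_w, in_ch, out_ch):
    """Number of weights in one conv layer, ignoring biases."""
    return kernel_h * kernel_w * in_ch * out_ch

channels = 64  # hypothetical channel count

full_7x7 = conv_params(7, 7, channels, channels)
factorized_7x7 = (conv_params(1, 7, channels, channels)
                  + conv_params(7, 1, channels, channels))

print(full_7x7)        # 200704
print(factorized_7x7)  # 57344
```

Here the factorized version uses 14C² weights instead of 49C², roughly 71% fewer parameters.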
