Building an image classifier

CIFAR-10 is a balanced dataset of 60,000 32x32 color images spread evenly across 10 classes, split into 50,000 training and 10,000 test images. The following snippet loads the CIFAR-10 dataset and sets the train and test variables:

# load the CIFAR-10 dataset into train and test splits
from keras.datasets import cifar10

(X_train, y_train), (X_test, y_test) = cifar10.load_data()

The images in the dataset are of low resolution, and sometimes even difficult for a human to label. The code shared in this section is available in the IPython Notebook CIFAR-10_CNN_Classifier.ipynb.
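Before training, the raw arrays are typically scaled to the [0, 1] range and the integer labels one-hot encoded. The following is a minimal sketch of that step, assuming the notebook performs something equivalent (it may use keras.utils.to_categorical instead of the NumPy trick shown here):

```python
import numpy as np

NUM_CLASSES = 10  # CIFAR-10 has ten classes


def preprocess(images, labels, num_classes=NUM_CLASSES):
    """Scale pixel values to [0, 1] and one-hot encode integer labels."""
    x = images.astype("float32") / 255.0
    # labels arrive with shape (n, 1); flatten before indexing the identity matrix
    y = np.eye(num_classes)[labels.reshape(-1)]
    return x, y
```

Applied to the arrays loaded above, `preprocess(X_train, y_train)` yields inputs in [0, 1] and targets of shape (50000, 10), which matches the softmax output layer used later.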

We have discussed CNNs and how they are optimized for visual datasets. CNNs rely on weight sharing to reduce the number of parameters; developing state-of-the-art models from scratch demands not just strong deep learning skills but substantial infrastructure as well. Keeping this in mind, it is still instructive to build a small CNN from scratch and test our skills.
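The weight-sharing point can be made concrete with some back-of-the-envelope arithmetic (illustrative only, not from the notebook): the first convolutional layer of the model below applies 16 filters of size 3x3 to a 32x32x3 image, reusing the same small kernel at every spatial position.

```python
# Parameter count of the first Conv2D layer vs. a dense equivalent
in_h, in_w, in_ch = 32, 32, 3   # CIFAR-10 input volume
k, filters = 3, 16              # 3x3 kernels, 16 filters

# Convolution: each filter shares its k*k*in_ch weights across all positions
conv_params = (k * k * in_ch + 1) * filters   # +1 bias per filter

# A fully connected layer producing the same 30x30x16 output volume would
# need one weight per input-output pair, plus one bias per output unit
out_h, out_w = in_h - k + 1, in_w - k + 1     # 'valid' padding shrinks by k-1
dense_units = out_h * out_w * filters
dense_params = (in_h * in_w * in_ch) * dense_units + dense_units

print(conv_params)   # 448
print(dense_params)  # 44,251,200
```

The convolution needs 448 parameters where a dense layer with the same output shape would need over 44 million, which is why CNNs remain trainable on modest hardware.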

The following snippet showcases a very simple CNN with just five main layers (two convolutional layers, one max-pooling layer, one dense layer, and a final softmax layer), built using Keras:

from keras.models import Sequential
from keras.layers import (Conv2D, MaxPooling2D, BatchNormalization,
                          Dropout, Flatten, Dense)
from keras import regularizers

INPUT_SHAPE = (32, 32, 3)   # CIFAR-10 images are 32x32 RGB
NUM_CLASSES = 10
WEIGHT_DECAY = 1e-4         # L2 regularization strength (assumed value)

model = Sequential()
model.add(Conv2D(16, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=INPUT_SHAPE))

model.add(Conv2D(32, (3, 3), padding='same',
                 kernel_regularizer=regularizers.l2(WEIGHT_DECAY),
                 activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.2))

model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(NUM_CLASSES, activation='softmax'))

To improve overall generalization performance, the model also contains a BatchNormalization layer along with Dropout layers. These layers help keep overfitting in check and prevent the network from simply memorizing the training data.
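Compiling and fitting the model follows the usual Keras workflow. The sketch below uses a tiny stand-in model and random data so it runs on its own; in the notebook you would compile and fit the CNN defined above on the CIFAR-10 arrays instead, and the optimizer, batch size, and validation split shown here are illustrative assumptions, not values from the original code:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Flatten, Dense

NUM_CLASSES = 10

# Stand-in model so this snippet is self-contained; substitute the CNN above
model = Sequential([
    Input(shape=(32, 32, 3)),
    Flatten(),
    Dense(NUM_CLASSES, activation='softmax'),
])

model.compile(loss='categorical_crossentropy',
              optimizer='adam',          # optimizer choice is an assumption
              metrics=['accuracy'])

# Tiny random batch standing in for the real CIFAR-10 arrays
x = np.random.rand(8, 32, 32, 3).astype('float32')
y = np.eye(NUM_CLASSES)[np.random.randint(0, NUM_CLASSES, 8)]

history = model.fit(x, y, batch_size=4, epochs=1, verbose=0)
```

With the real data, the same `compile`/`fit` calls (with `epochs=25` and a held-out validation set) reproduce the training run described next.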

We ran the model for just 25 epochs, reaching approximately 65% accuracy on the validation set. The following screenshot shows output predictions from the trained model:

Predictions from CNN based CIFAR-10 classifier

The results are decent enough, though by no means close to state of the art. Readers should keep in mind that this CNN was built only to showcase the immense potential at hand, and we encourage you to experiment along the same lines.
