How it works...

In step 1, we defined our training and test data generators, which set the parameters for data augmentation. We then loaded the datasets into our environment, performing real-time data augmentation while resizing the images to 150 × 150.
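The generator setup described above can be sketched as follows in Python Keras (the recipe may use the R interface). The specific augmentation parameters (rotation_range, shift ranges, flips) and the 'data/train' directory are illustrative assumptions, not the recipe's exact values:

```python
# A minimal sketch of step 1, assuming the Keras ImageDataGenerator API.
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Training generator: rescale pixel values and apply random augmentations.
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=40,        # illustrative augmentation settings
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True,
)
# Test generator: typically only rescaling, no augmentation.
test_datagen = ImageDataGenerator(rescale=1.0 / 255)

# flow_from_directory resizes every image to target_size on the fly;
# 'data/train' is a hypothetical directory of per-class subfolders.
# train_generator = train_datagen.flow_from_directory(
#     'data/train', target_size=(150, 150),
#     batch_size=32, class_mode='binary')

# The same augmentation can be applied to an in-memory image:
img = np.random.rand(150, 150, 3)
augmented = train_datagen.random_transform(img)
print(augmented.shape)
```

Because the augmentation is applied per batch as images are read, the model sees a slightly different variant of each image every epoch without the dataset ever being duplicated on disk.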

In the next step, we instantiated a pre-trained base model, VGG16, with weights learned on ImageNet data. ImageNet is a large visual database; the subset used to pre-train VGG16 covers 1,000 different classes. Note that we set the value of include_top to FALSE; this excludes the default densely connected layers at the top of the VGG16 network, which correspond to the 1,000 ImageNet classes. Further, we defined a sequential Keras model containing the base model along with a few custom dense layers, turning it into a binary classifier. Next, we printed out a summary of our model, including the number of kernel weights and biases in each layer. Then we froze the layers of the base model so that their pre-trained weights would not be modified while training on our dataset.
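The steps above can be sketched in Python Keras (the recipe may use the R interface). Note two assumptions: weights=None is used here to avoid downloading the ImageNet weights, whereas the recipe itself would pass weights='imagenet'; and the head layers (a 256-unit dense layer and a sigmoid output) are illustrative choices for a binary classifier, not necessarily the recipe's:

```python
# A sketch of the base-model, head, and freezing steps.
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense

# include_top=False drops VGG16's 1,000-class fully connected classifier,
# keeping only the convolutional base.
base_model = VGG16(weights=None, include_top=False,
                   input_shape=(150, 150, 3))

# Freeze the base so its (pre-trained) weights are not updated.
base_model.trainable = False

model = Sequential([
    base_model,
    Flatten(),
    Dense(256, activation='relu'),   # illustrative custom head
    Dense(1, activation='sigmoid'),  # binary output
])

model.summary()  # lists each layer and its parameter count

# The convolutional base alone holds 14,714,688 parameters, all frozen;
# only the two dense head layers remain trainable.
print(base_model.count_params())
```

Freezing before compiling matters: only the custom head's kernels and biases are updated by backpropagation, which is what makes this transfer learning rather than full fine-tuning.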

In the last step, we compiled our model with binary_crossentropy as the loss function and RMSprop as the optimizer, and then trained it. Once the model was trained, we evaluated and printed its performance metrics on the test data.
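The compile-train-evaluate step can be sketched as below. To keep the example fast and self-contained, it uses a tiny stand-in convolutional base and random in-memory data instead of the VGG16 model and directory generators; the compile call itself (binary_crossentropy loss, RMSprop optimizer) matches the recipe's description, while the learning rate, epoch count, and batch size are illustrative assumptions:

```python
# A runnable sketch of compiling, training, and evaluating the classifier.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.optimizers import RMSprop

# Tiny stand-in for the frozen VGG16 base plus custom head.
model = Sequential([
    Conv2D(8, 3, activation='relu', input_shape=(150, 150, 3)),
    MaxPooling2D(4),
    Flatten(),
    Dense(1, activation='sigmoid'),
])

# Same loss/optimizer pairing as the recipe describes.
model.compile(loss='binary_crossentropy',
              optimizer=RMSprop(learning_rate=1e-4),
              metrics=['accuracy'])

# Random stand-in data in place of the augmented directory generators.
x = np.random.rand(8, 150, 150, 3).astype('float32')
y = np.random.randint(0, 2, size=(8,))

model.fit(x, y, epochs=1, batch_size=4, verbose=0)
loss, acc = model.evaluate(x, y, verbose=0)
print(loss, acc)
```

With the real model, fit would consume the training generator and evaluate would run on the test generator, yielding the loss and accuracy that the recipe prints at the end.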
