How to do it...

We will use the flow_images_from_directory() function from keras to read and preprocess the images on the fly, in batches.

  1. Let's read the images from the train and test directories and apply the required transformations:
# Reading train data
train_data <- flow_images_from_directory(directory = train_path,
                                         target_size = img_size,
                                         color_mode = "rgb",
                                         class_mode = "categorical",
                                         classes = class_label,
                                         batch_size = 20)

# Reading test data
test_data <- flow_images_from_directory(directory = test_path,
                                        target_size = img_size,
                                        color_mode = "rgb",
                                        class_mode = "categorical",
                                        classes = class_label,
                                        batch_size = 20)

Let's see how many images we have in the train and test sets:

print(paste("Number of images in train and test is", train_data$n, "and", test_data$n, "respectively"))

We can see that the training dataset contains 11,397 images and the test dataset contains 3,829 images.

Now let's also have a look at the number of images per class in the train and test data:

table(factor(train_data$classes))

This is the distribution of images per class in the training data:

table(factor(test_data$classes))

This is the distribution of images per class in the test data:

Note that the class labels are numeric. Let's look at the mapping of the class label and class label names. These would be the same for the train and test data:

train_data$class_indices

The screenshot shows the class labels in the data:
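To translate the numeric class labels back into their names, we can invert this mapping. A minimal base-R sketch, using a small hypothetical mapping in place of the real train_data$class_indices:

```r
# Hypothetical class-index mapping of the kind returned by train_data$class_indices
class_indices <- list(cat = 0, dog = 1, horse = 2)

# Invert it: a character vector indexed by the numeric (0-based) label
index_to_label <- setNames(names(class_indices), unlist(class_indices))

index_to_label[["1"]]  # "dog"
```

This lookup is handy when inspecting raw predictions, which come back as numeric class indices.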

Similarly, we can look at the test labels and label names. Now let's print the shape of the images we loaded into our environment:

train_data$image_shape

The screenshot shows the dimensions of the image loaded:

  2. Next, we define the Keras model with pooling layers:
cnn_model_pool <- keras_model_sequential() %>%
  layer_conv_2d(filters = 32, kernel_size = c(3,3), activation = 'relu',
                input_shape = c(img_width, img_height, 3), padding = "same") %>%
  layer_conv_2d(filters = 16, kernel_size = c(3,3), activation = 'relu', padding = "same") %>%
  layer_max_pooling_2d(pool_size = c(2,2)) %>%
  layer_flatten() %>%
  layer_dense(units = 50, activation = 'relu') %>%
  layer_dense(units = 23, activation = 'softmax')
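It helps to trace the tensor shapes through this model by hand: a 3 x 3 convolution with "same" padding preserves the spatial size (only the channel count changes), while 2 x 2 max pooling halves each spatial dimension. A base-R sketch, assuming hypothetical 64 x 64 input images (the real size comes from img_width and img_height):

```r
# Hypothetical input size; the actual values come from img_width/img_height
width <- 64; height <- 64

# 'same'-padded conv layers keep spatial dims; only channels change
after_conv1 <- c(width, height, 32)
after_conv2 <- c(width, height, 16)

# 2 x 2 max pooling halves each spatial dimension
after_pool <- c(width %/% 2, height %/% 2, 16)

# Flattening collapses the remaining dims into one vector
flat_units <- prod(after_pool)
flat_units  # 16384
```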

Let's look at the model summary:

cnn_model_pool %>% summary()

The following screenshot shows the summary of the model:
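The parameter counts in the summary can be reproduced by hand: a conv layer has filters * (kernel_height * kernel_width * input_channels) + filters parameters (the last term being the biases), and a dense layer has input_units * output_units + output_units. A sketch, again assuming hypothetical 64 x 64 RGB inputs:

```r
# Parameter-count formulas for conv and dense layers (weights + biases)
conv_params  <- function(filters, kernel, in_ch) filters * prod(kernel) * in_ch + filters
dense_params <- function(in_units, out_units) in_units * out_units + out_units

conv1  <- conv_params(32, c(3, 3), 3)   # 896
conv2  <- conv_params(16, c(3, 3), 32)  # 4624
flat   <- 32 * 32 * 16                  # flatten size after 2 x 2 pooling on 64 x 64
dense1 <- dense_params(flat, 50)        # 819250
dense2 <- dense_params(50, 23)          # 1173
```

Note how the dense layer after flattening dominates the total parameter count; this is one reason pooling layers are used to shrink the spatial dimensions first.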

  3. After defining our model, we compile and train it.

While compiling the model, we set the loss function, model metric, learning rate, and decay rate of our optimizer:

cnn_model_pool %>% compile(
  loss = "categorical_crossentropy",
  optimizer = optimizer_rmsprop(lr = 0.0001, decay = 1e-6),
  metrics = c('accuracy')
)
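To make these compile choices concrete: softmax (in the final layer) turns raw scores into class probabilities, and categorical crossentropy is the negative log probability the model assigns to the true class. A minimal base-R illustration, separate from the recipe's code:

```r
# Softmax: exponentiate (shifted for numerical stability) and normalize
softmax <- function(x) { e <- exp(x - max(x)); e / sum(e) }

# Categorical crossentropy for a one-hot encoded true label
categorical_crossentropy <- function(p, true_onehot) -sum(true_onehot * log(p))

scores <- c(2.0, 1.0, 0.1)
p <- softmax(scores)
sum(p)  # probabilities sum to 1

true_label <- c(1, 0, 0)  # one-hot: the first class is correct
categorical_crossentropy(p, true_label)
```

The loss shrinks toward zero as the probability assigned to the true class approaches 1.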

Now we train the model:

cnn_model_pool %>% fit_generator(generator = train_data,
                                 steps_per_epoch = 20,
                                 epochs = 5)
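Note that steps_per_epoch = 20 with a batch size of 20 means each epoch draws only 400 images from the 11,397 available; to cycle through the whole training set once per epoch, steps_per_epoch should equal the number of batches. The arithmetic:

```r
# Images consumed per epoch with the settings above
batch_size <- 20
steps_per_epoch <- 20
batch_size * steps_per_epoch  # 400

# Steps needed to cover all 11,397 training images once per epoch
full_steps <- ceiling(11397 / batch_size)
full_steps  # 570
```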

After training the model, we evaluate its performance on test data and print the performance metrics:

scores <- cnn_model_pool %>% evaluate_generator(generator = test_data, steps = 20)

# Output metrics
paste('Test loss:', scores[[1]])
paste('Test accuracy:', scores[[2]])

The following screenshot shows the model performance on the test data:

We can see that the accuracy of the model on test data is around 79%.
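Accuracy here is simply the fraction of test images whose predicted class matches the true class. A base-R sketch with made-up labels:

```r
# Hypothetical predicted and actual class indices for five test images
predicted <- c(0, 1, 2, 2, 1)
actual    <- c(0, 1, 1, 2, 1)

# Accuracy: proportion of matching labels
accuracy <- mean(predicted == actual)
accuracy  # 0.8
```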
