How to do it...

In this section, we will use the same Fashion-MNIST dataset that was used in the previous Introduction to the convolution operation recipe of this chapter. The data exploration and transformation steps remain the same, so we jump straight to the model configuration:

  1. Let's define our model with strides and padding:
cnn_model_sp <- keras_model_sequential() %>%
  layer_conv_2d(filters = 8, kernel_size = c(4, 4), activation = 'relu',
                input_shape = c(28, 28, 1),
                strides = c(2L, 2L), padding = "same") %>%
  layer_conv_2d(filters = 16, kernel_size = c(3, 3), activation = 'relu') %>%
  layer_flatten() %>%
  layer_dense(units = 16, activation = 'relu') %>%
  layer_dense(units = 10, activation = 'softmax')

Let's look at the summary of the model:

cnn_model_sp %>% summary()

The following screenshot shows the details of the model we created:
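The output shapes and parameter counts in the summary can be verified by hand with the standard convolution arithmetic; the values below are derived from the layer definitions above, not taken from the summary output itself:

# Output shapes, layer by layer:
# conv_2d(8, 4x4, strides = 2, padding = "same"): ceiling(28 / 2) = 14 -> (14, 14, 8)
# conv_2d(16, 3x3, strides = 1, padding = "valid"): 14 - 3 + 1 = 12   -> (12, 12, 16)
# flatten: 12 * 12 * 16 = 2304
# dense(16) -> 16; dense(10) -> 10
#
# Parameter counts (kernel weights plus biases):
4 * 4 * 1 * 8 + 8     # first conv layer: 136
3 * 3 * 8 * 16 + 16   # second conv layer: 1168
2304 * 16 + 16        # first dense layer: 36880
16 * 10 + 10          # output layer: 170

Note how the stride of 2 with "same" padding halves the spatial dimensions, while the second convolution uses the default "valid" padding and so trims the borders.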

  2. After configuring our model, we define its loss function, then compile and train it:
# loss function: Keras calls custom losses as function(y_true, y_pred)
loss_entropy <- function(y_true, y_pred) {
  loss_categorical_crossentropy(y_true, y_pred)
}

# Compile model
cnn_model_sp %>% compile(
  loss = loss_entropy,
  optimizer = optimizer_sgd(),
  metrics = c('accuracy')
)

# Train model
cnn_model_sp %>% fit(
  x_train, y_train,
  batch_size = 128,
  epochs = 5,
  validation_split = 0.2
)
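As an aside, the custom wrapper above is functionally the same as referencing the built-in loss by name, so a minimal equivalent compile call would be:

cnn_model_sp %>% compile(
  loss = 'categorical_crossentropy',  # built-in loss, no wrapper needed
  optimizer = optimizer_sgd(),
  metrics = c('accuracy')
)

A custom wrapper only becomes necessary when you want to modify the loss, for example to add a regularization term.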

Let's evaluate the performance of the model on the test data and print the evaluation metrics:

scores <- cnn_model_sp %>% evaluate(x_test,
                                    y_test,
                                    verbose = 0)

Now we print the model loss and accuracy on the test data:

# Output metrics
paste('Test loss:', scores[[1]])
paste('Test accuracy:', scores[[2]])
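Beyond the aggregate metrics, we can also inspect per-image predictions. The following is a minimal sketch; it assumes x_test keeps the (n, 28, 28, 1) shape used during training:

# Predicted class probabilities for the test set (one row per image)
pred_probs <- cnn_model_sp %>% predict(x_test)
# Convert each row of probabilities to a 0-based class label
pred_classes <- apply(pred_probs, 1, which.max) - 1
head(pred_classes)

Comparing pred_classes against the true labels lets you see which Fashion-MNIST categories the model confuses most often.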

We can see that the model's accuracy on the test data is around 78%, showing that it did a reasonable job on this classification task.
