Using different activation functions in the simple RNN layer

This change can be seen in the following code:

# Model architecture
model <- keras_model_sequential()
model %>%
  layer_embedding(input_dim = 500, output_dim = 32) %>%
  layer_simple_rnn(units = 32, activation = "relu") %>%
  layer_dense(units = 1, activation = "sigmoid")

# Compile model
model %>% compile(optimizer = "rmsprop",
                  loss = "binary_crossentropy",
                  metrics = c("acc"))

# Fit model
model_three <- model %>% fit(train_x, train_y,
                             epochs = 10,
                             batch_size = 128,
                             validation_split = 0.2)

In the preceding code, we change the activation function in the simple RNN layer from its default (the hyperbolic tangent) to ReLU, keeping everything else the same as in the previous experiment.

The accuracy and loss values after 10 epochs can be seen in the following graph:
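If you are reproducing this experiment, the graph can be generated directly from the training history returned by fit(), which we stored in model_three:

# Plot training and validation loss/accuracy across the 10 epochs
plot(model_three)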

From the preceding plot, we can observe the following:

  • The loss and accuracy values look much better now.
  • Both the loss and accuracy curves based on training and validation are now closer to each other.
  • Evaluating the model on the test data (as sketched after this list) gives a loss of 0.423 and an accuracy of 0.803, which is an improvement over the results we've obtained so far.
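
The test-set numbers quoted in the last point come from evaluating the trained model on the held-out test data. Assuming test_x and test_y were prepared in the same way as train_x and train_y earlier in the chapter, a minimal sketch looks like this:

# Evaluate loss and accuracy on the test data
model %>% evaluate(test_x, test_y)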

Next, we will experiment further by adding more recurrent layers. This will help us build a deeper recurrent neural network model.
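
As a preview, when recurrent layers are stacked, every recurrent layer except the last must return its full sequence of outputs so that the following recurrent layer receives a sequence rather than a single vector. A minimal sketch of such a deeper architecture, reusing the same embedding and output layers as above, might look as follows:

# Deeper model: two simple RNN layers stacked
# return_sequences = TRUE makes the first recurrent layer emit an output
# at every timestep, which the second recurrent layer expects as input
model <- keras_model_sequential()
model %>%
  layer_embedding(input_dim = 500, output_dim = 32) %>%
  layer_simple_rnn(units = 32, activation = "relu",
                   return_sequences = TRUE) %>%
  layer_simple_rnn(units = 32, activation = "relu") %>%
  layer_dense(units = 1, activation = "sigmoid")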
