To define the model architecture and view its summary, we will use the following code:
model <- keras_model_sequential()
model %>%
  layer_embedding(input_dim = 500,
                  output_dim = 16,
                  input_length = 100) %>%
  layer_flatten() %>%
  layer_dense(units = 16, activation = "relu") %>%
  layer_dense(units = 1, activation = "sigmoid")
summary(model)
OUTPUT
___________________________________________________________________
Layer (type)                      Output Shape              Param #
===================================================================
embedding_12 (Embedding)          (None, 100, 16)           8000
___________________________________________________________________
flatten_3 (Flatten)               (None, 1600)              0
___________________________________________________________________
dense_6 (Dense)                   (None, 16)                25616
___________________________________________________________________
dense_7 (Dense)                   (None, 1)                 17
===================================================================
Total params: 33,633
Trainable params: 33,633
Non-trainable params: 0
___________________________________________________________________
From the preceding code and output, we can observe the following:
- The embedding layer maps each of the 500 possible integer tokens to a 16-dimensional vector, and layer_flatten() added after layer_embedding() reshapes its (100, 16) output into a single vector of length 1,600.
- This is followed by a dense layer with 16 units and a ReLU activation function, and then a single-unit dense layer with a sigmoid activation for binary classification.
- The summary of the model shows that there are 33,633 trainable parameters in total.
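The parameter counts in the summary can also be verified by hand; the following short sketch in R reproduces each layer's count from the sizes used above:

```r
# Embedding: one 16-dimensional vector per vocabulary entry (no bias)
embedding_params <- 500 * 16              # 8000
# Flatten has no weights; it only reshapes (100, 16) into a vector
flat_len <- 100 * 16                      # 1600
# Dense layers: (inputs + 1 bias) * units
dense_1_params <- (flat_len + 1) * 16     # 25616
dense_2_params <- (16 + 1) * 1            # 17
embedding_params + dense_1_params + dense_2_params  # 33633
```

This matches the totals reported for each layer and for the model as a whole.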
Now, we can compile the model.
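As a sketch of that step, compilation for this binary-classification setup might look like the following (the optimizer and metric choices here are common defaults, assumed rather than taken from the text):

```r
model %>% compile(
  optimizer = "adam",                # assumed optimizer choice
  loss = "binary_crossentropy",      # matches the single sigmoid output
  metrics = c("accuracy")
)
```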