The summary of the RNN model is as follows:
# Model summary
model
OUTPUT
Model
________________________________________________________________________
Layer (type)                   Output Shape              Param #
========================================================================
embedding_21 (Embedding)       (None, None, 32)          16000
________________________________________________________________________
simple_rnn_23 (SimpleRNN)      (None, 8)                 328
________________________________________________________________________
dense_24 (Dense)               (None, 1)                 9
========================================================================
Total params: 16,337
Trainable params: 16,337
Non-trainable params: 0
________________________________________________________________________
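A model matching this summary can be sketched as follows. This is a minimal sketch assuming the tf.keras API; the sizes (vocabulary of 500, embedding dimension 32, 8 recurrent units, 1 output unit) are taken from the summary above, while the sigmoid output activation is an assumption for a binary-classification setup:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Sketch of an architecture matching the summary above (sizes from the summary;
# the sigmoid activation is an assumption).
model = keras.Sequential([
    layers.Embedding(input_dim=500, output_dim=32),  # 500 * 32 = 16,000 params
    layers.SimpleRNN(8),                             # 8 * (8 + 32) + 8 = 328 params
    layers.Dense(1, activation="sigmoid"),           # 1 * 8 + 1 = 9 params
])
model.build(input_shape=(None, None))  # build so parameter counts are available
model.summary()
```

Because the Embedding layer is given no fixed sequence length, the model accepts variable-length integer sequences, which is why the second dimension of its output shape is None.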
The embedding layer's 16,000 parameters come from multiplying 500 (the number of most frequent words kept in the vocabulary) by 32 (the output dimension of the layer). For the simple RNN layer, the parameter count is h(h + i) + h, where h is the number of hidden units and i is the input dimension to this layer; here i = 32, the output dimension of the embedding layer.
Thus, we have 8 × (8 + 32) + 8 = 328 parameters: 8 × 32 input weights, 8 × 8 recurrent weights, and 8 biases.
In recurrent layers, the hidden state from the previous time step is fed back in alongside the current input, which accounts for the extra h × h recurrent weights we see here. This is why RNNs are better suited to sequence data than a regular densely connected layer. For the last layer, which is a dense layer, we have (1 × 8) + 1 = 9 parameters (8 weights and 1 bias). Overall, this architecture has 16,337 parameters.
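The parameter arithmetic above can be checked with a few lines of plain Python. The helper names here are hypothetical; the formulas are exactly those used in the text:

```python
def embedding_params(vocab_size, output_dim):
    # One trainable vector of length output_dim per vocabulary entry.
    return vocab_size * output_dim

def simple_rnn_params(h, i):
    # h*i input weights + h*h recurrent weights + h biases = h*(h + i) + h.
    return h * (h + i) + h

def dense_params(units, input_dim):
    # units*input_dim weights + units biases.
    return units * input_dim + units

total = (embedding_params(500, 32)   # 16,000
         + simple_rnn_params(8, 32)  # 328
         + dense_params(1, 8))       # 9
print(total)  # 16337
```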