Sequence-to-sequence model

We will now combine the encoder and decoder to create the sequence-to-sequence model:

def seq2seq_model(input_data, target_en_data, dropout_prob, fr_len, en_len,
                  max_en_len, v_size, rnn_cell_size, n_layers, word2int_en,
                  batch_size):
    # Embed the French source tokens with the pretrained embedding matrix
    input_word_embeddings = tf.Variable(fr_embeddings_matrix,
                                        name="input_word_embeddings")
    encoding_embed_input = tf.nn.embedding_lookup(input_word_embeddings,
                                                  input_data)
    # Encode the embedded source sequence
    encoding_op, encoding_st = encoding_layer(rnn_cell_size, fr_len, n_layers,
                                              encoding_embed_input,
                                              dropout_prob)
    # Prepare the decoder input from the target sequence
    decoding_input = process_encoding_input(target_en_data, word2int_en,
                                            batch_size)
    decoding_embed_input = tf.nn.embedding_lookup(en_embeddings_matrix,
                                                  decoding_input)
    # Decode, producing logits for both training and inference
    tr_logits, inf_logits = decoding_layer(decoding_embed_input,
                                           en_embeddings_matrix, encoding_op,
                                           encoding_st, v_size, fr_len, en_len,
                                           max_en_len, rnn_cell_size,
                                           word2int_en, dropout_prob,
                                           batch_size, n_layers)
    return tr_logits, inf_logits

The seq2seq_model function ties together the source embeddings, the encoder, and the decoder, and returns the training and inference logits. The French token IDs in input_data are embedded with the pretrained French embedding matrix, fr_embeddings_matrix, before being passed to the encoder, while the target tokens are embedded with en_embeddings_matrix for the decoder. The encoder and decoder layers are created using the functions defined earlier.
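
To see how this function slots into the rest of the graph, here is a minimal sketch of calling it. It assumes the TensorFlow 1.x setup used throughout (tf already imported) and that word2int_en, fr_embeddings_matrix, and en_embeddings_matrix are in scope from earlier sections; the placeholder names and the hyperparameter values (rnn_cell_size=256, n_layers=2, batch_size=64) are illustrative assumptions, not the exact settings used in this chapter:

# Illustrative graph construction; names and values are assumptions
input_data = tf.placeholder(tf.int32, [None, None], name="input_data")   # French token IDs
target_en_data = tf.placeholder(tf.int32, [None, None], name="targets")  # English token IDs
dropout_prob = tf.placeholder(tf.float32, name="dropout_prob")
fr_len = tf.placeholder(tf.int32, (None,), name="fr_len")                # source sentence lengths
en_len = tf.placeholder(tf.int32, (None,), name="en_len")                # target sentence lengths
max_en_len = tf.reduce_max(en_len, name="max_en_len")                    # longest target in the batch

tr_logits, inf_logits = seq2seq_model(
    input_data, target_en_data, dropout_prob, fr_len, en_len, max_en_len,
    v_size=len(word2int_en), rnn_cell_size=256, n_layers=2,
    word2int_en=word2int_en, batch_size=64)

The training logits feed the loss during optimization, while the inference logits are used when generating translations.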
