Output module

The output module usually depends on the task at hand. In this case, it is used to retrieve the most appropriate reply from a set of candidates. It does this by first converting each candidate into an embedding, in the same way the input and question modules do, and then taking the dot product of each candidate embedding with the context vector produced by the memory module. This gives every candidate a similarity, or matching, score against the context vector. For inference, we apply a softmax function over the similarity scores of all the candidates and select the most likely one:

    def _output_module(self, context_vector):
        with tf.variable_scope("OutputModule"):
            # Embed every candidate reply with the output embedding matrix
            candidates_emb = tf.nn.embedding_lookup(self.output_word_emb_matrix,
                                                    self._candidates_vec)
            # Sum the word embeddings to get a single vector per candidate
            candidates_emb_sum = tf.reduce_sum(candidates_emb, 1)
            # Dot product of the context vector with each candidate vector
            # yields one matching score (logit) per candidate
            return tf.matmul(context_vector, tf.transpose(candidates_emb_sum))
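
To turn these matching scores into an actual reply at inference time, the softmax and argmax step described above is applied to the returned logits. The following standalone sketch illustrates only that step; the dummy scores and candidate strings are illustrative and not taken from the model in this chapter:

    import tensorflow as tf

    # Dummy matching scores for one query against four candidate replies
    logits = tf.constant([[2.1, 0.3, 1.7, -0.5]])
    probs = tf.nn.softmax(logits)           # similarity scores -> probabilities
    best_idx = tf.argmax(probs, axis=1)     # index of the highest-scoring candidate

    # Illustrative candidate replies
    candidates = ["Sure, I have booked a table for two.",
                  "Sorry, I did not understand that.",
                  "Your order has been placed.",
                  "Goodbye!"]

    with tf.Session() as sess:
        idx = sess.run(best_idx)
        print(candidates[idx[0]])           # prints the selected reply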

If the task required generating a response rather than retrieving one, an RNN could be used to generate the answer token by token, much as in machine translation.
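
As a rough illustration of that generative alternative (a sketch under assumed layer sizes and placeholder names, not the implementation used in this chapter), a GRU decoder could be initialised with the context vector and trained with teacher forcing to predict the next answer token at each step:

    import tensorflow as tf

    # Illustrative sizes (assumptions, not from this chapter's model)
    vocab_size, embed_dim, hidden_dim = 10000, 64, 128

    context_vector = tf.placeholder(tf.float32, [None, hidden_dim])  # from the memory module
    answer_tokens = tf.placeholder(tf.int32, [None, None])           # teacher-forced target tokens

    word_emb = tf.get_variable("dec_word_emb", [vocab_size, embed_dim])
    inputs = tf.nn.embedding_lookup(word_emb, answer_tokens)

    # GRU decoder initialised with the context vector; at each step it reads the
    # previous answer token and predicts the next one (generation, not retrieval)
    cell = tf.nn.rnn_cell.GRUCell(hidden_dim)
    outputs, _ = tf.nn.dynamic_rnn(cell, inputs, initial_state=context_vector)
    logits = tf.layers.dense(outputs, vocab_size)   # per-step vocabulary logits

At inference time, decoding would proceed greedily or with beam search, feeding each predicted token back in as the input for the next step.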
