Domain confusion

We have covered different transfer learning strategies and discussed the three key questions of what, when, and how to transfer knowledge from the source to the target. In particular, we discussed how feature-representation transfer can be useful. It is worth reiterating that different layers in a deep neural network capture different sets of features. We can exploit this fact to learn domain-invariant features and improve their transferability across domains. Instead of allowing the model to learn any representation, we nudge the representations of the two domains to be as similar as possible.

This can be achieved by applying certain preprocessing steps directly to the representations themselves. Some of these have been presented by Baochen Sun, Jiashi Feng, and Kate Saenko in their paper Return of Frustratingly Easy Domain Adaptation (https://arxiv.org/abs/1511.05547). This nudge toward similarity of representations has also been explored by Ganin et al. in their paper Domain-Adversarial Training of Neural Networks (https://arxiv.org/abs/1505.07818). The basic idea behind this technique is to add another objective to the source model that encourages domain invariance by confusing a domain classifier, hence the name domain confusion.
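To make the first idea concrete, the CORAL method of Sun et al. aligns the second-order statistics of the two domains: source features are whitened and then re-colored with the target covariance. The following is a minimal sketch of that alignment step, assuming NumPy/SciPy feature matrices of shape (samples, features); the regularization term eps is our own addition to keep the covariance matrices invertible:

```python
import numpy as np
from scipy.linalg import sqrtm

def coral_align(source, target, eps=1e-5):
    """Align source features to target features (CORAL-style sketch).

    Whitens the source features and re-colors them with the target
    covariance, so the second-order statistics of both domains match.
    `source` and `target` are arrays of shape (n_samples, n_features).
    """
    d = source.shape[1]
    # Regularized covariance matrices; eps keeps them invertible
    c_s = np.cov(source, rowvar=False) + eps * np.eye(d)
    c_t = np.cov(target, rowvar=False) + eps * np.eye(d)
    whiten = np.real(sqrtm(np.linalg.inv(c_s)))  # removes source correlations
    recolor = np.real(sqrtm(c_t))                # imposes target correlations
    return source @ whiten @ recolor
```

The adversarial variant from Ganin et al. implements the confusion objective with a gradient reversal layer: a domain classifier is trained to tell source from target, while the reversed gradient pushes the shared feature extractor toward features the classifier cannot separate. Below is a minimal sketch, assuming PyTorch; the layer sizes and single-layer heads are placeholder choices for illustration, not the architecture from the paper:

```python
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The feature extractor receives the *negated* domain gradient,
        # so it learns features that confuse the domain classifier
        return -ctx.lambd * grad_output, None

class DomainConfusionNet(nn.Module):
    """Shared features feed both a label head and a domain head."""
    def __init__(self, in_dim=784, hidden=256, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.label_head = nn.Linear(hidden, n_classes)
        self.domain_head = nn.Linear(hidden, 2)  # source vs. target

    def forward(self, x, lambd=1.0):
        feats = self.features(x)
        label_logits = self.label_head(feats)
        # The domain head sees the features through the reversal layer
        domain_logits = self.domain_head(GradientReversal.apply(feats, lambd))
        return label_logits, domain_logits
```

During training, both heads are optimized with cross-entropy: the label loss is computed on labeled source samples only, while the domain loss is computed on source and target samples together, and lambd controls how strongly the confusion objective pulls on the shared feature extractor.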
