Chapter 11 - Exploring Few-Shot Learning Algorithms

  1. Learning from a few data points is called few-shot learning or k-shot learning, where k specifies the number of data points in each of the classes in the dataset.
  2. We need our models to learn from just a few data points. To achieve this, we train them in the same regime; that is, we train the model on very few data points at a time. Given a dataset, we sample a few data points from each of its classes and call this the support set. Similarly, we sample a different set of data points from each of the classes and call it the query set.
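The support/query sampling described above can be sketched as follows. This is a hypothetical episode sampler: the function name, argument names, and array shapes are illustrative assumptions, not the chapter's code.

```python
import numpy as np

# A hypothetical N-way, k-shot episode sampler (names and shapes are
# illustrative assumptions, not from the chapter).
def sample_episode(data, labels, n_way=3, k_shot=2, q_queries=2, seed=0):
    """Sample a support set and a query set for one training episode."""
    rng = np.random.default_rng(seed)
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support_x, support_y, query_x, query_y = [], [], [], []
    for cls in classes:
        idx = rng.permutation(np.where(labels == cls)[0])
        support_x.append(data[idx[:k_shot]])                  # k shots per class
        query_x.append(data[idx[k_shot:k_shot + q_queries]])  # held-out queries
        support_y += [cls] * k_shot
        query_y += [cls] * q_queries
    return (np.concatenate(support_x), np.array(support_y),
            np.concatenate(query_x), np.array(query_y))

# Toy dataset: 5 classes, 10 points each, 4 features
X = np.random.default_rng(1).normal(size=(50, 4))
y = np.repeat(np.arange(5), 10)
sx, sy, qx, qy = sample_episode(X, y)   # one 3-way, 2-shot episode
```

Note that the support and query sets are disjoint: the query points play the role of "test" examples within each training episode.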
  3. Siamese networks consist of two symmetrical neural networks, both sharing the same weights and architecture, joined together at the end by an energy function, E. The objective of a Siamese network is to learn whether two inputs are similar or dissimilar.
  1. The energy function, E, gives us the similarity between the two inputs. With f_W denoting the shared embedding network, it can be expressed as follows:

     E_W(x1, x2) = || f_W(x1) - f_W(x2) ||
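A minimal NumPy sketch of this energy function: both inputs pass through the same embedding network (a toy linear map here; the weights are an illustrative assumption), and the energy is the distance between the two embeddings.

```python
import numpy as np

# Toy sketch of the Siamese energy function. W stands in for the weights
# shared by both twin networks (an illustrative assumption).
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))

def f_w(x):
    return x @ W                               # shared embedding f_W

def energy(x1, x2):
    return np.linalg.norm(f_w(x1) - f_w(x2))   # E_W(x1, x2)

a, b = rng.normal(size=4), rng.normal(size=4)
e_same, e_diff = energy(a, a), energy(a, b)    # identical inputs give zero energy
```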
  2. Since the goal of the Siamese network is not to perform a classification task but to understand the similarity between the two inputs, we use the contrastive loss function. With Y = 1 for a similar pair, Y = 0 for a dissimilar pair, and margin m, it can be expressed as follows:

     Contrastive loss = Y * E^2 + (1 - Y) * max(m - E, 0)^2
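A minimal sketch of the contrastive loss, assuming the standard form with a pair label y and margin m:

```python
# Sketch of the contrastive loss: y = 1 marks a similar pair (the loss pulls
# the embeddings together), y = 0 a dissimilar pair (the loss pushes them
# apart until the margin m is reached).
def contrastive_loss(e, y, m=1.0):
    return y * e**2 + (1 - y) * max(m - e, 0.0)**2
```

For a dissimilar pair whose energy already exceeds the margin, the loss is zero, so the network stops pushing those embeddings further apart.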
  3. Prototypical networks are yet another simple, efficient, and popularly used few-shot learning algorithm. The basic idea of the prototypical network is to create a prototypical representation of each class and classify a query point (new point) based on the distance between the class prototype and the query point.

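The prototypical idea can be sketched in a few lines. Raw features stand in for learned embeddings here; the function names are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of prototypical classification: each class prototype is the
# mean of that class's support embeddings, and a query point is assigned to
# the nearest prototype.
def prototypes(support_x, support_y):
    return {c: support_x[support_y == c].mean(axis=0)
            for c in np.unique(support_y)}

def classify(query, protos):
    return min(protos, key=lambda c: np.linalg.norm(query - protos[c]))

sx = np.array([[0., 0.], [0., 2.], [10., 10.], [10., 12.]])
sy = np.array([0, 0, 1, 1])
protos = prototypes(sx, sy)       # class 0 -> [0, 1], class 1 -> [10, 11]
pred = classify(np.array([1., 1.]), protos)   # nearest prototype is class 0
```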
  4. Relation networks consist of two important functions: an embedding function, denoted by f_phi, and a relation function, denoted by g_phi. The relation score between a support point x_i and a query point x_j is r_ij = g_phi(Z(f_phi(x_i), f_phi(x_j))), where Z combines the two embeddings (typically by concatenation).
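The two stages can be sketched as follows. Both functions are toy linear maps here, standing in for the learned networks; the weights and shapes are illustrative assumptions, not the chapter's architecture.

```python
import numpy as np

# Toy sketch of a relation network: an embedding function (f_phi) and a
# relation function (g_phi) that scores the combined pair of embeddings.
rng = np.random.default_rng(0)
Wf = rng.normal(size=(4, 3))                     # embedding weights (assumed)
wg = rng.normal(size=6)                          # relation weights (assumed)

def f_phi(x):
    return x @ Wf                                # embedding function

def relation_score(xi, xj):
    z = np.concatenate([f_phi(xi), f_phi(xj)])   # Z: concatenate embeddings
    return 1.0 / (1.0 + np.exp(-(wg @ z)))       # g_phi: score in (0, 1)

score = relation_score(rng.normal(size=4), rng.normal(size=4))
```

The sigmoid keeps the relation score in (0, 1), so it can be read as how related the query is to that support example.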