Weight-sharing

Weight-sharing means that the same set of weights is reused in different parts of the network, so there are fewer parameters to optimize. It appears in several popular deep learning architectures, such as Siamese networks and RNNs. Sharing weights across a few layers helps the model generalize better because it limits the model's capacity. Backpropagation can easily accommodate linear weight constraints such as weight-sharing. Another form of weight-sharing is used in CNNs: unlike a fully connected hidden layer, a convolutional layer connects only to local regions of its input. A CNN assumes that the input (such as an image or text) can be decomposed into a set of local regions of the same nature, so each region can be processed with the same transformation, that is, with shared weights. Similarly, an RNN can be viewed as a feed-forward network in which each successive layer (time step) shares the same set of weights. A minimal sketch of the Siamese flavor of weight-sharing is given below.
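
The following sketch (written with PyTorch, using arbitrary layer sizes chosen only for illustration) shows weight-sharing in the Siamese style: both inputs are passed through the same encoder module, so a single set of parameters serves both branches and accumulates gradients from both during backpropagation.

import torch
import torch.nn as nn

class SiameseEncoder(nn.Module):
    """Two inputs pass through the *same* encoder, so its weights are shared."""
    def __init__(self, in_features=128, hidden=64, out_features=32):
        super().__init__()
        # One set of parameters, reused for both branches
        self.encoder = nn.Sequential(
            nn.Linear(in_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, out_features),
        )

    def forward(self, x1, x2):
        # The same weights process both inputs; gradients from both
        # branches flow back into the shared parameters
        z1 = self.encoder(x1)
        z2 = self.encoder(x2)
        return z1, z2

model = SiameseEncoder()
x1, x2 = torch.randn(4, 128), torch.randn(4, 128)
z1, z2 = model(x1, x2)
print(z1.shape, z2.shape)  # torch.Size([4, 32]) torch.Size([4, 32])

Because the two branches reference the same module, the parameter count is half that of an architecture with two independent encoders, which is exactly the capacity-control effect described above.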
