Adaptive neural networks

Analogous to human learning, neural networks can be designed not to forget previous knowledge while acquiring new knowledge. With the traditional approaches to neural learning this is nearly impossible, because every training run replaces all the connections already made with new ones, thereby forgetting the previously learned knowledge. A need therefore arises to make neural networks adapt to new knowledge by incrementing, rather than replacing, their current knowledge. To address this issue, we are going to explore one method called adaptive resonance theory (ART).

Adaptive resonance theory

The question that drove the development of this theory was: how can an adaptive system remain plastic in response to significant new inputs, yet remain stable in response to irrelevant ones? In other words, how can it retain previously learned information while learning new information?

We've seen that competitive learning in an unsupervised setting deals with pattern recognition, whereby similar inputs yield similar outputs or fire the same neurons. In an ART topology, the resonance comes in when information is being retrieved from the network, through feedback between the competitive layer and the input layer. So, while the network receives data to learn, there is an oscillation resulting from this feedback between the competitive and input layers. The oscillation stabilizes when the pattern is fully developed inside the neural network, and this resonance then reinforces the stored pattern.
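To make the idea of a "match" concrete before looking at the code, the degree of resonance between a training pattern and the winner neuron's stored weights can be scored as a ratio and compared against a vigilance threshold. Writing the weights as w_i, the pattern as x_i, and the threshold as \rho (our notation; the implementation below calls it the match rate), the pattern resonates when

\[
\frac{\sum_i w_i x_i}{\sum_i x_i^2} > \rho
\]

This is exactly the quantity computed by the vigilance test in the implementation that follows.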

Implementation

A new class called ART has been created in the same package, inheriting from CompetitiveLearning. Apart from a few minor changes, its key addition is the vigilance test:

public class ART extends CompetitiveLearning {

  private boolean vigilanceTest(int row_i) {
    double v1 = 0.0;
    double v2 = 0.0;

    for (int i = 0; i < neuralNet.getNumberOfInputs(); i++) {
      double weightIn = neuralNet.getOutputLayer().getWeight(i);
      double trainPattern = trainingDataSet.getIthInput(row_i)[i];

      // v1 accumulates the dot product between the stored weights
      // and the current training pattern
      v1 = v1 + (weightIn * trainPattern);

      // v2 accumulates the squared norm of the training pattern
      v2 = v2 + (trainPattern * trainPattern);
    }

    // the pattern resonates when the normalized match exceeds the match rate
    double vigilanceValue = v1 / v2;

    return vigilanceValue > neuralNet.getMatchRate();
  }

}
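To see the test in isolation, here is a minimal, self-contained sketch of the same ratio computed on plain arrays with made-up numbers; the class name and values are illustrative, not part of the book's code base:

public class VigilanceDemo {

  // same ratio as vigilanceTest: (weights . pattern) / (pattern . pattern)
  static boolean vigilance(double[] weights, double[] pattern, double matchRate) {
    double dot = 0.0;
    double norm = 0.0;
    for (int i = 0; i < pattern.length; i++) {
      dot  += weights[i] * pattern[i];
      norm += pattern[i] * pattern[i];
    }
    return (dot / norm) > matchRate;
  }

  public static void main(String[] args) {
    double[] w = { 0.9, 0.1, 0.8 }; // hypothetical winner-neuron weights
    double[] x = { 1.0, 0.0, 1.0 }; // hypothetical input pattern
    // ratio = (0.9 + 0.8) / 2.0 = 0.85
    System.out.println(vigilance(w, x, 0.8)); // true: 0.85 > 0.80, resonance
    System.out.println(vigilance(w, x, 0.9)); // false: 0.85 < 0.90, no match
  }
}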

The training method is shown below. Notice that, first, the global variables and the neural net are initialized; after that, the number of training sets and the training patterns are stored, and then the training process begins. The first step of this process is to calculate the index of the winner neuron; the second is to assign the neural net's output. The next step verifies whether the neural net has learned the current pattern: if it has, the new weights are applied and fixed; if not, another training sample is presented to the net:

epoch = 0;
int k = 0;
forward();
//...
currentRecord = 0;
forward(currentRecord);
while (!stopCriteria()) {
  //...
  // vigilance test: has the net already developed this pattern?
  boolean isMatched = this.vigilanceTest(currentRecord);
  if (isMatched) {
    // resonance: commit the new weights for the winner neuron
    applyNewWeights();
  }
  //...
}
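Because part of the training loop is elided above, it may help to see the whole cycle end to end. The following is a compact, self-contained ART-style trainer in plain Java. It is a sketch under simplifying assumptions (a fixed learning rate, weights initialized to 1.0, and reset implemented by excluding the failed winner and searching again); it illustrates the technique, not the book's exact implementation:

public class SimpleART {

  private final double[][] weights;  // one stored weight vector per category neuron
  private final double matchRate;    // vigilance threshold
  private final double learningRate;

  public SimpleART(int categories, int inputs, double matchRate, double learningRate) {
    this.weights = new double[categories][inputs];
    this.matchRate = matchRate;
    this.learningRate = learningRate;
    for (double[] w : weights) {
      java.util.Arrays.fill(w, 1.0); // uncommitted neurons initially match everything
    }
  }

  // winner-take-all: the enabled category whose weights best match the pattern
  private int winner(double[] x, boolean[] disabled) {
    int best = -1;
    double bestScore = Double.NEGATIVE_INFINITY;
    for (int j = 0; j < weights.length; j++) {
      if (disabled[j]) continue;
      double score = 0.0;
      for (int i = 0; i < x.length; i++) {
        score += weights[j][i] * x[i];
      }
      if (score > bestScore) {
        bestScore = score;
        best = j;
      }
    }
    return best;
  }

  // the same ratio as vigilanceTest above
  private boolean vigilance(double[] w, double[] x) {
    double dot = 0.0;
    double norm = 0.0;
    for (int i = 0; i < x.length; i++) {
      dot += w[i] * x[i];
      norm += x[i] * x[i];
    }
    return dot / norm > matchRate;
  }

  public void train(double[][] patterns) {
    for (double[] x : patterns) {
      boolean[] disabled = new boolean[weights.length];
      for (int tries = 0; tries < weights.length; tries++) {
        int j = winner(x, disabled);
        if (vigilance(weights[j], x)) {
          // resonance: move only the winner's weights toward the pattern
          for (int i = 0; i < x.length; i++) {
            weights[j][i] += learningRate * (x[i] - weights[j][i]);
          }
          break;
        }
        disabled[j] = true; // reset: exclude this category and search again
      }
    }
  }

  public static void main(String[] args) {
    SimpleART art = new SimpleART(2, 3, 0.8, 0.5);
    art.train(new double[][] { { 1, 0, 1 }, { 0, 1, 0 }, { 1, 0, 1 } });
  }
}

With a higher match rate, more patterns fail the vigilance test and are routed to fresh, uncommitted neurons, which is how an ART network adds new knowledge without overwriting what the other neurons have already learned.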