Chapter 8.  Adapting Your Machine Learning Models

This chapter covers advanced machine learning (ML) techniques that make algorithms adaptable to new data. Readers will also see how ML algorithms can learn incrementally from data, that is, how a model is updated each time it sees a new training instance. Learning in dynamic environments under different constraints will also be discussed. In summary, the following topics will be covered in this chapter:

  • Adapting machine learning models
  • Generalization of ML models
  • Adapting through incremental algorithms
  • Adapting through reusing ML models
  • Machine learning in dynamic environments

Adapting machine learning models

As we discussed earlier, as part of the ML training process, a model is trained using a set of data (that is, the training, test, and validation sets). Machine learning models that can adapt to their environment and learn from experience have attracted consumers and researchers from diverse areas, including computer science, engineering, mathematics, physics, neuroscience, and cognitive science. In this section, we will provide a technical overview of how to adapt machine learning models to new data and requirements.

Technical overview

Technically, the same model might need to be retrained at a later stage in order to improve it. This depends on several factors: for example, when new data becomes available, when the consumer of the API has their own data to train the model with, or when the data needs to be filtered and the model trained on a subset of it. In these scenarios, an ML algorithm should expose enough APIs to give its consumers a convenient way to produce a client that can retrain the model with their own data, on a one-time or regular basis.
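The retraining workflow described above can be sketched as follows. This is a minimal, self-contained illustration, not a specific library's API: the `Perceptron` class and the `retrain` helper are our own illustrative names, standing in for whatever model and retraining endpoint a real service would expose.

```python
# A minimal sketch of a retraining workflow: a model is first trained on
# the provider's data, then retrained when the consumer supplies new data.
# Both the Perceptron class and retrain() are illustrative, not from any
# specific library.

class Perceptron:
    def __init__(self, n_features, lr=0.1, epochs=20):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr
        self.epochs = epochs

    def predict(self, x):
        s = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if s >= 0 else 0

    def fit(self, X, y):
        for _ in range(self.epochs):
            for xi, yi in zip(X, y):
                err = yi - self.predict(xi)
                if err != 0:
                    self.w = [wi + self.lr * err * f
                              for wi, f in zip(self.w, xi)]
                    self.b += self.lr * err
        return self

def retrain(model, old_X, old_y, new_X, new_y):
    # Retrain from scratch on the combined (possibly filtered) dataset,
    # as a consumer of the API might do with their own data.
    fresh = Perceptron(len(old_X[0]), model.lr, model.epochs)
    return fresh.fit(old_X + new_X, old_y + new_y)

# Initial training on the provider's data
X0, y0 = [[0, 0], [1, 1]], [0, 1]
model = Perceptron(n_features=2).fit(X0, y0)

# Later, the consumer retrains with their own examples
X1, y1 = [[0, 1], [1, 0]], [0, 1]
model = retrain(model, X0, y0, X1, y1)
```

After retraining, the client can evaluate the new model against the old one before deciding to update the deployed service, which is exactly the evaluation step discussed next.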

As a result, the client will be able to evaluate the results of retraining and update the web service API accordingly. Alternatively, they will be able to use the newly trained model directly. In this regard, there are several contexts of domain adaptation; they differ in the information available for the application type and requirements:

  • Unsupervised domain adaptation: The learning sample contains a set of labeled source examples, a set of unlabeled source examples, and an unlabeled set of target examples
  • Semi-supervised domain adaptation: In this situation, we also consider a small set of labeled target examples
  • Supervised domain adaptation: All the examples considered are assumed to be labeled:

Figure 1: The retraining process overview (the dashed lines represent the retraining steps)
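The three domain-adaptation settings differ only in which parts of the learning sample carry labels. The toy sketch below makes that difference explicit; the `learning_sample` function and all the variable names are illustrative, not part of any library.

```python
# A toy sketch of how the learning sample differs across the three
# domain-adaptation settings; all names here are illustrative.

def learning_sample(setting, src_labeled, src_unlabeled,
                    tgt_labeled, tgt_unlabeled):
    """Return the (labeled, unlabeled) pools available in each setting."""
    if setting == "unsupervised":
        # Labeled source + unlabeled source + unlabeled target examples
        return src_labeled, src_unlabeled + tgt_unlabeled
    if setting == "semi-supervised":
        # Additionally, a small set of labeled target examples
        return src_labeled + tgt_labeled, src_unlabeled + tgt_unlabeled
    if setting == "supervised":
        # All examples, source and target, are labeled
        return src_labeled + tgt_labeled, []
    raise ValueError("unknown setting: " + setting)

# Tiny made-up pools: (features, label) pairs and bare feature vectors
src_l = [([0.1], 0), ([0.9], 1)]   # labeled source examples
src_u = [[0.3]]                    # unlabeled source examples
tgt_l = [([0.8], 1)]               # small labeled target set
tgt_u = [[0.2]]                    # unlabeled target examples

labeled, unlabeled = learning_sample("semi-supervised",
                                     src_l, src_u, tgt_l, tgt_u)
```

In the semi-supervised case above, the labeled pool grows by the small target set while the unlabeled pools from both domains remain available, which is precisely what distinguishes it from the other two settings.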

Technically, there should be three alternatives for making the ML models adaptable:

  • First, the most widely used machine learning techniques and algorithms, including decision trees, decision rules, neural networks, statistical classifiers, and probabilistic graphical models, need to be developed so that they can adapt to new requirements
  • Secondly, the previously mentioned algorithms or techniques should be generalized so that they can be reused with minimum effort
  • Moreover, more robust theoretical frameworks and algorithms, such as Bayesian learning theory, classical statistical theory, minimum description length theory, and statistical mechanics approaches, need to be developed in order to understand computational learning theory
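To make the first point concrete, here is a sketch of one of the classifier families listed above made adaptable: a statistical (logistic) classifier whose weights are updated one instance at a time, so it adapts as new data arrives. The update rule is plain stochastic gradient descent on the log-loss; the `OnlineLogistic` class and its method names are our own illustrative choices.

```python
# A sketch of an adaptable statistical classifier: logistic regression
# updated incrementally with stochastic gradient descent, so the model
# adapts after every new training instance it sees.
import math

class OnlineLogistic:
    def __init__(self, n_features, lr=0.5):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def prob(self, x):
        # Predicted probability of the positive class
        z = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, y):
        # One gradient step on the log-loss for a single (x, y) pair
        g = self.prob(x) - y
        self.w = [wi - self.lr * g * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * g

clf = OnlineLogistic(n_features=1)
stream = [([0.0], 0), ([1.0], 1)] * 50   # a toy data stream
for x, y in stream:
    clf.update(x, y)          # the model adapts after every instance
```

Because each update touches only one instance, the model never needs the full training set in memory, which is what makes this style of learning suitable for the incremental and dynamic settings covered later in this chapter.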

The insight gained from these three adaptation properties and techniques will inform experimental results and guide the machine learning community in contributing to the different learning algorithms.
