Gradient-based optimization

The classic way of optimizing a fitness function $f(\mathbf{x})$ is to first derive its gradient $\nabla f(\mathbf{x})$, which consists of the partial derivatives of $f$, that is:

$$\nabla f(\mathbf{x}) = \left( \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \ldots, \frac{\partial f}{\partial x_n} \right)$$
The gradient is then followed iteratively in the direction of steepest descent; a quasi-Newton optimizer can also be used if necessary. This approach requires not only that the fitness function $f$ be differentiable, but also time and patience: the gradient can be very laborious to derive, and the execution can be very time-consuming.
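As a minimal sketch of the idea, the loop below performs gradient descent on a simple quadratic; the example function, step size, and stopping rule are illustrative assumptions, not taken from the text:

```python
import numpy as np

def f(x):
    # Illustrative fitness function (an assumption): a quadratic bowl
    # with its minimum at (1, -2).
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def grad_f(x):
    # Hand-derived gradient of f: the vector of its partial derivatives.
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] + 2.0)])

def gradient_descent(x0, step=0.1, tol=1e-8, max_iter=1000):
    # Follow the negative gradient (the direction of steepest descent)
    # until the gradient is nearly zero or the iteration budget runs out.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - step * g
    return x

print(gradient_descent([5.0, 5.0]))  # converges near [1.0, -2.0]
```

For a more realistic fitness function, deriving `grad_f` by hand is exactly the laborious step the text warns about; a quasi-Newton method such as BFGS would replace the fixed step with a curvature-informed update, at the cost of additional bookkeeping per iteration.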
