Numerical optimization

This section briefly introduces the different optimization algorithms that can be applied to minimize the loss function, with or without a penalty term. These algorithms are described in greater detail in the Summary of optimization techniques section in Appendix A, Basic Concepts.

First, let's define the least squares problem. The minimization of the loss function consists of nullifying the first order derivatives, which in turn generates a system of D equations (also known as gradient equations), D being the number of regression weights (parameters). The weights are iteratively computed by solving the system of equations using a numerical optimization algorithm.

Note

The definition of the least squares-based loss function is as follows:

$$w^{*} = \arg\min_{w} L(w), \qquad L(w) = \frac{1}{2}\sum_{i=1}^{n} r_i^{2}, \qquad r_i = y_i - f(x_i \mid w)$$

The generation of gradient equations with a Jacobian J matrix (refer to the Jacobian and Hessian matrices section in Appendix A, Basic Concepts) after minimization of the loss function L is described as follows:

$$\frac{\partial L}{\partial w_j} = \sum_{i=1}^{n} r_i \frac{\partial r_i}{\partial w_j} = -\sum_{i=1}^{n} J_{ij}\, r_i = 0, \qquad J_{ij} = \frac{\partial f(x_i \mid w)}{\partial w_j} \quad (j = 1, \dots, D)$$

Iterative approximation using the Taylor series is described as follows:

$$f(x_i \mid w^{(k+1)}) \approx f(x_i \mid w^{(k)}) + \sum_{j=1}^{D} J_{ij}\,\delta w_j, \qquad \delta w_j = w_j^{(k+1)} - w_j^{(k)}$$

The normal equations, using matrix notation and the Jacobian matrix J, are described as follows:

$$(J^{T} J)\,\delta w = J^{T} r \quad \Longrightarrow \quad w^{(k+1)} = w^{(k)} + (J^{T} J)^{-1} J^{T} r$$
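
The following is a minimal sketch of a single Gauss-Newton update that solves the normal equations with the linear algebra classes of the Apache Commons Math library. The model function f, its partial derivatives gradF, and the observations (xt, y) are hypothetical placeholders assumed to be supplied by the caller; this is an illustration of the update rule, not the library's own optimizer.

```scala
import org.apache.commons.math3.linear.{Array2DRowRealMatrix, ArrayRealVector, LUDecomposition, RealVector}

// Sketch of one Gauss-Newton iteration: w(k+1) = w(k) + (J^T.J)^-1 . J^T.r
// 'f' evaluates the model f(x|w) and 'gradF' returns its partial derivatives
// with respect to the weights (one row of the Jacobian). Both functions and
// the observations (xt, y) are assumed to be provided by the caller.
def gaussNewtonStep(
    w: Array[Double],
    xt: Vector[Array[Double]],
    y: Vector[Double],
    f: (Array[Double], Array[Double]) => Double,
    gradF: (Array[Double], Array[Double]) => Array[Double]): Array[Double] = {

  // Residuals r(i) = y(i) - f(x(i)|w)
  val r = new ArrayRealVector(xt.zip(y).map { case (x, v) => v - f(x, w) }.toArray)
  // Jacobian J(i)(j) = df(x(i)|w)/dw(j)
  val jacobian = new Array2DRowRealMatrix(xt.map(x => gradF(x, w)).toArray)

  // Solve the normal equations (J^T.J).dw = J^T.r for the correction dw
  val jt = jacobian.transpose
  val dw: RealVector = new LUDecomposition(jt.multiply(jacobian))
    .getSolver
    .solve(jt.operate(r))

  // Apply the correction to the current weights
  w.indices.map(j => w(j) + dw.getEntry(j)).toArray
}
```

In practice, this step would be repeated until the correction dw or the change in the loss falls below a convergence threshold.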

Logistic regression is a nonlinear function. Therefore, it requires the minimization of a nonlinear sum of squared residuals. The optimization algorithms for nonlinear least squares problems can be divided into the following two categories:

  • Newton (or 2nd order techniques): These algorithms calculate the second order derivatives (the Hessian matrix) to compute the regression weights that nullify the gradient. The two most common algorithms in this category are the Gauss-Newton and the Levenberg-Marquardt methods (refer to the Nonlinear least squares minimization section in Appendix A, Basic Concepts). Both algorithms are included in the Apache Commons Math library; a minimal invocation sketch follows this list.
  • Quasi-Newton (or 1st order techniques): First order algorithms do not compute the second order derivatives of the least squares residuals directly; they estimate them from the Jacobian matrix. These methods can minimize any real-valued function, not just the least squares summation. This category of algorithms includes the Davidon-Fletcher-Powell and the Broyden-Fletcher-Goldfarb-Shanno methods (refer to the Quasi-Newton algorithms section in Appendix A, Basic Concepts).
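
As a sketch of the first category, the snippet below shows how the Levenberg-Marquardt optimizer could be invoked through the least squares API of Apache Commons Math (version 3.3 or later is assumed). The model function passed by the caller, the observed values, and the starting weights w0 are placeholders; the builder settings are illustrative, not prescribed values.

```scala
import org.apache.commons.math3.fitting.leastsquares.{
  LeastSquaresBuilder, LeastSquaresOptimizer, LevenbergMarquardtOptimizer, MultivariateJacobianFunction}
import org.apache.commons.math3.linear.{RealMatrix, RealVector}
import org.apache.commons.math3.util.Pair

// Minimal Levenberg-Marquardt invocation sketch (Apache Commons Math 3.3+ least squares API).
// 'model' must return, for a given weights vector, the model values and the Jacobian matrix.
def minimize(
    w0: Array[Double],
    observed: Array[Double],
    model: RealVector => (RealVector, RealMatrix)): LeastSquaresOptimizer.Optimum = {

  // Adapt the Scala function to the Commons Math MultivariateJacobianFunction interface
  val jacobianFunction = new MultivariateJacobianFunction {
    override def value(point: RealVector): Pair[RealVector, RealMatrix] = {
      val (values, jacobian) = model(point)
      new Pair(values, jacobian)
    }
  }

  val problem = new LeastSquaresBuilder()
    .start(w0)                 // initial regression weights
    .model(jacobianFunction)   // model values and Jacobian
    .target(observed)          // observed (expected) values
    .maxEvaluations(1000)      // illustrative limits
    .maxIterations(1000)
    .build()

  new LevenbergMarquardtOptimizer().optimize(problem)
}
```

The returned Optimum exposes the estimated weights through getPoint and the residual root mean square through getRMS.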