Performance considerations

You may have already observed that training a support vector regression model on a large dataset is time consuming. The performance of a support vector machine depends on the optimizer (for example, sequential minimal optimization) selected to maximize the margin during training.

  • A linear model (an SVM without a kernel) has an asymptotic training time complexity of O(N) for N labeled observations.
  • Nonlinear models rely on kernel methods formulated as a quadratic programming problem, with an asymptotic time complexity of O(N³).
  • An algorithm that uses sequential minimal optimization techniques, such as index caching or elimination of null values (as in LIBSVM), has an asymptotic time complexity of O(N²), with a worst case (full quadratic optimization) of O(N³).
  • Sparse problems on very large training sets (N > 10,000) also have an asymptotic time complexity of O(N²).
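The gap between the linear and kernelized cases can be observed empirically. The sketch below, which assumes scikit-learn is available (the library is an illustrative choice, not one used in the original text), times a linear support vector regressor against an RBF-kernel regressor on synthetic data of increasing size; the kernelized model's training time should grow markedly faster with N:

```python
import time

import numpy as np
from sklearn.datasets import make_regression
from sklearn.svm import SVR, LinearSVR


def time_fit(model, X, y):
    """Fit the model and return the elapsed wall-clock time in seconds."""
    start = time.perf_counter()
    model.fit(X, y)
    return time.perf_counter() - start


timings = {}
for n in (1_000, 4_000):
    # Synthetic regression problem with N labeled observations
    X, y = make_regression(n_samples=n, n_features=20, noise=0.1, random_state=42)
    timings[n] = {
        # Linear model: roughly O(N) training time
        "linear": time_fit(LinearSVR(max_iter=10_000), X, y),
        # Kernelized model (quadratic programming): O(N^2) to O(N^3)
        "rbf": time_fit(SVR(kernel="rbf"), X, y),
    }

for n, t in timings.items():
    print(f"N={n}: linear={t['linear']:.3f}s  rbf={t['rbf']:.3f}s")
```

Absolute timings depend on the machine, but quadrupling N typically increases the RBF model's fit time by an order of magnitude or more, while the linear model's grows roughly proportionally.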

The time and space complexity of the kernelized support vector machine has received a great deal of attention [8:16] [8:17].
