Intro to Support Vector Machines

Anton Haugen
Oct 20, 2020


Support Vector Machines are a powerful class of supervised learning algorithms used in classification models.

Developed by Vladimir Vapnik at Bell Laboratories in the mid-90s from theory he first proposed in his early-70s PhD work, SVM uses “edge cases” in the training data, the support vectors, to split labeled data with a boundary line that lies exactly midway between these cases. SVM aims to maximize this margin by mapping training points to a higher-dimensional space in which a linearly separating hyperplane can be found. If a sample is correctly classified but does not lie beyond the margin, the model is penalized, pushing the search toward better margins and a new boundary. Though Vapnik had primarily worked with linear separations, his colleagues at Bell proposed using polynomial and other kernels to separate data that is not linearly separable.

[Figure: SVM with a second-degree polynomial kernel]
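To make the idea concrete, here is a minimal sketch (the toy points are invented for illustration) that fits a linear SVC and inspects the “edge cases” it keeps as support vectors:

import numpy as np
from sklearn.svm import SVC

# toy, linearly separable data (made up for illustration)
X = np.array([[0, 0], [1, 1], [1, 0], [3, 3], [4, 4], [4, 3]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel='linear', C=1.0).fit(X, y)
print(clf.support_vectors_)  # the "edge cases" that define the margin
print(clf.n_support_)        # number of support vectors in each class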

scikit-learn’s Support Vector Classification models (SVC(), NuSVC(), and LinearSVC()) build on Vapnik’s work.

Kernel Functions

Kernels define how the data is projected into the higher-dimensional space. scikit-learn’s SVC models provide five kernel options: linear, polynomial, sigmoid, radial basis function (RBF), and precomputed, in which the user supplies their own kernel matrix.

The radial basis function is the most common kernel and the default for scikit-learn’s SVC models. The RBF kernel takes the squared Euclidean distance between two points, scales it by gamma, a free parameter, and exponentiates its negative. The RBF kernel is also why SVCs are sometimes described in neural-network terms: an SVC with an RBF kernel computes the same kind of nonlinear decision function as an RBF network, with the support vectors as its centers.
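Written out, the kernel is K(x, x′) = exp(−gamma · ‖x − x′‖²). Here is a quick sketch (the two points and the gamma value are chosen arbitrarily) that checks the formula against scikit-learn’s own implementation:

import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

x = np.array([[0.0, 1.0]])
z = np.array([[1.0, 3.0]])
gamma = 0.5

# K(x, z) = exp(-gamma * ||x - z||^2)
manual = np.exp(-gamma * np.sum((x - z) ** 2))
print(manual)                         # ~0.0821, i.e. exp(-2.5)
print(rbf_kernel(x, z, gamma=gamma))  # matches the manual value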

Implementing a Support Vector Classification model

If you are working with sparse data or high-dimensional data with many features, as in NLP classification problems, you might consider an SVC model as an alternative to a vanilla neural network, logistic regression, decision tree, or unsupervised clustering model. Although training can take a long time on large datasets, the decision function uses only a small number of training points (the support vectors), so predictions can be memory efficient.
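As a minimal sketch of that use case, with a tiny invented corpus, an SVC can be trained directly on the sparse matrix a TF-IDF vectorizer produces:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

# tiny invented corpus; TfidfVectorizer returns a sparse matrix
docs = ["the movie was great", "terrible plot and acting",
        "a great, moving story", "acting so bad it hurt"]
labels = [1, 0, 1, 0]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)               # scipy sparse matrix
clf = SVC(kernel='linear').fit(X, labels)
print(clf.predict(vec.transform(["great acting"])))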

Creating your own SVC model will mostly involve dialing in the hyperparameters: the penalty (C), the type of kernel, and a value for gamma if the radial basis function, sigmoid, or polynomial kernel is being used. If you are using a polynomial kernel, degree should also be considered. Gamma can either be user-provided or set with ‘auto’ or ‘scale’, the default. ‘scale’ uses the formula 1 / (n_features * X.var()), while ‘auto’ uses 1 / n_features.

from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# wrap the SVC in a pipeline so the 'svc__' prefixes in the grid resolve
svc_pipe = Pipeline([('svc', SVC(C=0.1, class_weight='balanced', kernel='poly', gamma='scale'))])
params = [{'svc__class_weight': ['balanced'],
           'svc__kernel': ['linear', 'poly', 'rbf', 'sigmoid'],
           'svc__C': [10, 5, 1, 0.8, 0.5, 0.1],
           'svc__gamma': ['scale', 'auto']}]
gridsearch = GridSearchCV(svc_pipe, params, scoring='accuracy',
                          cv=5, verbose=1, n_jobs=-1)
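Continuing the snippet above (with synthetic data from make_classification standing in for a real training set), the search is run with fit and the winning settings read off best_params_:

from sklearn.datasets import make_classification

# synthetic data stands in for a real training set
X, y = make_classification(n_samples=500, n_features=20, random_state=42)
gridsearch.fit(X, y)
print(gridsearch.best_params_)  # winning kernel, C, and gamma
print(gridsearch.best_score_)   # mean cross-validated accuracy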

NuSVC() is similar to SVC(), except that NuSVC() replaces the penalty parameter C with ‘nu’, a value in the interval (0, 1] that serves as an upper bound on the fraction of margin errors and a lower bound on the fraction of support vectors.

[Figure: NuSVC with ‘auto’ gamma and default parameters]
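As a minimal sketch (toy points again invented), NuSVC() drops in where SVC() would go:

import numpy as np
from sklearn.svm import NuSVC

X = np.array([[0, 0], [1, 1], [3, 3], [4, 4]])
y = np.array([0, 0, 1, 1])

# nu=0.5 is the default; with gamma='auto', gamma = 1 / n_features
clf = NuSVC(nu=0.5, gamma='auto').fit(X, y)
print(clf.support_vectors_)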

Further resources:

scikit-learn documentation — https://scikit-learn.org/stable/modules/svm.html#svm-kernels

MIT Lecture on the linear algebra behind SVM — https://www.youtube.com/watch?v=_PwhiWxHK8o

Blog on SVM at Zenva — https://pythonmachinelearning.pro/classification-with-support-vector-machines/
