Support Vector Machines

   

Monday 6 March

 
15:05-16:00  Lecture 4: Support Vector Machines

 

Anselm Vossen

Support Vector Machines (SVMs) are advanced algorithms for classification and regression that are nevertheless conceptually easy to understand. SVMs have recently gained popularity because they pair state-of-the-art performance with a solid mathematical foundation, which enables users to choose arbitrarily complex classification or regression functions without over-fitting the data, using a technique known as structural risk minimization.

 

The lecture targets computer scientists interested in state-of-the-art
pattern recognition algorithms. It will also be of interest to physicists who want to make these algorithms work for them.

 

For some of the intricacies, a basic knowledge of linear algebra and statistics will be helpful. Additionally, this lecture will use the vocabulary introduced in the preceding ones, especially "Feature Selection and Classification Basics".

 

The Linear Classifier

- Toy Example: Separating points on a plane

- Optimal Margin and Support Vectors
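
As a preview, the optimal-margin problem for linearly separable points can be written as follows (this is the standard formulation; the notation is ours, not necessarily the lecture's):

    \min_{w,b} \; \frac{1}{2} \|w\|^2
    \quad \text{subject to} \quad y_i (w \cdot x_i + b) \ge 1, \quad i = 1, \dots, n

The resulting classifier is f(x) = sign(w . x + b); the training points whose constraints hold with equality are the support vectors, and the width of the margin is 2 / ||w||.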

 

Structural Risk Minimization

- Short (and incomplete) Introduction to Vapnik-Chervonenkis (VC) Theory

- Finding a balance between fitting and overfitting the data

- How to incorporate this into the linear classifier
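
How this balance enters the linear classifier can be sketched with the standard soft-margin formulation (C and the slack variables \xi_i are the conventional names; they are our addition, not taken from the lecture):

    \min_{w,b,\xi} \; \frac{1}{2} \|w\|^2 + C \sum_{i=1}^{n} \xi_i
    \quad \text{subject to} \quad y_i (w \cdot x_i + b) \ge 1 - \xi_i, \quad \xi_i \ge 0

A large C penalizes training errors heavily and risks overfitting; a small C tolerates some misclassified points in exchange for a wider margin.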

 

Kernel Methods

-The "Kernel Trick": Mapping the data into a convenient higher
dimensional space with little computing overhead

- Using the "Kernel Trick" to extend linear algorithms to nonlinear ones
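
As one standard illustration (our choice of kernel, not necessarily the one used in the lecture), the Gaussian (RBF) kernel evaluates an inner product in a high-dimensional feature space without ever computing the mapping explicitly, and plugging it into the dual form of the linear classifier yields a nonlinear decision function:

    K(x, x') = \exp(-\gamma \|x - x'\|^2),
    \qquad
    f(x) = \operatorname{sign}\left( \sum_{i=1}^{n} \alpha_i y_i K(x_i, x) + b \right)

Because the dual form depends on the data only through inner products, replacing each inner product with K(x, x') is all it takes to turn the linear algorithm into a nonlinear one.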

 

Support Vector Machines

- Putting everything together to build powerful classification and
regression algorithms

- SVM Libraries: how to use SVMs in your code
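
The lecture does not single out a library at this point; as one illustration, the sketch below uses scikit-learn, whose SVC class wraps LIBSVM. The toy dataset and hyperparameter values are ours and purely illustrative.

    # Minimal SVM classification sketch using scikit-learn (wraps LIBSVM).
    # Dataset and hyperparameters are illustrative, not from the lecture.
    from sklearn import svm
    from sklearn.datasets import make_blobs
    from sklearn.model_selection import train_test_split

    # Two point clouds on a plane, as in the toy example.
    X, y = make_blobs(n_samples=200, centers=2, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Soft-margin SVM with an RBF kernel: C sets the fit/overfit trade-off,
    # gamma is the kernel width parameter.
    clf = svm.SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X_train, y_train)

    print("test accuracy:", clf.score(X_test, y_test))
    print("support vectors:", clf.support_vectors_.shape[0])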