Machine Learning - PowerPoint PPT Presentation

About This Presentation
Title:

Machine Learning

Description:

This presentation will guide you through Support Vector Machines: properties of an SVM, linear separability, the linear discriminant, selecting the hyperplane, support vectors, non-linearly separable data, and multi-category classification.


Transcript and Presenter's Notes



1
Support Vector Machines
2
Support Vector Machines
Support vector machines (SVMs) are supervised learning
models used for classification and regression analysis;
their associated learning algorithms analyze data to
identify patterns.
3
Properties of an SVM
  • Non-probabilistic binary linear classifier
  • Support for non-linear classification using the 'kernel trick'
4
Linear separability
  • Points in n-dimensional space are linearly separable if they can
    be separated by an (n-1)-dimensional hyperplane; for example,
    points in three-dimensional space are linearly separable if a
    two-dimensional plane separates them.
  • Example - The two sets of 2D data in the image are separated by a
    single straight line (a 1D hyperplane in 2D space), and so are
    linearly separable.
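A minimal sketch of this idea in Python, assuming scikit-learn and
NumPy are installed (the point sets and the large C value are
illustrative assumptions, not from the slides): a linear SVM with a
very large C approximates a hard margin, so perfect training accuracy
indicates the two point sets are linearly separable.

import numpy as np
from sklearn.svm import SVC

# Two small 2D point sets, one per class (illustrative data).
X = np.array([[1, 2], [2, 3], [2, 1], [6, 5], [7, 7], [8, 6]])
y = np.array([0, 0, 0, 1, 1, 1])

# A very large C approximates a hard margin: misclassification is
# heavily penalized, so a separating line is found if one exists.
clf = SVC(kernel="linear", C=1e10).fit(X, y)

# Perfect training accuracy means a single straight line (a 1D
# hyperplane in 2D space) separates the two classes.
print("linearly separable:", clf.score(X, y) == 1.0)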
5
Linear Discriminant
The hyperplane that separates the two sets of data
is called the linear discriminant.
Equation: W^T X + C = 0, where W = (w1, w2, ..., wn) is the weight
vector and X = (X1, X2, ..., Xn) is the feature vector in n
dimensions.
An SVM is simply a linear discriminant which
tries to build a hyperplane such that it has a
large margin. It classifies a new sample simply
by computing its signed distance from the hyperplane.
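A minimal sketch, assuming scikit-learn (the data set and the sample
x_new are illustrative): a fitted linear SVC exposes the learned W
and C, from which a new sample's signed distance from the hyperplane
can be computed; the sign of that distance gives the predicted class.

import numpy as np
from sklearn.svm import SVC

X = np.array([[1, 2], [2, 3], [2, 1], [6, 5], [7, 7], [8, 6]])
y = np.array([0, 0, 0, 1, 1, 1])
clf = SVC(kernel="linear").fit(X, y)

w = clf.coef_[0]       # weight vector W = (w1, ..., wn)
c = clf.intercept_[0]  # offset term C

# Signed distance of a new sample from the hyperplane W^T X + C = 0.
x_new = np.array([4, 4])
distance = (w @ x_new + c) / np.linalg.norm(w)
print("signed distance:", distance)
print("predicted class:", clf.predict([x_new])[0])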
6
Selecting the hyperplane
Every linearly separable data set, no matter how
large, can be divided by many different hyperplanes.
Therefore, the most appropriate one for classification
must be chosen; the SVM selects the hyperplane with the
largest margin.
7
Support Vectors
Observations (represented as vectors) which lie
on the margin, closest to the hyperplane, are
called support vectors. These are important because
shifting them even slightly might change the
position of the hyperplane to a great extent.
Example - The vectors lying on the green lines in the image are the
support vectors, and the green lines are the support planes.
[Figure: two classes, class 1 and class 2, plotted against features
X1 and X2.]
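A minimal sketch, assuming scikit-learn (illustrative data): after
fitting, the model exposes the support vectors directly, the margin
points that determine where the hyperplane sits.

import numpy as np
from sklearn.svm import SVC

X = np.array([[1, 2], [2, 3], [2, 1], [6, 5], [7, 7], [8, 6]])
y = np.array([0, 0, 0, 1, 1, 1])
clf = SVC(kernel="linear").fit(X, y)

# The margin points that determine the hyperplane's position.
print("support vectors:\n", clf.support_vectors_)
print("their indices in X:", clf.support_)
# Shifting one of these points even slightly can move the hyperplane;
# shifting a point well inside its class region does not.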
8
Non linearly separable
In this case, an SVM is not able to classify the
data with a linear boundary. Hence the SVM uses
what is known as the kernel trick: the feature
space is enlarged, and the enlarged feature space
might have a linear boundary which is not linear
in the original feature space. This enlargement
can be done using various kernel functions.
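A minimal sketch, assuming scikit-learn (the concentric-circles data
set is an illustrative choice): two concentric rings are not linearly
separable in 2D, but an RBF kernel implicitly enlarges the feature
space, where a linear boundary exists.

from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric rings: not separable by any straight line in 2D.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05,
                    random_state=0)

linear = SVC(kernel="linear").fit(X, y)
rbf = SVC(kernel="rbf").fit(X, y)

print("linear kernel accuracy:", linear.score(X, y))  # poor fit
print("RBF kernel accuracy:", rbf.score(X, y))        # near 1.0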
9
Multi-Category Classification
One-Versus-One Classification - a binary SVM is trained for every
pair of classes
One-Versus-All Classification - a binary SVM is trained for each
class against all the remaining classes
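A minimal sketch, assuming scikit-learn (the three-blob data set is
illustrative): both strategies wrap a binary SVM so it can handle
more than two classes.

from sklearn.datasets import make_blobs
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

# Three well-separated clusters, one per class.
X, y = make_blobs(n_samples=150, centers=3, random_state=0)

# One-versus-one: one binary SVM per pair of classes (3 here).
ovo = OneVsOneClassifier(SVC(kernel="linear")).fit(X, y)
# One-versus-all: one binary SVM per class against the rest (3 here).
ovr = OneVsRestClassifier(SVC(kernel="linear")).fit(X, y)

print("one-versus-one accuracy:", ovo.score(X, y))
print("one-versus-all accuracy:", ovr.score(X, y))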
10
Topics for next Post
Linear regression
Logistic regression
Naive Bayes
Stay tuned!