Title: Introduction to Machine Learning
1. Introduction to Machine Learning
- Isabelle Guyon
- isabelle_at_clopinet.com
2. What is Machine Learning?
[Diagram: TRAINING DATA is fed to a trained machine; at use time a query ("?") goes in and the machine returns an answer.]
3. What for?
- Classification
- Time series prediction
- Regression
- Clustering
4. Some Learning Machines
- Linear models
- Kernel methods
- Neural networks
- Decision trees
5. Applications
6. Banking / Telecom / Retail
- Identify
- Prospective customers
- Dissatisfied customers
- Good customers
- Bad payers
- Obtain
- More effective advertising
- Less credit risk
- Less fraud
- Decreased churn rate
7. Biomedical / Biometrics
- Medicine
- Screening
- Diagnosis and prognosis
- Drug discovery
- Security
- Face recognition
- Signature / fingerprint / iris verification
- DNA fingerprinting
8. Computer / Internet
- Computer interfaces
- Troubleshooting wizards
- Handwriting and speech
- Brain waves
- Internet
- Hit ranking
- Spam filtering
- Text categorization
- Text translation
- Recommendation
9. Challenges
[Plot: the NIPS 2003 and WCCI 2006 challenge datasets laid out by number of training examples (10² to 10⁵) versus number of inputs (10 to 10⁵): Ada, Sylva, Gisette, Gina, Dexter, Nova, Madelon, Arcene, Dorothea, Hiva.]
10. Ten Classification Tasks
11. Challenge Winning Methods
[Chart: BER/⟨BER⟩, each method's balanced error rate normalized by the average BER over the challenge entries.]
12. Conventions
[Diagram of notation: the m × n data matrix X with entries x_ij; a pattern (line of X) x_i; the target vector y; the weight vector w; dual coefficients α.]
13. Learning problem
- Data matrix X:
  - m lines = patterns (data points, examples): samples, patients, documents, images, ...
  - n columns = features (attributes, input variables): genes, proteins, words, pixels, ...
- Unsupervised learning: is there structure in the data?
- Supervised learning: predict an outcome y.
[Figure: colon cancer data, Alon et al., 1999]
14. Linear Models
- f(x) = w · x + b = Σ_{j=1..n} w_j x_j + b
- Linearity in the parameters, NOT in the input components:
  - f(x) = w · Φ(x) + b = Σ_j w_j φ_j(x) + b (Perceptron)
  - f(x) = Σ_{i=1..m} α_i k(x_i, x) + b (Kernel method)
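As a concrete illustration, here is a minimal numpy sketch of the three decision functions above; the helper names (linear_f, perceptron_f, kernel_f) are mine, not from the slides.

```python
import numpy as np

def linear_f(x, w, b):
    """Linear model: f(x) = w . x + b."""
    return np.dot(w, x) + b

def perceptron_f(x, w, b, phi):
    """Perceptron form: f(x) = w . Phi(x) + b, for a feature map phi."""
    return np.dot(w, phi(x)) + b

def kernel_f(x, X_train, alpha, b, k):
    """Kernel method: f(x) = sum_i alpha_i * k(x_i, x) + b."""
    return sum(a * k(xi, x) for a, xi in zip(alpha, X_train)) + b
```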
15. Artificial Neurons
[Diagram of a neuron: dendrites and synapses collect the inputs, the cell potential drives the activation function, and the axon carries the activation to other neurons.]
f(x) = w · x + b
(McCulloch and Pitts, 1943)
16. Linear Decision Boundary
17. Perceptron
(Rosenblatt, 1957)
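The slide itself shows only the title and citation; as a sketch, here is the standard perceptron learning rule (update w on each misclassified example), with the hyper-parameters eta and epochs chosen arbitrarily.

```python
import numpy as np

def train_perceptron(X, y, eta=1.0, epochs=100):
    """Rosenblatt's perceptron: X is the (m, n) data matrix,
    y holds labels in {-1, +1}."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) <= 0:  # misclassified example
                w += eta * yi * xi             # w <- w + eta * y_i * x_i
                b += eta * yi
    return w, b
```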
18. Non-linear Decision Boundary
19. Kernel Method
(Potential functions, Aizerman et al., 1964)
20. Hebb's Rule
[Diagram: Hebbian learning at the synapses; a weight grows when its input and the output are active together, giving w = Σ_i y_i x_i.]
Link to Naïve Bayes.
21. Kernel Trick (for Hebb's rule)
- Hebb's rule for the Perceptron: w = Σ_i y_i Φ(x_i)
- f(x) = w · Φ(x) = Σ_i y_i Φ(x_i) · Φ(x)
- Define a dot product: k(x_i, x) = Φ(x_i) · Φ(x)
- f(x) = Σ_i y_i k(x_i, x)
22. Kernel Trick (general)
- f(x) = Σ_i α_i k(x_i, x)
- k(x_i, x) = Φ(x_i) · Φ(x)
- f(x) = w · Φ(x)
- w = Σ_i α_i Φ(x_i)
Dual forms.
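A short numpy sketch of this duality: the dual form needs only kernel values, never Φ itself, while the primal form makes w explicit. With α_i = y_i it reduces to the Hebb's-rule case of the previous slide. Function names are mine.

```python
import numpy as np

def dual_f(x, X_train, alpha, k):
    """Dual form: f(x) = sum_i alpha_i * k(x_i, x); only kernel values needed."""
    return sum(a * k(xi, x) for a, xi in zip(alpha, X_train))

def primal_f(x, X_train, alpha, phi):
    """The same function via w = sum_i alpha_i * Phi(x_i), f(x) = w . Phi(x)."""
    w = sum(a * np.asarray(phi(xi)) for a, xi in zip(alpha, X_train))
    return np.dot(w, phi(x))
```

If k(s, t) = Φ(s) · Φ(t), the two functions return the same value for any x.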
23. What is a Kernel?
- A kernel is:
  - a similarity measure
  - a dot product in some feature space: k(s, t) = Φ(s) · Φ(t)
- But we do not need to know the Φ representation.
- Examples:
  - k(s, t) = exp(−‖s − t‖²/σ²): Gaussian kernel
  - k(s, t) = (s · t)^q: Polynomial kernel
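The two example kernels written out in numpy, exactly as on the slide (the sigma and q defaults are arbitrary choices of mine):

```python
import numpy as np

def gaussian_kernel(s, t, sigma=1.0):
    """k(s, t) = exp(-||s - t||^2 / sigma^2)."""
    s, t = np.asarray(s, float), np.asarray(t, float)
    return np.exp(-np.sum((s - t) ** 2) / sigma ** 2)

def polynomial_kernel(s, t, q=2):
    """k(s, t) = (s . t)^q."""
    return np.dot(s, t) ** q
```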
24. Multi-Layer Perceptron
(Back-propagation, Rumelhart et al., 1986)
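The slide shows a network diagram; as a sketch of what such a network computes, here is the forward pass of a one-hidden-layer perceptron. The tanh activation is one common choice (an assumption, not from the slide); back-propagation would be used to learn W1, b1, w2, b2.

```python
import numpy as np

def mlp_f(x, W1, b1, w2, b2):
    """One-hidden-layer MLP: f(x) = w2 . tanh(W1 @ x + b1) + b2."""
    h = np.tanh(W1 @ x + b1)  # hidden-unit activations
    return np.dot(w2, h) + b2
```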
25. Chessboard Problem
26. Tree Classifiers
- CART (Breiman, 1984) or C4.5 (Quinlan, 1993)
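For instance, scikit-learn's DecisionTreeClassifier grows CART-style trees; a minimal sketch on Fisher's iris data (which also appears on the next slide), with max_depth picked arbitrarily:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
# criterion="gini" corresponds to CART; C4.5 uses information gain instead.
clf = DecisionTreeClassifier(criterion="gini", max_depth=3)
clf.fit(iris.data, iris.target)
print(clf.predict(iris.data[:5]))  # predicted classes for 5 training points
```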
27. Iris Data (Fisher, 1936)
[Figure from Norbert Jankowski and Krzysztof Grabczewski: decision boundaries of a linear discriminant, a tree classifier, a Gaussian mixture, and a kernel method (SVM) on the three iris classes setosa, versicolor, and virginica.]
28. Fit / Robustness Tradeoff
[Plot in (x1, x2): decision boundaries illustrating the tradeoff between fitting the training data and robustness.]
29. Performance evaluation
[Plot in (x1, x2): the decision boundary f(x) = 0 separates the region f(x) < 0 from f(x) > 0.]
30. Performance evaluation
[Same plot with the threshold shifted: f(x) = −1 separates f(x) < −1 from f(x) > −1.]
31. Performance evaluation
[Same plot with the threshold shifted the other way: f(x) = 1 separates f(x) < 1 from f(x) > 1.]
32. ROC Curve
For a given threshold on f(x), you get a point on the ROC curve.
[Plot: positive class success rate (hit rate, sensitivity) from 0 to 100% versus 1 − negative class success rate (false alarm rate, 1 − specificity); ideal, actual, and random ROC curves are shown.]
33. ROC Curve
For a given threshold on f(x), you get a point on the ROC curve.
[Same plot annotated with areas under the curves: ideal ROC (AUC = 1), random ROC (AUC = 0.5); in general 0 ≤ AUC ≤ 1.]
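A numpy sketch of both ideas: sweeping the threshold on the decision values to trace the ROC curve, then integrating it to get the AUC. Labels are assumed in {-1, +1}, and ties between scores are ignored for simplicity.

```python
import numpy as np

def roc_points(scores, labels):
    """One (false alarm rate, hit rate) point per threshold on f(x)."""
    order = np.argsort(-np.asarray(scores))   # descending decision values
    sorted_labels = np.asarray(labels)[order]
    pos = np.sum(sorted_labels == 1)
    neg = np.sum(sorted_labels == -1)
    hit = np.concatenate(([0.0], np.cumsum(sorted_labels == 1) / pos))
    fa = np.concatenate(([0.0], np.cumsum(sorted_labels == -1) / neg))
    return fa, hit

def auc(fa, hit):
    """Area under the ROC curve (trapezoidal rule): 1 ideal, 0.5 random."""
    return float(np.sum((fa[1:] - fa[:-1]) * (hit[1:] + hit[:-1]) / 2))
```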
34. Lift Curve
Customers are ranked according to f(x); the top-ranking customers are selected.
[Plot: hit rate (fraction of the good customers selected) from 0 to 100% versus fraction of customers selected; ideal, actual, and random lift curves are shown. Gini = 2·AUC − 1, with 0 ≤ Gini ≤ 1.]
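A matching sketch for the lift curve: rank customers by f(x), then accumulate the fraction of good customers found as the selection grows. The Gini index follows directly from the AUC, as stated on the slide.

```python
import numpy as np

def lift_points(scores, labels):
    """Hit rate (fraction of good customers found) vs. fraction selected."""
    order = np.argsort(-np.asarray(scores))   # best-ranked customers first
    good = (np.asarray(labels)[order] == 1)
    frac_selected = np.arange(1, len(good) + 1) / len(good)
    hit_rate = np.cumsum(good) / good.sum()
    return frac_selected, hit_rate

def gini(auc):
    """Gini = 2 * AUC - 1."""
    return 2 * auc - 1
```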
35. Performance Assessment
- False alarm rate = type I error rate = 1 − specificity
- Hit rate = 1 − type II error rate = sensitivity = recall = test power
- Compare F(x) = sign(f(x)) to the target y, and report:
  - Error rate = (fn + fp)/m
  - Hit rate and false alarm rate, or hit rate and precision, or hit rate and fraction selected
  - Balanced error rate (BER) = (fn/pos + fp/neg)/2 = 1 − (sensitivity + specificity)/2
  - F measure = 2·precision·recall/(precision + recall)
- Vary the decision threshold θ in F(x) = sign(f(x) + θ), and plot:
  - ROC curve: hit rate vs. false alarm rate
  - Lift curve: hit rate vs. fraction selected
  - Precision/recall curve: hit rate vs. precision
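All the scalar metrics on this slide follow from the four confusion-matrix counts; a sketch (tp, fp, tn, fn are the true/false positive/negative counts):

```python
def assess(tp, fp, tn, fn):
    """Performance metrics from the slide, from confusion-matrix counts."""
    m = tp + fp + tn + fn
    pos, neg = tp + fn, tn + fp               # actual positives / negatives
    hit_rate = tp / pos                       # sensitivity, recall
    precision = tp / (tp + fp)
    return {
        "error rate": (fn + fp) / m,
        "hit rate": hit_rate,
        "false alarm rate": fp / neg,         # 1 - specificity
        "BER": (fn / pos + fp / neg) / 2,     # balanced error rate
        "F measure": 2 * precision * hit_rate / (precision + hit_rate),
    }
```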
36. What is a Risk Functional?
- A function of the parameters of the learning machine, assessing how much it is expected to fail on a given task.
- Examples:
  - Classification:
    - Error rate: (1/m) Σ_{i=1..m} 1(F(x_i) ≠ y_i)
    - 1 − AUC (Gini index = 2·AUC − 1)
  - Regression:
    - Mean square error: (1/m) Σ_{i=1..m} (f(x_i) − y_i)²
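The two example risks as empirical estimates over a data set (a sketch; f is any decision function, X an iterable of patterns, y the targets):

```python
import numpy as np

def error_rate_risk(f, X, y):
    """Classification risk: (1/m) sum_i 1(F(x_i) != y_i), with F = sign(f)."""
    preds = np.sign([f(x) for x in X])
    return float(np.mean(preds != np.asarray(y)))

def mse_risk(f, X, y):
    """Regression risk: (1/m) sum_i (f(x_i) - y_i)^2."""
    preds = np.array([f(x) for x in X])
    return float(np.mean((preds - np.asarray(y)) ** 2))
```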
37. How to train?
- Define a risk functional R[f(x, w)].
- Optimize it w.r.t. w (gradient descent, mathematical programming, simulated annealing, genetic algorithms, etc.).
(... to be continued in the next lecture)
38. How to Train?
- Define a risk functional R[f(x, w)].
- Find a method to optimize it, typically gradient descent:
  w_j ← w_j − η ∂R/∂w_j
- or any other optimization method (mathematical programming, simulated annealing, genetic algorithms, etc.).
(... to be continued in the next lecture)
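A generic sketch of the slide's update rule; grad_R is assumed to return the gradient of the risk at w, and the defaults for eta and steps are arbitrary.

```python
import numpy as np

def gradient_descent(grad_R, w0, eta=0.01, steps=1000):
    """Repeat the slide's update: w_j <- w_j - eta * dR/dw_j."""
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(steps):
        w -= eta * grad_R(w)
    return w

# Example: least-squares risk R(w) = (1/m) ||X w - y||^2 for a linear model,
# whose gradient is (2/m) X^T (X w - y)  (X, y are hypothetical data).
# w_star = gradient_descent(lambda w: 2 * X.T @ (X @ w - y) / len(y),
#                           w0=np.zeros(X.shape[1]))
```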
39. Summary
- With linear threshold units ("neurons") we can build:
  - Linear discriminants (including Naïve Bayes)
  - Kernel methods
  - Neural networks
  - Decision trees
- The architectural hyper-parameters may include:
  - The choice of basis functions φ (features)
  - The kernel
  - The number of units
- Learning means fitting:
  - Parameters (weights)
  - Hyper-parameters
- Be aware of the fit vs. robustness tradeoff.
40. Want to Learn More?
- Pattern Classification, R. Duda, P. Hart, and D. Stork. Standard pattern recognition textbook. Limited to classification problems. Matlab code. http://rii.ricoh.com/stork/DHS.html
- The Elements of Statistical Learning: Data Mining, Inference, and Prediction, T. Hastie, R. Tibshirani, and J. Friedman. Standard statistics textbook. Includes all the standard machine learning methods for classification, regression, and clustering. R code. http://www-stat-class.stanford.edu/tibs/ElemStatLearn/
- Linear Discriminants and Support Vector Machines, I. Guyon and D. Stork. In Smola et al., Eds., Advances in Large Margin Classifiers, pages 147-169, MIT Press, 2000. http://clopinet.com/isabelle/Papers/guyon_stork_nips98.ps.gz
- Feature Extraction: Foundations and Applications, I. Guyon et al., Eds. Book for practitioners, with the datasets of the NIPS 2003 challenge, tutorials, the best performing methods, Matlab code, and teaching material. http://clopinet.com/fextract-book