1
Finding Clusters within a Class to Improve Classification Accuracy
  • Final Project
  • Yong Jae Lee
  • 4/28/08

2
Objective
  • Find clusters within an object class (e.g., car images) to improve classification accuracy

3
Approach
  • Object Representation: Scale-Invariant Feature Transform (SIFT) [Lowe 2004]
  • Image-to-Image Similarity: Proximity Distribution Kernels [Ling and Soatto 2007]
  • Clustering: Normalized Cuts [Shi and Malik 2000]
  • Classification: Support Vector Machines [Cortes and Vapnik 1995]

Kernel (image-to-image similarity) matrix over images X1-X4:

      X1   X2   X3   X4
X1   K11  K12  K13  K14
X2   K21  K22  K23  K24
X3   K31  K32  K33  K34
X4   K41  K42  K43  K44
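A minimal sketch of how this pipeline could be wired together, assuming the kernel matrix K above has already been computed (e.g., proximity distribution kernels over SIFT features). scikit-learn's SpectralClustering stands in for normalized cuts and the SVM consumes the precomputed kernel; the function names and the sub-label scheme are illustrative, not the author's code.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.svm import SVC

def split_class_into_clusters(K, labels, target_class, k=3):
    """Cluster the training images of one class (e.g., 'cars') into k sub-classes,
    using the kernel matrix K as the affinity (stand-in for normalized cuts)."""
    idx = np.where(labels == target_class)[0]
    affinity = K[np.ix_(idx, idx)]                    # within-class similarities
    clusterer = SpectralClustering(n_clusters=k, affinity="precomputed", random_state=0)
    cluster_ids = clusterer.fit_predict(affinity)
    new_labels = labels.astype(object).copy()
    new_labels[idx] = [f"{target_class}_{c}" for c in cluster_ids]  # e.g., cars_0, cars_1, cars_2
    return new_labels

def train_and_predict(K_train, train_labels, K_test_train):
    """Train a multi-class SVM on the precomputed kernel (K_test_train has shape
    (n_test, n_train)) and collapse cluster sub-labels back to the original class."""
    svm = SVC(kernel="precomputed")
    svm.fit(K_train, train_labels)
    predictions = svm.predict(K_test_train)
    return np.array([str(p).split("_")[0] for p in predictions])
```

Splitting one class into cluster-specific sub-labels lets the SVM fit a separate decision boundary per cluster; the sub-labels are merged back to the original class before scoring.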
4
Dataset 1
  • PASCAL VOC 2005
  • 4 categories: motorbikes, bicycles, people, cars
  • Train set: 214, 114, 84, 272 images (684 total)
  • Test set: 216, 114, 84, 275 images (689 total)

5
Results 1
  • Baseline (no clusters): mean accuracy 81.86
  • Clusters (k = 3): mean accuracy 82.87

Confusion matrices (row-normalized, in %; rows = true labels, columns = predicted labels; m = motorbikes, b = bicycles, p = people, c = cars):

Baseline (no clusters)
      m      b      p      c
m    94.9    0      0      5.1
b    12.3   71.9    5.26  10.5
p    10.7   11.9   32.1   45.2
c     2.9    2.6    3.6   90.9

Clusters (k = 3)
      m      b      p      c
m    95.4    0      0      4.6
b    13.2   73.7    3.5    9.7
p    10.7   11.9   34.5   45.2
c     2.2    2.2    4.0   91.6
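The reported mean accuracies appear to be the overall test accuracy, i.e., the per-class rates on the confusion-matrix diagonals weighted by the test-set class sizes (216, 114, 84, 275). A quick sanity check (a sketch, not from the slides):

```python
import numpy as np

# Per-class accuracies (confusion-matrix diagonals, in %) and test-set sizes.
test_sizes    = np.array([216, 114, 84, 275])          # m, b, p, c
baseline_diag = np.array([94.9, 71.9, 32.1, 90.9])
clusters_diag = np.array([95.4, 73.7, 34.5, 91.6])

for name, diag in [("baseline", baseline_diag), ("clusters (k = 3)", clusters_diag)]:
    overall = np.average(diag, weights=test_sizes)     # size-weighted mean
    print(f"{name}: {overall:.2f}")                    # ~81.84 and ~82.87
```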
6
Dataset 2
  • Caltech-101
  • 101 object categories
  • 9097 images (30-80 per class)
  • 30 images per class: 15 train, 15 test
  • 10 cross-validation runs (see the sketch below)
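A sketch of this evaluation protocol, assuming each run draws a fresh random 15/15 train/test split per class; `evaluate` is a hypothetical stand-in for the kernel + clustering + SVM pipeline above.

```python
import numpy as np

def run_protocol(images_by_class, evaluate, n_train=15, n_test=15, n_runs=10, seed=0):
    """Repeat random per-class train/test splits and report mean and std accuracy."""
    rng = np.random.default_rng(seed)
    accuracies = []
    for _ in range(n_runs):
        train, test = [], []
        for cls, imgs in images_by_class.items():
            order = rng.permutation(len(imgs))
            train += [(imgs[i], cls) for i in order[:n_train]]
            test  += [(imgs[i], cls) for i in order[n_train:n_train + n_test]]
        accuracies.append(evaluate(train, test))       # returns accuracy for one run
    return float(np.mean(accuracies)), float(np.std(accuracies))
```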

7
Results 2
  • Baseline (no clusters): mean accuracy 57.42 (std. dev. 1.13)
  • Clusters (k = 3): mean accuracy 59.36 (std. dev. 1.05)

8
Future work
  • Automatically determine k
  • - analyze the eigenvalues of the Laplacian of the affinity matrix [Ng et al. 2001]
  • - a significant gap between two consecutive eigenvalues indicates how many clusters there are (see the sketch below)
  • Comparison with other classifiers
  • - e.g., k-Nearest Neighbor, where a test instance's label is determined by the majority label of its nearest training instances
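A minimal sketch of the eigengap heuristic mentioned above, assuming the symmetric normalized Laplacian used by Ng et al. (2001); the cap `k_max` and the normalization details are illustrative choices.

```python
import numpy as np

def estimate_k(affinity, k_max=10):
    """Choose the number of clusters from the largest gap between consecutive
    eigenvalues of the normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    degrees = affinity.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(degrees, 1e-12))
    L = np.eye(len(affinity)) - d_inv_sqrt[:, None] * affinity * d_inv_sqrt[None, :]
    eigvals = np.sort(np.linalg.eigvalsh(L))           # ascending; first k are near 0
    gaps = np.diff(eigvals[:k_max + 1])                # gaps between consecutive eigenvalues
    return int(np.argmax(gaps)) + 1                    # largest gap follows the k-th eigenvalue
```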

9
Questions
10
References
  • H. Ling and S. Soatto, "Proximity Distribution Kernels for Geometric Context in Category Recognition," IEEE 11th International Conference on Computer Vision (ICCV), pp. 1-8, 2007.
  • D. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
  • J. Shi and J. Malik, "Normalized Cuts and Image Segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888-905, 2000.
  • C. Cortes and V. Vapnik, "Support-Vector Networks," Machine Learning, vol. 20, no. 3, pp. 273-297, 1995.
  • M. Everingham, A. Zisserman, C. K. I. Williams, L. Van Gool, et al., "The 2005 PASCAL Visual Object Classes Challenge," in Machine Learning Challenges: Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Textual Entailment, eds. J. Quinonero-Candela, I. Dagan, B. Magnini, and F. d'Alche-Buc, LNAI 3944, pp. 117-176, Springer-Verlag, 2006.
  • A. Ng, M. Jordan, and Y. Weiss, "On Spectral Clustering: Analysis and an Algorithm," in Advances in Neural Information Processing Systems 14, 2001.
  • L. Fei-Fei, R. Fergus, and P. Perona, "Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories," in Proceedings of the Workshop on Generative-Model Based Vision, Washington, DC, June 2004.