February 26 — presentation transcript
1
Content-Based Image Retrieval
Natalia Vassilieva natalia@ntc-it.ru
Ilya Markov ilya.markov@gmail.com
Alexander Dolnik alexander.dolnik@gmail.com
  • Saint-Petersburg State University

2
Our team
  • Natalia Vassilieva
  • Alexander Dolnik
  • Ilya Markov
  • Maria Teplyh
  • Maria Davydova
  • Dmitry Shubakov
  • Alexander Yaremchuk

3
General problems
  • Semantic gap between the system's and the human's
    mode of image analysis
  • Specifics of human visual perception
  • How to capture the semantics of an image
  • Signature calculation and response time
  • Combining different features and metrics

4
Image retrieval system
How to minimize the semantic gap?
  • General goal: an image retrieval system
  • that is able to process natural language queries
  • that is able to search among annotated and
    non-annotated images
  • that takes human visual perception into account
  • that processes various features (color, texture,
    shapes)
  • that uses relevance feedback for query
    refinement and adaptive search

5
CBIR: Traditional approach
6
Research directions
  • Color space partition according to human visual
    perception
  • Correspondence between low-level features and
    semantics: auto-annotation
  • Fusion of retrieval result sets
  • Adaptive search: color and texture fusion
  • Using relevance feedback

7
Human visual perception: colors
Experiments with color partition of HSV space:
  • (H9, S2, V3): 72
  • (H11, S2, V3): 66
  • (H13, S2, V3): 63
  • (H15, S2, V3): 60
Compare partitions of different spaces (RGB, HSV,
Lab)
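A partition such as (H9, S2, V3) quantises each HSV pixel into one of 9 × 2 × 3 cells. A minimal sketch of such a quantiser (the bin layout and uniform cell boundaries are illustrative assumptions, not the exact partition used in the experiments):

```python
import colorsys

def hsv_bin(r, g, b, nh=9, ns=2, nv=3):
    """Quantise an RGB pixel (channels in [0, 1]) into one of
    nh*ns*nv uniform HSV cells, e.g. the (H9 S2 V3) partition.
    Uniform cell boundaries are an illustrative assumption."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    ih = min(int(h * nh), nh - 1)   # hue slice
    is_ = min(int(s * ns), ns - 1)  # saturation slice
    iv = min(int(v * nv), nv - 1)   # value slice
    return (ih * ns + is_) * nv + iv

print(hsv_bin(1.0, 0.0, 0.0))  # pure red -> bin 5 (first hue slice)
```

A color histogram over such bins is then a vector of nh·ns·nv counts, one per cell.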
8
Research directions
  • Color space partition according to human visual
    perception
  • Correspondence between low-level features and
    semantics: auto-annotation
  • Fusion of retrieval result sets
  • Adaptive search: color and texture fusion
  • Using relevance feedback

9
Auto-annotation
  • Training set selection
  • Color feature extraction for every image in the
    set
  • Similarity calculation for every pair of images
    in the set
  • Training set clustering
  • Basis color feature calculation: one per
    cluster
  • Definition of basis lexical features
  • Correspondence between basis color features and
    basis lexical features
  • Natalia Vassilieva, Boris Novikov. Establishing
    a correspondence between low-level features and
    semantics of fixed images. In Proceedings of the
    Seventh National Russian Research Conference
    RCDL'2005, Yaroslavl, October 04-06, 2005.
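The pipeline above can be sketched as follows. The mean-RGB feature, the plain k-means clustering, and the union-of-keywords lexical basis are illustrative stand-ins, not the paper's exact choices:

```python
import numpy as np

def mean_color(image):
    """Toy color feature: mean of each RGB channel."""
    return image.reshape(-1, 3).mean(axis=0)

def kmeans(features, k, iters=20, seed=0):
    """Minimal k-means; returns cluster centroids and labels."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign every feature to its nearest centroid.
        dists = np.linalg.norm(features[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = features[labels == j].mean(axis=0)
    return centroids, labels

# Hypothetical training set: images with keyword annotations.
rng = np.random.default_rng(1)
images = [rng.random((8, 8, 3)) for _ in range(12)]
keywords = [{"sky"}, {"sea"}] * 6

features = np.array([mean_color(im) for im in images])
# Basis color features: one centroid per cluster.
basis_features, labels = kmeans(features, k=2)
# Basis lexical features: keywords of each cluster's members.
basis_lexical = [set().union(*(keywords[i]
                               for i in np.flatnonzero(labels == j)))
                 for j in range(2)]
```

Each (centroid, keyword set) pair is one entry of the learned correspondence; a new image is annotated with the keywords of its nearest centroid.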

10
Examples (annotated images)
  • city, night, road, river
  • snow, winter, sky, mountain
11
Retrieve by textual query
  • The image database is divided into clusters
  • Search for the appropriate cluster by textual query
    using cluster annotations
  • Browse the images from the appropriate cluster
  • Use relevance feedback to refine the query
  • Use relevance feedback to reorganize the clusters
    and assign new annotations
  • N. Vassilieva and B. Novikov. A Similarity
    Retrieval Algorithm for Natural Images. In Proc. of
    the Baltic DBIS'2004, Riga, Latvia, Scientific
    Papers University of Latvia, June 2004.
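The cluster-lookup step can be sketched as follows; the keyword-overlap score and the toy cluster index are illustrative assumptions:

```python
# Hypothetical cluster index: annotation keywords -> member image ids.
clusters = {
    ("city", "night", "road"): [3, 7, 9],
    ("snow", "winter", "mountain"): [1, 4],
}

def search(query_terms):
    """Pick the cluster whose annotation overlaps the query the most,
    then return its images for browsing."""
    best = max(clusters,
               key=lambda ann: len(set(ann) & set(query_terms)))
    return clusters[best]

print(search(["winter", "sky"]))  # -> [1, 4]
```

Relevance feedback would then re-rank within the chosen cluster or, over time, move images between clusters and update the annotations.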

12
Feature extraction: color
  • Color histograms
  • Color statistical approach
  • First moments of the color distribution (per
    channel) and covariances
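A minimal sketch of the statistical approach, assuming the feature is the per-channel moments plus the inter-channel covariances (the exact set of moments used is not spelled out on the slide):

```python
import numpy as np

def color_moments(image):
    """Compact color signature: per-channel mean, standard deviation
    and skewness, plus the three inter-channel covariances."""
    pixels = image.reshape(-1, 3).astype(float)
    mean = pixels.mean(axis=0)
    std = pixels.std(axis=0)
    centered = pixels - mean
    # Guard against zero std on flat channels.
    skew = (centered ** 3).mean(axis=0) / np.maximum(std, 1e-12) ** 3
    cov = np.cov(pixels, rowvar=False)          # 3x3 channel covariance
    covariances = cov[np.triu_indices(3, k=1)]  # 3 off-diagonal entries
    return np.concatenate([mean, std, skew, covariances])

feature = color_moments(np.random.default_rng(0).random((16, 16, 3)))
print(feature.shape)  # (12,)
```

Two images are then compared by a distance between their 12-dimensional signatures, which is far cheaper than comparing full histograms.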

13
Feature extraction: texture
  • Texture: use the independent component filters that
    result from ICA

H. Le Borgne, A. Guérin-Dugué, A. Antoniadis.
Representation of images for classification
with independent features.
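One common way to turn such a filter bank into a texture feature is the average absolute response per filter; a sketch under that assumption (the random filters stand in for ICA-learned ones, and the pooling choice is illustrative):

```python
import numpy as np

def filter_responses(gray, filters):
    """Texture feature: mean absolute response of a grayscale image
    to each filter in a bank (valid-mode 2D correlation)."""
    h, w = gray.shape
    feats = []
    for f in filters:
        fh, fw = f.shape
        resp = np.array([[np.sum(gray[i:i + fh, j:j + fw] * f)
                          for j in range(w - fw + 1)]
                         for i in range(h - fh + 1)])
        feats.append(np.abs(resp).mean())
    return np.array(feats)

rng = np.random.default_rng(0)
bank = [rng.standard_normal((5, 5)) for _ in range(4)]  # stand-in filters
feature = filter_responses(rng.random((16, 16)), bank)  # 4 values
```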
14
Research directions
  • Color space partition according to human visual
    perception
  • Correspondence between low-level features and
    semantics: auto-annotation
  • Fusion of retrieval result sets
  • Adaptive search: color and texture fusion
  • Using relevance feedback

15
Fusion of retrieval result sets
Fusion of weighted lists with ranked elements:
L1: (x11, r11), (x12, r12), …, (x1n, r1n)
L2: (x21, r21), (x22, r22), …, (x2k, r2k)
…
Lm: (xm1, rm1), (xm2, rm2), …, (xml, rml)
  • How to merge fairly?
  • How to merge efficiently?
  • How to merge effectively?

16
Ranked lists fusion: application area
  • Supplement fusion
  • union of textual results (textual viewpoints)
  • Collage fusion
  • combine texture (texture viewpoint) and color
    results (color viewpoint)
  • different color methods (different color
    viewpoints)

17
Ranked lists fusion: application area
  • Search by textual query in a partly annotated image
    database

(Diagram: a textual query is answered by annotations
and yields the result.)

18
Three main native fusion properties
  • commutative property
  • associative property
  • the resulting rank of an object is independent of
    the ranks of other objects
  • Examples:
  • COMBSUM, COMBMIN, COMBMAX merge functions
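The three classical merge functions can be sketched as follows; treating a document missing from a run as having score 0 is a common convention, assumed here:

```python
def combine(runs, how):
    """Classical rank-fusion functions over several runs.
    runs: list of {doc_id: score} dicts; a doc absent from a run
    contributes score 0 (an assumed convention)."""
    docs = set().union(*runs)
    if how == "sum":   # COMBSUM: sum of scores over the runs
        return {d: sum(r.get(d, 0.0) for r in runs) for d in docs}
    if how == "min":   # COMBMIN: minimum score
        return {d: min(r.get(d, 0.0) for r in runs) for d in docs}
    if how == "max":   # COMBMAX: maximum score
        return {d: max(r.get(d, 0.0) for r in runs) for d in docs}
    raise ValueError(how)

runs = [{"a": 0.9, "b": 0.4}, {"a": 0.2, "c": 0.8}]
fused = combine(runs, "sum")  # "a" scores 0.9 + 0.2
```

All three are commutative and associative in the runs, and each document's fused score depends only on that document's own scores, matching the three properties above.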

19
Additional native fusion properties
  • normalization (delimitation) property
  • conic property
  • the attraction of the current object toward the
    merged result depends on the value of a function
    g(rank, weight) ≥ 0
  • snare condition

20
Conic properties: function g
  • g monotonically decreases with the weight
    parameter fixed
  • g monotonically decreases with the rank
    parameter fixed
  • g must satisfy the boundary conditions:
  • g(0, w) > 0 if w ≠ 0
  • g(r, 0) = 0

21
Ranked lists fusion: formulas
  • Fusion formula (the formula and its definitions
    were shown as an image on the original slide)

22
Ranked lists fusion: algorithm
  • All lists are sorted by object id
  • Step-by-step merging of the lists (by object id
    priority)
  • If object_id1 ≠ object_id2, then some object
    is absent from one of the lists

(Diagram: the current positions object_id1 in List 1
and object_id2 in List 2 are merged into the result
list.)
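The id-priority merge can be sketched for two lists as below; combining the two ranks by summation is an illustrative placeholder for the slide's actual fusion formula:

```python
def merge_by_id(list1, list2):
    """Merge two runs of (object_id, rank) pairs sorted by object id.
    When the current ids differ, the smaller id is absent from the
    other list and is emitted with only its own rank."""
    i = j = 0
    result = []
    while i < len(list1) and j < len(list2):
        id1, r1 = list1[i]
        id2, r2 = list2[j]
        if id1 == id2:
            result.append((id1, r1 + r2))  # both lists contribute
            i += 1
            j += 1
        elif id1 < id2:
            result.append((id1, r1))       # absent from list 2
            i += 1
        else:
            result.append((id2, r2))       # absent from list 1
            j += 1
    result.extend(list1[i:])               # leftover tails
    result.extend(list2[j:])
    return result

merged = merge_by_id([(1, 0.9), (3, 0.5)], [(2, 0.7), (3, 0.1)])
# ids 1 and 2 pass through; id 3 combines both ranks
```

Because each list is read once in id order, the merge is linear in the total list length.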
23
Ranked lists fusion: experiments
Necessary conditions
  • A viewpoint should provide some valuable
    information: the retrieval system's performance
    should at least be better than that of a random
    system.
  • Information is not fully duplicated: there should
    be partial disagreement among the viewpoints.

24
Ranked lists fusion: experiments
Parameters
  • R_overlap / N_overlap conditions
  • Intercomparison of methods
  • Classical methods: COMBSUM, COMBMIN, COMBMAX
  • Probabilistic methods: probFuse
  • Random method: random values that satisfy the
    merge properties.

25
Research directions
  • Color space partition according to human visual
    perception
  • Correspondence between low-level features and
    semantics: auto-annotation
  • Fusion of retrieval result sets
  • Adaptive search: color and texture fusion
  • Using relevance feedback

26
Adaptive merge: color and texture
Dist(I, Q) = a·C(I, Q) + (1 − a)·T(I, Q), where
C(I, Q) is the color distance between I and Q,
T(I, Q) is the texture distance between I and Q,
and 0 ≤ a ≤ 1.
  • Hypothesis
  • The optimal a depends on features of the query Q.
    It is possible to distinguish common features for
    images that share the same best a.
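The adaptive distance is a one-liner; the particular distances and the choice of a in the example are purely illustrative:

```python
def dist(color_dist, texture_dist, a):
    """Adaptive distance Dist(I, Q) = a*C(I, Q) + (1 - a)*T(I, Q),
    mixing color and texture distances with weight 0 <= a <= 1."""
    assert 0.0 <= a <= 1.0
    return a * color_dist + (1.0 - a) * texture_dist

# For a colorful, weakly textured query one might pick a closer to 1
# (an illustrative choice; the hypothesis above is that the best a
# can be predicted from features of the query itself).
d = dist(0.2, 0.8, a=0.75)  # 0.75*0.2 + 0.25*0.8
```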

27
Adaptive merge: experiments
28
Estimation tool
  • Web application
  • Provides interfaces for developers of search
    methods
  • Uses common measures to estimate search methods:
  • Precision
  • Pseudo-recall
  • Collects users' opinions → builds a test database
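The two measures can be sketched as follows. Precision is standard; for pseudo-recall, the assumption here is that recall is computed against the pool of relevant images found by any of the compared methods, since true recall would need exhaustive judgments:

```python
def precision(retrieved, relevant):
    """Fraction of the retrieved images that are judged relevant."""
    return len(set(retrieved) & relevant) / len(retrieved)

def pseudo_recall(retrieved, relevant, pooled_relevant):
    """Recall against the pooled set of relevant images found by any
    compared method (an assumed definition of 'pseudo-recall')."""
    return len(set(retrieved) & relevant) / len(pooled_relevant)

relevant = {1, 2, 5}
p = precision([1, 2, 3, 4], relevant)                     # 2/4 = 0.5
r = pseudo_recall([1, 2, 3, 4], relevant, pooled_relevant={1, 2, 5})
```

The collected user opinions supply the relevance judgments that both measures need.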

29
Datasets
  • Own photo collection (2000 images)
  • Subset from own photo collection (150 images)
  • Flickr collection (15000, 1.5 mln images)
  • Corel photoset (1100 images)

30
Research directions
  • Color space partition according to human visual
    perception
  • Correspondence between low-level features and
    semantics: auto-annotation
  • Fusion of retrieval result sets
  • Adaptive search: color and texture fusion
  • Using relevance feedback