Title: February 26
1. Content-Based Image Retrieval
Natalia Vassilieva natalia_at_ntc-it.ru
Ilya Markov ilya.markov_at_gmail.com
Alexander Dolnik alexander.dolnik_at_gmail.com
- Saint-Petersburg State University
2. Our team
- Natalia Vassilieva
- Alexander Dolnik
- Ilya Markov
- Maria Teplyh
- Maria Davydova
- Dmitry Shubakov
- Alexander Yaremchuk
3. General problems
- Semantic gap between the system's and the human's way of analyzing images
- Specifics of human visual perception
- How to capture the semantics of an image
- Signature calculation and response time
- Combining different features and metrics
4. Image retrieval system
How to minimize the semantic gap?
- General goal: an image retrieval system
- that is able to process natural-language queries
- that is able to search among annotated and non-annotated images
- that takes into account human visual perception
- that processes various features (color, texture, shapes)
- that uses relevance feedback for query refinement and adaptive search
5. CBIR: Traditional approach
6. Research directions
- Color space partition according to human visual perception
- Correspondence between low-level features and semantics: auto-annotation
- Fusion of retrieval result sets
- Adaptive search: color and texture fusion
- Using relevance feedback
7. Human visual perception: colors
Experiments with color partitions of the HSV space:
- (H=9, S=2, V=3): 72
- (H=11, S=2, V=3): 66
- (H=13, S=2, V=3): 63
- (H=15, S=2, V=3): 60
Compare partitions of different spaces (RGB, HSV, Lab)
8. Research directions
- Color space partition according to human visual perception
- Correspondence between low-level features and semantics: auto-annotation
- Fusion of retrieval result sets
- Adaptive search: color and texture fusion
- Using relevance feedback
9. Auto-annotation
- Training set selection
- Color feature extraction for every image from the set
- Similarity calculation for every pair of images from the set
- Training set clustering
- Basis color features calculation, one per cluster
- Definition of basis lexical features
- Correspondence between basis color features and basis lexical features (a pipeline sketch follows below)

- Natalia Vassilieva, Boris Novikov. Establishing a Correspondence between Low-Level Features and Semantics of Fixed Images. In Proceedings of the Seventh National Russian Research Conference RCDL'2005, Yaroslavl, October 4-6, 2005.
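A minimal sketch of this pipeline, assuming a generic per-channel color histogram as the color feature and k-means as the clustering step; the actual feature, similarity measure, clustering method, and label assignment in the cited paper may differ, and the per-cluster word lists are supplied by hand here.

```python
import numpy as np
from sklearn.cluster import KMeans

def color_histogram(image, bins=8):
    """Per-channel color histogram, concatenated and normalized.
    Stand-in for the talk's color feature."""
    hist = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
            for c in range(3)]
    h = np.concatenate(hist).astype(float)
    return h / h.sum()

def build_basis(training_images, n_clusters=10):
    """Cluster the training set; the cluster centroids play the role
    of the basis color features (one per cluster)."""
    features = np.stack([color_histogram(img) for img in training_images])
    return KMeans(n_clusters=n_clusters, n_init=10).fit(features)

def annotate(image, km, cluster_labels):
    """Map an unseen image to the lexical labels of its nearest cluster.
    cluster_labels: dict cluster_id -> list of words (basis lexical features)."""
    cluster = int(km.predict(color_histogram(image)[None, :])[0])
    return cluster_labels.get(cluster, [])
```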
10. Examples
Example auto-annotations:
- city, night, road, river
- snow, winter, sky, mountain
11. Retrieve by textual query
- The image database is divided into clusters
- Search for the appropriate cluster by textual query, using the clusters' annotations
- Browse the images from the appropriate cluster
- Use relevance feedback to refine the query
- Use relevance feedback to reorganize the clusters and assign new annotations (a sketch of this loop follows below)

- N. Vassilieva and B. Novikov. A Similarity Retrieval Algorithm for Natural Images. In Proceedings of the Baltic DBIS'2004, Riga, Latvia, Scientific Papers University of Latvia, June 2004.
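A hedged sketch of the retrieval loop above: pick the cluster whose annotation overlaps the textual query most, return its images, and fold words from user-approved results back into the cluster's annotation. The word-overlap scoring and the feedback step are illustrative placeholders, not the algorithm from the cited paper.

```python
def best_cluster(query, cluster_annotations):
    """cluster_annotations: dict cluster_id -> set of annotation words.
    Returns the cluster whose annotation shares the most words with the query."""
    terms = set(query.lower().split())
    return max(cluster_annotations,
               key=lambda c: len(terms & cluster_annotations[c]))

def retrieve_by_text(query, clusters, cluster_annotations):
    """clusters: dict cluster_id -> list of image ids.
    Returns the chosen cluster and its images for browsing."""
    c = best_cluster(query, cluster_annotations)
    return c, clusters[c]

def apply_feedback(cluster_id, relevant_words, cluster_annotations):
    """Naive relevance feedback: merge words taken from images the user
    marked relevant into the cluster's annotation."""
    cluster_annotations[cluster_id] |= set(relevant_words)
```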
12. Feature extraction: color
- Color: statistical approach
- First moments of the color distribution (per channel) and covariances (see the sketch below)
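One plausible reading of this descriptor, sketched below: per-channel means and variances (the first moments) plus the channel covariance matrix as the "covariations"; the exact set of moments used in the talk is not spelled out.

```python
import numpy as np

def color_moments(image):
    """image: H x W x 3 array. Returns per-channel means and variances
    plus the upper triangle of the 3 x 3 channel covariance matrix."""
    pixels = image.reshape(-1, 3).astype(float)
    means = pixels.mean(axis=0)            # first moment per channel
    variances = pixels.var(axis=0)         # second central moment per channel
    cov = np.cov(pixels, rowvar=False)     # covariances between channels
    return np.concatenate([means, variances, cov[np.triu_indices(3)]])
```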
13. Feature extraction: texture
- Texture: use independent component filters obtained from ICA (see the sketch below)

H. Le Borgne, A. Guerin-Dugue, A. Antoniadis. Representation of Images for Classification with Independent Features.
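A rough sketch of how such filters could be used as a texture feature: convolve the grayscale image with each ICA-derived kernel and keep the mean absolute response per filter. The filters themselves are assumed to be given, and the response statistic is a simplification of what the cited paper describes.

```python
import numpy as np
from scipy.signal import convolve2d

def texture_features(gray_image, ica_filters):
    """gray_image: H x W array. ica_filters: list of 2-D kernels learned
    by ICA on natural image patches (assumed given).
    Returns one mean-absolute-response value per filter."""
    return np.array([
        np.abs(convolve2d(gray_image, f, mode='valid')).mean()
        for f in ica_filters
    ])
```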
14. Research directions
- Color space partition according to human visual perception
- Correspondence between low-level features and semantics: auto-annotation
- Fusion of retrieval result sets
- Adaptive search: color and texture fusion
- Using relevance feedback
15. Fusion of retrieval result sets
Fusion of weighted lists with ranked elements:

w_1: (x_11, r_11), (x_12, r_12), ..., (x_1n, r_1n)
w_2: (x_21, r_21), (x_22, r_22), ..., (x_2k, r_2k)
...
w_m: (x_m1, r_m1), (x_m2, r_m2), ..., (x_ml, r_ml)

- How to merge fairly?
- How to merge efficiently?
- How to merge effectively?
16. Ranked lists fusion: application area
- Supplement fusion
- union of textual results (textual viewpoints)
- Collage fusion
- combining texture results (texture viewpoint) with color results (color viewpoint)
- combining results of different color methods (different color viewpoints)
17. Ranked lists fusion: application area
- Search by textual query in a partly annotated image database

[Diagram: textual query -> search by annotations -> result]
18. Three main native fusion properties
- Commutative property
- Associative property
- The rank of an object in the fused result does not depend on the ranks of other objects
- Examples: the COMBSUM, COMBMIN, COMBMAX merge functions (sketched below)
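The classical merge functions named above, sketched over per-viewpoint score dictionaries; treating a missing object as simply absent from that viewpoint's list is one common convention, not necessarily the one used in the talk.

```python
def combine(score_lists, how='sum'):
    """score_lists: list of dicts object_id -> score (one dict per viewpoint).
    how: 'sum' (COMBSUM), 'min' (COMBMIN) or 'max' (COMBMAX)."""
    merged = {}
    for scores in score_lists:
        for obj, s in scores.items():
            merged.setdefault(obj, []).append(s)
    reduce_fn = {'sum': sum, 'min': min, 'max': max}[how]
    fused = {obj: reduce_fn(vals) for obj, vals in merged.items()}
    # Rank by fused score, best first
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
```

Note that each object's fused value is computed from its own scores only, which matches the third property above.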
19. Additional native fusion properties
- Normalization (delimitation) property
- Conic property
- the attraction of the current object towards the fused result depends on the value of a function g(rank, weight) >= 0
- Snare condition
20. Conic properties: the function g
- g monotonically decreases when the weight parameter is fixed
- g monotonically decreases when the rank parameter is fixed
- g must satisfy the boundary conditions (a toy example is sketched below):
- g(0, w) > 0 if w != 0
- g(r, 0) = 0
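As a toy illustration (not taken from the talk), g(r, w) = w / (1 + r) satisfies both boundary conditions and decreases with rank for a fixed weight; whether it meets every monotonicity requirement intended here depends on definitions that are not reproduced in this transcript.

```python
def g(rank, weight):
    """Toy attraction function: positive when the weight is non-zero,
    zero when the weight is zero, decreasing in rank for a fixed weight."""
    return weight / (1.0 + rank)

assert g(0, 0.5) > 0      # g(0, w) > 0 if w != 0
assert g(3, 0.0) == 0.0   # g(r, 0) = 0
```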
21. Ranked lists fusion: formulas
22. Ranked lists fusion: algorithm
- All lists are sorted by object id
- The lists are merged step by step, in object-id order (see the sketch below)
- If object_id1 != object_id2, then some object is absent in one of the lists

[Diagram: two sorted lists with current pointers object_id1 and object_id2 being merged into the result list]
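A sketch of the merge described above, assuming each list is already sorted by object id, that each entry contributes g(rank, weight), and that contributions are summed COMBSUM-style; the actual merge formulas from slide 21 are not reproduced in this transcript, so the accumulation rule is an assumption.

```python
import heapq

def merge_sorted_lists(lists, weights, g):
    """lists: per viewpoint, a list of (object_id, rank) pairs sorted by object_id.
    weights: one weight per viewpoint.  g: contribution function g(rank, weight).
    Performs a k-way merge over object ids and sums contributions;
    an object missing from a list simply contributes nothing from that list."""
    tagged = ([(obj, rank, w) for obj, rank in lst]
              for lst, w in zip(lists, weights))
    fused = {}
    for obj, rank, w in heapq.merge(*tagged, key=lambda t: t[0]):
        # Entries with equal object ids arrive consecutively; an id absent
        # from some list never shows up for that viewpoint.
        fused[obj] = fused.get(obj, 0.0) + g(rank, w)
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
```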
23. Ranked lists fusion: experiments
Necessary conditions:
- Each viewpoint should provide some valuable information: the retrieval system's performance should at least be better than that of a random system.
- Information is not fully duplicated: there should be partial disagreement among the viewpoints.
24. Ranked lists fusion: experiments
Parameters:
- R_overlap and N_overlap conditions
- Intercomparison of methods:
- Classical methods: COMBSUM, COMBMIN, COMBMAX
- Probabilistic methods: probFuse
- Random method: random values that satisfy the merge properties
25. Research directions
- Color space partition according to human visual perception
- Correspondence between low-level features and semantics: auto-annotation
- Fusion of retrieval result sets
- Adaptive search: color and texture fusion
- Using relevance feedback
26. Adaptive merge: color and texture
Dist(I, Q) = a * C(I, Q) + (1 - a) * T(I, Q), where
C(I, Q) is the color distance between I and Q,
T(I, Q) is the texture distance between I and Q,
0 <= a <= 1
- Hypothesis
- The optimal a depends on the features of the query Q. It is possible to distinguish common features of images that have the same best a (see the sketch below).
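A minimal sketch of this combined distance, plus a brute-force grid search for the best a for a single query given relevance judgments; predicting the optimal a from the query's own features, as the hypothesis suggests, is not shown, and the color/texture distances are placeholders.

```python
import numpy as np

def dist(color_dist, texture_dist, a):
    """Dist(I, Q) = a * C(I, Q) + (1 - a) * T(I, Q), with 0 <= a <= 1."""
    return a * color_dist + (1 - a) * texture_dist

def best_alpha(color_dists, texture_dists, relevant, top_k=20):
    """Grid-search the a that maximizes precision at top_k for one query.
    color_dists / texture_dists: dict image_id -> distance to the query.
    relevant: set of image ids judged relevant to the query."""
    best_a, best_p = 0.0, -1.0
    for a in np.linspace(0.0, 1.0, 21):
        ranked = sorted(color_dists,
                        key=lambda i: dist(color_dists[i], texture_dists[i], a))
        p = len(set(ranked[:top_k]) & relevant) / top_k
        if p > best_p:
            best_a, best_p = a, p
    return best_a, best_p
```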
27. Adaptive merge: experiments
28. Estimation tool
- Web application
- Provides interfaces for developers of search methods
- Uses common measures to estimate search methods:
- Precision
- Pseudo-recall (see the sketch below)
- Collects users' opinions => builds a test database
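Precision is standard; "pseudo-recall" is not defined in the slides, so the sketch below assumes one common pooling approximation in which the union of relevant images found by all compared methods stands in for the unknown full relevant set.

```python
def precision(retrieved, relevant):
    """Fraction of retrieved images that are relevant. relevant: a set."""
    retrieved = list(retrieved)
    if not retrieved:
        return 0.0
    return len(set(retrieved) & relevant) / len(retrieved)

def pseudo_recall(retrieved, pooled_relevant):
    """Recall against a pooled stand-in for the full relevant set: the union
    of relevant images found by all compared methods (an assumption)."""
    if not pooled_relevant:
        return 0.0
    return len(set(retrieved) & pooled_relevant) / len(pooled_relevant)
```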
29. Datasets
- Own photo collection (2,000 images)
- Subset of the own photo collection (150 images)
- Flickr collection (15,000 / 1.5 mln images)
- Corel photoset (1,100 images)
30. Research directions
- Color space partition according to human visual perception
- Correspondence between low-level features and semantics: auto-annotation
- Fusion of retrieval result sets
- Adaptive search: color and texture fusion
- Using relevance feedback