1
Learning Techniques for Video Shot Detection
  • by M. Nithya

Under the guidance of Prof. Sharat Chandran
2
Outline
  • Introduction
  • Types of Shot-break
  • Previous approaches to Shot Detection
  • General approaches: pixel comparison, histogram comparison
  • Recent work: Temporal slice analysis, Cue Video
  • Our Proposed approaches
    • Supervised Learning using the AdaBoost algorithm
    • Unsupervised Learning using clustering
    • Semi-supervised Learning combining AdaBoost and clustering
  • Conclusion

3
Introduction
  • 9,000 hours of motion pictures are produced around the world every year.
  • 3,000 television stations broadcasting twenty-four hours a day produce eight million hours of video per year.
  • Problems
    • Searching the video
    • Retrieving the relevant information
  • Solution
    • Break the video down into smaller, manageable parts called shots

4
What is a Shot?
  • A shot is the result of uninterrupted camera work
  • A shot-break is the transition from one shot to the next

5
Types of Shot-Break
6
Shot-Break
  • Hard Cut
  • Fade
  • Dissolve
  • Wipe
7
Hard Cut
8
Fade
9
Dissolve
10
Wipe
11
Shot Detection Methods
12
Shot Detection Methods
  • Goal
    • To segment video into shots
  • Two ways
    • Cluster similar frames to identify shots
    • Find frames that differ significantly and declare a shot-break there

13
Previous Approaches to Shot Detection
  • General Approaches
    • Pixel Comparison
    • Block-based approach
    • Histogram Comparison
    • Edge Change Ratio
  • Recent Work
    • Temporal Slice Analysis
    • Cue Video

14
Pixel Comparison
  • Compare corresponding pixels of Frame N and Frame N+1 over an X-by-Y frame
  • $D(i, i+1) = \frac{1}{XY} \sum_{x=1}^{X} \sum_{y=1}^{Y} \left| P_i(x,y) - P_{i+1}(x,y) \right|$
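A minimal sketch of this pixel comparison in Python, assuming OpenCV and NumPy are available; the grayscale conversion and the threshold value are illustrative assumptions, not taken from the slides.

```python
# Pixel-comparison sketch: mean absolute difference between consecutive frames.
import cv2
import numpy as np

def pixel_difference(frame_a: np.ndarray, frame_b: np.ndarray) -> float:
    """Mean absolute pixel difference D(i, i+1) over an X-by-Y frame."""
    diff = np.abs(frame_a.astype(np.float32) - frame_b.astype(np.float32))
    return float(diff.mean())  # equals (1/XY) * sum |P_i - P_{i+1}|

def detect_cuts(video_path: str, threshold: float = 30.0):
    """Declare a shot-break wherever D exceeds a (hypothetical) threshold."""
    cap = cv2.VideoCapture(video_path)
    cuts, prev, index = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None and pixel_difference(prev, gray) > threshold:
            cuts.append(index)
        prev, index = gray, index + 1
    cap.release()
    return cuts
```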
15
Block Based Approach
  • Divide Frame N and Frame N+1 into corresponding blocks
  • Compare statistics of the corresponding blocks
  • Count the number of significantly different blocks (see the sketch below)
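A sketch of the block-based comparison, again assuming NumPy; the block size, the per-block statistic (mean intensity) and both thresholds are illustrative choices, not values from the slides.

```python
# Block-based comparison sketch: count blocks whose statistics changed.
import numpy as np

def block_difference_count(frame_a: np.ndarray, frame_b: np.ndarray,
                           block: int = 16, block_thresh: float = 15.0) -> int:
    """Count blocks whose mean intensity differs significantly between frames."""
    h, w = frame_a.shape[:2]
    count = 0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            mean_a = frame_a[y:y + block, x:x + block].mean()
            mean_b = frame_b[y:y + block, x:x + block].mean()
            if abs(mean_a - mean_b) > block_thresh:
                count += 1
    return count  # declare a shot-break if count exceeds a second threshold
```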
16
Histogram Comparison
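A hedged sketch of a common histogram-comparison measure (the sum of absolute differences between the normalized histograms of consecutive frames), assuming OpenCV and single-channel frames; the bin count is an arbitrary choice.

```python
# Histogram-comparison sketch for grayscale frames.
import cv2
import numpy as np

def histogram_difference(frame_a: np.ndarray, frame_b: np.ndarray,
                         bins: int = 64) -> float:
    """Sum of absolute differences between normalized grayscale histograms."""
    h_a = cv2.calcHist([frame_a], [0], None, [bins], [0, 256]).ravel()
    h_b = cv2.calcHist([frame_b], [0], None, [bins], [0, 256]).ravel()
    h_a /= h_a.sum() + 1e-9
    h_b /= h_b.sum() + 1e-9
    return float(np.abs(h_a - h_b).sum())  # large values suggest a shot-break
```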
17
Edge Change Ratio
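A sketch of one common formulation of the edge change ratio (the larger of the entering-edge and exiting-edge fractions between consecutive frames), assuming OpenCV and grayscale frames; the Canny thresholds and dilation radius are illustrative.

```python
# Edge change ratio sketch: fraction of edge pixels appearing or disappearing.
import cv2
import numpy as np

def edge_change_ratio(frame_a: np.ndarray, frame_b: np.ndarray,
                      dilate_radius: int = 2) -> float:
    """ECR = max(entering edges / edges in b, exiting edges / edges in a)."""
    edges_a = cv2.Canny(frame_a, 100, 200) > 0
    edges_b = cv2.Canny(frame_b, 100, 200) > 0
    kernel = np.ones((2 * dilate_radius + 1, 2 * dilate_radius + 1), np.uint8)
    dil_a = cv2.dilate(edges_a.astype(np.uint8), kernel) > 0
    dil_b = cv2.dilate(edges_b.astype(np.uint8), kernel) > 0
    sigma_a, sigma_b = edges_a.sum(), edges_b.sum()
    if sigma_a == 0 or sigma_b == 0:
        return 0.0
    entering = np.logical_and(edges_b, ~dil_a).sum()  # new edge pixels in b
    exiting = np.logical_and(edges_a, ~dil_b).sum()   # edge pixels gone from a
    return float(max(entering / sigma_b, exiting / sigma_a))
```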
18
Comparison
  • Pixel Comparison
    • Advantages: Simple, easy to implement
    • Disadvantages: Computationally heavy; very sensitive to moving objects or camera motion
  • Block-based
    • Advantages: Performs better than pixel comparison
    • Disadvantages: Cannot identify dissolve, fade, or fast-moving objects
  • Histogram Comparison
    • Advantages: Better performance; detects hard-cut, fade, wipe and dissolve
    • Disadvantages: Fails if two successive shots have the same histogram; cannot distinguish fast object or camera motion
  • Edge Change Ratio
    • Advantages: Detects hard-cut, fade, wipe and dissolve
    • Disadvantages: Computationally heavy; fails when there is a large amount of motion
19
Problems with previous approaches
  • Cannot distinguish shot-breaks from
    • Fast object motion or camera motion
    • Fast illumination changes
    • Reflections from glass or water
    • Flash photography
  • Fails to detect long and short gradual transitions

20
Temporal Slice Analysis
21
Temporal Slice Analysis
22
Cue Video
23
Temporal Slice Analysis
24
Cue Video
  • Graph-based approach
  • Each frame maps to a node
  • Nodes are connected up to 1, 3 or 7 frames apart
  • Each node is associated with
    • a color histogram
    • an edge histogram
  • Edge weights represent the similarity measure between the two frames
  • Graph partitioning segments the video into shots (see the sketch below)
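A sketch of the graph construction just described, assuming NetworkX; frames are represented here by precomputed, normalized feature histograms, and histogram intersection is an assumed similarity measure (the slides do not specify one).

```python
# Frame-similarity graph sketch: one node per frame, edges 1, 3 or 7 frames apart.
import networkx as nx
import numpy as np

def build_frame_graph(histograms, offsets=(1, 3, 7)) -> nx.Graph:
    """Weighted graph over frames; weight = histogram-intersection similarity."""
    graph = nx.Graph()
    graph.add_nodes_from(range(len(histograms)))
    for i, h_i in enumerate(histograms):
        for d in offsets:
            j = i + d
            if j < len(histograms):
                # Intersection of normalized histograms lies in [0, 1].
                weight = float(np.minimum(h_i, histograms[j]).sum())
                graph.add_edge(i, j, weight=weight)
    return graph  # partitioning (e.g., cutting low-weight edges) yields shots
```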

25
Proposed Approaches
26
Proposed Approaches
  • Use learning techniques to distinguish between shot-breaks and
    • Fast object motion or camera motion
    • Fast illumination changes
    • Reflections from glass or water
    • Flash photography

27
Supervised Learning
28
Feature Extraction
  • 25 primitive features, such as edge and color, are extracted directly from the image
  • These 25 features are used as input to the next round of feature extraction, yielding 25 x 25 = 625 features
  • These 625 features can be used as input to compute 625 x 25 = 15,625 features (see the sketch below)
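A hedged sketch of this iterated feature extraction, assuming SciPy; the primitive filter bank below is a placeholder (a few gradient and smoothing kernels repeated to 25), not the actual features used in this work.

```python
# Iterated feature-extraction sketch: 25 filters -> 625 maps -> ...
import numpy as np
from scipy.ndimage import convolve

def make_filter_bank(n: int = 25):
    """Placeholder bank: simple derivative/smoothing kernels repeated to n."""
    base = [np.array([[1, 0, -1]], float),          # horizontal gradient
            np.array([[1], [0], [-1]], float),      # vertical gradient
            np.ones((3, 3), float) / 9.0]           # box blur
    return [base[i % len(base)] for i in range(n)]

def iterate_features(image: np.ndarray, rounds: int = 2):
    """Apply the bank to the image, then to every resulting map: 25 -> 625 -> ..."""
    bank = make_filter_bank()
    maps = [image.astype(float)]
    for _ in range(rounds):
        maps = [convolve(m, k) for m in maps for k in bank]
    # Summarize each final map by a single statistic (here the mean).
    return [float(m.mean()) for m in maps]
```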

29
How can these features be used to classify images?
30
Solution: Use AdaBoost to select these features
  • Oops!! There are 15,625 features!
  • Applying them to red, green and blue separately will result in 46,875 features!
  • Can we find a few important features that will help to distinguish the images?

31
AdaBoost Algorithm
  • Input: $(x_1, y_1), (x_2, y_2), \ldots, (x_m, y_m)$, where $x_1, x_2, \ldots, x_m$ are the images and $y_i \in \{0, 1\}$ for negative and positive examples
  • Let $p$ and $n$ be the number of positive and negative examples, respectively
  • Initialize weights: $w_{1,i} = \frac{1}{2n}$ if $y_i = 0$ and $w_{1,i} = \frac{1}{2p}$ if $y_i = 1$
  • For $t = 1, \ldots, T$:
    • Train one hypothesis $h_j(x)$ for each feature and find its error
    • Choose the hypothesis $h_t$ with the lowest error $\epsilon_t$
    • Update the weights: $w_{t+1,i} = w_{t,i} \, \beta_t^{1 - e_i}$, where $e_i = 0$ if $x_i$ is classified correctly, $e_i = 1$ otherwise, and $\beta_t = \epsilon_t / (1 - \epsilon_t)$
    • Normalize $w_{t+1,i}$ so that it is a distribution
  • Final hypothesis: $h(x) = 1$ if $\sum_{t=1}^{T} \alpha_t h_t(x) \ge \frac{1}{2} \sum_{t=1}^{T} \alpha_t$, where $\alpha_t = \log(1/\beta_t)$; otherwise $h(x) = 0$
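A minimal runnable sketch of this feature-selection loop, assuming the features have been precomputed into a matrix X (one row per image, one column per feature); the single-threshold weak learner and its crude threshold choice are simplifications for illustration.

```python
# AdaBoost feature-selection sketch with threshold-stump weak hypotheses.
import numpy as np

def adaboost_select(X: np.ndarray, y: np.ndarray, T: int = 10):
    m, n_features = X.shape
    n_pos, n_neg = int(y.sum()), int((1 - y).sum())
    w = np.where(y == 1, 1.0 / (2 * n_pos), 1.0 / (2 * n_neg))
    chosen = []                        # (feature index, threshold, polarity, alpha)
    for _ in range(T):
        w = w / w.sum()                # normalize so weights form a distribution
        best = None
        for j in range(n_features):
            thresh = X[:, j].mean()    # crude stand-in for a threshold search
            for polarity in (1, -1):
                pred = (polarity * X[:, j] < polarity * thresh).astype(int)
                err = float(w[pred != y].sum())
                if best is None or err < best[0]:
                    best = (err, j, thresh, polarity, pred)
        err, j, thresh, polarity, pred = best
        beta = err / (1.0 - err + 1e-12)
        alpha = np.log(1.0 / (beta + 1e-12))
        w = w * np.power(beta, (pred == y).astype(float))  # shrink correct examples
        chosen.append((j, thresh, polarity, alpha))
    return chosen

def strong_classify(X: np.ndarray, chosen):
    """Final hypothesis: weighted vote of the selected single-feature stumps."""
    votes = sum(alpha * (pol * X[:, j] < pol * t).astype(int)
                for j, t, pol, alpha in chosen)
    return (votes >= 0.5 * sum(a for *_, a in chosen)).astype(int)
```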
32
Supervised Learning
  • Extract highly selective features
  • Use the AdaBoost algorithm to select a few important features
  • Train the method to detect the different shot-breaks

33
Unsupervised Techniques: Clustering
34
Unsupervised technique - clustering
35
Unsupervised technique - clustering
Hard Cut
Dissolve
36
Unsupervised technique
  • A clustering method to group frames into shots (see the sketch below)
  • Relevance feedback
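A sketch of clustering frames into shots, assuming scikit-learn; agglomerative clustering on frame histograms and the distance threshold are assumptions for illustration, and relevance feedback is not sketched.

```python
# Clustering sketch: group frames by histogram similarity, read off breaks.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def cluster_shots(histograms: np.ndarray, distance_threshold: float = 0.5):
    """Group frames into shots by clustering their normalized histograms."""
    clustering = AgglomerativeClustering(n_clusters=None,
                                         distance_threshold=distance_threshold)
    labels = clustering.fit_predict(histograms)
    # A shot-break is declared wherever consecutive frames land in different clusters.
    breaks = [i + 1 for i in range(len(labels) - 1) if labels[i] != labels[i + 1]]
    return labels, breaks
```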

37
Semi-supervised Learning
38
Semi-supervised Learning
  • Combination of supervised and unsupervised learning
  • A few labeled examples are available; using these, the method works on a large amount of unlabeled video
  • Steps (sketched below)
    • AdaBoost algorithm to select features
    • Clustering method to cluster frames into shots
    • Relevance feedback
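A hedged sketch of how the pieces above could be combined, reusing the adaboost_select and cluster_shots sketches; the labeled/unlabeled split and the feature combination are purely illustrative assumptions.

```python
# Semi-supervised pipeline sketch: AdaBoost-selected features feed the clustering.
import numpy as np

def semi_supervised_shot_detection(X_labeled, y_labeled, X_unlabeled, histograms):
    # 1. Let AdaBoost pick a few informative features from the small labeled set.
    chosen = adaboost_select(X_labeled, y_labeled, T=10)
    selected = sorted({j for j, *_ in chosen})
    # 2. Cluster the unlabeled frames using the selected feature columns,
    #    concatenated here with their histograms for the shot grouping.
    features = np.hstack([X_unlabeled[:, selected], histograms])
    labels, breaks = cluster_shots(features)
    # 3. Relevance feedback would refine the clusters from user judgments
    #    (not sketched here).
    return labels, breaks
```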

39
Conclusion
40
Conclusion
  • Problems with previous approaches
    • Cannot distinguish shot-breaks from
      • Fast object motion or camera motion
      • Fast illumination changes
      • Reflections from glass or water
      • Flash photography
    • Fail to detect long and short gradual transitions
  • We plan to use an AdaBoost-based learning and clustering scheme for shot detection

41
Thank you!