Title: Motion Analysis using Optical flow
1Motion Analysis using Optical flow
CIS 750 Presentation. Student: Wan Wang. Prof. Longin Jan Latecki. Spring 2003, CIS Dept., Temple University
2Contents
- Brief discussion of motion analysis
- Introduction to optical flow
- Application: detecting and tracking people in complex scenes using optical flow
3Part 1 Motion Analysis
- The usual input of a motion analysis system is a temporal image sequence
- Motion analysis is often connected with real-time analysis
4Three main groups of motion analysis problems
- Motion detection
- - register any detected motion
- - typically a single static camera (a minimal frame-differencing sketch follows this list)
- Moving object detection and location
- - moving object detection only: motion-based segmentation methods
- - detection of a moving object, detection of the trajectory of its motion, and prediction of its future trajectory: image object-matching techniques are often used to solve these tasks (direct matching of image data, matching of object features, matching of specific representative object points such as corners, or representing moving objects as graphs and matching these graphs); another useful method is optical flow
- Derivation of 3D object properties from a set of 2D projections acquired at different time instants of object motion
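The first task, registering any detected motion with a single static camera, can be illustrated with simple frame differencing. This is a minimal sketch, not taken from the slides; the threshold value and the synthetic test frames are assumptions.

```python
import numpy as np

def detect_motion(prev_frame: np.ndarray, curr_frame: np.ndarray,
                  threshold: float = 25.0) -> np.ndarray:
    """Register motion between two grayscale frames from a static camera.

    Returns a boolean mask that is True where the absolute intensity
    change exceeds the (assumed) threshold."""
    diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
    return diff > threshold

# Usage with two synthetic 8-bit frames: a bright square shifts by 5 pixels.
prev = np.zeros((100, 100), dtype=np.uint8)
curr = np.zeros((100, 100), dtype=np.uint8)
prev[40:60, 40:60] = 200
curr[40:60, 45:65] = 200
mask = detect_motion(prev, curr)
print("moving pixels:", int(mask.sum()))
```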
5Part 2 Optical Flow
- Reflects the image changes due to motion during a time interval dt, which is short enough to guarantee small inter-frame motion changes
- The immediate objective of optical flow is to determine a velocity field: a 2D representation of a (generally) 3D motion is called a motion field (velocity field), where each point is assigned a velocity vector corresponding to the motion direction, velocity, and distance from an observer at the appropriate image location
- Based on 2 assumptions
- - The observed brightness of any object point is constant over time
- - Nearby points in the image plane move in a similar manner (velocity smoothness constraint)
6Optical flow
E.g. http://www.ai.mit.edu/people/lpk/mars/temizer_2001/Optical_Flow/index.html
7Computation Rationale
- Let us suppose we have a continuous image; the image intensity is given by f(x,y,t), where the intensity is now a function of time t as well as of x and y.
- If the point (x,y) moves to the point (x+dx, y+dy) at time t+dt, the following equation holds:
    f(x,y,t) = f(x+dx, y+dy, t+dt)    (1)
- The Taylor expansion of the right side of equation (1) is
    f(x+dx, y+dy, t+dt) = f(x,y,t) + f_x dx + f_y dy + f_t dt + e
- where f_x(x,y,t), f_y(x,y,t), f_t(x,y,t) denote the partial derivatives of f, and e is the higher-order term in the Taylor series.
8Computation Rationale
Assuming that e is negligible, we obtain the next equation:
    f_x dx + f_y dy + f_t dt = 0
That means (dividing by dt, with u = dx/dt and v = dy/dt the components of the optical flow vector):
    -f_t = f_x u + f_y v
9Computation Method
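The original slide's figures are not in the transcript. As one illustration of how the constraint equation above can be solved in practice, here is a minimal Lucas-Kanade style sketch: the single constraint -f_t = f_x u + f_y v is under-determined at a pixel, so (u,v) is estimated by least squares over a small window. The window size and the finite-difference gradients are assumptions, not necessarily the method on the original slide.

```python
import numpy as np

def lucas_kanade_flow(f1: np.ndarray, f2: np.ndarray, win: int = 7):
    """Estimate the optical flow (u, v) per pixel from two grayscale frames.

    Spatial/temporal derivatives come from simple finite differences; each
    pixel's flow solves f_x*u + f_y*v + f_t = 0 in the least-squares sense
    over a (win x win) neighborhood."""
    f1 = f1.astype(np.float32)
    f2 = f2.astype(np.float32)
    fx = np.gradient(f1, axis=1)   # derivative along x (columns)
    fy = np.gradient(f1, axis=0)   # derivative along y (rows)
    ft = f2 - f1                   # temporal derivative

    h, w = f1.shape
    r = win // 2
    u = np.zeros((h, w), dtype=np.float32)
    v = np.zeros((h, w), dtype=np.float32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            Ax = fx[y - r:y + r + 1, x - r:x + r + 1].ravel()
            Ay = fy[y - r:y + r + 1, x - r:x + r + 1].ravel()
            b = -ft[y - r:y + r + 1, x - r:x + r + 1].ravel()
            A = np.stack([Ax, Ay], axis=1)
            # Solve A @ [u, v] = b by least squares; skip textureless windows.
            if np.linalg.matrix_rank(A.T @ A) == 2:
                u[y, x], v[y, x] = np.linalg.lstsq(A, b, rcond=None)[0]
    return u, v
```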
10Optical flow in motion analysis
Motion, as it appears in dynamic images, is usually some combination of 4 basic elements:
(a) Translation at constant distance from the observer --- parallel motion vectors
(b) Translation in depth relative to the observer --- vectors having a common focus of expansion
(c) Rotation at constant distance about the view axis --- concentric motion vectors
(d) Rotation of a planar object perpendicular to the view axis --- vectors starting from straight line segments
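To make the first three patterns concrete, the following sketch (not from the original slides) builds the canonical flow fields on a small grid; the grid size, center, and parameter values are arbitrary assumptions.

```python
import numpy as np

# Pixel grid centered on the image center (FOE / rotation center assumed there).
ys, xs = np.mgrid[-50:51, -50:51].astype(np.float32)

# (a) Translation at constant distance: parallel vectors, e.g. 1 px/frame along x.
u_a, v_a = np.ones_like(xs), np.zeros_like(ys)

# (b) Translation in depth: vectors radiate from the focus of expansion.
u_b, v_b = 0.02 * xs, 0.02 * ys

# (c) Rotation about the view axis: concentric (tangential) vectors.
omega = 0.02                      # assumed angular velocity (rad/frame)
u_c, v_c = -omega * ys, omega * xs

print(u_b[60, 70], v_c[60, 70])   # sample vector components
```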
11Optical flow in motion analysis
- Mutual velocity of an observer and an object
- - Let the mutual velocity be (u, v, w) in the directions x, y, z (z represents depth). If (x0, y0, z0) is the position of a point at time t0 = 0, then the position of the same point at time t can be determined as
      (x0 + u t, y0 + v t, z0 + w t)
- FOE (focus of expansion) determination
- Distance (depth) determination
- Collision prediction
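A short worked derivation of the FOE position, as a sketch under the assumption of perspective projection with unit focal length (the slide's own derivation is not in the transcript):

```latex
% Image coordinates of the 3D point (x_0 + ut, y_0 + vt, z_0 + wt):
\[
  x'(t) = \frac{x_0 + u\,t}{z_0 + w\,t}, \qquad
  y'(t) = \frac{y_0 + v\,t}{z_0 + w\,t}.
\]
% Eliminating t shows that every image trajectory is a straight line through
% the fixed point
\[
  \mathrm{FOE} = \Bigl(\frac{u}{w},\; \frac{v}{w}\Bigr),
\]
% the focus of expansion (a focus of contraction when the object recedes).
```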
12 Part 3
Experiment: detecting and tracking people in complex scenes using optical flow (by Saitama Univ.)
13 Demand
- Automatic visual surveillance systems are in strong demand for various applications. Several systems are commercially available, most of which are based on subtraction between consecutive frames or between the current image and a stored background image. They work as expected if environmental conditions do not change, such as indoors.
- However, they cannot work outdoors, because there are various disturbances such as changes of lighting and movements of background objects.
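For contrast with the optical-flow approach described next, here is a minimal sketch of the background-subtraction baseline mentioned above; the running-average update rate and the threshold are assumptions.

```python
import numpy as np

class BackgroundSubtractor:
    """Baseline surveillance detector: compare each frame to a stored background."""

    def __init__(self, first_frame: np.ndarray,
                 alpha: float = 0.05, threshold: float = 30.0):
        self.background = first_frame.astype(np.float32)
        self.alpha = alpha          # assumed background update rate
        self.threshold = threshold  # assumed intensity-change threshold

    def apply(self, frame: np.ndarray) -> np.ndarray:
        frame = frame.astype(np.float32)
        mask = np.abs(frame - self.background) > self.threshold
        # Slowly blend the current frame into the stored background.
        self.background = (1 - self.alpha) * self.background + self.alpha * frame
        return mask
```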
15First step: compute the optical flow
- By applying two different spatial filters g, h to the input image, the following two constraint equations are derived.
- Two orientation-selective spatial Gaussian filters g, h are applied to the original image f(x,y,t): one is sensitive to vertical edges, the other to horizontal edges.
- (u, v) denotes an optical flow vector, and a subscript denotes partial differentiation.
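A rough sketch of this step: applying two different orientation-selective filters to the image pair yields two constraint equations per pixel, which can be solved directly for (u, v). The specific filter kernels (Gaussian derivatives along x and y), their size, and the per-pixel 2x2 solve are assumptions; the paper's exact filters are not given in the transcript.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_filter_flow(f1: np.ndarray, f2: np.ndarray, sigma: float = 2.0):
    """Estimate (u, v) from two frames by filtering with two orientation-selective
    filters and solving the resulting pair of constraint equations per pixel."""
    f1 = f1.astype(np.float32)
    f2 = f2.astype(np.float32)

    def constraint(order):
        # Filter both frames with the same kernel, then differentiate.
        a = gaussian_filter(f1, sigma, order=order)
        b = gaussian_filter(f2, sigma, order=order)
        return np.gradient(a, axis=1), np.gradient(a, axis=0), b - a

    g1x, g1y, g1t = constraint(order=(0, 1))  # x-derivative: vertical-edge sensitive
    g2x, g2y, g2t = constraint(order=(1, 0))  # y-derivative: horizontal-edge sensitive

    # Per pixel, solve the 2x2 system:
    #   g1x*u + g1y*v = -g1t
    #   g2x*u + g2y*v = -g2t
    det = g1x * g2y - g1y * g2x
    det = np.where(np.abs(det) < 1e-6, np.nan, det)   # avoid division by ~0
    u = (-g1t * g2y + g1y * g2t) / det
    v = (-g1x * g2t + g1t * g2x) / det
    return u, v
```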
17Second step: Region Segmentation
- Segment the flow image into uniform-flow regions in a split-and-merge fashion. First, divide the image into 16 (4 x 4) regions and calculate the mean flow vector in each region. If a region has any outlier subregions whose flow vectors differ from the mean, the region is further split into 4 (2 x 2) regions. If the region has no outlier subregion, that is, the region has a uniform flow, it is not split. This process is repeated for each region until it becomes too small to be split.
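A minimal recursive sketch of the splitting part of this step. The outlier test (deviation of a quadrant's mean flow from the parent mean) and the minimum region size are assumptions, since the paper's thresholds are not in the transcript.

```python
import numpy as np

def split_uniform_regions(u, v, y0, y1, x0, x1, regions, min_size=8, tol=0.5):
    """Recursively split the flow field (u, v) into regions of uniform flow.

    A quadrant is an outlier if its mean flow deviates from the parent mean
    by more than `tol` (assumed threshold). Uniform or too-small regions are
    appended to `regions` as (y0, y1, x0, x1) bounding boxes."""
    mean = np.array([u[y0:y1, x0:x1].mean(), v[y0:y1, x0:x1].mean()])
    ym, xm = (y0 + y1) // 2, (x0 + x1) // 2
    quads = [(y0, ym, x0, xm), (y0, ym, xm, x1), (ym, y1, x0, xm), (ym, y1, xm, x1)]
    too_small = (y1 - y0) < 2 * min_size or (x1 - x0) < 2 * min_size
    has_outlier = any(
        np.linalg.norm(np.array([u[a:b, c:d].mean(), v[a:b, c:d].mean()]) - mean) > tol
        for a, b, c, d in quads)
    if has_outlier and not too_small:
        for a, b, c, d in quads:
            split_uniform_regions(u, v, a, b, c, d, regions, min_size, tol)
    else:
        regions.append((y0, y1, x0, x1))

# Usage: start from the initial 4 x 4 division of a 128 x 128 flow image.
u = np.random.randn(128, 128); v = np.random.randn(128, 128)
regions = []
for i in range(4):
    for j in range(4):
        split_uniform_regions(u, v, 32 * i, 32 * (i + 1), 32 * j, 32 * (j + 1), regions)
print(len(regions), "uniform-flow regions")
```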
19Third step: Predicted Path Voting
- We prepare a four-dimensional voting space (x, y, t, direction). For each uniform-flow region detected in the previous step, we predict the path of the region over a certain future time interval; the figure shows a predicted path (only x-y-t are shown). We assume that the region continues to move in the direction of its mean flow vector (u, v) at its current speed. We approximate each region by an ellipse whose center coincides with the region centroid. Every point inside the ellipse is given a weight according to a two-dimensional Gaussian, as shown in Fig. 3(a). This weight is voted at the predicted position (x, y) at time (t) in the predicted direction.
- The voting result is compared with a threshold. If there is any region whose number of votes is over the threshold, the region is detected as a target.
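A simplified sketch of the voting idea. The region representation (circle rather than ellipse), the quantization of the voting space into coarse direction bins, the Gaussian weighting parameters, and the detection threshold are all assumptions; only the overall scheme follows the description above.

```python
import numpy as np

def vote_predicted_paths(regions, shape, horizon=10, n_dir=8, sigma=3.0):
    """Accumulate votes in a (t, direction, y, x) space for each region's
    predicted path, assuming it keeps moving with its mean flow (u, v).

    `regions` is a list of (cx, cy, u, v, radius); the region ellipse is
    simplified to a circle of the given radius (assumption)."""
    h, w = shape
    votes = np.zeros((horizon, n_dir, h, w), dtype=np.float32)
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float32)
    for cx, cy, u, v, radius in regions:
        d = int((np.arctan2(v, u) % (2 * np.pi)) / (2 * np.pi) * n_dir) % n_dir
        for t in range(horizon):
            px, py = cx + u * t, cy + v * t             # predicted centroid at time t
            dist2 = (xx - px) ** 2 + (yy - py) ** 2
            weight = np.exp(-dist2 / (2 * sigma ** 2))  # 2D Gaussian weight
            weight[dist2 > radius ** 2] = 0.0           # only points inside the region
            votes[t, d] += weight
    return votes

# A region is "detected as a target" if its votes exceed a threshold:
regions = [(20.0, 30.0, 1.0, 0.5, 6.0), (21.0, 31.0, 1.1, 0.4, 6.0)]
votes = vote_predicted_paths(regions, shape=(64, 64))
print("target detected:", bool((votes > 1.5).any()))   # 1.5 is an assumed threshold
```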
22Reference
- Image Processing, Analysis, and Machine Vision (Sonka, Hlavac, Boyle)
- Detecting and Tracking People in Complex Scenes Using Optical Flow
- http://www-cv.mech.eng.osaka-u.ac.jp/research/tracking_group/iketani/research_e/node1.html