Title: Body detection, tracking and analysis E-TEAM
1 Body detection, tracking and analysis E-TEAM
- Participants (9): FORTH, ACV, BILKENT, SZTAKI, ICG, University of Amsterdam, University of Surrey, Technion, UPC
- E-Team Leader: Montse Pardàs (Cristian Cantón) (UPC)
2 Participants
- FORTH: Antonis Argyros, Panos Trahanias
- ACV: Herbert Ramoser
- Bilkent: Ugur Gudukbay, Enis Cetin, Yigithan Dedeoglu, B. Ugur Toreyin
- SZTAKI: Tamas Sziranyi
- ICG: Horst Bischof
- University of Amsterdam: Thang Pham, Michiel van Liempt, Arnold Smeulders
- University of Surrey: Bill Christmas
- Technion/MM: E. Rivlin, M. Rudzsky
- UPC: Montse Pardàs, Jose Luis Landabaso, Cristian Canton
3 Description
- Relevant to WP5 (Single modality processing) and WP11 (Integration and Grand Challenges: Detecting and interpreting humans and human behaviour in videos)
- Objective: to increase collaboration in the following areas:
- Body detection. Using, for instance, background learning techniques in both single- and multi-camera environments. Persons will be identified by means of classification techniques.
- Body tracking. By means of models (e.g., templates, 3D models, classifiers) and appropriate motion prediction.
- Body analysis. Body models are used for analysis and tracking. They can range from simple to complex models, depending on the application.
4 UPC application: smart rooms
- Object localization and tracking in indoor environments surveyed by multiple fixed cameras
5 UPC: Detection and tracking
- The method uses a foreground separation process at each camera, based on Stauffer-Grimson background learning
- A 3D foreground scene is modeled and discretized into voxels using all the segmented views
- Voxels are grouped into blobs
- Color information, together with other characteristic features of the 3D object appearances, is tracked over time using a template-based technique (a minimal sketch of this pipeline follows)
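A minimal sketch of the pipeline described above, assuming OpenCV's Gaussian-mixture background subtractor as a stand-in for the Stauffer-Grimson model and hypothetical 3x4 projection matrices `P_list` for the calibrated cameras. This is only an illustration of per-camera foreground masks feeding a voxel-carving step, not the UPC implementation.

```python
import numpy as np
import cv2

def foreground_masks(frames, subtractors):
    """One Gaussian-mixture background subtractor per camera (Stauffer-Grimson style)."""
    return [sub.apply(frame) for sub, frame in zip(subtractors, frames)]

def carve_voxels(masks, P_list, voxel_centres, fg_value=127):
    """Keep a voxel if it projects onto foreground in every segmented view."""
    pts_h = np.hstack([voxel_centres, np.ones((len(voxel_centres), 1))])  # homogeneous 3D points
    keep = np.ones(len(voxel_centres), dtype=bool)
    for mask, P in zip(masks, P_list):
        proj = (P @ pts_h.T).T                                # 3x4 projection into this view
        uv = (proj[:, :2] / proj[:, 2:3]).astype(int)
        h, w = mask.shape
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        fg = np.zeros(len(voxel_centres), dtype=bool)
        fg[inside] = mask[uv[inside, 1], uv[inside, 0]] > fg_value
        keep &= fg                                            # must be foreground in all views
    return voxel_centres[keep]

# usage sketch (hypothetical calibration and voxel grid):
# subtractors = [cv2.createBackgroundSubtractorMOG2() for _ in cameras]
# occupied = carve_voxels(foreground_masks(frames, subtractors), P_list, grid)
```

The occupied voxel centres returned here would then be grouped into blobs (e.g. by connected-component labelling on the voxel grid) and tracked over time with the template-based technique.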
6 UPC: 3D Blob Extraction
7 UPC: Body and gesture analysis
- Aim: obtain the body posture of several people present in a room.
- Many pattern analysis challenges can be addressed in this framework:
- Gesture analysis
- Scene understanding and classification (who is doing what?, e.g. someone raises his hand to ask a question)
- Friendly and non-intrusive Human-Computer Interfaces (HCI)
- Gait analysis
- Biometrics
- Motion disorder detection and diagnosis
8 UPC: Model-based analysis
- Aim: extract the posture of a human body based on a hierarchical representation of its skeleton (an illustrative sketch of such a representation follows).
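Purely as an illustration, one possible hierarchical skeleton representation with joints stored as parent-child nodes; the class, joint names and offsets are hypothetical, not the UPC model.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Joint:
    """One node of the skeleton hierarchy (fields are illustrative)."""
    name: str
    offset: np.ndarray                                    # translation relative to the parent joint
    rotation: np.ndarray = field(default_factory=lambda: np.eye(3))
    children: list = field(default_factory=list)

def world_positions(joint, parent_pos=np.zeros(3), parent_rot=np.eye(3), out=None):
    """Walk the hierarchy and accumulate every joint's 3D position."""
    out = {} if out is None else out
    rot = parent_rot @ joint.rotation
    pos = parent_pos + parent_rot @ joint.offset
    out[joint.name] = pos
    for child in joint.children:
        world_positions(child, pos, rot, out)
    return out

# usage sketch:
# torso = Joint("torso", np.zeros(3), children=[Joint("head", np.array([0.0, 0.0, 0.5]))])
# print(world_positions(torso))
```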
9 Example (I): Simple model
10 Example (II): Not so simple model
- Related publications:
- C. Canton-Ferrer, J. R. Casas, M. Pardàs, "Towards a Bayesian Approach to Robust Finding Correspondences in Multiple View Geometry Environments", CGGM, Atlanta (USA). LNCS 3515, pp. 281-289, Springer-Verlag, 2005.
- C. Canton-Ferrer, J. R. Casas, M. Pardàs, "Projective Kalman Filter: Multiocular Tracking of 3D Locations Towards Scene Understanding", MLMI, Edinburgh (UK). To appear in LNCS, 2005.
11 Example (III): Skeleton model
- From the voxel data set, extract information about the structure and the positions of the joints of our skeleton model (a toy sketch of this step follows).
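A toy sketch of how candidate joint locations could be read off the voxel data by clustering, assuming an (N, 3) array of occupied voxel centres; the actual method fits the hierarchical skeleton model rather than plain k-means.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def candidate_joints(voxels, n_parts=5):
    """voxels: (N, 3) occupied voxel centres; returns part centroids sorted by height."""
    centroids, _ = kmeans2(voxels.astype(float), n_parts, minit="++")
    return centroids[np.argsort(centroids[:, 2])]   # roughly feet ... head along the z axis
```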
12 UPC
- Possible collaboration:
- Introduce classification of the detected objects in the smart-room context
- Introduce new techniques for gesture or activity recognition in the smart-room context
- Support other groups in the extension from single-camera to multi-camera settings
- Introduce body models in other groups' applications
- New applications/analysis methods over our data (we can generate multi-camera data)
13 University of Amsterdam
- Reconstruction of trajectories of people in street surveillance videos
- People detection: state of the art from the literature
- Tracking algorithm: our own work, with a solid software implementation
- People matching: our own work on general object matching with color-invariant descriptors (software is still under development)
14 SZTAKI (Sziranyi)
- Aim: extraction of simple biometric motion of walking and of human actions from videos
- Method:
- The method works with spatio-temporal input information to detect and classify typical patterns of human movement.
- Real-time operation
- A new information-extraction and temporal-tracking method based on a simplified version of symmetry pattern extraction; this pattern is characteristic of the moving legs of a walking person. It also helps in recognizing events involving several people and unusual actions. (A simplified sketch of such a symmetry measure follows.)
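A much-simplified sketch of the idea, not the SZTAKI algorithm: per frame, measure how symmetric the lower part of a person's silhouette is around its vertical axis and stack the scores over time; walking produces a characteristic periodic pattern. The 0.6 crop factor and the binary-silhouette input are assumptions.

```python
import numpy as np

def leg_symmetry(silhouette):
    """silhouette: binary (H, W) mask of one person; returns a left/right symmetry score."""
    h, w = silhouette.shape
    legs = silhouette[int(0.6 * h):, :]                  # keep roughly the lower 40% (legs)
    cols = legs.sum(axis=0)
    if cols.sum() == 0:
        return 0.0
    axis = int(round((cols * np.arange(w)).sum() / cols.sum()))   # centroid column
    half = min(axis, w - axis - 1)
    left = legs[:, axis - half:axis]
    right = np.fliplr(legs[:, axis + 1:axis + 1 + half])          # mirror the right side
    overlap = np.logical_and(left, right).sum()
    return overlap / max(np.logical_or(left, right).sum(), 1)

def symmetry_pattern(silhouettes):
    """Stack per-frame scores; walking yields a periodic spatio-temporal pattern."""
    return np.array([leg_symmetry(s) for s in silhouettes])
```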
15 Symmetry patterns of walking humans
16 Feature extraction: identification of the leading leg
- Leading leg: the standing leg over 2 steps
- Ratio of integrated leg areas
17 SZTAKI (Chetverikov)
- Robust structure-from-motion, 3D motion segmentation and grouping
- Given a set of feature points tracked over the frames, we can do robust SfM in the presence of more than 50% outliers.
- Based on that, we can do robust 3D motion segmentation of multiple objects in the presence of occlusion and outliers, and for a moving camera.
- Recently, we have also developed a novel method for grouping the segmented parts in order to decide which of them are related. For example, one can determine whether an object rotates around an axis defined by another object. (A generic sketch of robust fitting with many outliers follows.)
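Purely as an illustration of robust fitting under heavy outlier contamination (not the SZTAKI SfM or grouping method), a sequential-RANSAC sketch that peels off dominant rigid motions from matched point tracks using OpenCV's fundamental-matrix estimator; the thresholds are arbitrary.

```python
import numpy as np
import cv2

def segment_motions(pts_a, pts_b, max_motions=3, min_inliers=15):
    """pts_a, pts_b: (N, 2) float arrays of matched tracks; greedily peel off rigid-motion groups."""
    remaining = np.arange(len(pts_a))
    groups = []
    for _ in range(max_motions):
        if len(remaining) < min_inliers:
            break
        F, inlier_mask = cv2.findFundamentalMat(
            pts_a[remaining], pts_b[remaining], cv2.FM_RANSAC, 1.0, 0.99)
        if F is None or inlier_mask is None:
            break
        inliers = remaining[inlier_mask.ravel().astype(bool)]
        if len(inliers) < min_inliers:
            break
        groups.append(inliers)                           # one rigid-motion group
        remaining = np.setdiff1d(remaining, inliers)     # refit on what is left
    return groups, remaining                             # remaining = leftover outliers
```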
18 Bilkent
- Human body extraction, tracking and activity recognition from video sequences.
- Body detection and extraction: based on motion detection and object-shape classification techniques (background learning and silhouette shape-based object classification).
- Multi-person and single-person tracking: correspondence-based whole-body tracking and model-based body-part tracking methods.
- Human action recognition: tracking results will be combined with activity models (action templates), Hidden Markov models and dynamic programming techniques. (A toy HMM scoring sketch follows.)
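A toy sketch of the HMM part of such a recognizer: score a discretized observation sequence (e.g. quantized tracking features) under one HMM per action template and pick the most likely action. The model parameters and the discrete-observation assumption are hypothetical, not Bilkent's implementation.

```python
import numpy as np

def log_forward(obs, start_p, trans_p, emit_p):
    """Log-likelihood of a discrete observation sequence under one HMM.
    start_p: (S,), trans_p: (S, S), emit_p: (S, V); all assumed strictly positive."""
    alpha = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for o in obs[1:]:
        # sum over previous states, then emit the current symbol
        alpha = np.logaddexp.reduce(alpha[:, None] + np.log(trans_p), axis=0) + np.log(emit_p[:, o])
    return np.logaddexp.reduce(alpha)

def recognize(obs, action_models):
    """action_models: {action name: (start_p, trans_p, emit_p)}; returns the best-scoring action."""
    scores = {name: log_forward(obs, *params) for name, params in action_models.items()}
    return max(scores, key=scores.get)
```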
19 ACV
- Fast spatio-temporal tracking based on principal curves
20 ACV
- Back-projected reconstructed trajectories
21 ACV
- Possible cooperation:
- Applying our tracking methods to your data
- Benchmarking and evaluation of motion detection and tracking performance
- Algorithms for fast computation of informative descriptors (for recognition and tracking tasks)
22 ICG
- People detection based on an on-line AdaBoost method, embedded in a learning framework that can train a person detector without hand labelling.
- Appearance-based tracking of people based on an on-line classifier.
- Both methods are based on integral orientation histogram features and are able to run in real time on a standard PC. (A minimal sketch of such features follows.)
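A minimal sketch of integral orientation histogram features of the kind the ICG detectors build on (not the ICG code): one integral image per orientation bin, so the orientation histogram of any rectangle costs four lookups per bin. The number of bins and the gradient computation are assumptions.

```python
import numpy as np

def integral_orientation_histograms(gray, n_bins=8):
    """Build one (H+1, W+1) integral image per unsigned-orientation bin, gradient-magnitude weighted."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)                      # unsigned orientation in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    h, w = gray.shape
    integrals = np.zeros((n_bins, h + 1, w + 1))
    for b in range(n_bins):
        channel = np.where(bins == b, mag, 0.0)
        integrals[b, 1:, 1:] = channel.cumsum(0).cumsum(1)       # integral image for this bin
    return integrals

def rect_histogram(integrals, top, left, bottom, right):
    """Orientation histogram of the rectangle [top:bottom, left:right] via four lookups per bin."""
    I = integrals
    return (I[:, bottom, right] - I[:, top, right]
            - I[:, bottom, left] + I[:, top, left])
```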
23 ICG
- Possible contributions:
- Various sequences we use for testing our methods (some of them with ground truth).
- Combining our methods with other techniques to improve the robustness and applicability.
24 Technion
- Detection of moving objects
- Tracking of detected targets
- Classification into one of the predefined classes:
- human
- human group
- animal
- vehicle
- References: E. Rivlin, M. Rudzsky, R. Goldenberg, U. Bogomolov and S. Lapchev, ICPR'02; Y. Bogomolov, G. Dror, S. Lapchev, E. Rivlin, M. Rudzsky, BMVC'03
25 The classes handled by the system
26 Technion
- Classification of moving objects
- Single human (walking, running, crawling)
- Tracking
27 Technion
- Possible collaboration in research on human body detection, tracking and motion analysis in multi-camera environments
28 University of Surrey
- Automated audio-visual analysis
- Work on recognition of activities in the context of sports videos and visual surveillance. We are concentrating on 2-D analysis, using shape and motion cues.
- Also work on 3-D representations for human activity recognition.
- We have a publicly available (LGPL) C library that includes a good framework for integrating the outputs of different types of video sources.
29 Example
30 FORTH
- FORTH has developed a hand detector and tracker which:
- Handles multiple, potentially occluding blobs
- Supports detection of the fingers of hands
- Provides 3D information for the contours of the tracked blobs
- Operates with potentially moving cameras
- Performs robustly under considerable illumination changes
- Runs in real time (>30 fps)
- Has already been employed in many applications (cognitive interpretation of human activities, a prototype human-computer interaction system, landmark detection in robot navigation experiments, etc.). A reduced sketch of a skin-colour blob-detection front end follows.
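A heavily reduced sketch of a skin-colour blob-detection front end only; the actual FORTH tracker additionally handles occlusions, finger detection, moving cameras and illumination changes, which this does not. The HSV thresholds and minimum area are illustrative values, not FORTH's.

```python
import numpy as np
import cv2

def skin_blobs(frame_bgr, min_area=300):
    """Return candidate hand/skin blobs (centroid and bounding box) in one BGR frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))          # rough skin-colour range
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    blobs = []
    for i in range(1, n):                                          # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            blobs.append({"centroid": centroids[i],
                          "bbox": stats[i, :4]})                   # x, y, width, height
    return blobs
```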
31 FORTH - Example
32 FORTH
- On-going and future research activities:
- Investigation of the use of additional cues (motion, shape, etc.) for model-based human motion detection and tracking
- Development of inference mechanisms to handle missing parts and uncertain detection estimates
- Research in gesture recognition and human activity interpretation
33 E-team: possible cooperation
- Main outcome of e-teams: joint research papers!
- Ideas:
- Extend methods developed for a single camera to multiple-camera applications
- Exchange databases / applications
- Extend systems using tools from other groups
- Create sub-groups for:
- Body detection
- Body tracking
- Object classification
- Gesture analysis
34 E-team: possible cooperation
- How?
- Software exchanges (executables, code, ...)
- Student visits
- Two weeks, financed by MUSCLE
- A few months, with student grants
- Create a list of who wants to host or send someone on a very specific subject
- For every sub-group, publish on the MUSCLE web the on-going collaborations (title, partners, results)
35 Face detection and recognition E-TEAM
- Participants (3): ICG, AUTH, UPC
- E-Team Leader: not decided yet (M. Pardàs, C. Cantón) (UPC)
- Note: if this E-TEAM is too small it could be embedded into the Body E-TEAM
36 ICG
- Researchers: Horst Bischof
- Face detection, tracking and recognition based on local orientation histograms
- On-line AdaBoost as the algorithm to cope with all these tasks
- Real-time operation
37 AUTH
- Researchers: Ioannis Pitas, Nikos Nikolaidis
- Expertise in face detection, tracking and verification based on several detection techniques developed for greyscale and color images.
- Techniques based on morphological elastic graph matching.
38 UPC
- Researchers: Ferran Marqués, Verónica Vilaplana
39 Perceptual Model
[Diagram: a BPT region feeds basic descriptors (Basic Descriptor 1 ... Basic Descriptor M), a shape descriptor, and specific descriptors (Specific Descriptor 1 ... Specific Descriptor N), which together form the perceptual model.]
40 Frontal Face Perceptual Model
- Candidate selection (in maroon)
- Non-complete representation of the object.
- Shape descriptor (in orange)
- Union of regions that may not be linked in the BPT. (A schematic sketch of these structures follows.)
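A schematic sketch of the data structures involved, assuming binary region masks: a node of a Binary Partition Tree (BPT) and the union of candidate regions that forms the shape-descriptor support. The class and field names are hypothetical, not UPC's implementation.

```python
import numpy as np
from dataclasses import dataclass
from typing import Optional

@dataclass
class BPTNode:
    """One region of the Binary Partition Tree (illustrative fields only)."""
    mask: np.ndarray                          # binary support of the region
    left: Optional["BPTNode"] = None          # children produced by the binary partition
    right: Optional["BPTNode"] = None

def shape_support(candidates):
    """Union of candidate regions, which need not be siblings in the BPT."""
    union = np.zeros_like(candidates[0].mask, dtype=bool)
    for node in candidates:
        union |= node.mask.astype(bool)
    return union
```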