Title: Modeling of Distributed Camera Networks
Murphy Junior Gant, Diablo Valley College
Mentor: Yang Zhao
Finite-State Machine Election
[Figure: Model of Camera Network, 3rd Floor, Cory Hall (sensor nodes, wireless sound detectors)]
One application of VisualSense is modeling
camera networks. In conjunction with a
clustering and power algorithm, this research
determined not only which cameras actively
monitored the moving object, but also how many
cameras could monitor it when constrained by a
power budget. Additionally, based on the
capabilities and limitations of each camera, the
tracking algorithms developed here filtered out
wasteful, low-quality data provided by cameras
that were outside a reasonable scope.
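The camera-selection step under a power budget can be sketched as follows. This is a minimal illustration, not the research's actual clustering and power algorithm: the function name, the greedy selection policy, and the per-camera cost values are all assumptions.

```python
def select_cameras(visibility, power_cost, power_budget):
    """Choose which cameras actively monitor the object.

    visibility:   dict mapping camera id -> self-computed visibility value
    power_cost:   dict mapping camera id -> power drawn while monitoring
    power_budget: total power available for active monitoring

    Illustrative greedy policy: take cameras in order of decreasing
    visibility until the power budget is exhausted.
    """
    chosen, spent = [], 0.0
    for cam in sorted(visibility, key=visibility.get, reverse=True):
        if visibility[cam] <= 0:
            continue  # camera cannot see the object at all
        if spent + power_cost[cam] <= power_budget:
            chosen.append(cam)
            spent += power_cost[cam]
    return chosen

# Example: three cameras see the object, but the budget allows only two.
picked = select_cameras(
    visibility={"A": 0.9, "B": 0.6, "C": 0.4},
    power_cost={"A": 1.0, "B": 1.0, "C": 1.0},
    power_budget=2.0,
)  # → ["A", "B"]
```

Cameras with zero visibility are skipped outright, which mirrors the idea of filtering out data from cameras outside a reasonable scope.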
This finite-state machine, called Election, is
an addition to the camera composite actors in an
effort to reduce data redundancy. When a camera
detects an object, it broadcasts its
self-computed visibility value; this initiates
the election process, which includes an idle
period to compensate for wireless communication
delay. Each camera stores its own collection of
gathered visibility values, which it uses to
decide whether it is one of the top two leading
cameras needed for tracking. The leading cameras
transition into a high-resolution state while the
rest transition into an idle state until further
notice. Eventually, either one of the two leading
cameras loses sight of the moving object or a new
camera detects it, and the election process
restarts.
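The election described above can be sketched as a small state machine. The class, state, and method names below are illustrative assumptions, and the wireless broadcast (with its idle compensation period) is replaced by directly feeding each camera the visibility values it would have received.

```python
from enum import Enum

class State(Enum):
    IDLE = "idle"            # not tracking; waiting for further notice
    ELECTING = "electing"    # gathering visibility values from peers
    HIGH_RES = "high_res"    # one of the two leading cameras

class Camera:
    def __init__(self, cam_id):
        self.cam_id = cam_id
        self.state = State.IDLE
        self.values = {}  # gathered visibility values, keyed by camera id

    def detect(self, visibility):
        """Object detected: record own visibility and start electing."""
        self.values[self.cam_id] = visibility
        self.state = State.ELECTING
        return (self.cam_id, visibility)  # message broadcast to the network

    def receive(self, cam_id, visibility):
        """Store a visibility value broadcast by another camera."""
        self.values[cam_id] = visibility
        self.state = State.ELECTING

    def conclude(self):
        """After the idle period: top two cameras track, the rest idle."""
        top_two = sorted(self.values, key=self.values.get, reverse=True)[:2]
        self.state = State.HIGH_RES if self.cam_id in top_two else State.IDLE
        self.values.clear()  # ready for the next election round
        return self.state

# Example: three cameras detect the object and exchange broadcasts.
cams = {i: Camera(i) for i in "ABC"}
msgs = [cams["A"].detect(0.9), cams["B"].detect(0.5), cams["C"].detect(0.2)]
for cam in cams.values():
    for cid, v in msgs:
        if cid != cam.cam_id:
            cam.receive(cid, v)
outcome = {i: cams[i].conclude() for i in cams}
# → A and B (highest visibility) go HIGH_RES; C goes IDLE
```

Because every camera holds the same collection of values, each one reaches the same verdict independently, with no central coordinator needed for the election itself.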
Abstract
Camera and sensor networks are used for
environmental observation, military monitoring,
building monitoring, and healthcare; however,
some of the challenges faced are packet loss,
battery and power loss, collisions, and
geographical restrictions. Through the
functionality of VisualSense, a modeling and
simulation framework for wireless and sensor
networks built on Ptolemy II, it is possible to
extend existing composite actors and Java classes
designed for sensor data to use data from camera
networks.
Results
- Successfully visualized the camera network output by fusing sensor data
- Foundation for tracking several objects at any given time
- Central-server computation model for tracking that plots a graph of the object in motion
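As a rough illustration of the central-server model, the server might combine per-camera position estimates into the single tracked point it plots. The function below is a sketch under assumptions: the visibility-weighted average is an illustrative fusion rule, not necessarily the one used in this research.

```python
def fuse_estimates(estimates):
    """Fuse per-camera (x, y) estimates into one tracked position.

    estimates: list of (visibility, (x, y)) tuples from active cameras.
    Illustrative rule: visibility-weighted average of the estimates,
    so the camera with the better view counts for more.
    """
    total = sum(w for w, _ in estimates)
    x = sum(w * p[0] for w, p in estimates) / total
    y = sum(w * p[1] for w, p in estimates) / total
    return (x, y)

# The two leading cameras report slightly different positions.
pos = fuse_estimates([(0.9, (2.0, 3.0)), (0.6, (2.4, 3.2))])
# ≈ (2.16, 3.08), pulled toward the higher-visibility camera's estimate
```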
Process
Future Outlook
Algorithms were created to handle issues of
camera management, visibility, and energy
consumption, and this research focused on the
simulation of a camera network that monitored the
motion of a single object in a set of corridors
in Cory Hall. Implementing reliable camera
management techniques through state machines and
intuitive procedures reinforced the proposed
solutions; however, there are tradeoffs, such as
the one between communication and power
consumption.
- Simultaneously track multiple objects
- Configure cameras with zoom in/out capabilities
- Use of VisualSense framework to give feedback in
addition to visualization
Acknowledgements