1
Next Generation 4-D Distributed Modeling and
Visualization of Battlefield
Avideh Zakhor UC Berkeley September 2004
2
Participants
  • Avideh Zakhor, (UC Berkeley)
  • Bill Ribarsky, (Georgia Tech)
  • Ulrich Neumann (USC)
  • Pramod Varshney (Syracuse)
  • Suresh Lodha (UC Santa Cruz)

3
Battlefield Visualization
  • A detailed, timely, and accurate picture of the
    modern battlefield is vital to the military
  • Many sources of information are used to build this picture
  • Archival data, road maps, GIS, and databases (static)
  • Sensor information from mobile agents at
    different times and locations
  • The scene itself is time varying (moving objects)
  • Multiple modalities → fusion
  • How to make sense of all this without
    information overload?

4
Visualization Pentagon
4D Modeling/ Update
Visualization and rendering
Tracking/ Registration
Decision Making under Uncertainty
Uncertainty Processing/ Visualization
5
Research Agenda for 2003- 2004
  • Modeling
  • Visualization and Rendering
  • Mobile situational visualization
  • Augmented virtual environments
  • Add the temporal dimension (4D)
  • Tracking of moving objects in scenes
  • Modeling of time varying objects and scenes
  • Dynamic event analysis and recognition
  • Path planning under uncertainty

6
Acquisition setup for dynamic scene modeling
(Setup diagram labels: reference object for H-line, digital camcorder with IR filter, sync electronics, VIS-light camera, rotating mirror, PC, IR line laser, roast with vertical slices, halogen lamp with IR filter)
7
Captured IR Frames
Horizontal line scans from top to bottom at about
1 Hz
8
Video intensity and IR captured synchronously
  • IR video stream: frame rate 30 Hz (NTSC)
  • VIS video stream: frame rate 10 Hz, synchronized
    with the IR video stream

9
Processing steps
  • Compute depth at the horizontal line (see the sketch below)
  • Track computed depth values along vertical lines
  • Intraframe and interframe tracking
  • Dense depth estimation
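
The deck gives no code for these steps; below is a minimal, hypothetical sketch of the first one, triangulating depth along the detected IR laser line, assuming a calibrated camera and a known laser-plane tilt. The simplified geometry and all names are illustrative, not the project's actual implementation.

```python
import numpy as np

def depth_along_laser_line(line_rows, fy, cy, baseline, laser_angle):
    """Triangulate depth for every image column from the detected laser-line row.

    line_rows   : per-column row coordinate (pixels) of the IR laser line
    fy, cy      : vertical focal length and principal point of the camera (pixels)
    baseline    : vertical offset between camera center and laser plane (meters)
    laser_angle : tilt of the laser plane for this frame (radians)
    """
    line_rows = np.asarray(line_rows, dtype=np.float64)
    # Elevation angle of the viewing ray through each detected pixel
    ray_angle = np.arctan2(line_rows - cy, fy)
    # Light-plane triangulation: depth = baseline / (tan(laser) - tan(ray))
    denom = np.tan(laser_angle) - np.tan(ray_angle)
    safe = np.where(np.abs(denom) > 1e-6, denom, np.nan)  # avoid divide-by-zero
    return baseline / safe
```

The later steps on the slide would then propagate these per-line depths down each image column as the line sweeps from top to bottom at about 1 Hz, combining intraframe and interframe tracking into a dense depth estimate.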

10
Results
Depth video
Color video
11
Dynamic Event Analysis
  • Video analysis
  • Segmenting and tracking moving objects (people,
    vehicles) in the scene
  • Determines regions of interest/change and enables
    dynamic, rapid modeling (a generic background-subtraction
    sketch follows below)
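
As a generic stand-in for the segmentation step above (not the project's actual tracker), here is a small running-average background-subtraction sketch; the class name and parameters are made up for illustration.

```python
import numpy as np

class RunningBackgroundModel:
    """Very simple per-pixel running-average background model."""

    def __init__(self, first_frame, alpha=0.02, threshold=25.0):
        self.background = first_frame.astype(np.float32)
        self.alpha = alpha          # adaptation rate of the background
        self.threshold = threshold  # intensity difference that counts as motion

    def update(self, frame):
        frame = frame.astype(np.float32)
        # Foreground = pixels far from the current background estimate
        foreground = np.abs(frame - self.background) > self.threshold
        # Only background pixels update the model, so moving objects
        # are not absorbed into it too quickly
        self.background = np.where(
            foreground,
            self.background,
            (1 - self.alpha) * self.background + self.alpha * frame,
        )
        return foreground  # boolean mask of candidate moving-object pixels
```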

12
Video Scene Analysis: Activity Classification with Uncertainty
  • Example activities: sitting, bending, and standing
  • A blue pointer indicates the level of certainty in
    the classifier decision (a classification sketch
    follows below)
(Figure panels a-d)
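
A minimal sketch of pairing a classifier decision with a certainty value, assuming softmax-normalized scores; the activity labels match the slide, but the scores and function are illustrative only.

```python
import numpy as np

ACTIVITIES = ["sitting", "bending", "standing"]

def classify_with_certainty(scores):
    """Turn raw per-activity scores into a decision plus a certainty value.

    Returns the chosen activity label and the posterior probability
    assigned to it, which could drive a certainty gauge like the
    blue pointer on the slide.
    """
    scores = np.asarray(scores, dtype=np.float64)
    # Softmax converts scores into a probability distribution
    exp = np.exp(scores - scores.max())
    posterior = exp / exp.sum()
    best = int(np.argmax(posterior))
    return ACTIVITIES[best], float(posterior[best])

# Example: a frame whose features score highest for "bending"
label, certainty = classify_with_certainty([0.4, 2.1, 0.7])
print(label, round(certainty, 2))   # -> bending 0.7
```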
13
Audio-Enhanced Visual Processing with Uncertainty
(Pipeline diagram blocks: video acquisition, video processing and classification, sound acquisition, audio processing and classification, fusion, uncertainty, description generation, visualization; a fusion sketch follows below)
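
The diagram above shows fusion of audio and video classification with uncertainty. As one hypothetical illustration (not the project's actual fusion rule), a linear opinion pool can weight each modality's class posteriors by a reliability factor:

```python
import numpy as np

def fuse_posteriors(video_post, audio_post, video_weight=0.6, audio_weight=0.4):
    """Fuse per-class posteriors from the video and audio classifiers.

    A simple linear opinion pool: each modality contributes in proportion
    to a reliability weight, and the result is renormalized.  The weights
    here are arbitrary placeholders; in practice they would reflect each
    modality's estimated uncertainty.
    """
    video_post = np.asarray(video_post, dtype=np.float64)
    audio_post = np.asarray(audio_post, dtype=np.float64)
    fused = video_weight * video_post + audio_weight * audio_post
    return fused / fused.sum()

# Example: video favors class 0, audio favors class 1; fusion keeps both in play
print(fuse_posteriors([0.7, 0.2, 0.1], [0.3, 0.6, 0.1]))  # -> [0.54 0.36 0.1]
```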
14
AVE: Fusion of 2D Video and 3D Models
  • A VE captures only a snapshot of the real world and
    therefore lacks any representation of dynamic
    events and activities occurring in the scene
  • The AVE approach uses sensor models and 3D models
    of the scene to integrate dynamic video/image
    data from different sources
  • Visualize all data in a single context to
    maximize collaboration and comprehension of the
    big picture
  • Address dynamic visualization and change
    detection (a projection sketch follows below)
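
The core of fusing 2D video with a 3D model is projecting model geometry into the video camera to obtain texture coordinates. The sketch below illustrates that projection under a simple pinhole model; it is an assumption-laden stand-in, not USC's AVE implementation.

```python
import numpy as np

def project_to_video(points_3d, K, R, t):
    """Project world-space points into a video camera to get texture coordinates.

    points_3d : (N, 3) model vertices in world coordinates
    K         : (3, 3) camera intrinsic matrix
    R, t      : rotation (3, 3) and translation (3,) mapping world to camera

    Points behind the camera get NaN coordinates; a real AVE renderer would
    also handle occlusion and blend multiple video sources.
    """
    cam = points_3d @ R.T + t            # world -> camera coordinates
    pix = cam @ K.T                      # apply intrinsics
    z = pix[:, 2:3]
    safe_z = np.where(z > 0, z, np.nan)  # NaN for points behind the camera
    return pix[:, :2] / safe_z           # perspective divide -> (u, v)

# Example: a point 10 m in front of a 1000-pixel-focal-length camera
K = np.array([[1000.0, 0, 320], [0, 1000.0, 240], [0, 0, 1]])
print(project_to_video(np.array([[1.0, 0.0, 10.0]]), K, np.eye(3), np.zeros(3)))
# -> [[420. 240.]]
```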

15
Mobile Situational Visualization System
(Interface screenshot labels: buttons, pen tool, drawing area, mobile team, collaborators. Collaboration example: shared observations of vehicle location, direction, and speed)
16
Optimal route planning for battlefield risk
minimization
(Map figure: a route from source to goal across regions labeled high risk, moderate risk, low risk, and risk free; a route-planning sketch follows below)
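
A minimal sketch of risk-minimizing route planning as a shortest-path search over a risk-weighted grid (Dijkstra's algorithm); the grid values and 4-connected cost model are illustrative assumptions, not the project's planner.

```python
import heapq

def min_risk_path(risk, source, goal):
    """Dijkstra over a 2-D grid of per-cell risk costs.

    risk   : 2-D array of nonnegative risk values (e.g. 0 = risk free ... 3 = high risk)
    source : (row, col) start cell
    goal   : (row, col) destination cell
    Returns the total accumulated risk of the cheapest 4-connected path.
    """
    rows, cols = len(risk), len(risk[0])
    dist = {source: risk[source[0]][source[1]]}
    heap = [(dist[source], source)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + risk[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

# Example 4x4 risk map: 0 = risk free, 3 = high risk
grid = [[0, 3, 3, 0],
        [0, 1, 3, 0],
        [0, 1, 2, 0],
        [0, 0, 0, 0]]
print(min_risk_path(grid, (0, 0), (0, 3)))  # -> 0, the route hugs the risk-free border
```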
17
LiDAR Data Classification
(Classification result panels: using height and height variation; using LiDAR data only, no aerial image; using all five features. A classifier sketch follows below)
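
The slide refers to supervised parametric classification of aerial LiDAR features such as height and height variation. Below is a toy per-class Gaussian classifier in that spirit; the feature values, class labels, and names are made up for illustration.

```python
import numpy as np

class GaussianClassifier:
    """Per-class diagonal-Gaussian model over LiDAR-derived features (a naive sketch)."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = {c: X[y == c].mean(axis=0) for c in self.classes_}
        self.vars_ = {c: X[y == c].var(axis=0) + 1e-6 for c in self.classes_}
        return self

    def predict(self, X):
        # Log-likelihood of each sample under each class's diagonal Gaussian
        scores = []
        for c in self.classes_:
            m, v = self.means_[c], self.vars_[c]
            ll = -0.5 * np.sum((X - m) ** 2 / v + np.log(2 * np.pi * v), axis=1)
            scores.append(ll)
        return self.classes_[np.argmax(np.stack(scores, axis=0), axis=0)]

# Toy example: two features (height, height variation), classes 0 = ground, 1 = vegetation
X = np.array([[0.1, 0.02], [0.2, 0.03], [5.0, 1.5], [6.0, 2.0]])
y = np.array([0, 0, 1, 1])
print(GaussianClassifier().fit(X, y).predict(np.array([[0.15, 0.02], [5.5, 1.8]])))  # -> [0 1]
```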
18
Adaptive Stereo/LiDAR-Based Registration for Modeling Outdoor Scenes
Stereo-Based Registration (aerial view)
  • The stereo-based approach captures terrain undulations

LiDAR-Based Registration
  • The LiDAR-based approach seems better at turns

19
Punctuated Model Simplification
  • Our initial implementation considers planar
    loops.
  • The mesh containing the loops is a topological
    2-manifold.

Simplification path
20
Interactions on AVE
  • Collaboration with Northrop Grumman
  • Install v.1 AVE system (8/03) for demonstrations
  • Install v.2 AVE system (9/04) for demonstrations
    and evaluation license
  • Tech transfer
  • Source code for LiDAR modeling to Army TEC labs
  • Integration into ICT training applications for
    MOUT after-action review
  • Demos/proposals/talks
  • NIMA, NRO, ICT, Northrop Grumman, Lockheed
    Martin, HRL/DARPA, Olympus, Airborne1, Boeing

21
Transitions for 3D modeling
  • Carried out a 2-day modeling of Potomac Yard
    Mall in Washington, DC in December 2003 for the
    Army Night Vision Lab and GSTI
  • Shipped equipment ahead of time
  • Spent one day driving around acquiring data
  • Spent ½ day processing the data
  • Delivered the model to Jeff Turner of GSTI/Army
    Night Vision Lab
  • Carried out another 2-day modeling of Ft. McKenna
    in Georgia in December 2003 in collaboration with
    Jeff Dehart of the ARL
  • Drove the equipment from DC to Georgia in a van
  • Collected data in one day, processed it in a few days
  • Delivered the 3D model to Larry Tokarcik's group
  • In discussion with Harris to transition the 3D
    modeling architecture/software/hardware
  • Invited talk at the registration workshop at CVPR

22
Technology Transfer on Sitvis
  • We are continuing work centered around the mobile
    augmented battlefield visualization testbed with
    both the Georgia Tech and UNC Charlotte homeland
    security initiatives.
  • Dr. Ribarsky is on the panel to develop the
    research agenda for the new National Visual
    Analytics Center, sponsored by DHS. Mobile
    situational visualization will be part of this
    agenda.
  • The system is being used as part of the Sarnoff
    Raptor system, which is deployed to the Army and
    other military entities. In addition, our
    visualization system is being used as part of the
    Raptor system at Scott Air Force Base.

23
Publications (1)
  • C. Frueh and A. Zakhor, "An Automated Method for
    Large-Scale, Ground-Based City Model Acquisition"
    in International Journal of Computer Vision, Vol.
    60, No. 1, October 2004, pp. 5 - 24.
  • C. Frueh and A. Zakhor, "Constructing 3D City
    Models by Merging Ground-Based and Airborne
    Views" in Computer Graphics and Applications,
    November/December 2003, pp. 52 - 61.
  • C. Frueh and A. Zakhor, "Reconstructing 3D City
    Models by Merging Ground-Based and Airborne
    Views", Proceedings of the VLBV, September 2003,
    pp. 306 - 313 Madrid, Spain
  • C. Frueh, R. Sammon, and A. Zakhor, "Automated
    Texture Mapping of 3D City Models With Oblique
    Aerial Imagery" in 2nd International Symposium on
    3D Data Processing, Visualization, and
    Transmission, 2004.
  • U. Neumann, "Approaches to Large-Scale Urban
    Modeling", IEEE Computer Graphics and
    Applications
  • U. Neumann, "Visualizing Reality in an Augmented
    Virtual Environment", accepted in Presence
  • U. Neumann, "Augmented Virtual Environments for
    Visualization of Dynamic Imagery", accepted in
    IEEE Computer Graphics and Applications.

24
Publications (2)
  • U. Neumann, "Urban Site Modeling from LiDAR",
    CGGM 2003
  • U. Neumann, "Augmented Virtual Environments
    (AVE): Dynamic Fusion of Imagery and 3D Models",
    VR 2003
  • U. Neumann, "3D Video Surveillance with Augmented
    Virtual Environments", accepted in ACM SIGMM 2003.
  • Sanjit Jhala and Suresh K. Lodha, "Stereo and
    Lidar-Based Pose Estimation with Uncertainty for
    3D Reconstruction", to appear in the Proceedings
    of the Vision, Modeling, and Visualization
    Conference, Stanford, Palo Alto, CA, November
    2004.
  • Hemantha Singamsetty and Suresh K. Lodha, "An
    Integrated Geospatial Data Acquisition System for
    Reconstructing 3D Environments", to appear in
    the Proceedings of the IASTED Conference on
    Advances in Computer Science and Technology
    (ACST), St. Thomas, Virgin Islands, USA, November
    2004.

25
Publications (3)
  • Amin Charaniya, Roberto Manduchi, and Suresh K.
    Lodha, "Supervised Parametric Classification of
    Aerial LiDAR Data", Proceedings of the IEEE
    workshop on Real-Time 3D Sensors and Their Use,
    Washington DC, June 2004.
  • Sanjit Jhala and Suresh K. Lodha, "On-line
    Learning of Motion Patterns using an Expert
    Learning Framework", Proceedings of the IEEE
    Workshop on Learning in Computer Vision and
    Pattern Recognition, Washington DC, June 2004.
  • Srikumar Ramalingam, Suresh K. Lodha, and Peter
    Sturm, "A Generic Structure-from-Motion
    Algorithm for Cross-Camera Scenarios",
    Proceedings of the OmniVis (Omnidirectional
    Vision, Camera Networks, and Non-Classical
    Cameras) Conference, Prague, Czech Republic, May
    2004.
  • Srikumar Ramalingam and Suresh K. Lodha,
    "Adaptive Enhancement of 3D Scenes using
    Hierarchical Registration of Texture-Mapped
    Models", Proceedings of 3DIM Conference, IEEE
    Computer Society Press, Banff, Alberta, Canada,
    October 2003, pp.203-210.

26
Publications (4)
  • Suresh K. Lodha, Nikolai M. Faaland, and Jose
    Renteria, "Hierarchical Topology Preserving
    Compression of 2D Vector Fields using Bintree and
    Triangular Quadtrees", IEEE Transactions on
    Visualization and Computer Graphics, Vol. 9, No.
    4, October 2003, pp. 433 - 442.
  • Suresh K. Lodha, Krishna M. Roskin, and Jose C.
    Renteria, "Hierarchical Topology Preserving
    Simplification of Terrains", Visual Computer,
    Vol. 19, No. 6, September 2003.
  • Suresh K. Lodha, Nikolai M. Faaland, Grant Wong,
    Amin P. Charaniya, Srikumar Ramalingam, and Arthur
    Keller, "Consistent Visualization and Querying of
    Spatial Databases by a Location-Aware Mobile
    Agent", Proceedings of Computer Graphics
    International (CGI), pp. 248 - 253, IEEE Computer
    Society Press, Tokyo, Japan, July 2003.
  • Christopher Campbell, Michael M. Shafae, Suresh
    K. Lodha, and Dominic W. Massaro, "Discriminating
    Visible Speech Tokens using Multi-Modality",
    Proceedings of the International Conference on
    Auditory Display (ICAD), pp. 13 - 16, Boston, MA,
    July 2003.

27
Publications (5)
  • Amin Charaniya and Suresh K. Lodha, "Speech
    Interface for Geo-Spatial Visualization",
    Proceedings of the Conference on Computer
    Science and Technology (CST), Cancun, Mexico, May
    2003.
  • William Ribarsky, editor (with Holly Rushmeier),
    3D Reconstruction and Visualization of Large
    Scale Environments, Special Issue of IEEE
    Computer Graphics and Applications (December
    2003).
  • Justin Jang, Peter Wonka, William Ribarsky, and
    C.D. Shaw. Punctuated Simplification of Man-Made
    Objects. Submitted to The Visual Computer.
  • Tazama St. Julien, Joseph Scoccinaro, Jonathan
    Gdalevich, and William Ribarsky. Sharing of
    Precise 4D Annotations in Collaborative Mobile
    Situational Visualization. To be submitted, IEEE
    Symposium on Wearable Computing.
  • Ernst Houtgast, Onno Pfeiffer, Zachary Wartell,
    William Ribarsky, and Frits Post. Navigation and
    Interaction in a Multi-Scale Stereoscopic
    Environment. Submitted to IEEE Virtual Reality
    2004.

28
Publications (6)
  • G.L. Foresti, C.S. Regazzoni and P.K. Varshney
    (Eds.), Multisensor Surveillance Systems: The
    Fusion Perspective, Kluwer Academic Press, 2003.
  • R. Niu, P. Varshney, K. Mehrotra and C. Mohan,
    "Sensor Staggering in Multi-Sensor Target
    Tracking Systems", Proceedings of the 2003 IEEE
    Radar Conference, Huntsville, AL, May 2003.
  • L. Snidaro, R. Niu, P. Varshney, and G.L.
    Foresti, "Automatic Camera Selection and Fusion
    for Outdoor Surveillance under Changing Weather
    Conditions", Proceedings of the 2003 IEEE
    International Conference on Advanced Video and
    Signal Based Surveillance, Miami, FL, July 2003.
  • H. Chen, P. K. Varshney, and M.A. Slamani, "On
    Registration of Regions of Interest (ROI) in
    Video Sequences" Proceedings of IEEE
    International Conference on Advanced Video and
    Signal Based Surveillance, CD-ROM, Miami, FL,
    July 21-22, 2003.
  • R. Niu and P.K. Varshney, "Target Location
    Estimation in Wireless Sensor Networks Using
    Binary Data", Proceedings of the 38th Annual
    Conference on Information Sciences and Systems,
    Princeton, NJ, March 2004.

29
Publications (7)
  • L. Snidaro, R. Niu, P. Varshney, and G.L.
    Foresti, "Sensor Fusion for Video
    Surveillance", Proceedings of the Seventh
    International Conference on Information Fusion,
    Stockholm, Sweden, June 2004.
  • E. Elbasi, L. Zuo, K. Mehrotra, C. Mohan and P.
    Varshney, "Control Charts Approach for Scenario
    Recognition in Video Sequences", in Proc. Turkish
    Artificial Intelligence and Neural Networks
    Symposium (TAINN'04), June 2004.
  • M. Xu, R. Niu, and P. Varshney, "Detection and
    Tracking of Moving Objects in Image Sequences
    with Varying Illumination", to appear in
    Proceedings of the 2004 IEEE International
    Conference on Image Processing, Singapore,
    October 2004.
  • R. Rajagopalan, C.K. Mohan, K. Mehrotra and P.K.
    Varshney, "Evolutionary Multi-Objective Crowding
    Algorithm for Path Computations", to appear in
    Proc. International Conf. on Knowledge Based
    Computer Systems (KBCS-2004), Dec. 2004.

30
Future Work
  • Important to make sense of the world, not just
    model it or visualize it
  • Tons of data being collected by a variety of
    sensors all over the globe all the time
  • How to process or digest the data in order to
  • Recognize significant events
  • Make decisions despite uncertainty, and take
    actions
  • The current MURI is mostly concerned with presenting
    the data to military commanders in an uncluttered
    way → visualization
  • Future work: automatically construct the
    big picture of what is happening by combining a
    variety of data modalities → audio, video, 3D
    models, sensors, pictures,

31
Battlefield Analysis
(Layered diagram, bottom to top: distributed sensors; physical layer processing; model/update environment, visualize; analysis/reasoning; recognize events; make decisions, take actions; accomplish tasks. All of this changing dynamically with time.)
32
Outline of Talks
  • 9:00 - 9:15 Avideh Zakhor, U.C. Berkeley,
    "Overview"
  • 9:15 - 10:00 Chris Frueh and Avideh Zakhor,
    U.C. Berkeley, "3D modeling and visualization of
    static and dynamic scenes"
  • 10:00 - 10:45 Ulrich Neumann, U.S.C.,
    "Data Fusion in Augmented Virtual Environments"
  • 10:45 - 11:30 Bill Ribarsky, Georgia Tech,
    "Testbed and Results for Mobile Augmented
    Battlefield Visualization"
  • 1:00 - 1:45 Suresh Lodha, U.C. Santa Cruz,
    "Uncertainty in Data Classification, Pose Estimation
    and 3D Reconstruction for Cross-Camera and
    Multiple Sensor Scenarios"
  • 1:45 - 2:30 Pramod Varshney, Syracuse University,
    "Decision Making and Reasoning with Uncertain
    Image and Sensor Data"