1
  • FINITE ELEMENT COMPONENT OF TERASHAKE
    PLATFORM (CMU)

2
Vision
  • We envision the SCEC/CME TeraShake platform as
    one that will
  • Deliver the necessary performance, scalability,
    and portability for ultrascale unstructured-mesh
    earthquake wave propagation simulations
  • Support runtime visualization-based steering, and
  • Whenever the applications permit, avoid the
    bottlenecks associated with multiterabyte I/O at
    runtime.
  • GOAL
  • To realize this vision within the finite element
    component of the TeraShake platform, we seek an
    end-to-end solution to the meshing-solving-visualizing
    simulation pipeline.
  • KEY IDEA
  • Replace the traditional file interface with a
    scalable, parallel, runtime data structure that
    supports simulation pipelines in two ways
  • - By providing a common foundation on top of
    which all simulation components operate
  • - By serving as a vehicle for data sharing
    among all simulation components, i.e., meshing,
    partitioning, solving, and visualizing (see the
    sketch below).
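
A minimal sketch of this key idea, using a toy in-memory octree and placeholder stage functions (none of these names come from the actual Hercules code): every pipeline stage reads and writes the same runtime structure instead of exchanging files.

```python
# Conceptual sketch only: hypothetical names, not the Hercules API.
# One in-memory octree serves every pipeline stage, so no stage writes
# multi-terabyte intermediate files for the next stage to read back.

class RuntimeOctree:
    """Shared runtime data structure: leaves keyed by locational code."""
    def __init__(self):
        self.leaves = {}                  # locational code -> element data

    def insert(self, code, element):
        self.leaves[code] = element

    def elements(self):
        return self.leaves.values()


def mesh(tree, material_db):
    # Meshing refines directly into the shared tree (toy version).
    for code, props in material_db.items():
        tree.insert(code, {"vs": props["vs"], "u": 0.0})

def solve_step(tree, dt):
    # The solver updates fields in place; no file round-trip.
    for elem in tree.elements():
        elem["u"] += dt * elem["vs"]      # placeholder update rule

def visualize(tree, step):
    # The visualizer reads the same structure and emits a light frame.
    frame = [elem["u"] for elem in tree.elements()]
    print(f"frame {step}: max u = {max(frame):.3f}")

# Toy pipeline: two elements, three time steps.
db = {0b000: {"vs": 100.0}, 0b111: {"vs": 300.0}}
tree = RuntimeOctree()
mesh(tree, db)
for step in range(3):
    solve_step(tree, dt=0.01)
    visualize(tree, step)
```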

3
New Methodology
  • The new methodology has been implemented in
    Hercules, a system that targets unstructured
    octree-based finite element wave propagation
    simulations.
  • [Figure illustrating the new methodology]

4
New Methodology
  • Meshing, partitioning, solving, and visualizing
    are all implemented on top of, and operate on, a
    unified octree data structure
  • There is only one executable (an MPI code), in
    which all the components are tightly coupled and
    execute on the same processors (see the sketch
    below)
  • The only input is a database description of the
    spatial variation of the material properties
    (density, velocities, attenuation) of the
    structure
  • The only outputs are lightweight visualization
    frames, generated at every visualization step as
    the simulation proceeds
  • We will modify Hercules to add summary frames,
    such as spatial distributions of peak ground
    velocity and response spectra, which are also
    generated as the simulation proceeds and are
    output at the conclusion of the simulation.
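
A minimal sketch of the single-executable design, assuming mpi4py is available and using illustrative stub stages (these are not the Hercules source): meshing, partitioning, solving, and visualization all run on the same ranks inside one MPI program, and only lightweight frames leave the simulation.

```python
# Sketch of the one-executable pipeline (requires mpi4py; all names
# here are illustrative placeholders, not the Hercules code).
from mpi4py import MPI

NUM_STEPS, VIS_INTERVAL = 10, 5

def build_local_octree(rank):
    # Stand-in for meshing + partitioning: each rank owns some elements.
    return [float(rank)] * 4

def advance_wavefield(mesh, comm):
    # Stand-in for the solver's time step (real code exchanges halos).
    for i in range(len(mesh)):
        mesh[i] += 0.1

def render_local(mesh):
    # Stand-in for per-rank rendering of a lightweight frame.
    return max(mesh)

def main():
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    mesh = build_local_octree(rank)          # meshing + partitioning
    for step in range(NUM_STEPS):
        advance_wavefield(mesh, comm)        # solving, on the same ranks
        if step % VIS_INTERVAL == 0:
            frame = comm.gather(render_local(mesh), root=0)
            if rank == 0:                    # only output: light frames
                print(f"frame at step {step}: {frame}")

if __name__ == "__main__":
    main()
```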

5
New Challenges
  • Currently, Hercules performs simulations based on
    a flat free surface and kinematic earthquake
    sources.
  • Future versions will need to address
  • - Topography (orography)
  • - Dynamic rupture
  • To address these challenges, we will need to
    develop a new octree-based hybrid mesher, capable
    of generating, in addition to the current regular
    hexahedra, irregular hexahedra (and perhaps
    tetrahedra); for context, the basic octree sizing
    rule is sketched after this list
  • Scalability for the ultrascale simulations we
    contemplate will be a major issue.
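
For context, here is the resolution rule an octree wave-propagation mesher typically applies, sketched under the assumption of roughly 8 points per wavelength (the constant and function names are assumptions, not Hercules values): an element edge must not exceed the local shortest wavelength, vs / f_max, divided by the points-per-wavelength target.

```python
# Illustrative octree sizing rule (assumed constants, not from Hercules):
# refine a leaf until its edge resolves the local shortest wavelength.

PPW = 8.0  # assumed points per wavelength

def needs_refinement(edge_len_m, vs_mps, f_max_hz, ppw=PPW):
    """Split a leaf whose edge exceeds the local resolution limit."""
    max_edge = vs_mps / (f_max_hz * ppw)
    return edge_len_m > max_edge

# Soft basin material (vs = 100 m/s) at 2 Hz needs edges <= 6.25 m,
# while stiffer rock (vs = 3000 m/s) tolerates edges up to 187.5 m.
print(needs_refinement(10.0, 100.0, 2.0))    # True: refine
print(needs_refinement(10.0, 3000.0, 2.0))   # False: keep
```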

6
Ultrascale Simulations
  • To estimate the computational needs for
    performing TeraShake-class simulations on a
    volume of 1000 km x 600 km x 80 km, we use as a
    baseline a previous (smaller) simulation
    performed on the Alpha EV68-based TCS system at
    the PSC (a quick check of the derived
    per-processor figures follows this list)
  • - Volume: 100 km x 100 km x 37.5 km
  • - Minimum shear wave velocity: 100 m/s
  • - Maximum frequency: 2 Hz
  • - Number of mesh points: 1.2 billion
  • - Number of processors: 2000
  • - Memory used: 2 TB
  • - Runtime: 5 hr for a 5 s simulation
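
A quick back-of-envelope check derived only from the baseline numbers above (no additional measured data): the implied load per processor and memory footprint per mesh point.

```python
# Figures derived from the baseline slide above.
points = 1.2e9        # mesh points
procs = 2000          # processors
memory_bytes = 2e12   # 2 TB

print(f"{points / procs:,.0f} points per processor")         # 600,000
print(f"{memory_bytes / points:,.0f} bytes per mesh point")  # ~1,667
```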

7
Ultrascale Simulations
  • NSF expects to have in place a 100 Tflop machine
    through the Petaflop Initiative by spring 2007.
    With this machine, we can contemplate running a
    TeraShake simulation in Year 1 with these
    characteristics
  • - Volume: 1000 km x 600 km x 80 km (100 times
    more mesh points)
  • - Minimum shear wave velocity: 100 m/s
  • - Maximum frequency: 1 Hz (10 times fewer
    mesh points)
  • - Number of mesh points: 12 billion
  • - Processors needed: 20,000 (based on 10
    times more mesh points)
  • - Memory needed: 20 TB
  • - Runtime: 45 hr for a 180 s simulation
  • (Runtime based on 36 times longer simulation
    time, 2 times faster processors, 2 times longer
    time step, and perfect scalability; the
    arithmetic is worked out below)
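
The parenthetical runtime estimate above, worked out explicitly under the same assumptions the slide states (perfect scalability):

```python
# The slide's runtime estimate, step by step.
baseline_runtime_hr = 5.0          # 5 hr for 5 s of simulated time
sim_time_factor = 180.0 / 5.0      # 36x longer simulated time
cpu_speedup = 2.0                  # 2x faster processors
dt_factor = 2.0                    # 2x longer time step (coarser mesh)

runtime_hr = baseline_runtime_hr * sim_time_factor / (cpu_speedup * dt_factor)
print(f"estimated runtime: {runtime_hr:.0f} hr")   # 45 hr
```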

8
Proposed Schedule
  • Year 1
  • - TeraShake simulation: 1 Hz maximum
    frequency with 100 m/s minimum velocity
  • - New capability: on-line visualization
  • Year 3
  • - TeraShake simulation: 2 Hz maximum
    frequency with 100 m/s minimum velocity
  • - New capabilities: topography, dynamic
    rupture, interactive visualization steering
  • Year 5 (assuming petaflop machines become
    available)
  • - TeraShake simulation: 3 Hz maximum frequency
    with 100 m/s minimum velocity
  • - New capability: Web-based steering control