Spiral Interaction (SPINT): Analysis of changing the excitability of the medium with C2

1
Spiral Interaction (SPINT) Analysis of changing
the excitability of the medium with C2
  • Presented by Group 2
  • Juan Sandoval
  • Kalpana Pal
  • Hitesh Patel
  • Pedro Perez
  • CMPT 495 / 680 Parallel Architectures and
    Algorithms, Prof. Roman Zaritski

April 30, 2007
2
Outline
  • Introduction
  • Algorithm
  • Implementation of the Galaxy MPI Cluster
  • Snapshots
  • Scalability/Speed-up Curves
  • Snapshot Results Summary
  • Summary / Conclusions

3
To Do
  • Pace C2 periodically (sinusoidally) around the
    value 0.75.
  • Use different frequencies and amplitudes to
    obtain different behaviors of spiral waves.
  • Report any interesting changes in spiral wave
    behavior.

4
Introduction
  • This project uses a Spiral Interaction (SPINT)
    code as the basis.
  • Experiment by changing the excitability of the
    medium C2 (currently set to 0.75).
  • To make the simulations faster, we reduced the
    domain size from 1000x2000 to 500x1000.
  • A simple FitzHugh-Nagumo model is used to
    simulate the excitability of the medium.
  • A parallel algorithm is used to distribute the
    task among the nodes of the Galaxy cluster
    because of the large size of the domain and the
    number of time steps.

5
Introduction (cont.)
  • The program is run on 15 CPUs.
  • The program uses a domain slicing approach to
    distribute tasks among 14 CPU slaves.
  • The grid is divided into 14 slices, one per
    slave CPU, with one additional CPU acting as
    master.
  • The graphical window contains a grid of size
    N x M (N, M >= 100).
  • Each slave CPU computes values of the functions u
    and v for each grid point inside its slice, for
    every time step (dt = 0.1), and for a total
    simulation time of DT = 10000.

6
Algorithm of the Problem
  • The rectangular domain is sliced into rectangular
    strips.
  • Each available processor takes care of one such
    strip.
  • For each time step, each processor updates the
    unknown functions on its corresponding strip.
  • Neighboring processors exchange the common
    boundary information using message passing (MPI).
  • The entire rectangular domain gets updated.
  • As a result, while the program is running, the
    spiral appears as one seamless result at any
    given time on the entire rectangular domain.

7
Algorithm of the Problem (cont.)
  • The spint.cc program was modified by adding the
    lines of code below to the main loop of the
    program:
  • C2 = 0.75 + A * sin(w * DiscrTime)
  • a = (C1 + C2) * E1 / C2
  • E2 = (C2 * a + C3) / (C2 + C3)
  • This modification changes the value of C2
    periodically (sinusoidally) around 0.75,
    alternately raising and lowering the excitability
    of the medium.
  • We experimented with the amplitude (A) and
    frequency (w) values across several experiments.

8
Algorithm of the Problem (cont.)
  • By changing the code and using different values
    of A and w, we obtained excitability between a
    lowest value of C2 = 0.60 and a highest value of
    C2 = 0.90.
  • That means a higher amplitude value produces a
    larger increase in the number of waves on the
    spiral.
  • The same goes for frequency: making w larger
    makes the phase (w * t) change more quickly with
    respect to time, so C2 oscillates faster.

9
Implementation on the Galaxy MPI Cluster
  • Implementation in C
  • 15 CPU Pentium based Cluster (located in RI 109)
  • Red Hat Linux 6.2 Operating System
  • MPI library (parallel communication)
  • OpenGL library (data visualization)
  • Pthread library (the graphical display functions)
  • Xmanager Program (The graphical interface display)

10
Implementation on the Galaxy MPI Cluster (cont.)
  • 1000 x 2000 Rectangular Grid
  • Cells on the edges of the grid are always
    unexcited (u = 0, v = 0)
  • Using vertical domain slicing to divide the
    workload among the 14 slave CPUs
  • Using MPI send and receive functions to exchange
    cell values on the borders between adjacent
    chunks

11
Task Distribution of each CPU (Slave)
[Diagram: CPU-1 ... CPU-14, one vertical strip per slave]
  • Master node splits the work among 14 CPUs
  • 1 CPU acts as Master (has copy of whole array)
  • 14 CPUs act as Slaves (handle parts of the array)
  • Each Slave node sends screen updates to the
    Master node

12
Classic Initial Conditions
  • Grid Size = 1000x2000
  • DT = 10000
  • C2 = 0.75
  • CPUs = 15

13
Classic Initial Spiral
14
SNAPSHOTS
15
Grid Size = 500x1000, A = 0.18, w = 0.04, C2 = 0.60 at
DT = 10000
Figure 1 (six panels)
16
Grid Size = 500x1000, A = 0.12, w = 0.04, C2 = 0.65 at
DT = 10000
Figure 2 (four panels)
17
Grid Size = 500x1000, A = 0.16, w = 0.07, C2 = 0.60 at
DT = 10000
Figure 3 (four panels)
18
Scalability/Speed-up
19
Scalability/Speed-up
  • Grid Size = 500x1000
  • DT = 10000
  • 2 <= np <= 15

20
Scalability/Speed-up (contd)
Domain Size = 500 x 1000
Graph 1
21
Scalability/Speed-up (contd)
DT = 10000
Graph 2
22
Speed-up Results
  • Speed-up was optimal at 15 CPUs
  • Speed-up increases as DT increases for the same
    number of CPUs (Graph 1)
  • Speed-up is optimal at the largest domain,
    1000x2000 (Graph 2)
  • Speed-up tends to decrease after 9 CPUs for the
    smallest domain, e.g. 100x200 (Graph 2)

23
Snapshot Results Analysis
24
1. A = 0.19, w = 0.04
2. A = 0.18, w = 0.04
3. A = 0.15, w = 0.04
4. A = 0.13, w = 0.04
5. A = 0.12, w = 0.04
6. A = 0.1, w = 0.04
25
1. A = 0.19, w = 0.04: 0.56 < C2 < 0.94
26
2. A = 0.18, w = 0.04: 0.57 < C2 < 0.93
27
3. A = 0.15, w = 0.04: 0.60 < C2 < 0.90
28
4. A = 0.13, w = 0.04: 0.62 < C2 < 0.88
29
5. A = 0.12, w = 0.04: 0.63 < C2 < 0.87
30
6. A = 0.1, w = 0.04: 0.65 < C2 < 0.85
31
Summary / Conclusions
  • Classic spiral found for 0.65 < C2 < 0.85
  • Fully excited domain, but no spirals, found for
    0.60 < C2 < 0.90
  • No excitation found for 0.56 < C2 < 0.94
  • Highest speed-up of 6.5 is observed for domain
    size 1000x2000 on 15 CPUs
  • Speed-up increases with an increase in the DT
    value for the same number of CPUs
  • Speed-up decreases for the smallest domain size,
    100x200, after 9 CPUs

32
Responsibility matrix for the contribution of
group members
33
Thank You