1
Symmetric Multiprocessing Capabilities of the
Resistive Companion Solver
VTB Users and Developers Conference, Columbia,
SC, September 15, 2004
  • Rod Leonard
  • Graduate Student
  • Dept of Electrical Engineering
  • University of South Carolina

2
Outline
  • The need for SMP
  • The challenge and cost
  • Features currently available
  • Performance analysis of currently available
    features
  • Future Directions

3
Interesting Systems
  • Lots of components
  • Computationally intensive components
  • Both

4
The Problem
  • Computationally intensive models
  • A large number of models means a large matrix to
    solve
  • Matrix inversion is costly
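To make "costly" concrete: the work of a dense direct solve grows roughly with the cube of the matrix dimension, which is why adding models quickly blows up the solve time. A minimal sketch of that scaling (the ~(2/3)n³ FLOP estimate for LU factorization is a standard textbook figure, not taken from the slides):

```python
def lu_flops(n: int) -> float:
    # Rough floating-point operation count for dense LU
    # factorization of an n-by-n matrix: ~(2/3) * n^3.
    return (2.0 / 3.0) * n ** 3

for n in (500, 1000, 2000):
    print(f"n = {n:5d}: ~{lu_flops(n):.2e} FLOPs")

# Doubling the matrix size makes the factorization ~8x as expensive.
print(lu_flops(2000) / lu_flops(1000))  # 8.0
```

This cubic growth is what motivates splitting one big matrix into several smaller ones, as the later slides describe.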

5
The Solution
  • A specialized machine with multiple processors
  • Divide the workload to work the problem faster

6
The Reality
  • Does not scale linearly; not everything can be
    parallelized
  • Overhead is associated with the parallel solution
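The sub-linear scaling can be quantified with Amdahl's law (my framing; the law is not named on the slide): if only a fraction p of the work can be parallelized, N processors give at most a speedup of 1 / ((1 - p) + p/N).

```python
def amdahl_speedup(p: float, n: int) -> float:
    # p: fraction of the work that can be parallelized (0..1)
    # n: number of processors
    # The serial fraction (1 - p) bounds the achievable speedup.
    return 1.0 / ((1.0 - p) + p / n)

# Even with half the work parallelized, two processors
# give only about 1.33x, not 2x:
print(amdahl_speedup(0.5, 2))
```

This is consistent with the modest gains reported later in the talk: unless most of the solver's time is spent in parallelizable work, extra processors yield diminishing returns.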

7
Where We Are Now
  • Threading of isolated systems

8
Available Features
  • Make use of it now

9
Fitting the Problem to the Solution
  • Using hardware-in-the-loop solutions to improve
    performance in the software
  • Hardware-in-the-loop: a decoupled system behaving
    as if it were coupled
  • Software matrix separation: a coupled system we
    would like to decouple while maintaining coupled
    behavior

10
Fitting the Problem to the Solution
  • The system is decoupled through the Resistive
    Companion method
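A sketch of the idea (a hypothetical, heavily simplified stand-in for the actual solver: once the system is decoupled into independent blocks, each small conductance system G·x = i can be solved on its own thread):

```python
import threading

def solve_2x2(G, i):
    # Direct solve of G x = i for a 2x2 block via Cramer's rule.
    (a, b), (c, d) = G
    det = a * d - b * c
    return [(d * i[0] - b * i[1]) / det,
            (a * i[1] - c * i[0]) / det]

# Two decoupled subsystems: each (G, i) pair is independent,
# so the solves can proceed concurrently.
blocks = [
    ([[2.0, -1.0], [-1.0, 2.0]], [1.0, 0.0]),
    ([[4.0, 0.0], [0.0, 4.0]], [8.0, 4.0]),
]

results = [None] * len(blocks)

def worker(k):
    G, i = blocks[k]
    results[k] = solve_2x2(G, i)

threads = [threading.Thread(target=worker, args=(k,))
           for k in range(len(blocks))]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)
```

Note that in CPython, pure-Python arithmetic like this does not actually run in parallel because of the global interpreter lock; in a real solver the heavy matrix work happens in native code, which is where threading pays off.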

11
The Results
  • A ten-second simulation was performed
  • Without multithreading: 227.921 s
  • With multithreading: 208.719 s
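For context, the reported times work out to roughly a 1.09x speedup, i.e. about an 8% reduction in wall-clock time; a quick check of the arithmetic:

```python
t_serial = 227.921    # seconds, without multithreading (from the slide)
t_threaded = 208.719  # seconds, with multithreading (from the slide)

speedup = t_serial / t_threaded
saved = 1.0 - t_threaded / t_serial

print(f"speedup: {speedup:.3f}x, time saved: {saved:.1%}")
```

A modest gain, consistent with the earlier point that not everything in the solver can be parallelized and that the parallel solution carries overhead.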

12
Where We Are Going Next
  • Optimization
  • Separation of the matrix through other methods
  • Parallelizing the matrix operations themselves
    using existing packages and other methods

13
Where We Are Going Next
  • Expensive machines
  • Lots of cheap machines: a Beowulf cluster
  • New challenges, new benefits

14
Thank you
  • Thank you for your time
  • Rod Leonard
  • University of South Carolina
  • leonard@engr.sc.edu