Title: ProActive Architecture and Application for the Management of Electrical Networks
1 ProActive Architecture and Application for the Management of Electrical Networks
- E. Zimeo
- Department of Engineering - RCOST
- University of Sannio, Benevento, Italy
- zimeo_at_unisannio.it
- ProActive User Group and Context
- Sophia Antipolis, Nice, France
- 18 October 2004
2 Introduction 1/4
- On-line power systems security analysis (OPSSA) is one of the most relevant assessments made to assure the optimal control and management of electrical networks
- There are many phenomena (contingencies) that can compromise power system operation
  - an unexpected variation in the power system structure
  - a sudden change of the operational conditions
- OPSSA deals with the assessment of the security and reliability levels of the system under any possible contingency
3 Introduction 2/4
- Three main steps
  - Screening of the most credible contingencies
  - Predicting their impact on the entire system operation
    - the contingency analysis is performed according to the (n-1) criterion
    - for each credible contingency, the system behaviour is simulated and violations of the operational limits are checked
    - the system behaviour is verified by finding the solution of the system state equations (power flow or load flow equations)
  - Preventive and corrective control
    - identification of proper control actions able to reduce the risk of system malfunctioning
4 Introduction 3/4
- Focus: on-line prediction (step 2)
  - computation times should be less than a few minutes for the information to be useful
- Unfortunately, OPSSA is computing and data intensive
  - structure of modern power systems
  - computational complexity of the algorithms
  - number of contingencies to analyze
- New methodologies to reduce computation times
  - parallel processing, first on supercomputers and then on clusters and networks of workstations (to reduce costs), has been employed (e.g. with PVM)
5 Introduction 4/4
- Our proposal
  - a Java-based distributed architecture instead of PVM
- Advantages
  - programming is easier
  - portability is assured on every architecture implementing a JVM
  - better integration with Web technologies
  - object-oriented programming allows architectural and design patterns to be adopted
- Disadvantages
  - efficiency is reduced due to Java communication overheads
  - execution time is higher due to interpretation
6 The overall distributed architecture
[Architecture diagram; design drivers: scalability, accessibility, manageability]
7 A network of field power meters (FEMs)
- distributed in the most critical sections of the electrical grid
- to provide input field data for the power flow equations, such as active, reactive and apparent energy
- based on ION 7330-7600™ units
- equipped with an on-board web server for full remote control
8 A network of distributed Intelligent Electronic Devices (IEDs)
- distributed in the most critical sections of the electrical grid
- to continuously monitor the thermal state of critical sections of the electrical grid
- to analyse system behaviour
- to verify limit violations (such as load capability)
- remotely controlled via the TCP/IP protocol
9 Clients
- used by the power system operators
- to access OPSSA information for system monitoring and management
10 Web components
- handle the presentation at the server side
11 Business Logic Components
- access data stored in a DBMS
- continuously monitor the power system, coordinating the execution of the security analysis
12 A DataBase Management System (DBMS)
- used to permanently store output data
- remotely accessed
13 Computational Engine
- a parallel and distributed processing system
- to solve the power flow equations in a short time
14 Computational engine algorithm
- Adopting a concurrent algorithm based on domain decomposition
15 Computational Engine design goals
- The computational engine is designed as a framework whose main goals are
  - high performance
    - the framework must be able to compute the analysis of each contingency in parallel with the others
  - flexibility and scalability
    - the framework must be able to exploit all the available resources to minimize the computation time
  - hierarchy
    - the framework must be able to exploit clusters of workstations or networks of workstations handled by front-ends
- Architectural design solution: hierarchical master/slave model
16 The hierarchical m/s model 1/3
- hierarchical master/slave model
- object-level parallelism
- master and slaves are transparently created
17 The hierarchical m/s model 2/3
- Three implementation problems
  - Task allocation in order to minimize the execution time
    - time minimization algorithm
  - Object-oriented design to support a transparent hierarchical master/slave model
    - hierarchical master/slave pattern
  - Object-oriented implementation to support a distributed and parallel implementation of the hierarchical master/slave pattern
    - RMI-based implementation
    - ProActive-based implementation
18 Task allocation: time minimization
- This phase is important and complex due to the heterogeneity of the hardware architectures
  - to minimize load imbalance, the workload should be distributed dynamically and adaptively to the changing conditions of the resources
  - we consider only static information
- Define N as the number of independent sub-tasks into which the overall task is divided, n as the number of available resources at a certain level, ti as the elapsed time that resource i needs to complete a single sub-task, and ni as the number of sub-tasks assigned to resource i
- The problem of minimizing the execution time using the assigned resources can then be formulated as shown below
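A plausible formulation of this allocation problem under the static assumptions above, written here in LaTeX (the exact objective used in the original work may differ): resource i finishes after n_i t_i, and the overall execution time is fixed by the slowest resource, so

  \min_{n_1,\dots,n_n} \; \max_{1 \le i \le n} \; n_i \, t_i
  \quad \text{subject to} \quad \sum_{i=1}^{n} n_i = N, \qquad n_i \in \{0, 1, 2, \dots\}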
19 The hierarchical m/s pattern
- M/S pattern
  - splitWork() is used to create the slave objects, to allocate them onto the available resources, and to partition a task into sub-tasks
  - callSlaves() is used to call the method service() on all the slave objects, which perform the same computation on different inputs
  - combineResults() is used to combine the partial results in order to produce the final result
- Hierarchical M/S pattern
  - an additional class (Server) is introduced
  - service() is used to hide the difference between master and slave objects (a sketch of the pattern follows this list)
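A minimal, hypothetical Java sketch of the pattern described above. Only the names splitWork(), callSlaves(), combineResults() and service() come from the slides; the Task and Result types, the split() helper and the class bodies are illustrative assumptions.

// Hypothetical types reused by the sketches that follow
class Result { /* holds a partial or final result */ }
interface Task {
    Result compute();          // the actual numerical work (e.g. one contingency analysis)
    Task[] split(int parts);   // partition into independent sub-tasks
}

// service() hides whether the receiver is a master (which delegates) or a plain slave
abstract class Server {
    public abstract Result service(Task task);
}

class Slave extends Server {
    public Result service(Task task) {
        return task.compute();                 // leaf: perform the computation locally
    }
}

abstract class Master extends Server {
    protected Server[] children;               // slaves or lower-level masters (front-ends)

    public Result service(Task task) {
        Task[] subTasks = splitWork(task);
        Result[] partials = callSlaves(subTasks);
        return combineResults(partials);
    }
    protected Task[] splitWork(Task task) {
        // in the full pattern this also creates the children and allocates them onto resources
        return task.split(children.length);
    }
    protected Result[] callSlaves(Task[] subTasks) {
        Result[] partials = new Result[subTasks.length];
        for (int i = 0; i < subTasks.length; i++)
            partials[i] = children[i].service(subTasks[i]);  // ideally asynchronous (next slides)
        return partials;
    }
    protected abstract Result combineResults(Result[] partials);
}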
20 HM/S pattern implementation
- To support a distributed and parallel implementation of the HM/S pattern
  - service() has to be invoked asynchronously
  - each service() method has to be executed by a different computational resource
- Solution: an object-oriented middleware based on remote method invocations
  - RMI, the well-known middleware provided by Sun
  - ProActive, a Java library for seamless sequential, concurrent and distributed programming
21 RMI implementation of the HM/S pattern
- Each Server
  - is defined as a remote object
  - is allocated onto a different computational resource
  - its method service() has to be asynchronously and remotely invoked by the client
  - the asynchronous invocation is implemented by invoking service() within a dedicated thread of control (see the sketch after this list)
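A minimal sketch of this dedicated-thread technique, reusing the hypothetical Task and Result types from the pattern sketch (for real RMI use they would also have to be Serializable); the RemoteServer interface and the threading code are illustrative, not the original implementation.

import java.rmi.Remote;
import java.rmi.RemoteException;

// Remote counterpart of the Server class of the HM/S pattern
interface RemoteServer extends Remote {
    Result service(Task task) throws RemoteException;
}

// One dedicated thread per slave: the RMI call itself is synchronous,
// but the thread makes it asynchronous with respect to the caller.
class ServiceThread extends Thread {
    private final RemoteServer slave;
    private final Task subTask;
    private Result result;                     // filled when the remote call returns

    ServiceThread(RemoteServer slave, Task subTask) {
        this.slave = slave;
        this.subTask = subTask;
    }
    public void run() {
        try {
            result = slave.service(subTask);
        } catch (RemoteException e) {
            e.printStackTrace();
        }
    }
    Result getResult() { return result; }      // valid after join()
}

class AsyncRmiCaller {
    static Result[] callSlaves(RemoteServer[] slaves, Task[] subTasks) throws InterruptedException {
        ServiceThread[] threads = new ServiceThread[slaves.length];
        for (int i = 0; i < slaves.length; i++) {
            threads[i] = new ServiceThread(slaves[i], subTasks[i]);
            threads[i].start();                // launch all remote invocations in parallel
        }
        Result[] partials = new Result[slaves.length];
        for (int i = 0; i < slaves.length; i++) {
            threads[i].join();                 // wait for each remote computation to complete
            partials[i] = threads[i].getResult();
        }
        return partials;
    }
}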
22 ProActive implementation of the HM/S pattern
- Each Server
  - is defined as an active object (that can be dynamically created)
  - is allocated on a different computational resource (dynamically)
  - its method service() has to be asynchronously and remotely invoked by the client
  - the asynchronous invocation is directly supported by ProActive (see the sketch after this list)
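A corresponding sketch with ProActive (2004-era API recalled from memory, so signatures should be checked): the Server is created as an active object on a remote node with ProActive.newActive(), and the call to service() is asynchronous by default, returning a future that is transparently waited on when its value is first used (wait-by-necessity). The node URLs are hypothetical; Task is assumed serializable, and Result is assumed to be a non-final class with a no-argument constructor so that it can be returned as a future.

import org.objectweb.proactive.ProActive;
import org.objectweb.proactive.core.node.Node;
import org.objectweb.proactive.core.node.NodeFactory;

class ProActiveCaller {
    static Result[] callSlaves(String[] nodeUrls, Task[] subTasks) throws Exception {
        Server[] slaves = new Server[nodeUrls.length];
        for (int i = 0; i < nodeUrls.length; i++) {
            Node node = NodeFactory.getNode(nodeUrls[i]);          // remote JVM hosting the slave
            slaves[i] = (Server) ProActive.newActive(
                    Slave.class.getName(), new Object[] {}, node); // dynamic remote creation
        }
        Result[] partials = new Result[slaves.length];
        for (int i = 0; i < slaves.length; i++)
            partials[i] = slaves[i].service(subTasks[i]);          // non-blocking: returns a future
        // each partials[i] is a future; using it (e.g. in combineResults)
        // blocks until the corresponding remote computation has finished
        return partials;
    }
}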
23 Software platform evaluation on a testbed 1/2
- Standard IEEE 118-node test network
  - the electrical network description is local to each computational resource
- The experiments refer to 186 contingencies
- A COW (cluster of workstations) with P1 processors
  - P1: Pentium II 350 MHz
- A NOW (network of workstations) with P2 processors
  - P2: Pentium IV 2 GHz
- RMI and ProActive
- Tomcat 4.1
- Java SDK 1.4.1
- MATLAB 6.5
24 Software platform evaluation on a testbed 2/2
- The framework shows good results
- RMI vs ProActive
  - they exhibit almost the same results (ProActive is based on RMI)
  - ProActive simplifies parallel and distributed programming
  - ProActive enables group communication (not used in this work)
[Charts: execution times and speed-up]
25 Open problems
- Problem 1
  - the method splitWork() has to know dynamically the number of computers to use and the computational power of each computer
- Solution 1
  - a broker-based architecture
- Problem 2
  - too much code for invoking the same method on each slave
- Solution 2
  - adopting groups to simplify the code in the master/slave model
- Problem 3
  - too much data communication
- Solution 3
  - implementation of group communication on top of multicast protocols
26 Broker-based architecture: a step further toward flexibility and efficiency
27 A Grid-based Computational Engine
- At the University of Sannio we have developed a broker-based middleware
  - HiMM (Hierarchical Metacomputer Middleware)
  - a middleware that provides an object-based programming model
28 HiMM collective layer
- Economy-driven resource broker
  - eases application submission, simplifying application deployment and the resource management functionalities
- Application description
  - JDF (Job Description Format)
- Application QoS requirements
  - URDF (User Requirements Description Format)
  - the deadline and the budget
  - time-optimisation algorithm or cost-optimisation algorithm
29 OPSSA deployment and evaluation 1/3
- The workload is distributed among the available resources according to their performance and price
30 OPSSA deployment and evaluation 2/3
- The resources were temporarily dedicated to the OPSSA computations
- The budget is limited
31 OPSSA deployment and evaluation 3/3
- The resources were temporarily dedicated to the OPSSA computations
- The budget is unlimited
32 Which programming paradigm for OPSSA on HiMM?
- OPSSA on HiMM uses a specific implementation of ProActive
33 Integration HiMM / ProActive 1/2
- The implementation of P/H has been made easy thanks to the particular organization of both software systems
  - ProActive is heavily based on the Adapter pattern
  - HiMM is implemented following the Component Framework approach
34 Integration HiMM / ProActive 2/2

public class MatrixMultiply extends ProActiveExecutionEnvironment {
    private NodeMgr nodeMgr;
    private volatile boolean stop = false;

    public void init(NodeMgr nm) {
        super.init(nm);
        nodeMgr = nm;
        // register the HiMM node factory so that ProActive nodes are created through HiMM
        NodeFactory.setFactory("himm", new HNodeFactory());
    }

    public void start() {
        stop = false;
        if (nodeMgr.isRoot()) {
            HMatrix[] mDxActiveGroup = null;
            int dim = 0;
            System.out.println("Insert matrix size");
            // <read dim>
            Matrix mDx = new Matrix(dim);
            LevelMgr lm = nodeMgr.downLevelMgr();
            int size = lm.size() - 1;
            Object[] po = new Object[] { mDx.getTab() };
            mDxActiveGroup = new HMatrix[size];
            // ...
        }
    }
}
public class HMatrix extends Matrix implements InitActive, RunActive, EndActive {
    private HMatrix[] subMats;

    public void initActivity(Body body) {
        HBodyAdapter hba = (HBodyAdapter) body.getRemoteAdapter();
        NodeMgr nodeMgr = hba.getNodeMgr();
        if (nodeMgr.isCoordinator()) {
            // the coordinator creates one HMatrix active object per node of the lower level
            LevelMgr lm = nodeMgr.downLevelMgr();
            int size = lm.size() - 1;
            subMats = new HMatrix[size];
            Object[] po = new Object[] { tab };
            for (int i = 0; i < size; i++) {
                subMats[i] = (HMatrix) ProActive.newActive(
                        HMatrix.class.getName(), po,
                        NodeFactory.getNode("himm://" + (i + 1)), null,
                        HMetaObjectFactory.newInstance());
            }
        }
    }

    public void runActivity(Body body) {
        Service service = new Service(body);
        while (body.isActive()) {
            Request r = service.blockingRemoveOldest();
            HBodyAdapter hba = (HBodyAdapter) body.getRemoteAdapter();
            NodeMgr nodeMgr = hba.getNodeMgr();
            if (r.getMethodName().equals("multiply") && nodeMgr.isCoordinator()) {
                // the coordinator splits the multiplication and delegates it to the sub-matrices
                HiMMRequest rr = (HiMMRequest) r;
                float[][] mSxtab = (float[][]) rr.methodCall.getParameter(0);
                Matrix mSx = new Matrix(mSxtab);
                Matrix[] results = new Matrix[subMats.length];
                Object[] subSxMats = mSx.createSubMatrixes(subMats.length);
                for (int i = 0; i < subMats.length; i++) {
                    results[i] = subMats[i].multiply((float[][]) subSxMats[i]);
                }
                Matrix result = Matrix.rebuild(results, mSxtab.length);
                Object[] params = new Object[] { result };
                Class[] classes = new Class[] { Matrix.class };
                try {
                    Method m = HMatrix.class.getMethod("getResult", classes);
                    rr.methodCall = MethodCall.getMethodCall(m, params);
                } catch (NoSuchMethodException e) {
                }
                service.serve(rr);
            } else {
                service.serve(r);
            }
        }
    }
}
35 Performance analysis 1/2
- We have conducted an analysis to compare RMI with HiMM and P/R with P/H
- The benchmark is the invocation of a remote method (with one parameter and an empty body) and the return of an integer value, for a varying size of the parameter (a sketch of such a benchmark follows this list)
- The figure shows that for a small size (1 byte) of the parameter, the RMI RTT (0.9 ms) is smaller than the HiMM RTT (1.3 ms), but the P/H RTT (7.8 ms) is smaller than the P/R RTT (9.2 ms)
- Even though the HiMM transport layer is not optimized, P/H behaves better than P/R thanks to the use of asynchronous messaging, which allows low-level ProActive operations to be overlapped with communication
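A hypothetical RMI version of such a micro-benchmark, only to make the measured quantity concrete; the interface name, registry URL and iteration counts are illustrative and not taken from the original experiments.

import java.rmi.Naming;
import java.rmi.Remote;
import java.rmi.RemoteException;

// Remote method with one parameter and an empty body, returning an integer
interface Ping extends Remote {
    int ping(byte[] payload) throws RemoteException;
}

class RttBenchmark {
    public static void main(String[] args) throws Exception {
        Ping server = (Ping) Naming.lookup("rmi://remote-host/ping"); // hypothetical URL
        byte[] payload = new byte[Integer.parseInt(args[0])];         // varying parameter size
        int iterations = 1000;
        for (int i = 0; i < 100; i++) server.ping(payload);           // warm-up
        long start = System.currentTimeMillis();
        for (int i = 0; i < iterations; i++) server.ping(payload);
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("average RTT: " + ((double) elapsed / iterations) + " ms");
    }
}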
36 Performance analysis 2/2
- A further analysis aimed to measure the speedup factor by running a simple application benchmark
- The benchmark is the product of two square matrices, for different matrix sizes
- The performance obtained with P/H is slightly better than the one obtained with P/R, especially when the size of the matrices is small
  - this is mainly due to the improved remote method invocation implemented by HiMM
- When the size of the matrices is large, the execution time is dominated by the time of the matrix serialization, which is the same in P/H and P/R
37 Ongoing work
38 ProActive Groups over Reliable Multicast
- We intend to implement OPSSA by using ProActive hierarchical groups (see the sketch after this list)
- Groups at different hierarchical levels will be mapped onto HiMM macro-nodes
- HiMM will provide reliable multicast to ProActive groups for communication among members
[Diagram: nested groups mapped onto the hierarchy of macro-nodes]
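As a sketch of where this is heading, a typed ProActive group could replace the explicit per-slave loop of the earlier sketches; the group API details (ProActiveGroup.newGroup and the scatter semantics for distributing distinct sub-tasks) are recalled from memory and should be checked against the ProActive release in use.

import org.objectweb.proactive.core.group.ProActiveGroup;
import org.objectweb.proactive.core.node.Node;

class GroupCaller {
    static Result callSlaves(Node[] nodes, Task task) throws Exception {
        // one Slave member per node, all reachable through a single typed group
        Object[][] params = new Object[nodes.length][0];
        Server slaves = (Server) ProActiveGroup.newGroup(
                Slave.class.getName(), params, nodes);
        // a single call on the typed group broadcasts service() to every member;
        // the result is itself a group of futures to be combined afterwards
        return slaves.service(task);
    }
}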
39 Web services for process coordination in OPSSA
[Architecture diagram: scalability, accessibility, manageability; a WSRF BPEL engine is added for process coordination]
40 Generic Middleware for Sensor Networks
[Architecture diagram: scalability, accessibility, manageability; a sensor networks middleware is added alongside the WSRF BPEL engine]
41 HiMM connectivity layer
- HiMM manages resources according to a hierarchical topology (Hierarchical Metacomputers) in order to
  - meet the constraints of the Internet organization
  - exploit the heterogeneity of networks
  - improve scalability
- Clusters of computers hidden from the Internet or interconnected by fast networks are seen as macro-nodes
- A macro-node is a high-level concept that allows clusters to be transparently used as a single powerful machine
- A macro-node can in turn contain other macro-nodes
- A metacomputer can be organized according to a recursive tree topology
- The Coordinator interfaces the macro-node with the metacomputer network
42 HiMM resource layer
- A macro-node is characterized by two main components
  - the Host Manager (HM)
  - the Resource Manager (RM)
- The HM runs on each node willing to donate CPU cycles
- The RM is used to publish the available computing power at each level
- A macro-node manages another important component
  - the Distributed Class Storage System (DCSS)
  - this component allows HiMM to run applications even if the application code is not present on the nodes
43 HiMM node architecture
- HiMM implements its services in several software components whose interfaces are designed according to a Component Framework approach
- Both nodes and coordinators are processes in which a set of software components are loaded either at start-up or at run-time
- The main components are
  - the Node Manager (NM)
    - guarantees macro-node consistency
    - provides users with services for writing distributed applications
    - takes charge of some system tasks, such as the creation of new nodes at run-time
  - the Node Engine (NE)
  - the Message Consumer (MC)
  - the Execution Environment (EE)
  - the Level Sender (LS)
44 HiMM communication API
- HiMM allows nodes to communicate using the LevelSender component
- A LevelSender can be customized, even if a default one is always available
- This component provides users with simple communication mechanisms based on the asynchronous sending of objects

class DefaultLevelSender implements LevelSender {
    public void send(Object m, int node) { /* ... */ }
    public void broadcast(Object m) { /* ... */ }
    public void deepBroadcast(Object m) { /* ... */ }
}
45 HiMM interaction among components

class Application implements ExecutionEnvironment {
    private NodeMgr nodeMgr;

    public void init(NodeMgr nm) {
        nodeMgr = nm;
    }

    public void start() {
        nodeMgr.downLevelSender().send(msg, node);            // asynchronous send of an object to a node
        nodeMgr.downLevelMgr().addNode();                     // level management, e.g. adding a node at run-time
        Object o = nodeMgr.downLevelMsgConsumer().receive();  // receive an object via the message consumer
    }
}