1
Software Architecture based Performance and
Reliability Evaluation
  • Vibhu S. Sharma
  • Ph.D. Scholar
  • CSE, IITk

2
Software Architecture
  • Software architecture is the fundamental
    organization of a system, embodied in its
    components, their relationships to each other and
    the environment, and the principles governing its
    design and evolution. - IEEE 1471-2000.
  • Software architecture styles are like templates
    that commonly used systems follow, for example:
  • Layered style: web-based systems
  • Pipe-and-Filter style: compilers
  • . . .

3
Modeling the workflow of the Software Architecture
  • We use finite absorbing Discrete Time Markov
    Chains (DTMC) for representing the software
    architecture workflow.
  • Examples
  • These DTMCs can be used to find the average visit
    counts to various components for a typical
    request as well as the limiting probabilities.
  • Very useful for reliability as well as
    performance prediction.

A typical layered architecture
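
The visit counts mentioned on this slide can be computed from the transient-state submatrix Q of the DTMC, since the expected visits are a row of the fundamental matrix N = (I - Q)^(-1). The Java sketch below accumulates the same quantity iteratively; the three "layers" and all transition probabilities are made-up placeholders, not values from the presented models.

// Sketch: expected visit counts to the transient states of an absorbing DTMC.
// The transition probabilities below are illustrative placeholders. The visit
// counts equal a row of N = (I - Q)^(-1), accumulated here as sum_k e_s Q^k.
public class VisitCounts {

    static double[] visitCounts(double[][] q, int start) {
        int n = q.length;
        double[] visits = new double[n];    // accumulated expected visits
        double[] current = new double[n];   // distribution over transient states after k steps
        current[start] = 1.0;
        for (int step = 0; step < 100000; step++) {
            boolean negligible = true;
            for (int i = 0; i < n; i++) {
                visits[i] += current[i];
                if (current[i] > 1e-12) negligible = false;
            }
            if (negligible) break;
            double[] next = new double[n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    next[j] += current[i] * q[i][j];    // one more DTMC step
            current = next;
        }
        return visits;
    }

    public static void main(String[] args) {
        // Hypothetical three-layer system: 0 = presentation, 1 = business,
        // 2 = database; the missing row mass goes to the absorbing
        // "request completed" state.
        double[][] q = {
                { 0.0, 0.9, 0.0 },
                { 0.4, 0.0, 0.6 },
                { 0.0, 1.0, 0.0 }
        };
        double[] v = visitCounts(q, 0);
        for (int i = 0; i < v.length; i++)
            System.out.printf("expected visits to layer %d = %.3f%n", i, v[i]);
    }
}

For larger models one would solve the linear system (I - Q) directly, but the iteration keeps the sketch short.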
4
Architecture based Reliability prediction
  • Issues/Questions
  • What is the reliability of a software system
    composed of unreliable components?
  • How does increasing the reliability of a
    particular component affect overall reliability?
  • How does changing the architecture affect
    reliability?
  • Reliability modeling using DTMCs
  • Model component failure as a state in the DTMC
    and associate transition probabilities
    (unreliabilities) with it.
  • Overall reliability is the probability of ending
    in the correct state (a small numeric sketch
    follows below).
  • Another option is to use a reward-based method.
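
One closely related way to put these numbers together, shown here as a hedged sketch rather than the slide's exact method, is the visit-count approximation R_system ≈ product over components of R_i^(V_i), where R_i is component i's reliability and V_i its expected visit count from the DTMC. The reliabilities and visit counts below are placeholders.

// Sketch: visit-count based reliability approximation R_sys ≈ product of R_i^(V_i).
// Component reliabilities and visit counts are illustrative placeholders.
public class ReliabilityEstimate {
    public static void main(String[] args) {
        double[] reliability = { 0.999, 0.995, 0.990 };  // per-component reliability R_i
        double[] visits      = { 1.9,   2.5,   1.5 };    // expected visit counts V_i from the DTMC
        double systemReliability = 1.0;
        for (int i = 0; i < reliability.length; i++)
            systemReliability *= Math.pow(reliability[i], visits[i]);
        System.out.printf("Estimated system reliability = %.4f%n", systemReliability);
    }
}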

5
Architecture based performance prediction
  • Issues/Questions
  • What is the maximum number of clients the system
    can handle before it saturates?
  • What effect does varying the number of clients
    have on the throughput and the average response
    time?
  • How should software components be allocated to
    the hardware nodes?
  • What changes should be made in the system to
    improve performance?
  • We take the example of a layered software
    architecture.
  • A Discrete Time Markov Chain (DTMC) is used to
    characterize the control flow in the layered
    software architecture.
  • The DTMC is used to calculate the visit counts to
    various layers; a closed queueing network (QN)
    model is then constructed and solved (a minimal
    MVA sketch follows below).
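
As a sketch of that last step (not the authors' actual tool), the visit counts can be folded into per-layer service demands and the resulting single-class closed queueing network solved with exact Mean Value Analysis (MVA). The demands, think time, and client population below are illustrative assumptions.

// Sketch: exact single-class MVA for a closed queueing network. Service
// demands D_k (visit count * per-visit service time) and the think time are
// illustrative placeholders, not measurements from the slides.
public class ClosedQnMva {
    public static void main(String[] args) {
        double[] demand = { 0.005, 0.020, 0.012 }; // seconds of service demand per request at each layer
        double thinkTime = 1.0;                    // client think time (delay station), seconds
        int maxClients = 100;

        int stations = demand.length;
        double[] queueLength = new double[stations];   // Q_k with (n - 1) clients
        for (int n = 1; n <= maxClients; n++) {
            double totalResidence = 0.0;
            double[] residence = new double[stations];
            for (int k = 0; k < stations; k++) {
                residence[k] = demand[k] * (1.0 + queueLength[k]);  // arrival theorem
                totalResidence += residence[k];
            }
            double throughput = n / (totalResidence + thinkTime);   // X(n)
            for (int k = 0; k < stations; k++)
                queueLength[k] = throughput * residence[k];         // Little's law per station
            if (n == 1 || n % 20 == 0)
                System.out.printf("clients=%3d  throughput=%6.2f req/s  response time=%.4f s%n",
                        n, throughput, totalResidence);
        }
    }
}

Plotting throughput and response time against the client population answers the saturation and scalability questions above; the layer with the largest service demand is the eventual bottleneck.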

6
Some Analysis Results
  • Average response time, throughput, and bottleneck
    analysis
  • Effect of changes in individual reliabilities and
    workload on overall reliability

7
Reliability and Performance Tradeoff
  • Given a set of components and a set of machines
    to deploy these onto:
  • What are the performance and reliability
    characteristics of the setup under a given load?
  • Which architecture should be chosen for optimal
    performance and reliability?
  • What is the tradeoff?
  • Software failures of different types cause
    different effects on the performance of the
    system.
  • Hardware availability also needs to be taken into
    account.
  • Some configurations will result in high
    throughput with decreased reliability and
    vice-versa.
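
Purely as a hypothetical illustration of this tradeoff (the ranking metric, configuration names, and numbers below are invented, not taken from the slides), candidate deployments could be compared by pairing predicted throughput with predicted reliability, for example ranking on their product.

// Hypothetical sketch: rank candidate configurations by predicted
// throughput * predicted reliability. All names and numbers are made up.
import java.util.Comparator;
import java.util.List;

public class TradeoffRanking {
    record Config(String name, double throughput, double reliability) { }

    public static void main(String[] args) {
        List<Config> candidates = List.of(
                new Config("all components on one node", 180.0, 0.990),
                new Config("replicated business layer",   240.0, 0.970),
                new Config("separate database node",      210.0, 0.985));

        candidates.stream()
                .sorted(Comparator.comparingDouble(
                        (Config c) -> c.throughput() * c.reliability()).reversed())
                .forEach(c -> System.out.printf("%-32s throughput*reliability = %.1f%n",
                        c.name(), c.throughput() * c.reliability()));
    }
}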

8
Combining Static Analysis and Testing Frameworks
to find Software Defects
  • Vipindeep V

9
Defect Prevention during Software Development
(Diagram: Static analysis, Test Suite, Program Specification)
10
Testing + Static Analysis: How?
  • Static Analysis vs. Testing
  • Different areas altogether, but the same motive:
    identify defects
  • Combining the approaches gives improved program
    checking
  • Motivations for the Proposed Approach
  • Static analysis concerns
  • Testing is done; bugs are revealed
  • Ideal test suite ⇒ identifies all bugs
  • Non-ideal testing scenario
  • Defects identified only partially
  • Some paths/blocks are not (properly) tested.
    Why?
  • Focus static analysis effort on unexplored parts
    of the code
  • Our proposed approach
  • Focused Static Analysis with Path Pruning using
    Coverage Data (see the sketch below)
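
A minimal sketch of the pruning idea follows (not the authors' implementation): static-analysis warnings are kept only if they fall on lines the test suite never executed, so the analysis effort is focused on the untested code. The Warning record and the coverage map are assumptions made for this illustration; in practice they would be populated from tool output such as FindBugs reports and coverage data.

// Sketch of coverage-based pruning: keep only the static-analysis warnings
// that fall on lines the test suite never executed. The Warning record and
// the coverage map are assumptions for this illustration.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class WarningPruner {
    record Warning(String file, int line, String message) { }

    // coveredLines: file name -> line numbers executed by the test suite
    static List<Warning> focusOnUntested(List<Warning> warnings,
                                         Map<String, Set<Integer>> coveredLines) {
        List<Warning> focused = new ArrayList<>();
        for (Warning w : warnings) {
            Set<Integer> covered = coveredLines.getOrDefault(w.file(), Set.of());
            if (!covered.contains(w.line()))    // untested line: keep the warning
                focused.add(w);
        }
        return focused;
    }

    public static void main(String[] args) {
        List<Warning> raw = List.of(
                new Warning("CoinBox.java", 12, "possible null dereference"),
                new Warning("CoinBox.java", 27, "unchecked return value"));
        Map<String, Set<Integer>> coverage = Map.of("CoinBox.java", Set.of(10, 11, 12));
        focusOnUntested(raw, coverage).forEach(w ->
                System.out.println(w.file() + ":" + w.line() + "  " + w.message()));
    }
}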

11
Overview of the approach
12
Pros and Cons
  • Advantages
  • New real errors can be identified that escape
    all-paths analysis
  • Rigorous testing effort is leveraged for better
    static analysis
  • Reduced execution time
  • Reduced warnings ⇒ reduced noise
  • Disadvantages
  • Some errors escape the pruned-path analysis
  • Depends on the testing framework

13

Preliminary Results
  • Applied the approach with FindBugs and Jlint for
    Java
  • Results from testing were encouraging
  • Tested the applicability using Random Path
    Selection

14
Conclusions
  • Works well in practice
  • Finds enough real defects to be useful
  • Noise is low enough that people can use it
  • Scales well, so it works on very large code
    bases
  • More issues to explore:
  • Better algorithms for using testing data
  • Path profiles and crash dumps for focused static
    analysis
  • Use static analysis results for focusing the
    testing effort
  • Combining static and dynamic analysis
  • Usability related issues

15
State Based Testing Automation
  • Atul Gupta
  • Ph.D. Scholar

16
State Based Testing
  • Program behavior can be modeled as a state graph
  • A formal testing approach applicable at all
    levels: unit, component, or system.
  • An effective testing strategy for object-oriented
    (O-O) software
  • Test Cases and Test Oracles can be automatically
    generated and evaluated

17
An Example: A CoinBox Class
class CoinBox {
    unsigned totalQtrs;   // total quarters collected so far
    unsigned curQtrs;     // quarters inserted for the current vend
    unsigned allowVend;   // set once two quarters have been inserted
public:
    CoinBox()      { totalQtrs = 0; curQtrs = 0; allowVend = 0; }
    void retQtrs() { curQtrs = 0; }
    void addQtr()  { curQtrs = curQtrs + 1; if (curQtrs > 1) allowVend = 1; }
    void vend()    { if (allowVend) { totalQtrs = totalQtrs + curQtrs;
                                      curQtrs = 0; allowVend = 0; } }
};
18
State Based Testing Criteria
  • All-Transition Coverage (AT)
  • All-Transition-pair Coverage (ATP)
  • Full Predicate coverage (FP)
  • Transition-Tree Coverage (TT)
  • All-Round-Trip path coverage (ART)
  • Complete Sequence (CS)

19
Generating Test Sequences
Criterion                           | Prefix (State) | Test Sequence                | Test Oracle (State)
All-Transition (AT)                 | WAIT           | addQtr()-retQtrs()           | WAIT
All-Transition (AT)                 | Q1             | addQtr()-addQtr()-retQtrs()  | WAIT
All-Transition (AT)                 | Q2             | vend()                       | WAIT
All-Transition-Pair (ATP), at Q1    | WAIT           | addQtr()-retQtrs()           | WAIT
All-Transition-Pair (ATP), at Q1    | WAIT           | addQtr()-addQtr()            | Q2
All-Transition-Pair (ATP), at Q2    | Q1             | addQtr()-retQtrs()           | WAIT
All-Transition-Pair (ATP), at Q2    | Q1             | addQtr()-addQtr()            | Q2
All-Transition-Pair (ATP), at Q2    | Q1             | addQtr()-vend()              | WAIT
All-Transition-Pair (ATP), at Q2    | Q2             | addQtr()-retQtrs()           | WAIT
All-Transition-Pair (ATP), at Q2    | Q2             | addQtr()-addQtr()            | Q2
All-Transition-Pair (ATP), at Q2    | Q2             | addQtr()-vend()              | WAIT
All-Transition-Pair (ATP), at WAIT  | Q2             | retQtrs()-addQtr()           | Q1
All-Transition-Pair (ATP), at WAIT  | Q1             | retQtrs()-addQtr()           | Q1
All-Transition-Pair (ATP), at WAIT  | Q2             | vend()-addQtr()              | Q1
Transition-Tree (TT)                | WAIT           | addQtr()-retQtrs()           | WAIT
Transition-Tree (TT)                | WAIT           | addQtr()-addQtr()-retQtrs()  | WAIT
Transition-Tree (TT)                | WAIT           | addQtr()-addQtr()-vend()     | WAIT
Transition-Tree (TT)                | WAIT           | addQtr()-addQtr()-addQtr()   | Q2
20
Research Issues
  • How to generate test cases automatically from a
    state model
  • How to execute them in an automated manner
  • How to evaluate the results automatically

21
Our Approach
  • Identify State Model of a Class
  • Generate test sequences based on some Coverage
    Criteria
  • Convert these sequences into JUnit format for
    automatic execution and result evaluation (see
    the sketch below)
  • Aim: complete end-to-end automation
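
As a hedged sketch of the JUnit conversion, the generated sequence addQtr()-addQtr()-vend() with oracle state WAIT (one row of the table on slide 19) might be emitted as the test below. It assumes a Java port of the CoinBox class plus a getCurrentState() accessor added as a test hook; both are assumptions for illustration, not part of the slides.

// Sketch: one generated test sequence rendered as a JUnit 4 test case.
// Assumes a Java CoinBox class and a getCurrentState() oracle accessor.
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CoinBoxStateTest {

    @Test
    public void addQtr_addQtr_vend_endsInWait() {
        CoinBox box = new CoinBox();   // start state: WAIT
        box.addQtr();                  // WAIT -> Q1
        box.addQtr();                  // Q1   -> Q2 (vending allowed)
        box.vend();                    // Q2   -> WAIT
        assertEquals("WAIT", box.getCurrentState());  // oracle from the generated sequence
    }
}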