1
Learning Based Assume-Guarantee Reasoning
  • Corina Pasareanu
  • Perot Systems Government Services,
  • NASA Ames Research Center
  • Joint work with
  • Dimitra Giannakopoulou (RIACS/NASA Ames)
  • Howard Barringer (U. of Manchester)
  • Jamie Cobleigh (U. of Massachusetts
    Amherst/MathWorks)
  • Mihaela Gheorghiu (U. of Toronto)

2
Thanks
  • Eric Madelaine
  • Monique Simonetti
  • INRIA

3
Context
  • Objective
  • An integrated environment that supports software
    development and verification/validation
    throughout the lifecycle: detect integration
    problems early, prior to coding
  • Approach
  • Compositional (divide and conquer)
    verification, for increased scalability, at
    design level
  • Use design level artifacts to improve/aid coding
    and testing

(Diagram: lifecycle Requirements → Design → Coding → Testing →
Deployment; compositional verification is applied at the design
level, and design-level artifacts feed the implementations.)
4
Compositional Verification
Does a system made up of M1 and M2 satisfy property
P?
  • Check P on the entire system: too many states!
  • Use the natural decomposition of the system into
    its components to break up the verification task
  • Check components in isolation
  • Does M1 satisfy P?
  • Typically a component is designed to satisfy its
    requirements in specific contexts / environments
  • Assume-guarantee reasoning
  • Introduces assumption A representing M1's
    context
(Diagram: M1 checked against P under assumption A, which stands
in for the environment M2.)
5
Assume-Guarantee Rules
  • Reason about triples
  • ⟨A⟩ M ⟨P⟩
  • The formula is true if whenever M is part of a
    system that satisfies A, then the system must
    also guarantee P
  • Simplest assume-guarantee rule ASYM
    1. ⟨A⟩ M1 ⟨P⟩
    2. ⟨true⟩ M2 ⟨A⟩
    conclude: ⟨true⟩ M1 || M2 ⟨P⟩
How do we come up with the assumption? (usually a
difficult manual process) Solution: use a
learning algorithm.
6
Outline
  • Framework for learning based assume-guarantee
    reasoning [TACAS'03]
  • Automates rule ASYM
  • Extension with symmetric [SAVCBS'03] and circular
    rules
  • Extension with alphabet refinement [TACAS'07]
  • Implementation and experiments
  • Other extensions
  • Related work
  • Conclusions

7
Formalisms
  • Components modeled as finite state machines (FSMs)
  • FSMs assembled with the parallel composition
    operator ||
  • Synchronizes shared actions, interleaves
    remaining actions
  • A safety property P is an FSM
  • P describes all legal behaviors
  • Perr = complement of P
  • determinize and complete P with an error state
  • bad behaviors lead to error
  • Component M satisfies P iff the error state is
    unreachable in (M || Perr)
  • Assume-guarantee reasoning
  • Assumptions and guarantees are FSMs
  • ⟨A⟩ M ⟨P⟩ holds iff the error state is unreachable
    in (A || M || Perr)
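To make these checks concrete, here is a minimal Python sketch of
parallel composition and the error-state reachability test (our own toy
encoding for illustration, not the LTSA implementation; the FSM class
and helper names are ours):

    from collections import deque

    class FSM:
        """Minimal FSM: transitions map (state, action) -> state; for a
        property automaton Perr, `error` marks the added error state."""
        def __init__(self, alphabet, transitions, init, error=None):
            self.alphabet = set(alphabet)
            self.delta = dict(transitions)
            self.init = init
            self.error = error

    def compose(m1, m2):
        """Parallel composition M1 || M2: shared actions synchronize,
        actions known to only one component interleave freely."""
        alphabet = m1.alphabet | m2.alphabet
        delta, seen, todo = {}, set(), deque([(m1.init, m2.init)])
        while todo:
            s = todo.popleft()
            if s in seen:
                continue
            seen.add(s)
            s1, s2 = s
            for a in alphabet:
                # A component without `a` in its alphabet stays put; a
                # component with `a` must have a matching transition.
                n1 = s1 if a not in m1.alphabet else m1.delta.get((s1, a))
                n2 = s2 if a not in m2.alphabet else m2.delta.get((s2, a))
                if n1 is not None and n2 is not None:
                    delta[(s, a)] = (n1, n2)
                    todo.append((n1, n2))
        return FSM(alphabet, delta, (m1.init, m2.init))

    def check(m, p_err):
        """Return None if M satisfies P (the error state of Perr is
        unreachable in M || Perr), else a counterexample trace."""
        prod = compose(m, p_err)
        parent, todo = {prod.init: None}, deque([prod.init])
        while todo:
            s = todo.popleft()
            if s[1] == p_err.error:
                trace = []
                while parent[s] is not None:
                    s, a = parent[s]
                    trace.append(a)
                return tuple(reversed(trace))
            for (src, a), dst in prod.delta.items():
                if src == s and dst not in parent:
                    parent[dst] = (s, a)
                    todo.append(dst)
        return None

With this encoding, M satisfies P when check(m, p_err) is None, and
⟨A⟩ M ⟨P⟩ holds when check(compose(a, m), p_err) is None.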

8
Example
(Figure: example FSMs for the components Input and Output and the
property automaton Ordererr, over the actions in, send, ack, and out.)
9
The Weakest Assumption
  • Given component M, property P, and the interface
    of M with its environment, generate the weakest
    environment assumption WA such that ⟨WA⟩ M ⟨P⟩
    holds
  • Weakest means that for all environments E
  • ⟨true⟩ M || E ⟨P⟩ IFF ⟨true⟩ E ⟨WA⟩

10
Learning for Assume-Guarantee Reasoning
  • Use an off-the-shelf learning algorithm to build
    an appropriate assumption for rule ASYM
  • Process is iterative
  • Assumptions are generated by querying the system,
    and are gradually refined
  • Queries are answered by model checking
  • Refinement is based on counterexamples obtained
    by model checking
  • Termination is guaranteed

11
Learning with L*
  • L* algorithm by Angluin, improved by Rivest &
    Schapire
  • Learns an unknown regular language U (over
    alphabet Σ) and produces a DFA A such that L(A) =
    U
  • Uses a teacher to answer two types of questions
  • Membership query: is string s in U?
  • the teacher answers true or false
  • Conjecture: is L(Ai) = U for a candidate DFA Ai?
  • if false, the teacher returns a counterexample
    string t, which L* adds to or removes from its
    observation table
  • if true, L* outputs DFA A such that L(A) = U
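The teacher's two question types can be written down as an interface (a
sketch; the method names are ours, not a particular learning library's
API):

    from typing import Optional, Protocol, Tuple

    Trace = Tuple[str, ...]

    class Teacher(Protocol):
        """What L* needs from its teacher."""

        def membership(self, s: Trace) -> bool:
            """Is the string s in the unknown regular language U?"""
            ...

        def conjecture(self, candidate: object) -> Optional[Trace]:
            """Return None if L(candidate) = U; otherwise return a
            counterexample string t (in exactly one of L(candidate)
            and U), which the learner adds to or removes from its
            observation table."""
            ...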
12
Learning Assumptions
  Rule ASYM:
  1. ⟨A⟩ M1 ⟨P⟩
  2. ⟨true⟩ M2 ⟨A⟩
  conclude: ⟨true⟩ M1 || M2 ⟨P⟩
  • Use L* to generate candidate assumptions
  • αA = (αM1 ∪ αP) ∩ αM2
  • The teacher is implemented by model checking:
  • membership query (string s): model check ⟨s⟩ M1 ⟨P⟩
  • conjecture Ai, Oracle 1: model check ⟨Ai⟩ M1 ⟨P⟩;
    if false, remove the counterexample t/αA and
    continue learning
  • conjecture Ai, Oracle 2: model check ⟨true⟩ M2
    ⟨Ai⟩; if true, P holds in M1 || M2
  • if Oracle 2 fails with counterexample t, analyze
    it: model check ⟨t/αA⟩ M1 ⟨P⟩; if true, add t/αA
    and continue learning; if false, P is violated in
    M1 || M2
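Continuing the toy encoding from slide 7, a sketch of how the teacher
answers both question types by model checking (the project and
trace_fsm helpers are ours; ai_err stands for the candidate Ai
determinized and completed with an error state, as done for properties
on slide 7):

    def project(t, alpha):
        """t/alpha: drop the actions of t that are outside alpha."""
        return tuple(a for a in t if a in alpha)

    def trace_fsm(t, alpha):
        """Linear FSM over alpha whose only behavior is the trace t."""
        return FSM(alpha, {(i, a): i + 1 for i, a in enumerate(t)}, 0)

    def membership(s, m1, p_err, alpha_a):
        """Query 'is s in U?': model check <s> M1 <P>."""
        return check(compose(trace_fsm(s, alpha_a), m1), p_err) is None

    def answer_conjecture(ai, ai_err, m1, m2, p_err, alpha_a):
        """The teacher's answer for a candidate assumption Ai."""
        cex = check(compose(ai, m1), p_err)    # Oracle 1: <Ai> M1 <P>
        if cex is not None:
            return ("remove", project(cex, alpha_a))  # Ai too weak
        cex = check(m2, ai_err)                # Oracle 2: <true> M2 <Ai>
        if cex is None:
            return ("P holds in M1 || M2", None)
        t = project(cex, alpha_a)
        if membership(t, m1, p_err, alpha_a):  # analysis: <t/aA> M1 <P>
            return ("add", t)                  # Ai too strong
        return ("P violated", cex)             # real error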
13
Characteristics
  • Terminates with the minimal automaton A for U
  • Generates DFA candidates Ai: |A1| < |A2| < ... <
    |A|
  • Produces at most n candidates, where n = |A|
  • Number of queries: O(k n² + n log m),
  • m is the size of the largest counterexample, k is
    the size of the alphabet

14
Example
(Figure: the Input/Output/Ordererr example from slide 8, together
with the computed assumption A2, an automaton over the actions out
and send.)
15
Extension to n components
  • To check if M1 || M2 || ... || Mn satisfies P
  • decompose it into M1 and M2' = M2 || ... || Mn
  • apply the learning framework recursively for the
    2nd premise of the rule
  • A plays the role of the property
  • At each recursive invocation, for Mj and
    Mj+1 || ... || Mn
  • use learning to compute Aj such that
  • ⟨Aj⟩ Mj ⟨Aj-1⟩ is true
  • ⟨true⟩ Mj+1 || ... || Mn ⟨Aj⟩ is true
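A sketch of the recursion (learn_assumption, standing for the L* loop
of the previous slides and returning the learned Aj in error-completed
form, is an assumed helper):

    def check_n(components, prop_err):
        """Recursive ASYM for M1 || ... || Mn against a property."""
        mj, rest = components[0], components[1:]
        if not rest:
            # last component: plain check <true> Mn <A(n-1)>
            return check(mj, prop_err) is None
        # learn Aj such that <Aj> Mj <A(j-1)> holds; Aj then plays
        # the role of the property for Mj+1 || ... || Mn
        aj_err = learn_assumption(mj, prop_err)  # assumed helper
        return check_n(rest, aj_err)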

16
Symmetric Rules
  • Assumptions for both components at the same time
  • Early termination; smaller assumptions
  • Example symmetric rule SYM
  • coAi = complement of Ai, for i = 1,2
  • Requirements for alphabets
  • αP ⊆ αM1 ∪ αM2; αAi ⊆ (αM1 ∩ αM2) ∪ αP, for i =
    1,2
  • The rule is sound and complete
  • Completeness is needed to guarantee termination
  • Straightforward extension to n components
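The alphabet requirements transcribe directly into set operations (a
small sketch; the function name is ours):

    def sym_alphabets_ok(alpha_p, alpha_m1, alpha_m2, alpha_a1, alpha_a2):
        """Rule SYM side conditions: the property alphabet is covered
        by the components, and each assumption alphabet stays within
        the shared actions plus the property alphabet."""
        if not alpha_p <= (alpha_m1 | alpha_m2):
            return False
        bound = (alpha_m1 & alpha_m2) | alpha_p
        return alpha_a1 <= bound and alpha_a2 <= bound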

17
Learning Framework for Rule SYM
(Diagram: two L* instances run side by side, one learning A1 against
the check ⟨A1⟩ M1 ⟨P⟩ and one learning A2 against ⟨A2⟩ M2 ⟨P⟩; failed
checks add or remove counterexamples for the corresponding learner.
Once both triples hold, check L(coA1 || coA2) ⊆ L(P): if true, P holds
in M1 || M2; otherwise, counterexample analysis determines whether P
is violated in M1 || M2.)
18
Circular Rule
  • Rule CIRC from Grumberg & Long [CONCUR'91]
  • Similar to rule ASYM applied recursively to 3
    components
  • First and last components coincide
  • Hence the learning framework is similar
  • Straightforward extension to n components

19
Outline
  • Framework for assume-guarantee reasoning
    [TACAS'03]
  • Uses a learning algorithm to compute assumptions
  • Automates rule ASYM
  • Extension with symmetric [SAVCBS'03] and circular
    rules
  • Extension with alphabet refinement [TACAS'07]
  • Implementation and experiments
  • Other extensions
  • Related work
  • Conclusions

20
Assumption Alphabet Refinement
  • Assumption alphabet was fixed during learning
  • αA = (αM1 ∪ αP) ∩ αM2
  • [SPIN'06]: a subset of this alphabet
  • May be sufficient to prove the desired property
  • May lead to a smaller assumption
  • How do we compute a good subset of the assumption
    alphabet?
  • Solution: iterative alphabet refinement
  • Start with a small (or empty) alphabet
  • Add actions as necessary
  • Discovered by analysis of counterexamples
    obtained from model checking

21
Learning with Alphabet Refinement
  • 1. Initialize S to a subset of the alphabet αA =
    (αM1 ∪ αP) ∩ αM2
  • 2. If learning with S returns true, return true
    and go to 4. (END)
  • 3. If learning returns false (with counterexample
    c), perform extended counterexample analysis on
    c.
  • If c is real, return false and go to 4. (END)
  • If c is spurious, add more actions from αA to S
    and go to 2.
  • 4. END
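A sketch of this loop, under two assumptions: learn runs the L*
framework restricted to alphabet S and returns either ('true', None)
or ('false', c), and new_actions is a refiner heuristic as on the next
slides. The spuriousness test is simplified here to re-checking c over
the full interface alphabet:

    def learn_with_refinement(m1, m2, p_err, alpha_a):
        s = set()                                # 1. start small (empty)
        while True:
            verdict, c = learn(m1, m2, p_err, s)   # 2. learn over S
            if verdict == "true":
                return True                      # property holds
            # 3. extended counterexample analysis on c
            if not membership(project(c, alpha_a), m1, p_err, alpha_a):
                return False                     # c is real: P violated
            s |= new_actions(c, alpha_a, s)      # spurious: grow S, retry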

22
Extended Counterexample Analysis
(Diagram: when ⟨true⟩ M2 ⟨A⟩ fails with counterexample t, run the
original counterexample analysis with S in place of αA, i.e. model
check ⟨t/S⟩ M1 ⟨P⟩. If that fails with counterexample c, also model
check ⟨t/αA⟩ M1 ⟨P⟩: if it fails too, report a real error (c); if it
holds, the refiner compares t/αA and c/αA and, where they differ, adds
actions to S and restarts learning. Here αA = (αM1 ∪ αP) ∩ αM2 and
S ⊆ αA is the current alphabet.)
23
Characteristics
  • Initialization of S
  • Empty set or the property alphabet αP ∩ αA
  • Refiner
  • Compares t/αA and c/αA
  • Heuristics
  • AllDiff: adds all actions in the symmetric
    difference of the trace alphabets
  • Forward: scans the traces in parallel from the
    front, adding the first action that differs
  • Backward: symmetric to the previous, scanning
    from the back
  • Termination
  • Refinement produces at least one new action and
    the interface is finite
  • Generalization to n components
  • Through recursive invocation
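The three heuristics, sketched over the two projected traces (our
reading of the descriptions above; each returns the set of actions to
add to S):

    def alldiff(t_a, c_a):
        """AllDiff: symmetric difference of the two trace alphabets."""
        return set(t_a) ^ set(c_a)

    def forward(t_a, c_a):
        """Forward: scan both traces in parallel from the front and add
        the action(s) at the first position where they differ."""
        for a, b in zip(t_a, c_a):
            if a != b:
                return {a, b}
        rest = t_a[len(c_a):] or c_a[len(t_a):]  # one is a proper prefix
        return {rest[0]} if rest else set()

    def backward(t_a, c_a):
        """Backward: the same scan starting from the ends."""
        return forward(tuple(reversed(t_a)), tuple(reversed(c_a)))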

24
Implementation and Experiments
  • Implementation in the LTSA tool
  • Learning using rules ASYM, SYM and CIRC
  • Supports reasoning about two and n components
  • Alphabet refinement for all the rules
  • Experiments
  • Compare effectiveness of different rules
  • Measure effect of alphabet refinement
  • Measure scalability as compared to
    non-compositional verification

25
Case Studies
  • Model of Ames K9 Rover Executive
  • Executes flexible plans for autonomy
  • Consists of main Executive thread and
    ExecCondChecker thread for monitoring state
    conditions
  • Checked for a specific shared variable: if the
    Executive reads its value, the ExecCondChecker
    should not read it before the Executive clears it
  • Model of JPL MER Resource Arbiter
  • Local management of resource contention between
    resource consumers (e.g. science instruments,
    communication systems)
  • Consists of k user threads and one server thread
    (arbiter)
  • Checked mutual exclusion between resources

26
Results
  • Rule ASYM is more effective than rules SYM and
    CIRC
  • The recursive version of ASYM is the most
    effective
  • When reasoning about more than two components
  • Alphabet refinement improves learning-based
    assume-guarantee verification significantly
  • Backward refinement is slightly better than the
    other refinement heuristics
  • Learning-based assume-guarantee reasoning
  • Can incur significant time penalties
  • Is not always better than non-compositional
    (monolithic) verification
  • Sometimes, significantly better in terms of
    memory

27
Analysis Results
Case      |       ASYM          |  ASYM + refinement  | Monolithic
          | |A|  Mem     Time   | |A|  Mem    Time    | Mem    Time
MER 2     |  40  8.65    21.90  |  6   1.23    1.60   |  1.04   0.04
MER 3     | 501  240.06  --     |  8   3.54    4.76   |  4.05   0.111
MER 4     | 273  101.59  --     | 10   9.61   13.68   | 14.29   1.46
MER 5     | 200  78.10   --     | 12  19.03   35.23   | 14.24  27.73
MER 6     | 162  84.95   --     | 14  47.09   91.82   | --     600
K9 Rover  |  11  2.65    1.82   |  4   2.37    2.53   |  6.27   0.015

|A| = assumption size; Mem = memory (MB); Time = time (seconds);
-- = reached the time limit (30 min) or memory limit (1 GB)
28
Other Extensions
  • Design-level assumptions used to check
    implementations in an assume-guarantee way
    [ICSE'04]
  • Allows for detection of integration problems
    during unit verification/testing
  • Extension of the SPIN model checker to perform
    learning based assume-guarantee reasoning
    [SPIN'06]
  • Our approach can use any model checker
  • Similar extension for the Ames Java PathFinder
    tool (ongoing work)
  • Supports compositional reasoning about Java
    code/UML statecharts
  • Support for interface synthesis: compute an
    assumption for M1 that works for any M2
  • Compositional verification of C code
  • Collaboration with CMU
  • Uses predicate abstraction to extract FSMs from
    C components
  • More info on my webpage
  • http://ase.arc.nasa.gov/people/pcorina/

29
Applications
  • Support for compositional verification
  • Property decomposition
  • Assumptions for assume-guarantee reasoning
  • Assumptions may be used for component
    documentation
  • Software patches
  • An assumption used as a patch that corrects a
    component's errors
  • Runtime monitoring of environment
  • Assumption monitors actual environment during
    deployment
  • May trigger recovery actions
  • Interface synthesis
  • Component retrieval, component adaptation,
    sub-module construction, incremental
    re-verification, etc.

30
Related Work
  • Assume-guarantee frameworks
  • Jones '83; Pnueli '84; Clarke, Long & McMillan
    '89; Grumberg & Long '91
  • Tool support: MOCHA; Calvin (static checking of
    Java)
  • We were the first to propose learning-based
    assume-guarantee reasoning; since then, other
    frameworks were developed
  • Alur et al. '05, '06: symbolic BDD implementation
    for NuSMV (extended with hyper-graph partitioning
    for model decomposition)
  • Sharygina et al. '05: checks component
    compatibility after component updates
  • Chaki et al. '05: checking of simulation
    conformance (rather than trace inclusion)
  • Sinha & Clarke '07: SAT-based compositional
    verification using lazy learning
  • Interface synthesis using learning: Alur et al.
    '05
  • Learning with optimal alphabet refinement
  • Developed independently by Chaki & Strichman '07
  • CEGAR: counterexample-guided abstraction
    refinement
  • Our alphabet refinement is similar in spirit
  • Important differences
  • Alphabet refinement works on actions, rather than
    predicates
  • Applied compositionally in an assume-guarantee
    style
  • Computes under-approximations (of assumptions)
    rather than behavioral over-approximations
  • Permissive interfaces: Henzinger et al. '05

31
Conclusion and Future Work
  • Learning-based assume-guarantee reasoning
  • Uses L* for automatic derivation of assumptions
  • Applies to FSMs and safety properties
  • Asymmetric, symmetric, and circular rules
  • Can accommodate other rules
  • Alphabet refinement to compute small assumption
    alphabets that are sufficient for verification
  • Experiments
  • Significant memory gains
  • Can incur serious time overhead
  • Should be viewed as a heuristic
  • To be used in conjunction with other techniques,
    e.g. abstraction
  • Future work
  • Look beyond safety (learning for infinitary
    regular sets)
  • Optimizations to overcome time overhead
  • Re-use learning results across refinement stages
  • CEGAR to compute assumptions as abstractions of
    environments
  • More experiments