1
A Performance Study of BDD-Based Model Checking
  • Bwolen Yang

Randal E. Bryant, David R. O'Hallaron, Armin
Biere, Olivier Coudert, Geert Janssen, Rajeev K.
Ranjan, Fabio Somenzi
2
Motivation for Studying Model Checking (MC)
  • MC is an important part of formal verification
  • digital circuits and other finite state systems
  • BDD is an enabling technology for MC
  • Not well studied
  • Packages are tuned using combinational circuits
    (CC)
  • Qualitative differences between CC and MC
    computations
  • CC: build outputs, constant-time equivalence
    checking
  • MC: build model, many fixed-points to verify
    the specs
  • CC: BDD algorithms are polynomial
  • MC: key BDD algorithms are exponential

3
Outline
  • BDD Overview
  • Organization of this Study
  • participants, benchmarks, evaluation process
  • Experimental Results
  • performance improvements
  • characterizations of MC computations
  • BDD Evaluation Methodology
  • evaluation platform
  • various BDD packages
  • real workload
  • metrics

4
BDD Overview
  • BDD
  • DAG representation for Boolean functions
  • fixed order on Boolean variables
  • Set Representation
  • represent set as Boolean function
  • an element's value is true <=> it is in the set
    (sketch below)
  • Transition Relation Representation
  • set of pairs (current to next state transition)
  • each state variable is split into two copies
  • current state and next state
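Not on the original slides: a minimal Python sketch of the two encodings above, with an ordinary function standing in for the BDD and a hypothetical 2-bit counter as the system. The characteristic function f(s) is true <=> s is in the set, and the transition relation T doubles each state variable into a current-state copy (x) and a next-state copy (y).

    from itertools import product

    # Characteristic function: f(s) is True <=> state s is in the set.
    # A BDD would store f as a DAG; a plain function stands in here.
    def f(x0, x1):
        return bool(x0) and not x1       # encodes the set { (1, 0) }

    print([s for s in product([0, 1], repeat=2) if f(*s)])   # [(1, 0)]

    # Transition relation: a set of (current, next) state pairs, so each
    # state variable xi gets a next-state copy yi.
    def T(x0, x1, y0, y1):               # illustrative 2-bit counter: s' = s+1 mod 4
        return (y1 == 1 - x1) and (y0 == (x0 ^ x1))

    print([t for t in product([0, 1], repeat=4) if T(*t)])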

5
BDD Overview (Cont'd)
  • BDD Algorithms
  • dynamic programming
  • sub-problems (operations)
  • recursively apply Shannon decomposition
  • memoization: computed cache (apply sketch below)
  • Garbage Collection
  • recycle unreachable (dead) nodes
  • Dynamic Variable Reordering
  • BDD graph size depends on the variable order
  • sifting based
  • nodes in adjacent levels are swapped
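To make the recursion concrete, here is an illustrative memoized AND in Python (a sketch over a fixed variable order, not any particular package's code): nodes are tuples ('var', level, low, high) with Python's True/False as terminals, functools.lru_cache plays the role of the computed cache, and each call performs one Shannon decomposition on the topmost variable. A real package would additionally hash-cons nodes into a unique table and garbage-collect them.

    from functools import lru_cache

    def level(u):                        # terminals sit below all variables
        return u[1] if isinstance(u, tuple) else float('inf')

    @lru_cache(maxsize=None)             # the "computed cache" (memoization)
    def bdd_and(u, v):
        if u is False or v is False:     # terminal cases
            return False
        if u is True:
            return v
        if v is True:
            return u
        m = min(level(u), level(v))      # topmost variable in the fixed order
        u0, u1 = (u[2], u[3]) if level(u) == m else (u, u)
        v0, v1 = (v[2], v[3]) if level(v) == m else (v, v)
        low, high = bdd_and(u0, v0), bdd_and(u1, v1)   # Shannon expansion
        return low if low == high else ('var', m, low, high)

    x0 = ('var', 0, False, True)         # BDD for the single variable x0
    x1 = ('var', 1, False, True)
    print(bdd_and(x0, x1))   # ('var', 0, False, ('var', 1, False, True))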

6
Organization of this Study: Participants
Armin Biere        ABCD   Carnegie Mellon / Universität Karlsruhe
Olivier Coudert    TiGeR  Synopsys / Monterey Design Systems
Geert Janssen      EHV    Eindhoven University of Technology
Rajeev K. Ranjan   CAL    Synopsys
Fabio Somenzi      CUDD   University of Colorado
Bwolen Yang        PBF    Carnegie Mellon
7
Organization of this Study: Setup
  • Metrics: 17 statistics
  • Benchmark: 16 SMV execution traces
  • traces of BDD-calls from verification of
  • cache coherence, Tomasulo, phone, reactor, TCAS
  • size
  • 6 million - 10 billion sub-operations
  • 1 - 600 MB of memory
  • Evaluation platform: trace driver
  • drives BDD packages based on execution trace

8
Organization of this Study: Evaluation Process
Phase 1: no dynamic variable reordering
Phase 2: with dynamic variable reordering
9
Phase 1 Results: Initial / Final
speedup      # of cases
> 100         6
10 - 100     16
5 - 10       11
2 - 5        28
Conclusion: collaborative efforts have led to
significant performance improvements
10
Phase 1: Hypotheses / Experiments
  • Computed Cache
  • effects of computed cache size
  • amounts of repeated sub-problems across time
  • Garbage Collection
  • reachable / unreachable
  • Complement Edge Representation
  • work
  • space
  • Memory Locality for Breadth-First Algorithms

11
Phase 1: Hypotheses / Experiments (Cont'd)
  • For Comparison
  • ISCAS85 combinational circuits (> 5 sec, < 1 GB)
  • c2670, c3540
  • 13-bit, 14-bit multipliers based on c6288
  • Metrics depend only on the trace and BDD
    algorithms
  • machine-independent
  • implementation-independent

12
Computed Cache: Repeated Sub-problems Across Time
  • Source of Speedup
  • increase computed cache size
  • Possible Cause
  • many repeated sub-problems are far apart in time
  • Validation
  • study the number of repeated sub-problems across
    user-issued operations (top-level operations)

13
Hypothesis: Top-Level Sharing
  • Hypothesis
  • MC computations have a large number of repeated
    sub-problems across the top-level operations.
  • Experiment
  • measure the minimum number of operations with
    GC disabled and complete cache.
  • compare this with the same setup, but with the
    cache flushed between top-level operations
    (toy sketch below).
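A toy model of this flush experiment (illustrative only, not the study's trace driver): each top-level operation is reduced to the set of sub-problems it would recurse through, and a sub-problem costs work only on a computed-cache miss. The ratio of the two counts is a direct measure of top-level sharing.

    def total_work(top_level_ops, flush):
        cache, work = set(), 0
        for subproblems in top_level_ops:
            if flush:
                cache.clear()            # flush between top-level operations
            for sp in subproblems:
                if sp not in cache:      # cache miss -> real work
                    cache.add(sp)
                    work += 1
        return work

    # Hypothetical three-call trace; calls 2 and 3 repeat earlier sub-problems.
    ops = [{('and', 1, 2), ('and', 3, 4)},
           {('and', 1, 2), ('or', 1, 3)},
           {('and', 3, 4), ('or', 1, 3)}]
    print(total_work(ops, flush=True), total_work(ops, flush=False))   # 6 3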

14
Results on Top-Level Sharing
flush: cache flushed between top-level operations
Conclusion: a large cache is more important for MC
15
Garbage Collection: Rebirth Rate
  • Source of Speedup
  • reduce GC frequency
  • Possible Cause
  • many dead nodes become reachable again (rebirth)
  • GC is delayed until the number of dead nodes
    reaches a threshold
  • dead nodes are reborn when they are part of the
    result of new sub-problems

16
Hypothesis: Rebirth Rate
Hypothesis: MC computations have a very high
rebirth rate.
Experiment: measure the number of deaths and the
number of rebirths.
17
Results on Rebirth Rate
  • Conclusions
  • delay garbage collection
  • triggering GC should not be based only on # of
    dead nodes
  • delay updating reference counts
    (bookkeeping sketch below)
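The death/rebirth bookkeeping behind these conclusions, as a sketch (reference counting only; node contents omitted): a node dies when its reference count reaches zero but is not freed, since GC is delayed, and it is reborn if a later sub-problem references it again before a GC pass reclaims it.

    class Node:
        def __init__(self):
            self.refs = 1                # born with one reference

    deaths = rebirths = 0

    def deref(node):
        global deaths
        node.refs -= 1
        if node.refs == 0:               # dead, but NOT freed: GC is delayed
            deaths += 1

    def ref(node):
        global rebirths
        if node.refs == 0:               # a dead node becomes reachable again
            rebirths += 1
        node.refs += 1

    n = Node()
    deref(n)                             # n dies
    ref(n)                               # n is reborn, e.g. via a cached result
    print(deaths, rebirths)              # 1 1; rebirth rate = rebirths / deaths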

18
BF BDD Construction
Two packages (CAL and PBF) are BF-based.
19
BF BDD Construction Overview
  • Level-by-Level Access
  • operations on same level (variable) are
    processed together
  • one queue per level
  • Locality
  • group nodes of the same level together in memory
  • Good memory locality due to BF =>
    # of ops processed per queue visit must be high
    (sketch below)
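An illustrative sketch of the level-by-level scheme (`expand` is a stand-in for the real operator that issues child requests): requests are queued per level and each queue is drained in order, so all nodes of one level are touched together; the number of requests handled per queue visit is exactly the BF locality figure the following slides measure.

    from collections import deque

    def bf_process(requests, num_levels, expand):
        queues = [deque() for _ in range(num_levels)]   # one queue per level
        for r in requests:
            queues[r[0]].append(r)                      # r = (level, payload)
        for lvl in range(num_levels):                   # level-by-level sweep
            ops = 0
            while queues[lvl]:
                req = queues[lvl].popleft()
                ops += 1
                for child in expand(req):               # children go deeper
                    queues[child[0]].append(child)
            print(f"level {lvl}: {ops} ops this queue visit")

    # Toy expand: each request spawns one child on the next level.
    bf_process([(0, 'a'), (0, 'b')], 3,
               lambda req: [(req[0] + 1, req[1])] if req[0] + 1 < 3 else [])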

20
Average BF Locality
Conclusion: MC traces generally have less BF
locality
21
Average BF Locality / Work
Conclusion: For comparable BF locality, MC
computations do much more work.
22
Phase 1: Some Issues / Open Questions
  • Memory Management
  • space-time tradeoff
  • computed cache size / GC frequency
  • resource awareness
  • available physical memory, memory limit, page
    fault rate
  • Top-Level Sharing
  • possibly the main cause for
  • strong cache dependency
  • high rebirth rate
  • better understanding may lead to
  • better memory management
  • higher level algorithms to exploit the pattern

23
Phase 2: Dynamic Variable Reordering
  • BDD Packages Used
  • CAL, CUDD, EHV, TiGeR
  • improvements from phase 1 incorporated

24
Why is Variable Reordering Hard to Study?
  • Time-space tradeoff
  • how much time to spend to reduce graph sizes
  • Chaotic behavior
  • e.g., small changes to triggering / termination
    criteria
  • can have significant performance impact
  • Resource intensive
  • reordering is expensive
  • space of possible orderings is combinatorial
  • Different variable order => different
    computation
  • e.g., many don't-care space optimization
    algorithms

25
Phase 2: Experiments
  • Quality of Variable Order Generated
  • Variable Grouping Heuristic
  • keep strongly related variables adjacent
  • Reorder Transition Relation
  • BDDs for the transition relation are used
    repeatedly
  • Effects of Initial Variable Order
  • with and without variable reordering

26
Variable Grouping Heuristic: Group Current / Next
Variables
  • Current / Next State Variables
  • for transition relation, state variable is split
    into two
  • current state and next state
  • Hypothesis
  • Grouping the corresponding current- and
    next-state variables is a good heuristic
    (interleaving sketch below).
  • Experiment
  • for both with and without grouping, measure
  • work (# of operations)
  • space (max # of live BDD nodes)
  • reorder cost (# of nodes swapped with their
    children)
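The grouping heuristic amounts to interleaving the two copies of each state variable and having the reordering engine move each pair as one unit. A tiny sketch (variable names are illustrative):

    # Interleave current- and next-state copies so each pair stays adjacent;
    # sifting then treats a pair as a single movable group.
    state_vars = ['v0', 'v1', 'v2']                  # illustrative names
    order, groups = [], []
    for v in state_vars:
        pair = [v + '_cur', v + '_next']
        groups.append(pair)                          # reordering moves pairs whole
        order += pair
    print(order)
    # ['v0_cur', 'v0_next', 'v1_cur', 'v1_next', 'v2_cur', 'v2_next']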

27
Results on Grouping Current / Next Variables
All results are normalized against no variable
grouping.
Conclusion: grouping is generally effective
28
Effects of Initial Variable Order: Experimental
Setup
  • For each trace,
  • find a good variable ordering O
  • perturb O to generate new variable orderings
  • fraction of variables perturbed
  • distance moved
  • measure the effects of these new orderings
  • with and without dynamic variable reordering

29
Effects of Initial Variable Order: Perturbation
Algorithm
  • Perturbation Parameters (p, d)
  • p: probability that a variable will be perturbed
  • d: perturbation distance
  • Properties
  • on average, a fraction p of the variables is
    perturbed
  • max distance moved is 2d
  • (p = 1, d = infinity) => completely random
    variable order
  • For each perturbation level (p, d)
  • generate a number (sample size) of variable
    orders (sketch below)
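One way to realize the (p, d) perturbation in Python (an illustrative reading of the slide, not the study's exact code): each variable keeps its position with probability 1 - p; otherwise its sort key is jittered by up to d positions. Because two jittered variables can pass each other, the maximum displacement is about 2d, and p = 1 with a large d (standing in for d = infinity) degenerates to a near-uniformly random order.

    import random

    def perturb(order, p, d, rng=random):
        keyed = []
        for i, v in enumerate(order):
            if rng.random() < p:                       # perturb with probability p
                i = rng.randint(max(0, i - d), i + d)  # jitter position by <= d
            keyed.append((i, rng.random(), v))         # random tiebreak
        return [v for _, _, v in sorted(keyed)]

    print(perturb(list('abcdefghij'), p=0.5, d=3))
    print(perturb(list('abcdefghij'), p=1.0, d=100))   # ~ random order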

30
Effects of Initial Variable Order: Parameters
  • Parameter Values
  • p in {0.1, 0.2, ..., 1.0}
  • d in {10, 20, ..., 100, infinity}
  • sample size: 10
  • => for each trace,
  • 1,100 orderings
  • 2,200 runs (w/ and w/o dynamic reordering)

31
Effects of Initial Variable Order: Smallest Test
Case
  • Base Case (best ordering)
  • time: 13 sec
  • memory: 127 MB
  • Resource Limits on Generated Orders
  • time: 128x base case
  • memory: 500 MB

32
Effects of Initial Variable Order: Results
% of unfinished cases
At the 128x / 500 MB limit, no-reorder finished
33%; reorder finished 90%.
Conclusion: dynamic reordering is effective
33
Phase 2: Some Issues / Open Questions
  • Computed Cache Flushing
  • cost
  • Effects of Initial Variable Order
  • determine sample size
  • Need a new and better experimental design

34
BDD Evaluation Methodology
  • Trace-Driven Evaluation Platform
  • real workload (BDD-call traces)
  • study various BDD packages
  • focus on key (expensive) operations
  • Evaluation Metrics
  • more rigorous quantitative analysis

35
BDD Evaluation Methodology: Metrics (Time)
36
BDD Evaluation Methodology: Metrics (Space)
37
Importance of Better Metrics: Example (Memory
Locality)
38
Summary
  • Collaboration / Evaluation Methodology
  • significant performance improvements
  • up to 2 orders of magnitude
  • characterization of MC computation
  • computed cache size
  • garbage collection frequency
  • effects of complement edge
  • BF locality
  • reordering heuristic
  • current / next state variable grouping
  • effects of reordering the transition relation
  • effects of initial variable orderings
  • other general results (not mentioned in this
    talk)
  • issues and open questions for future research

39
Conclusions
  • Rigorous quantitative analysis can lead to
  • dramatic performance improvements
  • better understanding of computational
    characteristics
  • Adopt the evaluation methodology by
  • building more benchmark traces
  • for IP issues, BDD-call traces are hard to
    understand
  • using / improving the proposed metrics for
    future evaluation

For data and BDD traces used in this
study, see http://www.cs.cmu.edu/~bwolen/fmcad98/
40
(No Transcript)
41
Phase 1: Hypotheses / Experiments
  • Computed Cache
  • effects of computed cache size
  • amounts of repeated sub-problems across time
  • Garbage Collection
  • reachable / unreachable
  • Complement Edge Representation
  • work
  • space
  • Memory Locality for Breadth-First Algorithms

42
Phase 2: Experiments
  • Quality of Variable Order Generated
  • Variable Grouping Heuristic
  • keep strongly related variables adjacent
  • Reorder Transition Relation
  • BDDs for the transition relation are used
    repeatedly
  • Effects of Initial Variable Order
  • for both with and without variable reordering

43
Benchmark Sizes
Min Ops: minimum number of sub-problems/operations
(no GC and complete cache)
Max Live Nodes: maximum number of live BDD nodes
44
Phase 1 Before/After Cumulative
Speedup Histogram
6 packages x 16 traces = 96 cases
45
Computed Cache Size Dependency
  • Hypothesis
  • The computed cache is more important for MC than
    for CC.
  • Experiment
  • Vary the cache size and measure its effects on
    work.
  • size as a percentage of BDD nodes
  • normalize the result to the minimum amount of
    work necessary, i.e., no GC and complete cache
    (cache sketch below)
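For intuition on "cache size as a percentage of BDD nodes": the computed cache in typical BDD packages is a lossy, direct-mapped hash table, so a hash collision simply overwrites the old entry, and a too-small table forgets repeated sub-problems. An illustrative sketch, not any specific package's code:

    class ComputedCache:
        def __init__(self, num_bdd_nodes, percent):
            # table sized as a percentage of the current BDD node count
            self.slots = [None] * max(1, num_bdd_nodes * percent // 100)
        def lookup(self, key):
            e = self.slots[hash(key) % len(self.slots)]
            return e[1] if e is not None and e[0] == key else None
        def insert(self, key, result):                 # collision -> overwrite
            self.slots[hash(key) % len(self.slots)] = (key, result)

    cache = ComputedCache(num_bdd_nodes=1000, percent=80)
    cache.insert(('and', 1, 2), 'node42')
    print(cache.lookup(('and', 1, 2)))                 # node42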

46
Effects of Computed Cache Size
# of ops normalized to the minimum number of
operations; cache size: % of BDD nodes
Conclusion: a large cache is important for MC
47
Death Rate
Conclusion: death rate for MC can be very high
48
Effects of Complement Edge
Work
Conclusion: complement edges only affect work for
CC
49
Effects of Complement Edge
Space
Conclusion: complement edges do not affect space
Note: maximum # of live BDD nodes would be a
better measure
50
Phase 2 Results: With / Without Reorder
4 packages x 16 traces = 64 cases
51
Variable Reordering: Effects of Reordering the
Transition Relation Only
All results are normalized against variable
reordering w/ grouping.
# nodes swapped: number of nodes swapped with their
children
52
Quality of the New Variable Order
  • Experiment
  • use the final variable ordering as the new
    initial order.
  • compare the results using the new initial order
    with the results using the original variable
    order.

53
Results on Quality of the New Variable Order
Conclusion: the quality of the new variable orders
is generally good.
54
No Reorder (> 4x or > 500 MB)
55
> 4x or > 500 MB
Conclusions: For very low perturbation,
reordering does not work well.
Overall, very few cases finish.
56
> 32x or > 500 MB
Conclusion: variable reordering worked rather well
57
Memory Out (> 512 MB)
Conclusion: memory-intensive in the highly
perturbed region.
58
Timed Out (> 128x)
Plot: diagonal band from lower-left to upper-right
59
Issues and Open Questions: Cross Top-Level Sharing
Potential experiment: identify how far apart these
repetitions are
60
Issues and Open Questions: Inconsistent
Cross-Platform Results
Probable cause: memory hierarchy. This may shed
some light on the memory locality issues.