Rethinking Risk Analysis

Transcript and Presenter's Notes
1
Rethinking Risk Analysis
  • Tony Cox
  • MORS Workshop
  • April 13, 2009

2
How to better defend ourselves against
terrorists? Top-level view
3
Elements of smart defense
  • Anticipate attacker actions, reactions
  • What can they afford to do? When?
  • What is their best response to our actions and
    defenses?
  • Allocate resources and countermeasures to protect
    targets and to deter attacks
  • Adapt to new information and intelligence
  • Reallocate effectively; hedge bets
  • Risk scoring does not do these things very well
  • How can we do better?

4
Other defenses
  • Secrecy and randomization
  • Deception, decoys, disinformation
  • Infiltration, counter-intelligence
  • Detect and interdict at attack-planning stage
  • Rapidly recognize, respond, contain
  • Preparation, execution

5
Our focus: Attack-Defense Games
  • Defender allocates resources, countermeasures
  • Attacker decides what to do, given what the
    defender has done
  • Could iterate, several layers deep (chess)
  • Attacker and defender receive consequences
  • How can Defender minimize loss?

6
Backward chaining paradigm for defensive risk
management
  • Envision: What might go wrong?
  • E.g., secure facility compromised or damaged
  • Analyze: How might it happen? How likely is it?
  • Identify alternative sets of sufficient
    conditions
  • Path sets, minimal path sets, dominant
    contributors
  • Recursive deepening (fault tree analysis)
  • Quantify relative probabilities, total
    probability
  • Assess: How bad are the consequences?
  • Manage risk
  • Document it.
  • Risk = Threat × Vulnerability × Consequence (?)
  • Request/allocate resources to reduce risks
    (biggest first)

7
TVC paradigm
  • Risk = T×V×C
  • Threat = relative probability of attack
  • Reflects attacker's intent, capability, timing
    decisions
  • Budget and resource constraints? Opportunity
    costs?
  • Vulnerability = probability that attack succeeds,
    if attempted
  • Could there be partial degrees of success, based
    on consequences?
  • Consequence = defender's loss from successful
    attack
  • Risk management: Allocate resources to defend
    biggest risks first (TVC priority list)

8
Why isn't TVC used in chess?
  • Or any other game?
  • Or in other risk management settings where
    experts (or programs) compete for prizes?

9
Improvement Focus on changes
  • Risk = T×V×C should not drive action.
  • ΔRisk = (ΔT)(ΔV)(ΔC) is more useful
  • Risk management decisions: Allocate resources to
    biggest risk reductions first

10
Improvement Focus on changes
  • Risk = T×V×C should not drive action.
  • ΔRisk = (ΔT)(ΔV)(ΔC) is more useful
  • Requires a predictive (causal) risk model
  • action → ΔV → ΔT → ΔC
  • How do our actions affect attackers?
  • Risk management decisions: Allocate resources to
    biggest risk reductions first

11
Key Challenge 1: How to usefully predict ΔT, ΔV,
ΔC for alternative interventions?
12
Key Challenge 1: How to usefully predict ΔT, ΔV,
ΔC for alternative interventions? Expert
elicitation? Modeling?
13
Key Challenge 2: How to validate that predictions
and recommendations are useful?
14
Attacker's view: Forward-chaining paradigm
  • If I prepare for (or launch) attack A now
  • What will I learn? (Value of information)
  • What opportunities must I give up? What will I
    gain?
  • What risks will I incur? (Detection/interdiction)
  • What direct value will it produce? (Value of
    damage)
  • How to do it?
  • Plan attack (including preparation steps)
  • Top-down: Select approach, evaluate, iteratively
    improve
  • Simulate/predict results, improve/refine/test
    plan
  • What course of action is most valuable now?
  • Assuming optimal future actions

15
[Figure; source: http://www.dtic.mil/ndia/2008homest/landsberg.pdf]
16
[Figure: make decisions, receive consequences.
Source: http://www.dtic.mil/ndia/2008homest/landsberg.pdf]
17
[Figure: defender's investments and attacker's
investment and plans determine T, V, C; players
receive consequences.
Source: http://www.dtic.mil/ndia/2008homest/landsberg.pdf]
18
Minimax: Competing optimization
[Figure: defender's investments and attacker's
investment and plans determine T, V, C; players
receive consequences.
Source: http://www.dtic.mil/ndia/2008homest/landsberg.pdf]
19
Paradigm clash
  • What happens when a forward-chaining attacker
    meets a population of backward-chaining (or TVC)
    defenders?
  • Concern: Attacker wins too much!
  • Do defenders have a better way to outsmart
    attackers?
  • Outsmart = anticipate and prepare for what the
    attacker will do next
  • Yes: Minimax, not risk-scoring

20
Some challenges and limitations of risk scoring
21
Technical challenges
  • Uncertainty about (T, V, C) for each target
  • Correlated uncertainties (across targets)
  • Attacker behaviors, selection of targets
  • Countermeasure effectiveness
  • Consequences of successful attacks
  • How to optimize sets (portfolios) of defenses?
  • Taking dependencies into account
  • How to optimize resource allocation across
    opportunities
  • For defender and attacker

22
How to treat uncertain (T, V, C)?
  • RAMCAP: Treat T, V, C as random variables, use
    their expected values
  • BTRA: Use expert elicitation, Monte-Carlo
    simulation, uncertainty analysis
  • Minimax optimization: Model the uncertainties
    about (T, V, C) in more detail
  • What must be resolved to determine (T, V, C)?

23
Facility A: Risk = ?
24
Facility A: Risk = ?
[Event tree figure with snow and no-snow branches]
25
Facility A: E(T)E(V)E(C) = 0.24
26
Facility A: E(T)E(V)E(C) = 0.24
27
Facility B: Risk = ?
28
Facility B: E(T)E(V)E(C) = 0.16
29
Expected T, V, C values are irrelevant for
predicting risk
30
Expected T, V, C values are irrelevant for
predicting risk
31
Expected T, V, C values are irrelevant for
ranking risks
32
Expected T, V, C values are irrelevant for
ranking risks
33
E(T)E(V)E(C) ≠ E(TVC)
34
Lesson 1: Don't use expected values
35
Another example: E(T)E(V)E(C) may over- or
under-estimate true risk, E(TVC)
  • E(T) = E(V) = E(C) = 0.5. What is E(T)E(V)E(C)?
  • Assume Pr(V = 1) = Pr(V = 0) = 0.5, so E(V) = 0.5
  • Then E(T)E(V)E(C) = 0.125
  • But, if T = C = V, then E(TVC) = 0.5
  • If T = C = (1 - V), then TVC = 0.
  • Dependencies and correlations matter! (See the
    sketch below.)
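
A minimal sketch (Python, not part of the original deck) that checks this
arithmetic by enumerating the two dependence structures: the product of
marginal expectations is 0.125 in both cases, while the true risk E(TVC) is
0.5 in one and 0 in the other.

```python
# Illustrative check: exact expectations for the two dependence structures
# above, with V taking the values 0 or 1 with probability 0.5 each.
def expectations(tc_given_v):
    """Return (E(T)E(V)E(C), E(TVC)) for T, C defined as functions of V."""
    et = ev = ec = etvc = 0.0
    for v in (0, 1):
        p = 0.5
        t, c = tc_given_v(v)
        et, ev, ec = et + p * t, ev + p * v, ec + p * c
        etvc += p * t * v * c
    return et * ev * ec, etvc

print(expectations(lambda v: (v, v)))          # T = C = V      -> (0.125, 0.5)
print(expectations(lambda v: (1 - v, 1 - v)))  # T = C = 1 - V  -> (0.125, 0.0)
```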

36
Lesson 2: No other summary measure works,
either! (Need joint, not marginals)
37
Challenges
  • Threat depends on what the attacker knows about
    vulnerability and consequence
  • and on how he uses that knowledge to select (and
    plan, and improve) attacks.
  • Threat depends on vulnerability and consequence
  • Positive correlation ⇒ multiplication is wrong

38
How to treat uncertain (T, V, C)?
  • RAMCAP: Treat T, V, C as random variables, use
    their expected values
  • Use Monte-Carlo uncertainty analysis

39
[Figure: event tree over T, V, C; Monte Carlo
simulation.
Source: http://www.dtic.mil/ndia/2008homest/landsberg.pdf]
40
Challenges
  • Threat depends on
  • What the attacker knows about vulnerability and
    consequence
  • How he uses that knowledge to choose, plan, and
    improve attacks.
  • Requires modeling planning/optimization
  • Threat depends on vulnerability and consequence
  • Positive correlation ⇒ multiplication is wrong
  • Dependency is based on decision-making
  • Modeled by decision trees, not just event trees

41
How to meet these challenges?
  • Model the uncertainties about T, V, C
  • Simulate attacker decisions under uncertainty
  • Outsmart the attacker

42
How to treat uncertain (T, V, C)?
  • RAMCAP: Treat T, V, C as random variables, use
    their expected values
  • Use Monte-Carlo uncertainty analysis
  • Alternative: Model uncertainty in more detail
  • Why is T uncertain?
  • Because V and C affect T in uncertain ways
  • Because V and C are uncertain
  • Develop decision tree or influence diagram
  • Model Pr(T = 1 | V, C) = E(T | V, C)
  • ΔCountermeasures → (ΔV, ΔC, Δinfo.) → ΔT

43
Risk from an uninformed (blind) attacker: 0.4
[Event tree figure with snow and no-snow branches]
44
Risk from a better-informed attacker, who needs
E(V)·C > 0.8 to attack: 0
[Event tree figure with snow and no-snow branches]
45
Risk from an informed attacker, who needs V·C >
0.8 to attack, is 0.4
[Event tree figure with snow and no-snow branches]
46
Risk from an adaptive attacker, who waits to
attack until V = 1, is risk = 1!
[Event tree figure with snow and no-snow branches]
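
The event trees behind slides 43-46 are not reproduced in this transcript.
The sketch below (Python) uses assumed parameters chosen only so that the
four stated risks (0.4, 0, 0.4, 1) come out: Pr(snow) = 0.4, V = 1 if it
snows and 0 otherwise, C = 1, and an attack threshold of 0.8.

```python
# Hypothetical reconstruction of the snow example. The tree's actual numbers
# are not shown in the transcript; these are assumptions that reproduce the
# slides' risks of 0.4, 0, 0.4, and 1.
P_SNOW, C, THRESHOLD = 0.4, 1.0, 0.8

def V(snow):                 # vulnerability: attack succeeds only if it snows
    return 1.0 if snow else 0.0

def expected_loss(attacks):
    """E(T*V*C) when attacks(snow) gives the attacker's decision T."""
    return sum(p * attacks(snow) * V(snow) * C
               for snow, p in ((True, P_SNOW), (False, 1 - P_SNOW)))

E_V = P_SNOW                                      # E(V) = Pr(snow)

blind    = lambda snow: 1.0                                      # attacks regardless
ev_based = lambda snow: 1.0 if E_V * C > THRESHOLD else 0.0      # uses E(V) only
informed = lambda snow: 1.0 if V(snow) * C > THRESHOLD else 0.0  # observes V

print(expected_loss(blind))     # 0.4
print(expected_loss(ev_based))  # 0.0
print(expected_loss(informed))  # 0.4
# Adaptive attacker: waits for a snowy day (V = 1), then attacks,
# so the eventual loss is V*C = 1 with probability 1.
print(1.0 * C)                  # 1.0
```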
47
Lessons
  • Risk (and threat) can be 0, 0.4, or 1, depending
    on what the attacker knows (or believes) about V
    and C
  • Not on what we know about the attacker, or about
    V and C
  • Threat assessments based on our knowledge can
    be misleading
  • A valid threat assessment requires considering
    the attacker's whole decision.
  • Attacker's own assessment: T = 0 or T = 1

48
Example: Misguided threat assessment
  • Assume that we know that
  • Attacker attacks (T = 1) if and only if he knows
    that success probability = 1 (V = 1). (Else, T =
    0.)
  • Common knowledge: True success probability = 0.5
  • Does attack probability = 0?
  • Not necessarily! (Misleading inference)
  • Suppose attacker attacks if and only if he first
    gets inside help that makes V = 1. (Else, V = 0.)
  • Pr(succeeds in getting inside help) = 0.5 =
    E(V) ≠ V
  • Then T = Pr(attack) = 0.5, not 0
  • Threat and vulnerability assessors need trees,
    not numbers, to communicate essentials about
    adaptive attackers and future contingencies (see
    the sketch below).
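
A minimal sketch (Python, illustrative rather than the author's model) of
the tree versus the misleading marginal number: the attacker's decision node
comes after the chance node that resolves whether inside help is obtained.

```python
# Hypothetical two-stage tree for the misguided threat assessment example.
# Chance node: inside help is obtained with probability 0.5, so E(V) = 0.5.
# Decision node: the attacker attacks only once he knows V = 1.
P_INSIDE_HELP = 0.5

def attacker_decision(v):
    """T = 1 only when the attacker knows success is certain (V = 1)."""
    return 1 if v == 1 else 0

# Naive inference from the marginal number: "he requires V = 1, the known
# success probability is only 0.5, so T = 0."
naive_threat = attacker_decision(0.5)

# Tree-based inference: average T over the chance node's branches.
branches = [(P_INSIDE_HELP, 1), (1 - P_INSIDE_HELP, 0)]  # (probability, realized V)
tree_threat = sum(p * attacker_decision(v) for p, v in branches)

print(naive_threat, tree_threat)   # 0 0.5
```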

49
Threat = plan tree, not number
  • Threat depends on what attacker knows or believes
    about vulnerability and consequence
  • and on how he will use that knowledge to improve
    attacks (attack plan)
  • What will he do next?
  • What is his whole decision tree?
  • How do threat and vulnerability co-evolve?
  • No number can tell us all this!

50
Which threat is greater, A or B?
[Figure: two attack hazard rate curves, A and B,
over time]
51
Threat is not a number
No objective way to uniquely assign numbers to
curves; depends on defender's preferences
[Figure: two attack hazard rate curves, A and B,
over time]
52
Threat is a stochastic process, not a number
No objective way to uniquely assign numbers to
curves; depends on defender's preferences
[Figure: two attack hazard rate curves, A and B,
over time]
53
Uncertain consequences
54
Ambiguous consequence values: (T, V, C) for any
link depends on which other links are lost
[Figure: network with nodes A, B, C, D, Z]
55
Ambiguous consequence values: (T, V, C) for any
link depends on which other links are lost
[Figure: network with nodes A, B, C, D, Z; one
link marked "T, V, C = ?"]
56
Ambiguous consequence values: (T, V, C) for any
link depends on which other links are lost
Consequence: A and Z disconnected
[Figure: network with nodes A, B, C, D, Z; one
link marked "T, V, C = ?"]
57
Lesson: It may be impossible to assess C for a
facility in isolation
Consequence: A and Z still connected
[Figure: network with nodes A, B, C, D, Z; one
link marked "T, V, C = ?"]
58
Uncertain consequences
  • No objective certainty-equivalent for uncertain
    consequences
  • Example: Rank these three
  • A = N(1, 0), B = N(2, 1), C = N(3, 2) deaths

59
Uncertain consequences
  • No objective certainty-equivalent for uncertain
    consequences
  • Example: Rank these three
  • A = N(1, 0), B = N(2, 1), C = N(3, 2) deaths
  • Certainty equivalent: CE(X) = E(X) - kVar(X)
  • For k = 0, A < B < C ⇒ C is most severe
  • For k = 1, A = B = C
  • For k = 2, A > B > C ⇒ C is least severe
  • Ranking depends on subjective risk attitude, k
  • In practice, severity ratings are presented
    without k ⇒ no way to know what they mean (see
    the sketch below)
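
The reversal can be checked directly. In the sketch below (Python,
illustrative), N(m, v) is read as a distribution with mean m and variance v,
and CE(X) = E(X) - k·Var(X) as above, with higher CE meaning a more severe
loss.

```python
# Certainty equivalents for consequence distributions given as
# (mean, variance) pairs; higher CE = more severe loss (deaths).
dists = {"A": (1, 0), "B": (2, 1), "C": (3, 2)}

def ce(mean, var, k):
    return mean - k * var

for k in (0, 1, 2):
    values = {name: ce(*mv, k) for name, mv in dists.items()}
    order = sorted(values, key=values.get)   # least to most severe
    print(f"k={k}: {values}  least -> most severe: {order}")
# k=0: C is most severe; k=1: three-way tie; k=2: C is least severe
```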

60
Time-varying consequences
  • Growth of consequences over time
  • Optimal harvesting
  • Optimal timing for attacks
  • Dynamic games
  • Preparation vs. detection/interdiction
  • Smaller, more frequent attacks vs. larger, rarer
    attacks

61
Summary on uncertainties
  • T, V, and C are uncertain
  • T depends on attacker's beliefs about V and C
    (and resource requirements and constraints), for
    all attack opportunities
  • V and C depend on defender's actions, as well as
    attacker's
  • T, V, and C are better modeled by decision trees
    than by numbers.

62
Dynamic beliefs
63
Challenge: How do co-evolving beliefs determine
threats?
  • Suppose that
  • Attacker believes that defender's investment in
    protection signals the true value of C and
    invests in attack preparation accordingly.
  • Defender believes that attacker's investment in
    attack preparations signals the true value of V
    and invests in defenses accordingly.
  • Then investments (and threat) may escalate,
    independent of true values of C and V.

64
Self-defeating beliefs about threats
  • Suppose that
  • Defender rank-orders 100 threats from highest to
    lowest.
  • Attacker's strategy is to skip the k top-ranked
    threats, then attack the next N.
  • Attacker knows defender's ranking
  • Then the true threats to facilities never agree
    with the Defender's ranking! (See the sketch
    below.)
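
A small demonstration (Python, with illustrative choices k = 5 and N = 10):
whatever ranking the defender publishes, the facilities actually attacked
are exactly those ranked just below the top k, so every top-ranked facility
ends up with zero realized threat.

```python
# Self-defeating ranking demo: the attacker skips the k top-ranked threats
# and attacks the next N, so the realized threats contradict any published
# ranking. (k, N, and the number of facilities are illustrative.)
import random

k, N, num_facilities = 5, 10, 100
facilities = list(range(num_facilities))

defender_ranking = random.sample(facilities, num_facilities)  # any ranking at all
attacked = set(defender_ranking[k:k + N])                     # attacker's rule

top_k_threats = [int(f in attacked) for f in defender_ranking[:k]]
print("Realized threat of the k top-ranked facilities:", top_k_threats)  # all zeros
print("All attacks fall at ranks", k + 1, "through", k + N,
      "of the defender's own list.")
```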

65
Some lessons for Threat
  • Threats depend on attacker's beliefs, which may
    not reflect what we know
  • Attackers and defenders may respond to each
    other's beliefs in ways that reflect incorrect
    perceptions, not true (V, C) values
  • Focusing on estimating true (T, V, C) values
    does not necessarily help to model belief (and
    hence threat) dynamics.

66
Challenges
  • Threat depends on what attacker knows (or
    believes) about vulnerability and consequence
  • and on how he will use that knowledge to plan
    attacks (attack strategy or plan)
  • What will he do next? What is his decision tree?
  • and on what he knows about other attack
    opportunities, costs, and resource constraints

67
Vulnerability (and threat) may depend on
attacker's level of effort
  • Repeated attacks may increase the effective value
    of V
  • Any V > 0 implies effective V = 1, if limitless
    attempts can be made for free and independently.
  • V (and T) depend on attacker's resource
    constraints
  • Attacker can increase V by prior preparation
    (e.g., inside job, adaptive opportunistic
    scheduling of attacks)
  • Can he afford to increase V enough to justify
    making T = 1 instead of 0?
  • T is a decision variable, not a chance variable

68
How to deal with these complexities?
  • Interacting players (and beliefs)
  • Multiple stages of decision-making
  • Uncertainties and dependencies

69
The minimax perspective: Optimization brings
clarity!
70
Minimax perspective
  • Defender allocates resources across targets to
    minimize E(loss), assuming
  • Attacker allocates resources across targets to
    maximize Defenders E(loss).

71
Minimax perspective
  • Defender allocates resources across targets to
    minimize E(loss), assuming
  • Attacker allocates resources across targets to
    maximize Defenders E(loss).
  • min_x max_{y,T} Σi V(xi, yi)·Ci·Ti
    s.t. Σi xi ≤ b, xi ≥ 0
         Σi (yi + Ti·ci) ≤ c (attacker's budget)

72
Minimax perspective
  • Defender allocates resources across targets to
    minimize E(loss), assuming
  • Attacker allocates resources across targets to
    maximize Defenders E(loss).
  • min_x max_{y,T} Σi V(xi, yi)·Ci·Ti
    s.t. Σi xi ≤ b, xi ≥ 0
         Σi (yi + Ti·ci) ≤ c (attacker's budget)
  • Solution: Each Ti = 0 or 1; Vi = V(xi, yi); both
    depend on all opportunities and on budgets b and c.
  • Vi and Ti are outputs of decisions, not inputs
  • Should allocate resources to reduce total risk

73
Minimax perspective
  • How to solve
  • min_x max_{y,T} Σi V(xi, yi)·Ci·Ti
    s.t. Σi xi ≤ b, xi ≥ 0
         Σi (yi + Ti·ci) ≤ c (attacker's budget)
  • Simulation-optimization
  • Defender proposes starting allocation x
  • Given x, solve for (y, T). (Simulate response)
  • Given (y, T), re-solve for x. (Optimize
    decisions)
  • Iterate until convergence!
  • Convergence guaranteed for some cases, e.g.,
    fictitious play for zero-sum two-person (ZSTP)
    games. (See the sketch below.)
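
A toy version of this simulation-optimization loop (Python). The numbers and
the vulnerability function V(x) = 1/(1 + x) are assumptions for
illustration, not the deck's model; the point is the iteration structure.
Note that naive pure-strategy best response can cycle, which is why
averaging schemes such as fictitious play are used when convergence
guarantees are needed.

```python
# Alternating best responses for a toy defender-attacker allocation game:
# 1) defender proposes x; 2) simulate attacker's best response T to x;
# 3) re-optimize x against T; 4) iterate. All parameters are illustrative.
from itertools import combinations, product

C           = [20, 25, 40]   # consequences of successful attacks
attack_cost = [3, 2, 4]      # attacker's cost per attack
b, c        = 4, 5           # defender's and attacker's budgets (assumed)
n           = len(C)

def V(x):                    # assumed vulnerability: falls with defender spend x
    return 1.0 / (1.0 + x)

def attacker_best_response(x):
    """Attack set maximizing expected damage within the attacker's budget."""
    best, best_dmg = (), 0.0
    for r in range(n + 1):
        for T in combinations(range(n), r):
            if sum(attack_cost[i] for i in T) <= c:
                dmg = sum(V(x[i]) * C[i] for i in T)
                if dmg > best_dmg:
                    best, best_dmg = T, dmg
    return best, best_dmg

def defender_best_response(T):
    """Integer allocation of b minimizing loss against a fixed attack set."""
    best, best_loss = (0,) * n, float("inf")
    for x in product(range(b + 1), repeat=n):
        if sum(x) <= b:
            loss = sum(V(x[i]) * C[i] for i in T)
            if loss < best_loss:
                best, best_loss = x, loss
    return best

x = (0,) * n                               # starting allocation
for _ in range(10):                        # iterate (may cycle without averaging)
    T, _  = attacker_best_response(x)      # simulate attacker response
    x_new = defender_best_response(T)      # optimize defender decisions
    if x_new == x:
        break
    x = x_new

T, loss = attacker_best_response(x)
print("defense:", x, "attack set:", T, "expected loss:", round(loss, 2))
```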

74
What does minimax do for us?
  • Decision variables
  • X = Defender's resource allocation
  • Y = Attacker's resource allocation
  • T = decision vector (what to attack)
  • Causal predictive risk model

          Y
          ↓
      X → V → T → C → R

All variables except risk, R, are vectors. T is
optimized out. R = expected loss, simulated from
the model.
75
What does minimax do for us?
  • Decision variables
  • X = Defender's resource allocation
  • Y = Attacker's resource allocation
  • T = decision vector (what to attack)
  • Optimization of decision variables

          Y
          ↓
      X → V → T → C → R

All variables except risk, R, are vectors. T is
optimized out. R = expected loss, simulated from
the model.
76
Minimax perspective
  • Vi and Ti are outputs, not inputs
  • Expert elicitation might provide sensible
    starting estimates for calculating them, but
    should not substitute for calculation.
  • Ti is 0 or 1 unless inputs (budgets,
    vulnerability functions, etc.) are uncertain.
  • It may be very difficult or impossible for
    experts to usefully guess joint (or marginal) Ti.

77
Minimax perspective: Experts who claim to
predict attacker behavior usefully from
inadequate information are mistaken.
78
What information is needed to predict attacker
behavior?
  • What are the threats for these attacks?
  • Attack A does 20 damage, costs attacker 3
  • Attack B does 25 damage, costs attacker 2
  • Attack C does 40 damage, costs attacker 4
  • Defender can afford to block one. What to do?
  • Expert elicitation of threats (attack
    probabilities) based on these facts is
    unjustifiable.
  • These facts omit essential information for
    prediction
  • Answers (e.g., based on psychological
    speculations) are spurious, if attacker acts to
    maximize damage

79
Example: Predicting attacks
  • What are the threats for these attacks?
  • Attack A does 20 damage, costs attacker 3
  • Attack B does 25 damage, costs attacker 2
  • Attack C does 40 damage, costs attacker 4
  • Relative attractiveness model
  • Costs are roughly similar ⇒ not crucial
  • Relative Pr(attack A) = 20/(20 + 25 + 40)
  • Relative Pr(attack B) = 25/(20 + 25 + 40)
  • Relative Pr(attack C) = 40/(20 + 25 + 40)
  • Q: Is this a useful model? (Minimax: No!)

80
Minimax perspective: Budgets matter!
  • What are the threats for these attacks?
  • Attack A does 20 damage, costs attacker 3
  • Attack B does 25 damage, costs attacker 2
  • Attack C does 40 damage, costs attacker 4
  • Defender can afford to block one. What to do?
  • If attacker budget is 3: Defender should block B
  • TA = 1, TB = TC = 0. Defender's loss = 20
  • If attacker budget is 4: Block C
  • TA = 0, TB = 1, TC = 0. Defender's loss = 25
  • If attacker budget is 5: Block B. (Loss = 40)
  • If attacker budget is 7: Block C. (Loss = 45)
    (See the sketch below.)
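
The table above can be re-derived by brute force. The damages and attacker
costs are taken from the slide; the enumeration itself is only an
illustrative sketch, and ties are reported explicitly (at budget 5, blocking
A or B both leave a worst-case loss of 40, so blocking B as on the slide is
one of two optima).

```python
# Budget sensitivity: the defender blocks one attack; the attacker then
# spends his budget on the remaining attacks to maximize total damage.
from itertools import combinations

damage = {"A": 20, "B": 25, "C": 40}   # damage done by each attack
cost   = {"A": 3,  "B": 2,  "C": 4}    # attacker's cost of each attack

def worst_loss(blocked, budget):
    """Max damage the attacker can inflict on unblocked attacks within budget."""
    targets = [t for t in damage if t != blocked]
    best = 0
    for r in range(len(targets) + 1):
        for combo in combinations(targets, r):
            if sum(cost[t] for t in combo) <= budget:
                best = max(best, sum(damage[t] for t in combo))
    return best

for budget in (3, 4, 5, 7):
    losses = {t: worst_loss(t, budget) for t in damage}
    best_loss = min(losses.values())
    best_blocks = [t for t, l in losses.items() if l == best_loss]
    print(budget, "-> block", best_blocks, "loss", best_loss)
# 3 -> ['B'] 20;  4 -> ['C'] 25;  5 -> ['A', 'B'] 40 (tie);  7 -> ['C'] 45
```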

81
Lesson on ranking defenses
  • Defender's best action is not robust to changes
    in attacker resources
  • Defender's best decision switches back and forth
    between B and C as the attacker's budget increases.
  • No robust, stable best subset of defensive
    actions
  • Qualitatively different solution from a
    rank-ordering (e.g., by TVC or other
    pre-calculated scores)

82
If defender thinks attacker's budget is equally
likely to be 3 or 4, block C.
  • Attack A does 20 damage, costs attacker 3
  • Attack B does 25 damage, costs attacker 2
  • Attack C does 40 damage, costs attacker 4

Defender's best decision maximizes the
facility-specific threat (1 instead of 0 or 0.5)
83
Minimax perspective: Priority rankings of
hazards or options do not support effective risk
management
84
Defender's best decisions can be very sensitive
to budget
  • Implement which countermeasure(s)?
  • Countermeasure A reduces expected loss by 20 per
    year, costs defender 3
  • Countermeasure B reduces expected loss by
    25/year, costs defender 2
  • Countermeasure C reduces expected loss by
    40/year, costs defender 4
  • Best decision: B if budget is 3; C if 4; (A and
    B) if 5; (B and C) if 6. (See the sketch below.)
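
This is a small 0/1 knapsack (capital budgeting) problem. A brute-force
sketch using the slide's numbers reproduces the answer for each budget and
shows that the funded set is not monotone in the budget (B is funded at 3,
dropped at 4, and funded again at 5), which is exactly why no single
priority list works.

```python
# Countermeasure portfolio selection as a 0/1 knapsack: pick the affordable
# subset with the largest total reduction in expected loss.
from itertools import combinations

reduction = {"A": 20, "B": 25, "C": 40}   # expected loss reduced per year
cost      = {"A": 3,  "B": 2,  "C": 4}    # defender's cost

def best_portfolio(budget):
    best, best_value = (), 0
    for r in range(len(reduction) + 1):
        for combo in combinations(reduction, r):
            if sum(cost[m] for m in combo) <= budget:
                value = sum(reduction[m] for m in combo)
                if value > best_value:
                    best, best_value = combo, value
    return best, best_value

for budget in (3, 4, 5, 6):
    print(budget, best_portfolio(budget))
# 3 -> ('B',) 25;  4 -> ('C',) 40;  5 -> ('A', 'B') 45;  6 -> ('B', 'C') 65
```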

85
Lesson: No priority-list allocation (set of
funded defenses increasing with budget) is
efficient for the defender
  • Implement which countermeasure(s)?
  • Countermeasure A reduces expected loss by 20 per
    year, costs defender 3
  • Countermeasure B reduces expected loss by
    25/year, costs defender 2
  • Countermeasure C reduces expected loss by
    40/year, costs defender 4

86
Resource allocation implications
  • No evaluation of risk-reducing options can
    allocate resources effectively without
    considering budget (and other) dependencies.
  • Effective resource allocation requires solving
    portfolio optimization (capital budgeting)
    problem.
  • No way to do this using scores or rankings
  • Combinatorial optimization
  • Priority ranking is a simple, wrong solution.

87
Allocating risk management resources based on
risk priority rankings is ineffective: ranking
omits essential information for optimization.
How will an adaptive attacker respond?
88
Example: TVC-based protection
  • If we can afford to protect 2 of the following 3
    facilities, which 2 should we protect? (Assume
    attacker knows what we do, does not attack
    defended facilities.)

89
Example: TVC-based protection
  • If we do nothing, what is expected loss from next
    attack?
  • It is (1.5 + 2)/2 = 1.75

90
Example: TVC-based protection
  • TVC gives us a simple way to set priorities.

91
Example: TVC-based protection
  • TVC gives us a simple way to set priorities.

92
Example: TVC-based protection
  • TVC gives us a simple way to set priorities.
  • This increases expected loss, from 1.75 to 2.

93
Example: TVC-based protection
  • Minimax: Reversing TVC priorities (by leaving A
    unprotected) decreases expected loss to 1.5.

94
Adaptive attacks with imperfect attacker knowledge
  • 6 facilities (2 type A, worth 1.5 each, 4 type B,
    worth 2 each).
  • Adaptive attacker can afford 3 attacks
  • samples each of type A and B, then acts based on
    results to maximize expected damage
  • We can afford to protect 4 out of 6
  • Minimax defense: Protect 1 type A, 3 type Bs
  • No priority rule can recommend best defense!

95
Lessons
  • Minimax (optimized) defense strategy often
    disagrees with ranking (or risk score-based)
    recommendations.
  • Make sure that a risk management policy works
    before recommending it!
  • Does it make things better or worse?
  • Does it produce better-than-random decisions?
  • Current risk management standards do not do this.

96
Minimax perspectives: Summary
  • Vi and Ti should be outputs, not inputs
  • Priority rankings and risk scores can support
    poor risk management decisions
  • Can even be worse-than-random!
  • Don't use them.
  • Optimize (minimax), don't score.
  • Relative attractiveness is not a useful proxy
    for Ti, in general.
  • Does not approximate constrained-optimal (or, in
    some cases, sane) solutions.

97
What does minimax do for us?
  • It provides one solution to the fundamental
    challenges of predicting adaptive attacker
    behavior and modeling causal uncertainties
  • It provides limited (minimax) guarantees on risk
    reductions from defensive resource allocation.

98
Thanks!