Title: Chapter 21 Decision Analysis
1 Chapter 21: Decision Analysis
- Decision Making with Probabilities
- Decision Analysis with Sample Information
- Computing Branch Probabilities Using Bayes' Theorem
2 Problem Formulation
- The first step in the decision analysis process
is problem formulation.
- We begin with a verbal statement of the problem.
- Then we identify
- the decision alternatives
- the uncertain future events
- the consequences associated with each decision alternative and each chance event outcome
3 Problem Formulation
- A decision problem is characterized by decision
alternatives, states of nature, and resulting
payoffs.
- The decision alternatives are the different
possible strategies the decision maker can employ.
- The states of nature refer to future events, not
under the control of the decision maker, which
may occur.
- States of nature should be defined so that they
are mutually exclusive.
4 UCF
- UCF has to decide how many seats the new football stadium should have. UCF plans to sell seats between 30 and 140 per season.
- UCF has three different projects: one with 30,000 seats, one with 60,000 seats, and one with 90,000 seats. The financial success of the project depends upon the size of the stadium and the chance event concerning the demand for stadium seats.
- The statement of the UCF decision problem is to select the size of the new stadium that will lead to the largest profit given the uncertainty concerning the demand for seats.
5 UCF
- Given the statement of the problem, it is clear that the decision is to select the best size for the stadium complex.
- UCF has the following three alternatives:
- d1: a small stadium (30,000 seats)
- d2: a medium stadium (60,000 seats)
- d3: a large stadium (90,000 seats)
- A factor in selecting the best decision alternative is the uncertainty associated with the demand for stadium seats. However, the UCF president decided it would be adequate to consider two possible chance event outcomes: 1. strong demand and 2. weak demand.
6 UCF
- In decision analysis the possible outcomes for a chance event are referred to as the STATES OF NATURE. The states of nature are defined so that one and only one of the possible states of nature will occur.
- s1: strong demand
- s2: weak demand
7 Payoff Tables
- Given the three decision alternatives and the two states of nature, which stadium should UCF choose? To answer this, UCF needs to know the consequence (payoff) associated with each decision alternative and each state of nature.
- The consequence resulting from a specific combination of a decision alternative and a state of nature is a payoff.
- A table showing payoffs for all combinations of decision alternatives and states of nature is a payoff table.
- Payoffs can be expressed in terms of profit, cost, time, distance, or any other appropriate measure.
8 Payoff Table
- The payoff table, with profits expressed in hundreds of thousands of dollars per game, is shown in Table 1.
- We will use the notation Vij to denote the payoff associated with decision alternative i and state of nature j.

  Table 1. Payoffs (in $100,000s per game)
                          s1 (strong demand)   s2 (weak demand)
  d1 (small stadium)              8                    7
  d2 (medium stadium)            14                    5
  d3 (large stadium)             20                   -9
9 Decision Trees
- A decision tree provides a graphical representation showing the sequential nature of the decision-making process.
- Each decision tree has two types of nodes:
- round nodes (chance nodes) correspond to the chance events; their branches are the states of nature (3 chance nodes in our example, one for each decision alternative)
- square nodes (decision nodes) correspond to the decision alternatives
10 UCF Stadium
- [Decision tree for the UCF stadium problem: a decision node branches on d1, d2, d3, each leading to a chance node with branches s1 (strong demand) and s2 (weak demand) ending at the corresponding payoffs.]
11 Decision Trees
- The branches leaving each round node represent
the different states of nature while the branches
leaving each square node represent the different
decision alternatives.
- At the end of each limb of a tree are the payoffs
attained from the series of branches making up
that limb.
12 Decision Making with Probabilities
- Once we have defined the decision alternatives
and states of nature for the chance events, we
focus on determining probabilities for the states
of nature.
- The classical method, relative frequency method,
or subjective method of assigning probabilities
may be used.
- Because one and only one of the N states of nature can occur, the probabilities must satisfy two conditions:

  P(sj) >= 0 for all states of nature j
  P(s1) + P(s2) + ... + P(sN) = 1
13 Decision Making with Probabilities
- Then we use the expected value approach to
identify the best or recommended decision
alternative.
- The expected value of each decision alternative
is calculated (explained on the next slide).
- The decision alternative yielding the best
expected value is chosen.
14 Expected Value Approach
- The expected value of a decision alternative is the sum of weighted payoffs for the decision alternative.
- The expected value (EV) of decision alternative di is defined as

  EV(di) = P(s1)Vi1 + P(s2)Vi2 + ... + P(sN)ViN   (the sum over j = 1, ..., N of P(sj)Vij)

  where
  N = the number of states of nature
  P(sj) = the probability of state of nature sj
  Vij = the payoff corresponding to decision alternative di and state of nature sj
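- As a concrete illustration of this formula, here is a minimal Python sketch; the function name expected_value and the list-based inputs are illustrative, not part of the text.

def expected_value(probs, payoffs):
    # EV(di): probability-weighted sum of the payoffs Vij for one decision alternative di
    return sum(p * v for p, v in zip(probs, payoffs))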
15 Expected Value: UCF
- UCF is optimistic and assigns subjective probabilities of .8 for strong demand and .2 for weak demand.
- EV(d1) = .8(8) + .2(7) = 7.8
- EV(d2) = .8(14) + .2(5) = 12.2
- EV(d3) = .8(20) + .2(-9) = 14.2
- Thus, using the expected value approach, the large stadium complex, with an expected profit of 14.2 (that is, $1,420,000 per game), is the recommended decision.
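- For example, the UCF expected values above can be reproduced with the sketch from the previous slide (variable names are illustrative; payoffs are in hundreds of thousands of dollars per game):

ucf_probs = [0.8, 0.2]                      # P(s1) strong demand, P(s2) weak demand
ucf_payoffs = {"d1": [8, 7], "d2": [14, 5], "d3": [20, -9]}
ucf_ev = {d: expected_value(ucf_probs, v) for d, v in ucf_payoffs.items()}
# {'d1': 7.8, 'd2': 12.2, 'd3': 14.2} -> recommend d3, the large stadium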
16 Expected Value Using Decision Trees
17 Expected Value Approach
- Burger Prince Restaurant is considering opening a new restaurant on Main Street. It has three different models, each with a different seating capacity. Burger Prince estimates that the average number of customers per hour will be 80, 100, or 120. The payoff table for the three models is on the next slide.
18 Expected Value Approach
  Payoff Table: Average Number of Customers Per Hour
              s1 = 80    s2 = 100   s3 = 120
  Model A     10,000     15,000     14,000
  Model B      8,000     18,000     12,000
  Model C      6,000     16,000     21,000
19 Expected Value Approach
- Calculate the expected value for each decision.
- The decision tree on the next slide can assist in
this calculation.
- Here d1, d2, d3 represent the decision
alternatives of models A, B, and C.
- And s1, s2, s3 represent the states of nature of
80, 100, and 120 customers per hour.
20 Expected Value Approach
- Decision Tree with Payoffs: [decision node 1 branches on d1, d2, d3 to chance nodes 2, 3, 4; each chance node branches on s1 (.4), s2 (.2), s3 (.4) to the payoffs 10,000 / 15,000 / 14,000 for d1, 8,000 / 18,000 / 12,000 for d2, and 6,000 / 16,000 / 21,000 for d3.]
21 Expected Value Approach
  EMV(d1) = .4(10,000) + .2(15,000) + .4(14,000) = 12,600   (Model A, node 2)
  EMV(d2) = .4(8,000) + .2(18,000) + .4(12,000) = 11,600   (Model B, node 3)
  EMV(d3) = .4(6,000) + .2(16,000) + .4(21,000) = 14,000   (Model C, node 4)
- Choose the model with the largest EV: Model C.
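- The same calculation, sketched in Python and reusing the expected_value helper defined earlier (the variable names are illustrative):

priors = [0.4, 0.2, 0.4]                    # P(s1 = 80), P(s2 = 100), P(s3 = 120)
payoffs = {"Model A": [10_000, 15_000, 14_000],
           "Model B": [8_000, 18_000, 12_000],
           "Model C": [6_000, 16_000, 21_000]}
emv = {m: expected_value(priors, v) for m, v in payoffs.items()}
best = max(emv, key=emv.get)                # 'Model C', with EMV = 14,000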
22 Expected Value of Perfect Information
- Frequently, information is available that can
improve the probability estimates for the states
of nature.
- The expected value of perfect information (EVPI)
is the increase in the expected profit that would
result if one knew with certainty which state of
nature would occur.
- The EVPI provides an upper bound on the expected
value of any sample or survey information.
23 Expected Value of Perfect Information
- The expected value of perfect information is defined as

  EVPI = EVwPI - EVwoPI

  where
  EVPI = expected value of perfect information
  EVwPI = expected value with perfect information about the states of nature
  EVwoPI = expected value without perfect information about the states of nature
24 Expected Value of Perfect Information
- Step 1
- Determine the optimal return corresponding
to each state of nature.
- Step 2
- Compute the expected value of these optimal
returns.
- Step 3
- Subtract the EV of the optimal decision
from the amount determined in step (2).
25 Expected Value of Perfect Information
- Calculate the expected value of the optimum payoff for each state of nature and subtract the EV of the optimal decision (a code sketch follows below):

  EVPI = .4(10,000) + .2(18,000) + .4(21,000) - 14,000 = $2,000
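- The three EVPI steps can be sketched as follows, assuming the priors, payoffs, and emv objects from the earlier Burger Prince sketch:

# Steps 1-2: best payoff under each state of nature, weighted by the prior probabilities.
best_per_state = [max(row[j] for row in payoffs.values()) for j in range(len(priors))]
ev_w_pi = expected_value(priors, best_per_state)   # .4(10,000) + .2(18,000) + .4(21,000) = 16,000
# Step 3: subtract the expected value of the optimal decision without perfect information.
evpi = ev_w_pi - max(emv.values())                 # 16,000 - 14,000 = 2,000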
26 Decision Analysis With Sample Information
- Knowledge of sample (survey) information can be
used to revise the probability estimates for
the states of nature.
- Prior to obtaining this information, the
probability estimates for the states of nature
are called prior probabilities.
- With knowledge of conditional probabilities for
the outcomes or indicators of the sample or
survey information, these prior probabilities can
be revised by employing Bayes' Theorem.
- The outcomes of this analysis are called
posterior probabilities or branch probabilities
for decision trees.
27 Decision Analysis With Sample Information
- A decision strategy is a sequence of decisions
and chance outcomes.
- The decisions chosen depend on the yet to be
determined outcomes of chance events.
- The approach used to determine the optimal
decision strategy is based on a backward pass
through the decision tree.
28 Decision Analysis With Sample Information
- Backward Pass Through the Decision Tree
- At Chance Nodes
- Compute the expected value by multiplying the
payoff at the end of each branch by the
corresponding branch probability.
- At Decision Nodes
- Select the decision branch that leads to the
best expected value. This expected value becomes
the expected value at the decision node.
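- A minimal recursive sketch of this backward pass; the tuple-based node representation is an assumption for illustration, not something defined in the text.

def rollback(node):
    # node is one of: ("payoff", value),
    #                 ("chance", [(prob, child), ...]),
    #                 ("decision", [child, ...])      -- assumed representation
    kind, data = node
    if kind == "payoff":
        return data
    if kind == "chance":
        # chance node: expected value over its branches
        return sum(p * rollback(child) for p, child in data)
    # decision node: choose the branch with the best expected value
    return max(rollback(child) for child in data)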
29 Decision Analysis With Sample Information
- Burger Prince must decide whether to purchase a marketing survey from Stanton Marketing for $1,000. The results of the survey are "favorable" or "unfavorable". The conditional probabilities are:

  P(favorable | 80 customers per hour) = .2
  P(favorable | 100 customers per hour) = .5
  P(favorable | 120 customers per hour) = .9
30 Computing Branch Probabilities Using Bayes' Theorem
- Bayes' Theorem can be used to compute branch probabilities for decision trees.
- For the computations we need to know
- the initial (prior) probabilities for the states
of nature,
- the conditional probabilities for the outcomes or
indicators of the sample information given each
state of nature.
- A tabular approach is a convenient method for
carrying out the computations.
31 Computing Branch Probabilities Using Bayes' Theorem
- Step 1: For each state of nature, multiply the prior probability by its conditional probability for the indicator. This gives the joint probabilities for the states and indicator.
- Step 2: Sum these joint probabilities over all states. This gives the marginal probability for the indicator.
- Step 3: For each state, divide its joint probability by the marginal probability for the indicator. This gives the posterior probability distribution.
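- These steps, written as a small Python sketch (the function name posterior_probs is illustrative):

def posterior_probs(priors, conditionals):
    # priors: P(sj); conditionals: P(indicator | sj), in the same order of states
    joints = [p * c for p, c in zip(priors, conditionals)]   # Step 1: joint probabilities
    marginal = sum(joints)                                    # Step 2: P(indicator)
    posteriors = [j / marginal for j in joints]               # Step 3: P(sj | indicator)
    return posteriors, marginal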
32 Posterior Probabilities
Favorable survey result:

  State   Prior   Conditional   Joint   Posterior
  80      .4      .2            .08     .148   (= .08/.54)
  100     .2      .5            .10     .185
  120     .4      .9            .36     .667
                        Total   .54     1.000

  P(favorable) = .54
33 Posterior Probabilities
Unfavorable survey result:

  State   Prior   Conditional   Joint   Posterior
  80      .4      .8            .32     .696   (= .32/.46)
  100     .2      .5            .10     .217
  120     .4      .1            .04     .087
                        Total   .46     1.000

  P(unfavorable) = .46
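- Applying the sketch above to the Burger Prince data reproduces both tables (to three decimal places):

priors = [0.4, 0.2, 0.4]                                   # P(80), P(100), P(120), as before
fav, p_fav = posterior_probs(priors, [0.2, 0.5, 0.9])      # [.148, .185, .667], .54
unfav, p_unfav = posterior_probs(priors, [0.8, 0.5, 0.1])  # [.696, .217, .087], .46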
34 Decision Analysis With Sample Information
- Decision Tree (top half): [favorable survey result, I1 (.54): chance node 1 leads to decision node 2, which branches on d1, d2, d3 to chance nodes 4, 5, 6; each chance node branches on s1 (.148), s2 (.185), s3 (.667) to the payoffs from the payoff table.]
35 Decision Analysis With Sample Information
- Decision Tree (bottom half): [unfavorable survey result, I2 (.46): chance node 1 leads to decision node 3, which branches on d1, d2, d3 to chance nodes 7, 8, 9; each chance node branches on s1 (.696), s2 (.217), s3 (.087) to the payoffs from the payoff table.]
36 Decision Analysis With Sample Information
- If the survey result is favorable, I1 (.54), decision node 2 has value 17,855:
  EMV(node 4, d1) = .148(10,000) + .185(15,000) + .667(14,000) = 13,593
  EMV(node 5, d2) = .148(8,000) + .185(18,000) + .667(12,000) = 12,518
  EMV(node 6, d3) = .148(6,000) + .185(16,000) + .667(21,000) = 17,855
- If the survey result is unfavorable, I2 (.46), decision node 3 has value 11,433:
  EMV(node 7, d1) = .696(10,000) + .217(15,000) + .087(14,000) = 11,433
  EMV(node 8, d2) = .696(8,000) + .217(18,000) + .087(12,000) = 10,518
  EMV(node 9, d3) = .696(6,000) + .217(16,000) + .087(21,000) = 9,475
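- These expected values can be checked with the helpers sketched earlier; using the exact posteriors fav and unfav rather than the rounded ones gives values that agree to within rounding:

emv_fav = {m: expected_value(fav, v) for m, v in payoffs.items()}      # best: Model C, about 17,852
emv_unfav = {m: expected_value(unfav, v) for m, v in payoffs.items()}  # best: Model A, about 11,435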
37 Expected Value of Sample Information
- The expected value of sample information (EVSI) is the additional expected profit possible through knowledge of the sample or survey information.

  EVSI = EVwSI - EVwoSI

  where
  EVSI = expected value of sample information
  EVwSI = expected value with sample information about the states of nature
  EVwoSI = expected value without sample information about the states of nature
38 Expected Value of Sample Information
- Step 1
- Determine the optimal decision and its
expected return for the possible outcomes of the
sample using the posterior probabilities for the
states of nature.
- Step 2
- Compute the expected value of these
optimal returns.
39 Decision Analysis With Sample Information
- [Decision tree with the node values from the backward pass: on the favorable branch I1 (.54), decision node 2 = 17,855 (choose d3; nodes 4, 5, 6 = 13,593, 12,518, 17,855); on the unfavorable branch I2 (.46), decision node 3 = 11,433 (choose d1; nodes 7, 8, 9 = 11,433, 10,518, 9,475).]

  EVwSI = .54(17,855) + .46(11,433) = 14,900.88
40 Expected Value of Sample Information
- If the outcome of the survey is "favorable", choose Model C.
- If the outcome of the survey is "unfavorable", choose Model A.

  EVwSI = .54(17,855) + .46(11,433) = 14,900.88
41 Expected Value of Sample Information
- Subtract the EVwoSI (the expected value of the optimal decision obtained without using the sample information) from the EVwSI:

  EVSI = .54(17,855) + .46(11,433) - 14,000 = 900.88

- Because the EVSI ($900.88) is less than the cost of the survey ($1,000), the survey should not be purchased.
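- A final check of this comparison, reusing the objects from the earlier sketches and the exact posteriors (so the values differ slightly from the rounded figures above); the $1,000 survey cost is from the problem statement:

ev_w_si = p_fav * max(emv_fav.values()) + p_unfav * max(emv_unfav.values())  # about 14,900
evsi = ev_w_si - max(emv.values())       # about 900, which is less than the 1,000 survey cost
# -> Burger Prince should not purchase the survey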
42 End of Chapter 21