Title: Small Decision-Making under Uncertainty and Risk
1. Small Decision-Making under Uncertainty and Risk
Takemi Fujikawa, University of Western Sydney, Australia
Agenda
- Introduction
- Experimental Design
- Experiment 1
- Experiment 2
- Conclusion
2. Introduction
- This presentation attempts to
  - examine behavioural tendency in Small Decision-Making (SDM) problems
  - present results of two experiments on SDM problems
  - introduce the search-assessment model
  - introduce the EU model for SDM problems
3. What are SDM problems?
Introduction
- SDM problems involve repeated tasks: the decision makers (DMs) face repeated-play choice problems.
- Each single choice is trivial: the alternatives have very similar but fairly small EVs.
- Little time and effort is typically invested in SDM problems.
- The DMs have to rely on the feedback obtained from their past decisions.
4. Experimental Design
- Search treatment (Experiment 1)
  - Experiment 1 was conducted without giving subjects prior information on the payoff structure.
  - Purpose: to construct the search-assessment model.
- Choice treatment (Experiment 2)
  - Experiment 2 was conducted with subjects given prior information on the payoff structure.
  - Purpose: to construct the EU model.
5. Experimental Design
- Experiment 1 and Experiment 2 were conducted in that order at the Kyoto Sangyo University Experimental Economics Laboratory (KEEL).
- Forty-two undergraduates at KSU served as paid subjects and participated in both experiments.
- Subjects received cash contingent upon performance (i.e., the points they earned).
- Exchange rate: 1 point = 0.3 yen (about 0.25 US cents).
6. Choice Problems
Experimental Design
- Each experiment consists of Problems 1, 2 and 3.
- Each problem consists of 400 rounds.
- Subjects are asked to choose either H or L 400 times.
- In each round t (t = 1, 2, ..., 400), subjects are asked to choose either H or L.

Problem 1: H = (4, 0.8; 0, 0.2), L = (3, 1)
Problem 2: H = (4, 0.2; 0, 0.8), L = (3, 0.25; 0, 0.75)
Problem 3: H = (32, 0.1; 0, 0.9), L = (3, 1)

(Notation: each prospect lists payoff-probability pairs, e.g., H = (4, 0.8; 0, 0.2) pays 4 points with probability 0.8 and 0 points with probability 0.2.)
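For reference, the expected values implied by these prospects can be computed directly. A minimal Python sketch (the dictionary layout and names are illustrative, not part of the experiment software):

```python
# Each prospect is a list of (payoff, probability) pairs.
problems = {
    "Problem 1": {"H": [(4, 0.8), (0, 0.2)], "L": [(3, 1.0)]},
    "Problem 2": {"H": [(4, 0.2), (0, 0.8)], "L": [(3, 0.25), (0, 0.75)]},
    "Problem 3": {"H": [(32, 0.1), (0, 0.9)], "L": [(3, 1.0)]},
}

def ev(prospect):
    """Expected value of a list of (payoff, probability) pairs."""
    return sum(payoff * prob for payoff, prob in prospect)

for name, options in problems.items():
    print(name, {label: ev(prospect) for label, prospect in options.items()})
```

In every problem H has the higher expected value (3.2 vs 3 in Problems 1 and 3, 0.8 vs 0.75 in Problem 2), so an EV-maximiser would choose H in all rounds.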
7. Experiment 1: Search in SDM problems
- Subjects in Experiment 1 are NOT informed of the payoff structure.
8. Experimental screen
Experiment 1
Problem 1: H = (4, 0.8; 0, 0.2), L = (3, 1)
Problem 2: H = (4, 0.2; 0, 0.8), L = (3, 0.25; 0, 0.75)
Problem 3: H = (32, 0.1; 0, 0.9), L = (3, 1)
The basic task in each problem was a binary choice between two buttons, repeated 400 times, without giving subjects prior information on the payoff structure.
9. Results of Experiment 1
Experiment 1
- choiceH: the mean proportion of H choices. For example, if a subject has chosen H 100 out of 400 times, then her choiceH is 0.25.
- posteriorH: the posterior average points of H. For example, if she chose H 10 times in Problem 1 and unluckily got 24 points, then her posteriorH is 2.4 (= 24/10). Note that posteriorH may or may not be the same as EV(H). (Both measures are computed in the sketch below.)

Problem 1: H = (4, 0.8; 0, 0.2), L = (3, 1)
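A minimal sketch of how the two measures can be computed from a single subject's choice and payoff history (a hypothetical helper, not the paper's analysis code):

```python
def choice_and_posterior_H(choices, payoffs):
    """choices: 'H'/'L' labels over the rounds; payoffs: points earned each round.

    Returns (choiceH, posteriorH): the proportion of H choices and the average
    payoff observed on the rounds in which H was chosen. posteriorH is None if
    H was never tried, and it need not equal EV(H).
    """
    h_payoffs = [pay for ch, pay in zip(choices, payoffs) if ch == "H"]
    choice_h = len(h_payoffs) / len(choices)
    posterior_h = sum(h_payoffs) / len(h_payoffs) if h_payoffs else None
    return choice_h, posterior_h
```

For the example on this slide, a subject who chose H 10 times in Problem 1 and earned 24 points on those rounds gets posteriorH = 2.4 even though EV(H) = 3.2.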
10. Experiment 1
Results: choiceH
Problem 1 (choiceH = 0.48): H = (4, 0.8; 0, 0.2), L = (3, 1)
Problem 2 (choiceH = 0.55): H = (4, 0.2; 0, 0.8), L = (3, 0.25; 0, 0.75)
Problem 3 (choiceH = 0.22): H = (32, 0.1; 0, 0.9), L = (3, 1)
11. Experiment 1
The tendency to select the best reply to past outcomes
Problem 3 (choiceH = 0.22): H = (32, 0.1; 0, 0.9), L = (3, 1)
- After the first 100 trials, posteriorH had become around 1.6.
- Subjects may then have judged subjectively that EV(H) ≈ 1.6 and hence that EV(H) < EV(L). (A binomial check of how easily this happens is sketched below.)
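The size of this underestimation risk can be checked with a binomial calculation. A sketch, assuming for illustration that a subject tried H about 22 times in the first 100 rounds (consistent with choiceH ≈ 0.22; individual trial counts are not reported here):

```python
from math import comb

def prob_posteriorH_at_most(n_h, cap, payoff=32, p=0.1):
    """P(posteriorH <= cap) after n_h H-choices in Problem 3.

    posteriorH = payoff * k / n_h, where k ~ Binomial(n_h, p) counts the
    rounds on which the 32-point payoff was realised.
    """
    k_max = int(cap * n_h // payoff)  # largest k with payoff * k / n_h <= cap
    return sum(comb(n_h, k) * p**k * (1 - p)**(n_h - k) for k in range(k_max + 1))

n_h = 22                                      # assumed number of H trials
print(prob_posteriorH_at_most(n_h, cap=1.6))  # posteriorH no higher than about 1.6
print(prob_posteriorH_at_most(n_h, cap=3.0))  # posteriorH at or below EV(L) = 3
```

With this assumed count, the chance of having seen the 32-point payoff at most once (a posteriorH of roughly 1.5 or less) is about a third, and the chance that posteriorH sits at or below EV(L) = 3 is over 60 per cent, so a subject who best-replies to observed averages will often abandon H.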
12. Analysis
Experiment 1
- Subjects are not told the payoff structure in Experiment 1.
- In Experiment 1, the information available to subjects is limited to feedback about the outcomes of their previous decisions.
- Since the payoff distributions are undisclosed, subjects have to discover the payoff structure by trying both alternatives.
13. The search-assessment model
Experiment 1
- Recall that only one alternative includes an uncertain prospect in Problems 1 and 3.
- To investigate Problems 1 and 3, the following Problem A is examined.
- Suppose each DM in Problem A is asked to choose either H or L at each round t (t = 1, 2, ..., 400).

Problem A: H = (x, p; 0, 1 - p), L = (1, 1), where 0 < p < 1 and px > 1.

Problem 1 (choiceH = 0.48): H = (4, 0.8; 0, 0.2), L = (3, 1)
Problem 3 (choiceH = 0.22): H = (32, 0.1; 0, 0.9), L = (3, 1)
14. Experiment 1
- If she chooses H m times and obtains the outcome x on k of those choices, then her posteriorH is greater than or equal to 1, which is EV(L), with probability P_H(m) (a reconstruction of P_H(m) is given below).
- This allows us to analyse the number of H choices required for judging that EV(H) > EV(L).

Problem A: H = (x, p; 0, 1 - p), L = (1, 1), where 0 < p < 1 and px > 1.

Problem 1 (choiceH = 0.48): H = (4, 0.8; 0, 0.2), L = (3, 1)
Problem 3 (choiceH = 0.22): H = (32, 0.1; 0, 0.9), L = (3, 1)
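The expression for P_H(m) is not reproduced in this text, but it follows from the definitions above as a binomial tail probability (a reconstruction, assuming the outcomes of the m H-choices are independent draws): posteriorH = xk/m is at least EV(L) = 1 exactly when k ≥ m/x, so with K ~ Binomial(m, p),

$$
P_H(m) \;=\; \Pr\!\left(\frac{xK}{m} \ge 1\right)
\;=\; \sum_{k=\lceil m/x \rceil}^{m} \binom{m}{k} p^{k} (1-p)^{m-k}.
$$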
15. P_H(m) for Problem 3
Experiment 1
- P_H(m) is calibrated by setting p = 0.1 and x = 32/3.
- The calibration implies that the probability that posteriorH exceeds 3 does not exceed 0.98 until H is chosen about 10,000 times in Problem 3 (see the numerical check below).

Problem 3 (choiceH = 0.22): H = (32, 0.1; 0, 0.9), L = (3, 1)
Problem A: H = (x, p; 0, 1 - p), L = (1, 1), where 0 < p < 1 and px > 1.
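A small numerical check of this calibration, using the binomial-tail reconstruction above (evaluated in log space so that large m is feasible; the sample values of m are illustrative, not the paper's own table):

```python
import math

def P_H(m, p=0.1, x=32 / 3):
    """P(posteriorH >= EV(L)) after m H-choices, i.e. P(K >= m/x), K ~ Binomial(m, p)."""
    k_min = math.ceil(m / x)
    total = 0.0
    for k in range(k_min, m + 1):
        log_pmf = (math.lgamma(m + 1) - math.lgamma(k + 1) - math.lgamma(m - k + 1)
                   + k * math.log(p) + (m - k) * math.log1p(-p))
        total += math.exp(log_pmf)
    return total

for m in (100, 400, 1000, 4000, 10000):
    print(m, round(P_H(m), 3))
```

Even a subject who chose H in every one of the 400 rounds would face a sizeable chance that her posteriorH ends up below 3, and the probability of ranking the alternatives correctly only approaches 0.98 once m is on the order of 10,000, in line with the calibration above.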
16. Experiment 2: Choice in SDM problems
- Subjects in Experiment 2 are clearly informed of the payoff structure.
17. Experiment 2
Experimental screen
Problem 1: H = (4, 0.8; 0, 0.2), L = (3, 1)
Problem 2: H = (4, 0.2; 0, 0.8), L = (3, 0.25; 0, 0.75)
Problem 3: H = (32, 0.1; 0, 0.9), L = (3, 1)
The basic task in each problem was a binary choice between two buttons, repeated 400 times, with prior information on the payoff structure.
18. Results: choiceH
Experiment 2
Problem 1 (choiceH = 0.63): H = (4, 0.8; 0, 0.2), L = (3, 1)
Problem 2 (choiceH = 0.69): H = (4, 0.2; 0, 0.8), L = (3, 0.25; 0, 0.75)
Problem 3 (choiceH = 0.4): H = (32, 0.1; 0, 0.9), L = (3, 1)
19. Analysis
Experiment 2
- Is it an optimal decision for a risk-averse DM to choose both H and L within 400 trials?
- The results of Experiment 2 can be analysed within the framework of EUT, since subjects are told the payoff structure.
- Making objective probabilities available to subjects allows a direct evaluation of EUT.
- In analysing the results, this paper presumes that subjects are asked, once and for all, how many of the 400 rounds they are willing to choose H.
20. Experiment 2
- The utility function u(x) is considered.
- To investigate optimal behaviour in Problems 1 and 3, we employ a risk-averse utility function.

Problem 1 (choiceH = 0.63): H = (4, 0.8; 0, 0.2), L = (3, 1)
Problem 3 (choiceH = 0.4): H = (32, 0.1; 0, 0.9), L = (3, 1)
21. The EU model for Problem 1
Experiment 2
- Let V1(m) be the EU she acquires when choosing H m (≤ 400) times in Problem 1 (a reconstruction of V1(m) is sketched below),
- where k is the number of times the highest payoff of H in Problem 1 (i.e., 4 points) is realised.
- How many times out of 400 should the DM choose H to maximise V1(m)?

Problem 1 (choiceH = 0.63): H = (4, 0.8; 0, 0.2), L = (3, 1)
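The exact expression for V1(m) is not reproduced in this text. A plausible reconstruction consistent with the verbal definition, assuming utility is taken over the total points earned across the 400 rounds and that k is binomially distributed, is

$$
V_1(m) \;=\; \sum_{k=0}^{m} \binom{m}{k} (0.8)^{k} (0.2)^{m-k}\,
u\bigl(4k + 3(400 - m)\bigr), \qquad 0 \le m \le 400,
$$

where the 400 − m rounds allocated to L each pay 3 points for sure. With the paper's calibrated risk-averse u, this EU is reported on the next slide to be maximised at m = 252.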
22. Analysis of Problem 1
Experiment 2
- V1(m) has its maximum at m = 252.
- The theoretically optimal number of H choices is 252 out of 400.
- The DM can maximise EU by choosing H 252 times out of 400.
- This coincides exactly with the results of Experiment 2, in which H was chosen 252 times on average (choiceH = 0.63).

Problem 1 (choiceH = 0.63): H = (4, 0.8; 0, 0.2), L = (3, 1)
23. Conclusion: Experiment 1 (search in SDM problems)
- Experiment 1 includes simple binary choice problems without giving subjects any information on the payoff structure.
- I have presented the search-assessment model, which
  - shows that the probability that subjects misestimate the payoff structure is large with only 400 rounds, even in simple SDM problems, and
  - implies that subjects are likely to misjudge the structure in such a way that EV(H) < EV(L).
24. Conclusion: Experiment 2 (choice in SDM problems)
- Experiment 2 is conducted with subjects given prior information on the payoff structure.
- Hence, the results can be analysed within the framework of EUT.
- In Experiment 2, subjects choose both H and L in each choice problem.
- This paper presents the EU models, which reveal that, to maximise EU, it is theoretically optimal to choose H often, but not all the time, within the given trials.
25. References
- Allais, M. (1953). Le Comportement de l'Homme Rationnel devant le Risque: Critique des Postulats et Axiomes de l'Ecole Americaine. Econometrica, 21(4), 503-46.
- Barron, G., & Erev, I. (2003). Small Feedback-Based Decisions and Their Limited Correspondence to Description-Based Decisions. Journal of Behavioral Decision Making, 16(3), 215-33.
- Erev, I., & Barron, G. (2005). On Adaptation, Maximization, and Reinforcement Learning Among Cognitive Strategies. Psychological Review, 112(4), 912-31.
- Fujikawa, T. (2005). An Experimental Study of Petty Corrupt Behaviour in Small Decision Making Problems. American Journal of Applied Sciences, Special issue, 14-18.
- Fujikawa, T., & Oda, S. H. (2005). A Laboratory Study of Bayesian Updating in Small Feedback-Based Decision Problems. American Journal of Applied Sciences, 2(7), 1129-33.
- Kahneman, D., & Tversky, A. (1979). Prospect Theory: An Analysis of Decision under Risk. Econometrica, 47(2), 263-91.