Title: SIMS 213: User Interface Design
Slide 1: SIMS 213 User Interface Design and Development
- Marti Hearst
- Tues, April 6, 2004
Slide 2: Today
- Evaluation based on Cognitive Modeling
- Comparing Evaluation Methods
Slide 3: Another Kind of Evaluation
- Evaluation based on cognitive modeling
- Fitts' Law
  - used to predict a user's time to select a target
- Keystroke-Level Model (KLM)
  - a low-level description of what users would have to do to perform a task
- GOMS
  - a structured, multi-level description of what users would have to do to perform a task
Slide 4: GOMS at a Glance
- Proposed by Card, Moran & Newell in 1983
- Applies psychology to CS
  - employs a user model (the MHP) to predict task performance in a UI
  - e.g., task completion time, short-term memory requirements
- Applicable to
  - user interface design and evaluation
  - training and documentation
- An example of automating usability assessment
Slide 5: Model Human Processor (MHP)
- Card, Moran & Newell (1983)
- The most influential model of user interaction
- Used in GOMS analysis
- Three interacting subsystems
  - cognitive, perceptual, and motor
  - each with its own processor and memory
  - described by parameters (e.g., capacity, cycle time)
  - serial and parallel processing
Adapted from a slide by Dan Glaser
Slide 6: Original GOMS (CMN-GOMS)
- Card, Moran & Newell (1983)
- An engineering model of user interaction
- Goals: the user's intentions (tasks)
  - e.g., delete a file, edit text, assist a customer
- Operators: actions performed to complete the task
  - cognitive, perceptual, and motor (per the MHP)
  - low-level (e.g., move the mouse to the menu)
Slide 7: CMN-GOMS
- Engineering model of user interaction (continued)
- Methods: sequences of actions (operators)
  - based on an error-free expert
  - there may be multiple methods for accomplishing the same goal, e.g., a shortcut key vs. a menu selection
- Selection rules: rules for choosing the appropriate method
  - the method is predicted based on context
- A hierarchy of goals and sub-goals
Slide 8: Keystroke-Level Model
- Simpler than CMN-GOMS
- Developed to predict the time needed to accomplish a task on a computer
- Predicts expert, error-free task-completion time from the following inputs:
  - a task or series of subtasks
  - the method used
  - the command language of the system
  - the motor-skill parameters of the user
  - the response-time parameters of the system
- The prediction is the sum of the subtask times plus overhead
Slide 9: KLM-GOMS
(What Raskin refers to simply as GOMS)
[Diagram: the keystroke-level model in two steps. 1. Predict: break the task into actions (Action 1: x sec, Action 2: y sec, Action 3: z sec) and sum their times. 2. Evaluate: compare the total time using interface 1 against the total time using interface 2.]
Slide 10: Symbols and Values

| Operator | Remarks                          | Time (s)                                 |
|----------|----------------------------------|------------------------------------------|
| K        | press key                        | 0.2                                      |
| B        | press mouse button               | 0.1 per press or release (0.2 per click) |
| P        | point with mouse                 | 1.1                                      |
| H        | home hands to and from keyboard  | 0.4                                      |
| D        | drawing (domain dependent)       | varies                                   |
| M        | mentally prepare                 | 1.35                                     |
| R        | response from system             | must be measured                         |

Assumption: an expert user.
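These values plug directly into a small calculator. Below is a minimal sketch in Python (an illustration, not from the slides); it prices B at 0.2 s for a full click, matching the worked example on slide 14, and omits D and R, which must be measured per task.

```python
# KLM operator times in seconds, per the table above. D (drawing) and
# R (system response) are task-specific and must be measured, so they
# are omitted here.
KLM_TIMES = {
    "K": 0.2,    # press a key
    "B": 0.2,    # full mouse click (0.1 for a press or release alone)
    "P": 1.1,    # point with the mouse
    "H": 0.4,    # home hands between keyboard and mouse
    "M": 1.35,   # mentally prepare
}

def klm_time(encoding):
    """Sum the operator times for an encoded string such as 'HMPBHMKKKKMK'."""
    return sum(KLM_TIMES[op] for op in encoding)
```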
Slide 11: Raskin's Rules
Rule 0: Initial insertion of candidate Ms
  Insert an M before each K, and an M before each P if and only if the P selects a command (i.e., not when the P points to an argument).
Rule 1: Deletion of anticipated Ms
  If the operator following an M is fully anticipated by the operator before the M, delete that M (e.g., when you point and then click).
Slide 12: Raskin's Rules
Rule 2: Deletion of Ms within cognitive units
  If a string of MKs belongs to a single cognitive unit, delete all Ms but the first (e.g., typing the number 4564.23).
Rule 3: Deletion of Ms before consecutive terminators
  If a K is a redundant delimiter, delete the M before it (e.g., a closing ")").
Slide 13: Raskin's Rules
Rule 4: Deletion of Ms that are terminators of commands
  If a K is a delimiter that follows a constant string, delete the M in front of it.
Rule 5: Deletion of overlapped Ms
  Do not count any M that overlaps an R.
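Rules 0 through 2 are mechanical enough to sketch in code. The function below is an illustration, not from the slides: the name `klm_encode` and the `unit_continues` parameter are invented here, every P is assumed to select a command, B is treated like K under Rule 0 (matching the expansion on the next slide), and only the point-and-click case of Rule 1 is handled. Deciding which keystrokes form a cognitive unit remains a human judgment, so those positions are passed in explicitly; Rules 3 through 5 are omitted.

```python
def klm_encode(raw, unit_continues=frozenset()):
    """Insert M operators into a raw physical-operator string per Rules 0-2.

    A sketch only: assumes every P selects a command and treats B like K
    for Rule 0. `unit_continues` lists the indices (into `raw`) of
    keystrokes that continue the cognitive unit begun by the previous one.
    """
    out = []
    for i, op in enumerate(raw):
        needs_m = op in "KBP"                      # Rule 0
        if op == "B" and i > 0 and raw[i - 1] == "P":
            needs_m = False                        # Rule 1: click after point is anticipated
        if i in unit_continues:
            needs_m = False                        # Rule 2: one M per cognitive unit
        if needs_m:
            out.append("M")
        out.append(op)
    return "".join(out)

# Anticipating the temperature converter on the next slide: the 2nd-4th
# digits (indices 5-7) continue the unit begun by the first digit; the
# final Enter keystroke starts a new mental step, so it keeps its M.
print(klm_encode("HPBHKKKKK", unit_continues={5, 6, 7}))  # HMPBHMKKKKMK
```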
Slide 14: Example 1
Temperature converter: "Choose which conversion is desired, then type the temperature and press Enter."
  Convert F to C.
  Convert C to F.
Raw operators: HPBHKKKKK
Apply Rule 0: HMPMBHMKMKMKMKMK
Apply Rules 1 and 2: HMPBHMKKKKMK
Convert to numbers: .4 + 1.35 + 1.1 + .2 + .4 + 1.35 + 4(.2) + 1.35 + .2 = 7.15 seconds
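With the `klm_time` sketch from slide 10, the same total falls out directly:

```python
print(round(klm_time("HMPBHMKKKKMK"), 2))  # 7.15 seconds
```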
Slide 16: Example 2
- A GUI temperature interface
- Assume a button for compressing the scale
- Ends up being much slower: the prediction averages 16.8 seconds
Slide 17: Using KLM and Information Theory to Design More Efficient Interfaces (Raskin)
- Armed with knowledge of the minimum information the user has to specify:
  - assume inputting 4 digits on average
  - one more keystroke for C vs. F
  - another keystroke for Enter
- Can we design a more efficient interface?
Slide 18: Using KLM to Make More Efficient Interfaces
"To convert temperatures, type in the numeric temperature, followed by C for Celsius or F for Fahrenheit. The converted temperature will be displayed."

MKKKKMK = 3.7 sec
Slide 19: Using KLM to Make More Efficient Interfaces
- Second alternative: as the digits are typed, the interface translates to both C and F simultaneously

MKKKK = 2.15 sec
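The `klm_time` sketch from slide 10 reproduces both predictions:

```python
print(round(klm_time("MKKKKMK"), 2))  # 3.7: type 4 digits, then C or F
print(round(klm_time("MKKKK"), 2))    # 2.15: 4 digits only; both conversions shown
```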
Slide 20: GOMS in Practice
- Mouse-driven text editor (KLM)
- CAD system (KLM)
- Television control system (NGOMSL)
- Minimalist documentation (NGOMSL)
- Telephone assistance operator workstation (CPM-GOMS)
  - saved about $2 million a year
Slide 21: Drawbacks
- Assumes an expert user
- Assumes error-free usage
- Overall, very idealized
Slide 22: Fitts' Law
Models movement time for selection tasks.
- The movement time for a well-rehearsed selection task
  - increases as the distance to the target increases
  - decreases as the size of the target increases
Slide 23: Fitts' Law
Time (in msec) = a + b log2(D/S + 1)
where a and b are constants (empirically derived), D is the distance to the target, and S is the size of the target. The Index of Difficulty is ID = log2(D/S + 1).
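As a quick illustration (the constants a = 50 msec and b = 150 msec/bit are invented; real values must be fit empirically, as slide 27 describes):

```python
import math

def fitts_time(d, s, a=50.0, b=150.0):
    """Predicted movement time in msec for a target of size s at distance d.
    The default a and b are made up for illustration; they must be derived
    empirically for a real device and population (slide 27)."""
    return a + b * math.log2(d / s + 1)

print(round(fitts_time(d=256, s=16)))  # ~663 msec under these hypothetical constants
```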
Slide 24: Fitts' Law
Time = a + b log2(D/S + 1)
[Diagram: Target 1 and Target 2 with the same D/S ratio]
Same ID → same difficulty
Slide 25: Fitts' Law
Time = a + b log2(D/S + 1)
[Diagram: Target 1 and Target 2]
Smaller ID → easier
Slide 26: Fitts' Law
Time = a + b log2(D/S + 1)
[Diagram: Target 1 and Target 2]
Larger ID → harder
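All three comparisons reduce to comparing the index of difficulty. A quick check with invented distances and sizes, chosen only to illustrate:

```python
import math

def index_of_difficulty(d, s):
    """ID in bits for a target of size s at distance d."""
    return math.log2(d / s + 1)

print(round(index_of_difficulty(256, 16), 2))  # 4.09 bits
print(round(index_of_difficulty(512, 32), 2))  # 4.09 bits: same ID, same difficulty
print(round(index_of_difficulty(128, 32), 2))  # 2.32 bits: smaller ID, easier
print(round(index_of_difficulty(512, 16), 2))  # 5.04 bits: larger ID, harder
```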
Slide 27: Determining the Constants for Fitts' Law
- To determine a and b, design a set of tasks with varying values of D and S (the conditions)
- For each task condition
  - multiple trials are conducted, and the time to execute each is recorded and stored electronically for statistical analysis (a regression, as sketched below)
  - accuracy is also recorded, either through the x-y coordinates of the selection or through the error rate: the percentage of trials selected with the cursor outside the target
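Fitting a and b then amounts to a linear regression of mean movement time against the index of difficulty. A minimal sketch with NumPy; the condition data below are invented purely for illustration:

```python
import math
import numpy as np

# Hypothetical conditions: (distance D, size S, mean observed time in msec).
conditions = [(64, 16, 420), (128, 16, 540), (256, 16, 660), (256, 32, 540)]

ids = np.array([math.log2(d / s + 1) for d, s, _ in conditions])
times = np.array([t for _, _, t in conditions])

b, a = np.polyfit(ids, times, 1)  # slope b and intercept a of T = a + b * ID
print(f"a = {a:.0f} msec, b = {b:.0f} msec/bit")
```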
Slide 28: A Quiz Designed to Give You Fitts
http://www.asktog.com/columns/022DesignedToGiveFitts.html
- Microsoft toolbars offer the user the option of displaying a label below each tool. Name at least one reason why labeled tools can be accessed faster. (Assume, for this, that the user knows the tool and does not need the label simply to identify it.)
Slide 29: A Quiz Designed to Give You Fitts
- The label becomes part of the target. The target is therefore bigger, and bigger targets, all else being equal, can always be accessed faster (Fitts' Law).
- When labels are not used, the tool icons crowd together.
Slide 30: A Quiz Designed to Give You Fitts
- You have a palette of tools in a graphics application that consists of a matrix of 16x16-pixel icons laid out as a 2x8 array that lies along the left-hand edge of the screen. Without moving the array from the left-hand side of the screen or changing the size of the icons, what steps can you take to decrease the time necessary to access the average tool?
Slide 31: A Quiz Designed to Give You Fitts
- Change the array to 1x16, so all the tools lie along the edge of the screen.
- Ensure that the user can click on the very first row of pixels along the edge of the screen to select a tool. There should be no buffer zone (see the sketch below).
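The second answer leans on the same Fitts' Law arithmetic: a target flush against the screen edge cannot be overshot, so its effective depth, and with it S, is far larger than its visible 16 pixels. A rough illustration reusing the hypothetical `fitts_time` sketch from slide 23, with an invented effective depth:

```python
# Same travel distance; only the effective target depth differs.
print(round(fitts_time(d=400, s=16)))   # interior 16px icon: ~755 msec
print(round(fitts_time(d=400, s=200)))  # edge icon, invented 200px effective depth: ~288 msec
```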
Slide 32: Comparing Evaluation Methods
Slide 33: Comparing Evaluation Methods
- "User Interface Evaluation in the Real World: A Comparison of Four Techniques" (Jeffries et al., CHI 1991)
- Compared
  - Heuristic Evaluation (HE): 4 evaluators, 2 weeks' time
  - Software Guidelines (SG): 3 software engineers, familiar with Unix
  - Cognitive Walkthrough (CW): 3 software engineers, familiar with Unix
  - Usability Testing (UT): a usability professional, 6 participants
- The interface: HP-VUE, a GUI for Unix (beta version)
Slide 34: Comparing Evaluation Methods (Jeffries et al., CHI '91)
[Results table from the paper]

Slide 35: Comparing Evaluation Methods (Jeffries et al., CHI '91)
[Results table from the paper; problem severity is rated on a 9-point scale, where higher is more critical]

Slides 36-37: Comparing Evaluation Methods (Jeffries et al., CHI '91)
[Further results tables from the paper]
Slide 38: Comparing Evaluation Methods (Jeffries et al., CHI '91)
- Conclusions
  - HE is best from a cost/benefit standpoint, but requires access to several experienced designers
  - Usability testing was second best: it found recurring, general, and critical errors, but is expensive to conduct
  - Guideline-based evaluators missed a lot but did not realize it
    - they were software engineers, not usability specialists
  - The cognitive walkthrough process was tedious