Title: Design for Human Capabilities
Agenda
- Task Analysis
- Evaluation
  - Predictive evaluation
    - Heuristic evaluation
    - Discount usability testing
    - Cognitive walkthrough
Task Conformance
- Task coverage
  - Can system do all tasks of interest?
- Task adequacy
  - Can user do tasks?
  - Does system match real-world tasks?
Task Analysis
- Analyzing how people do their jobs
- Go to their environment
- Examine users' tasks to better understand what they need from the interface and how they will use it
Task Analysis
- Broad Focus
- Observe users of current system(s)
- Generate requirements
- Hierarchical task analysis
- Knowledge-based task analysis
- Entity-Relationship model
Existing System
- Usually task analysis involves an examination of an existing system, process, or practice
- Watch what they do and how they do it
No Existing System
- Gather documents, talk with knowledgeable people, etc.
- Can still be useful to help generate requirements
Broad Focus
- Don't just focus on computer system artifacts and interactions
- Study related processes and objects in the environment that people may use and be involved with
- Example: an office environment (papers, whiteboards, etc.)
Task Analysis Focus
- Not on internal cognitive state of user (more on that in the near future)
- Focus on observable behaviors
- Observe users, what they do, and how they do it
- What practices, methods, steps, and objects are used?
Key Component
- Requirements
  - Usually involves developing a set of requirements for what the interface should provide
- Example: Get food in cafeteria
  - A. User must be able to see all food items and costs
  - B. Tray and utensils must be easily accessible
  - C. ...
Types of Task Analysis
- 1. Hierarchical task decomposition
- 2. Knowledge-based analysis
- 3. Entity-relationship methods
1. Hierarchical Task Decomposition
- Decompose task into:
  - Subtasks
  - Multiple levels
  - Plans describing ordering and conditions
Common Plans
- Fixed sequence
- Optional tasks
- Waiting for events
- Cycles
- Time-sharing -- parallel
- Discretionary
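As a concrete illustration, here is a minimal Python sketch of one way a hierarchical task decomposition with plans might be represented; the task names and the plan vocabulary are hypothetical examples, not from the lecture:

```python
# A minimal sketch of a hierarchical task decomposition.
# Task names and plan labels are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    plan: str = "fixed sequence"   # e.g. "fixed sequence", "optional", "cycle"
    subtasks: list["Task"] = field(default_factory=list)

make_tea = Task("make a cup of tea", "fixed sequence", [
    Task("boil water", "fixed sequence", [
        Task("fill kettle"),
        Task("switch kettle on"),
        Task("wait for boil"),           # waiting for an event
    ]),
    Task("make pot"),
    Task("add sugar", "optional"),       # discretionary task
])

def show(task: Task, depth: int = 0) -> None:
    """Print the decomposition, one indent level per subtask level."""
    print("  " * depth + f"{task.name}  [{task.plan}]")
    for sub in task.subtasks:
        show(sub, depth + 1)

show(make_tea)
```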
2. Knowledge-based
- List all objects and actions involved in a task, then build a taxonomy of them
- Oftentimes, work with a domain expert to get help
Methodology
- Sample procedure:
  - Get 3x5 cards
  - Put a different object/action on each
  - Don't worry about repetition at this point!
  - Group into piles, subpiles, etc.
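A minimal sketch of what the sorted piles might look like once recorded; the card contents and pile names are hypothetical examples:

```python
# A minimal sketch of turning sorted card piles into a taxonomy.
# Cards and pile names are hypothetical; grouping was done by hand.
cards = ["knife", "fork", "tray", "pay cashier", "choose entree", "choose drink"]

taxonomy = {
    "objects": {
        "utensils": ["knife", "fork"],
        "carrying": ["tray"],
    },
    "actions": {
        "choosing": ["choose entree", "choose drink"],
        "paying": ["pay cashier"],
    },
}

def outline(node, depth=0):
    """Print the taxonomy as an indented outline (e.g. for manual sections)."""
    for key, value in node.items():
        print("  " * depth + key)
        if isinstance(value, dict):
            outline(value, depth + 1)
        else:
            for leaf in value:
                print("  " * (depth + 1) + leaf)

outline(taxonomy)
```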
Utility
- This type of task analysis can be very useful when you're writing a manual or some documentation
- Taxonomy maps naturally onto document sections
3. Entity-Relationship
- Object-based methodology, with a real stress on the relationships between objects and actions
- Involves:
  - Concrete objects
  - Actors
  - Composite objects
Example
- Task: Develop design for final project
- Objects: pens, paper, drawing tools, etc.
- Actors: Mary, Bob, Sally
- Composite objects: the team
Methodology
- Often list attributes and actions of objects, e.g.:
  - Object: pen (simple)
    - Attributes: color = red; writing = on/off
  - Object: Mary (actor)
    - Actions: M1 = make a sketch; M2 = organize meeting
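A minimal Python sketch of how such an entity-relationship listing could be captured; the class names and the relationships list are illustrative assumptions, not part of the method's definition:

```python
# A minimal sketch of an entity-relationship style task model.
# Class and field names are illustrative, not a standard API.
from dataclasses import dataclass, field

@dataclass
class Obj:                     # concrete object with attributes
    name: str
    attributes: dict = field(default_factory=dict)

@dataclass
class Actor:                   # human agent with actions
    name: str
    actions: dict = field(default_factory=dict)   # id -> description

pen = Obj("pen", {"color": "red", "writing": "on/off"})
mary = Actor("Mary", {"M1": "make a sketch", "M2": "organize meeting"})
team = [mary, Actor("Bob"), Actor("Sally")]       # composite object

# Relationships pair an actor's action with the objects it involves.
relationships = [("Mary", "M1", "pen")]           # Mary sketches with the pen
for actor, action, obj in relationships:
    print(f"{actor} performs {action} using {obj}")
```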
Sources of Information
- Documentation, training manuals
  - Beware: they often say what is supposed to happen, not what happens in real life
- Observation
  - Make sure to just watch, get a feel for what the task involves
- Interviews
  - Why do you do that?
  - What happens if something goes wrong?
Use
- Produce documentation
  - Training, manuals, tutorials
- Requirements capture and system design
  - Helps you define requirements document
  - Helps decide what should be included
- Helps interface design
  - Hierarchical breakdown might feed menu design (see the sketch below)
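For instance, a minimal sketch of how a hierarchical breakdown might feed a nested menu structure; the task tree below is a hypothetical example:

```python
# A minimal sketch of deriving a menu hierarchy from a task breakdown.
# The task tree is hypothetical, not from the lecture.
task_tree = {
    "Manage documents": {
        "Create document": {},
        "Open document": {},
    },
    "Edit content": {
        "Insert text": {},
        "Format text": {"Bold": {}, "Italic": {}},
    },
}

def to_menu(tree: dict, depth: int = 0) -> None:
    """Each task level becomes a menu level: top tasks -> menus, subtasks -> items."""
    for label, children in tree.items():
        print("  " * depth + label)
        to_menu(children, depth + 1)

to_menu(task_tree)
```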
Evaluation
- Gathering data about usability of a design by a
specified group of users for a particular
activity within a specified environment
Goals
- 1. Assess extent of system's functionality
- 2. Assess effect of interface on user
- 3. Identify specific problems with system
Forms
- Summative
  - After a system has been finished; make judgments about the final item
- Formative
  - As project is forming; all through the lifecycle; early, continuous
Approaches
- Experimental (lab studies)
  - Typically in a closed lab setting
  - Manipulate independent variables to see effect on dependent variables
- Naturalistic (field studies)
  - Observation occurs in a real-life setting
  - Watch process over time
  - Ecologically valid
Tradeoffs
- Experimental
  - May be able to isolate cause and effect
  - Ecologically valid?
- Naturalistic
  - No experimental control
Evaluation Methods
- 1. Experimental/Observational Evaluation
  - a. Collecting user opinions
  - b. Observing usage
  - c. Experiments (usability specifications)
- 2. Predictive Evaluation
- 3. Interpretive Evaluation
Predictive Evaluation
- Basis
  - Observing users can be time-consuming and expensive
  - Try to predict usage rather than observing it directly
  - Conserve resources (quick, low cost)
Approach
- Expert reviews
  - HCI experts interact with system and try to find potential problems and give prescriptive feedback
- Best if evaluators:
  - Haven't used earlier prototype
  - Are familiar with domain or task
  - Understand user perspectives
Methods
- 1. Heuristic Evaluation
- 2. Discount usability testing
- 3. Cognitive Walkthrough
- 4. User Modeling
Heuristic Evaluation
- Developed by Jakob Nielsen
- Several evaluators assess system based on simple
and general heuristics (principles or rules of
thumb)
Procedure
- 1. Gather inputs
- 2. Evaluate system
- 3. Debriefing and collection
- 4. Severity rating
Gather Inputs
- Who are evaluators?
- Need to learn about domain, its practices
- Get the prototype to be studied
- May vary from mock-ups and storyboards to a
working system
Evaluation Method
- Reviewers evaluate system based on high-level heuristics:
  - Use simple and natural dialog
  - Speak the users' language
  - Minimize memory load
  - Be consistent
  - Provide feedback
  - Provide clearly marked exits
  - Provide shortcuts
  - Provide good error messages
  - Prevent errors
Updated Heuristics
- Visibility of system status
- Match between system and real world
- User control and freedom
- Consistency and standards
- Error prevention
- Recognition rather than recall
- Flexibility and efficiency of use
- Aesthetic and minimalist design
- Recognition, diagnosis, and recovery from errors
- Help and documentation
Process
- Perform two or more passes through system, inspecting:
  - Flow from screen to screen
  - Each screen
- Evaluate against heuristics
- Find problems
  - Subjective
  - Don't dwell on whether it is or isn't a "real" problem
Debriefing
- Organize all problems found by different reviewers
- At this point, decide what are and aren't problems
- Group, structure
Severity Rating
- 0-4 rating scale
- Based on
- frequency
- impact
- persistence
- market impact
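One plausible way to tabulate the debriefed problems and their 0-4 ratings; this sketch simply averages per-evaluator scores, and the problem names, heuristics, and scores are hypothetical:

```python
# A minimal sketch of aggregating severity ratings from several
# evaluators; problems and scores are hypothetical examples.
from statistics import mean

# Each problem: heuristic violated and per-evaluator 0-4 ratings.
ratings = {
    "no undo on delete":       {"heuristic": "user control and freedom", "scores": [4, 3, 4]},
    "jargon in error message": {"heuristic": "speak the users' language", "scores": [2, 2, 3]},
}

# Rank problems by mean severity so the worst get fixed first.
for name, info in sorted(ratings.items(), key=lambda kv: -mean(kv[1]["scores"])):
    print(f"{mean(info['scores']):.1f}  {name}  ({info['heuristic']})")
```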
Advantage
- Cheap; good for small companies that can't afford more
- Getting someone practiced in the method is valuable
Application
- Nielsen found that about five evaluators found 75% of the problems
- Above that you get more, but at decreasing efficiency
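The diminishing-returns curve behind this figure is commonly modeled (Nielsen and Landauer) as found(i) = N(1 - (1 - lambda)^i), where lambda is the fraction of problems a single evaluator finds. A minimal sketch, assuming an illustrative lambda = 0.24 (published estimates vary by study):

```python
# A minimal sketch of the Nielsen-Landauer model for evaluator counts.
# lam is an assumed illustrative value, not a universal constant.

def proportion_found(i: int, lam: float = 0.24) -> float:
    """Expected fraction of usability problems found by i evaluators."""
    return 1 - (1 - lam) ** i

for i in range(1, 11):
    print(f"{i} evaluators: {proportion_found(i):.0%}")
# With lam = 0.24, five evaluators find roughly 75% of problems,
# matching the slide's figure; each extra evaluator adds less.
```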
Discount Usability Testing
- Hybrid of empirical usability testing and heuristic evaluation
- Have 2 or 3 think-aloud user sessions with paper or prototype-produced mock-ups
Cognitive Walkthrough
- From Polson, Lewis, and colleagues at UC Boulder
- Like a code walkthrough (software engineering)
- Assess learnability and usability through simulation of the way users explore and become familiar with an interactive system
CW Process
- Construct carefully designed tasks from system spec or screen mock-up
- Walk through (cognitive, operational) activities required to go from one screen to another
- Review actions needed for task; attempt to predict how users would behave and what problems they'll encounter
Requirements
- Description of users and their backgrounds
- Description of task user is to perform
- Complete list of the actions required to complete task
- Prototype or description of system
Assumptions
- User has rough plan
- User explores system, looking for actions that contribute to performing the task
- User selects the action that seems best for the desired goal
- User interprets response and assesses whether progress has been made toward completing task
Methodology
- Step through action sequence
  - Action 1
    - Response A, B, ...
  - Action 2
    - Response A
  - ...
- For each one, ask four questions
CW Questions
- 1. Will users be trying to produce whatever effect the action has?
- 2. Will users be able to notice that the correct action is available?
- 3. Once found, will they know it's the right one for the desired effect?
- 4. Will users understand the feedback after the action?
Answering the Questions
- 1. Will user be trying to produce effect?
  - Typical supporting evidence:
    - It is part of their original task
    - They have experience using the system
    - The system tells them to do it
  - No evidence?
    - Construct a failure scenario
    - Explain, back up opinion
Another Question
- 2. Will user notice action is available?
  - Typical supporting evidence:
    - Experience
    - Visible device, such as a button
    - Perceivable representation of an action, such as a menu item
Example
- Task: Program a VCR
- List actions
- Ask the four questions for each action (see the sketch below)
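A minimal sketch of recording such a walkthrough; the VCR actions, answers, and notes are hypothetical, while the four questions are the standard CW questions above:

```python
# A minimal sketch of a cognitive walkthrough record.
# Actions, answers, and notes are hypothetical examples.
QUESTIONS = [
    "Will users be trying to produce this effect?",
    "Will users notice the correct action is available?",
    "Will they know it's the right action for the effect?",
    "Will users understand the feedback?",
]

# For each action, record yes/no per question plus a note;
# any "no" is a candidate failure scenario to write up.
walkthrough = [
    ("press PROGRAM button", [True, False, True, True],
     "button is unlabeled on some remotes"),
    ("enter start time", [True, True, True, False],
     "display gives no confirmation that the time was accepted"),
]

for action, answers, note in walkthrough:
    print(action)
    for question, ok in zip(QUESTIONS, answers):
        print(f"  [{'ok' if ok else 'PROBLEM'}] {question}")
    if not all(answers):
        print(f"  failure scenario: {note}")
```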