Title: HCI Evaluation Projects
1. HCI Evaluation Projects
- Evaluate the HCI design of three types of applications (you choose the specific ones, but try to choose ones with somewhat complex interaction)
- A web site
- A general-use application
- Word processor, fax utility, mail reader, etc.
- A special-use application focused on your profession
- Software code editor, debugger, web page builder, system administration tool, project management tool, etc.
- For each application, evaluate for two distinct personas
- For you
- For your opposite (apologist/survivor, power user/naïve user, technologist/techno-averse, etc.)
- Be sure to characterize the persona and their goals, then evaluate the interaction design from the perspective of that persona
- In the HCI Design Projects, you will re-design an interaction for one of these applications (or a new one)
2. HCI Evaluation Projects: An Addition
- In choosing a system to evaluate, consider an embedded system
- automobile navigation
- a cell phone's phone book
- etc.
- Combine an X with a computer, and what do you get?
3. Usability
- From Larry L. Constantine and Lucy A. D. Lockwood, Software for Use: A Practical Guide to the Models and Methods of Usage-Centered Design, Addison-Wesley, Reading, MA, 1999.
4. Some Claims
- Many systems have been constructed with development focused almost entirely on internals, on processing logic and data organization to fulfill narrow concepts of functional objectives.
- If it works, some would argue, that is enough.
- As we will see, by focusing first on use and usability rather than on features or functionality, on users and usage more than on user interfaces, systems can be turned into better tools for the job that are smaller, simpler, and ultimately less expensive.
5. Homo habilis: Handy Man
- Homo habilis developed a characteristic that is uniquely human: the technique of making tools
- Humans are tool users
- All software systems are tools, so software developers are tool builders
- Avram Miller: "We are toolmakers, not artists"
- Master craftsmen have special-purpose tools
- Many of our tools are used to build other tools
- Grow your own toolkit for toolmakers
6. Quality of Use
- What makes a thing useful?
- Utility
- The system does something worthwhile
- useful enough to justify its expense
- Capability
- Able to do what it is supposed to do
- Utility and Capability are necessary, but not sufficient
- Users must be able to get the software to do its thing
- Usability
7. Usability
- Highly usable systems are easy for people to learn to use and easy for people to use productively
- Help people work more efficiently and make fewer mistakes
- The best systems give pleasure and satisfaction
- They make people feel good about using them
- Five facets of usability
- Learn-ability
- Remember-ability
- Efficiency in use
- Reliability in use
- User satisfaction
- Note the need for engineering trade-offs
- e.g., easy to learn via step-by-step instructions may slow down a well-trained user
8. Economics of Usability
- Developer costs: frequently cited reasons for project cost overruns
- change requests from users
- overlooked but necessary requirements
- users' lack of understanding of their requirements
- insufficient user-developer communication
- → need emphasis on building an understanding of the real requirements of users
- User costs
- time spent climbing the learning curve
- lower productivity than is possible
- user stress, frustration, lowered morale → mistakes, quitting
- technical support
- Over time, more features result in LESS usability
9. Approaching Usability
- Traditional approaches to improving usability
- Usability testing
- Style guides and standards
- Expert consultation
- Iterative prototyping
- Alternative: Usage-Centered Design
10. Usability Testing
- Usability labs vs. field testing
- Note: beta testing is a poor way to do usability testing
- Cannot test quality into a product
- It may be too late or too expensive to fix a product that fails testing
- Get some benefit by usability testing of prototypes or storyboards
- Testing only exercises anticipated use
- Testing implies an inexperienced user (no one has used the new system yet)
- Usability tests find isolated, focused defects, not defects in the overall architecture or organization of the interaction
- Use usability testing to focus on specific issues
- e.g., the viability of a novel user interface feature, or comparing alternative design solutions to the same problem
- e.g., assuring no lingering defects before product shipment
11. Style Guides and Standards
- Platform-specific style guides (e.g., Windows and Mac)
- Required for a conformance "stamp of approval" even though they may be wrong for the application at hand
- In-house style guides
- Standards promote consistency, and consistency is a significant factor in making user interfaces easier to learn and remember
- Well-conceived standards and style guidelines can reflect the best practices in user interface design
- Standards and guides can save developers time
- Re-use, don't re-invent
- Problems: too big, inconsistent, not followed, incomplete, poorly designed
12. Iterative Development
- Problem: prototyping is not a substitute for analysis and design, nor an excuse for sloppy thinking
- Prototyping
- Test feasibility of an approach
- Serve as a proof-of-concept for a radically new approach
- Effective tool for communication between developers and users
- Be careful that the user (or developer) does not see a sophisticated prototype as next to the real thing
- Software engineering is the only engineering discipline that tries to sell prototypes as finished goods
13. Design Reviews and Expert Opinion
- Simple, informal ways to solicit comments and suggestions and to promote collaboration
- Avoid long debates and discussions
- Software professionals tend to have strong opinions
- Quality of review reflects the (in)experience of the reviewers
- Expert review is often expensive, but cost-effective
- Experts are hard to find and trust
- Expert review does not transfer skills or knowledge
- The developers are as clueless as they were before as to why a given design has usability problems
14. Built-In Usability: Usage-Centered Design
- Interface with Users
- Design as dialog
- Usage-Centered Design
- Pragmatic design guidelines
- Model-driven design process
- Organized development activities
- Iterative improvement
- Measures of quality
15. Model-Driven Design Process
- Core, essential, abstract
- Role model
- actors and their relationships to the system
- Task models
- use cases and their structure
- Content models
- tools and materials to be provided by the user interface, and their interconnections
- Mapping to the actual environment
- Operational models
- operational profiles
- Implementation model
- visual design
16. Measuring Usability
- Preference metrics quantify the subjective evaluations and preferences of users
- Performance metrics measure the actual use of working software
- Predictive metrics (design metrics) assess the quality of designs or prototypes, providing predictions of the actual performance that can be expected once the final system is implemented and used
17. Preference Metrics: SUMI
- Software Usability Measurement Inventory (SUMI)
- A 50-item questionnaire that measures subjective aspects of usability (a scoring sketch follows this slide)
- Affect: how much the user likes the design
- Efficiency: how well the software enables productive use
- Helpfulness: how supportive the software and documentation are
- Control: how consistent and normal the software response is
- Learnability: how easy the software is to explore and master
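A minimal Python scoring sketch for a SUMI-style questionnaire: it averages item responses within each of the five scales listed above. The item-to-scale mapping and the 1-3 answer coding are assumptions for illustration only; the real SUMI instrument has its own standardized items and scoring.

    from collections import defaultdict
    from statistics import mean

    def scale_scores(responses, item_scale):
        """responses: {item_id: numeric answer}; item_scale: {item_id: scale name} (assumed mapping)."""
        by_scale = defaultdict(list)
        for item, answer in responses.items():
            by_scale[item_scale[item]].append(answer)
        return {scale: mean(values) for scale, values in by_scale.items()}

    # Hypothetical mapping of a few items to the five scales, and one user's answers (1 = disagree, 3 = agree)
    item_scale = {1: "Affect", 2: "Efficiency", 3: "Helpfulness", 4: "Control", 5: "Learnability", 6: "Affect"}
    responses = {1: 3, 2: 2, 3: 1, 4: 2, 5: 3, 6: 2}
    print(scale_scores(responses, item_scale))   # e.g., {'Affect': 2.5, 'Efficiency': 2, ...}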
18. Preference Metrics: SUSS
- Subjective Usability Scales for Software (SUSS)
- A quick reading on a design or alternative designs
- Based on a sketch or screen shot, fill out a short questionnaire about:
- Subjective usability
- Valence (liking or personal preference)
- Aesthetics (attractiveness)
- Organization (graphical design and layout)
- Interpretation (understandability)
- Acquisition (ease of learning)
- Facility: how easy it would be to use the screen to accomplish each of four tasks specific to the screen
19. Performance Metrics
- Performance metrics: measures of how users perform during actual or simulated work (a computation sketch follows this slide)
- Completeness
- Correctness
- Effectiveness
- correctly completed work as a percentage of total work
- Efficiency
- effectiveness per unit time
- Proficiency
- measure an expert's efficiency, then compare subject efficiency to expert efficiency
- Productiveness
- time actually spent on task vs. unproductive time such as time seeking help, using documentation, searching for features, undoing actions, waiting for results, etc.
- Retention (memorability)
- features or functions recalled in a later test
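A minimal Python sketch of the ratio-style metrics above (effectiveness, efficiency, proficiency, productiveness). The function names and exact formulas are assumptions based on the slide wording, not official definitions.

    def effectiveness(correct_tasks, total_tasks):
        """Correctly completed work as a percentage of total work."""
        return 100.0 * correct_tasks / total_tasks

    def efficiency(correct_tasks, total_tasks, minutes):
        """Effectiveness per unit time (here: per minute of working time)."""
        return effectiveness(correct_tasks, total_tasks) / minutes

    def proficiency(subject_efficiency, expert_efficiency):
        """Subject efficiency as a percentage of a measured expert's efficiency."""
        return 100.0 * subject_efficiency / expert_efficiency

    def productiveness(task_minutes, unproductive_minutes):
        """Share of session time spent on the task rather than on help, searching, undoing, waiting, etc."""
        return 100.0 * task_minutes / (task_minutes + unproductive_minutes)

    # Hypothetical session: 8 of 10 tasks done correctly in 20 minutes; an expert scores 9.0 percent per minute.
    subject = efficiency(8, 10, 20)        # 4.0 percent per minute
    print(proficiency(subject, 9.0))       # ~44.4
    print(productiveness(15, 5))           # 75.0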
20. User Interface Design Metrics
- What to measure
- Structural metrics
- Based on surface properties of the configuration and layout of user interface architectures
- Semantic metrics
- Context sensitive
- Focus on the concepts and actions that visual components represent and how users make sense of the components and their relationships
- Procedural metrics
- Task sensitive
- Deal with the fit between user tasks and a given design in terms of its content and organization
21. Measurement Criteria
- Practical metrics should be sound, simple, and easy to use
- Easy to calculate and interpret
- Apply to paper prototypes and design models
- Have a strong rationale and simple conceptual basis
- Have sufficient sensitivity and ability to discriminate between designs
- Offer direct guidance for design
- Effectively predict actual usability in practice
- Directly indicate relative quality of designs
22. Structural Metrics
- Attempt to measure complexity
- Only weakly correlate with end-product usability
- Examples (the last two are sketched below)
- Number of visual components or widgets on a screen or dialog
- Amount and distribution of white space between widgets
- Alignment of widgets relative to one another
- Data and widget cohesion: related data or widgets are near each other and their relationship is clear and consistent
- Number of adjacent screens or dialogs directly reachable from a given screen or dialog
- Longest chain of transitions possible between screens or dialogs
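A minimal Python sketch of the last two structural metrics: the number of screens directly reachable from a given dialog, and the longest chain of transitions. The screen names and transition graph are hypothetical.

    transitions = {                 # screen -> screens reachable in one step (hypothetical dialog map)
        "Main": ["Search", "Settings"],
        "Search": ["Results"],
        "Results": ["Detail"],
        "Settings": [],
        "Detail": [],
    }

    def directly_reachable(screen):
        """Number of adjacent screens or dialogs reachable in one transition."""
        return len(transitions.get(screen, []))

    def longest_chain(screen, visited=()):
        """Length of the longest acyclic chain of transitions starting at `screen`."""
        best = 0
        for nxt in transitions.get(screen, []):
            if nxt not in visited:
                best = max(best, 1 + longest_chain(nxt, visited + (screen,)))
        return best

    print(directly_reachable("Main"))   # 2
    print(longest_chain("Main"))        # 3 (Main -> Search -> Results -> Detail)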
23. Essential Usability Metrics Suite
- A suite of metrics to cover the various factors that make for a good user interface design
- Essential Efficiency
- Task Concordance
- Task Visibility
- Layout Uniformity
- Visual Coherence
24. Essential Efficiency
- Consider a use-case narrative: it defines the ideal number of steps a user performs to accomplish a task
- Essential efficiency compares the ideal to the actual number of steps a user needs to perform the use case with a particular user interface design (a sketch follows this slide)
- Related to GOMS analysis (Goals, Operators, Methods, and Selection rules)
- A theoretical model of how people carry out cognitive-motor tasks and interact with systems
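A minimal Python sketch of Essential Efficiency as described above, assuming it is the ratio of ideal (essential) steps to the steps actually required by a concrete design, expressed as a percentage. Treat the exact formula and the example numbers as assumptions.

    def essential_efficiency(essential_steps, enacted_steps):
        """Percentage of ideal steps relative to the steps a given UI actually requires."""
        return 100.0 * essential_steps / enacted_steps

    # Hypothetical use case: ideally 3 steps, but the current design needs 12 (counted as on the next slide).
    print(essential_efficiency(3, 12))   # 25.0: lower means more overhead imposed by the interface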
25. Counting Steps
- Entering data into one field, terminated by an enter, tab, or some other field separator
- Skipping over an unneeded field or control by tabbing or use of a navigation key
- Selecting a field, object, or group of items by clicking, double-clicking, or sweeping with a pointing device
- Selecting a field, object, or group of items with a keystroke or series of connected keystrokes
- Switching from keyboard to pointing device or from pointing device to keyboard
- Triggering an action by clicking on a tool, command button, or other visual object
- Selecting a menu or a menu item with a pointing device
- Triggering an action by typing a shortcut key or key sequence
- Dragging-and-dropping an object with a pointing device
26. Task Concordance
- A measure of how well the distribution of task difficulty using a particular interface design fits with the expected frequency of the various tasks (a sketch follows this slide)
- Good designs will generally make the more frequent tasks easier
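A hedged Python sketch of a concordance-style score: for every pair of tasks, check whether the task expected to be more frequent is also the easier one (fewer enacted steps). Constantine and Lockwood give a specific formula; this pairwise count is an illustrative stand-in, and the task data are hypothetical.

    from itertools import combinations

    # Each task: (expected frequency rank, 1 = most frequent; number of enacted steps). Hypothetical data.
    tasks = {"open file": (1, 3), "save": (2, 2), "export PDF": (3, 9), "set margins": (4, 6)}

    def task_concordance(tasks):
        pairs = list(combinations(tasks.values(), 2))
        concordant = sum(1 for (f1, s1), (f2, s2) in pairs if (f1 - f2) * (s1 - s2) > 0)
        discordant = sum(1 for (f1, s1), (f2, s2) in pairs if (f1 - f2) * (s1 - s2) < 0)
        return 100.0 * (concordant - discordant) / len(pairs)

    print(task_concordance(tasks))   # ~33.3 here; 100 would mean difficulty exactly tracks expected frequency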
27. Task Visibility
- Visibility Principle: user interfaces should show users exactly what they need to know or need to use to be able to complete a given task
- Task Visibility measures the fit between the visibility of features and the capabilities needed to complete a given task or set of tasks (a computation sketch follows the visibility rules below)
- Things immediately obvious in the current screen are more visible than those you have to open a menu to find, which are more visible than those located in other interaction contexts
- It is more desirable to have immediately available those items always needed than those sometimes needed
- Note: security correlates with very low (zero) visibility
28. WYSIWYN
- WYSIWYN: What You See Is What You Need
- You do not see what you do not need
- Task visibility is reduced when unused or unnecessary features are incorporated into the user interface
29. Visibility Rules
- Four categories, according to function and method of performance
- Hidden
- Exposing
- Suspending
- Direct
- Visibility ranges from 0 to 1
30. Visibility Rules: Hidden
- Hidden (visibility 0)
- Typing a required code or shortcut in the absence of any visual prompting or cue
- Accessing a feature or features having no visible representation on the user interface
- Example: the Windows Task Bar when it is hidden
- Any action involving an object or a feature that may be visible, but where the choice is neither obvious nor evident based on the visible information on the user interface
- Example: right-clicking on a blank background, or typing a keyboard shortcut without being prompted
31. Visibility Rules: Exposing
- Exposing (visibility 0.5)
- An enacted step is exposing if its function is to gain access to or make visible some other needed feature without causing or resulting in a change of interaction context
- Opening a drop-down list
- Opening a menu or submenu
- Opening a context menu by right-clicking on some object
- Opening a property sheet dialog for an object
- Opening an object or drilling down for detail
- Opening or making visible a tool palette
- Opening an attached pane or panel of a dialog
- Switching to another page or tab of a tabbed dialog
32. Visibility Rules: Suspending
- Suspending (context-switching)
- An enacted step is suspending if its function is to gain access to or make visible some other needed feature and it causes or results in a change of interaction context
- Opening a dialog box
- Closing a dialog or message box
- Switching to another window
- Switching to or launching another application
- Suspending or context-switching actions that are the first or last step of extensions or other optional interactions have a visibility value of 0.5 (they may not be needed in all interactions)
- Non-optional context changes have a visibility value of 0
33. Visibility Rules: Direct
- Direct (visibility 1)
- An enacted step is a direct action if it is not hidden, exposing, or suspending
- Accomplished through visible features
- Choice is evident
- Does not serve to gain access to or make visible other objects
- Examples
- Applying a tool to an object to change it
- Typing a value into a visible field
- Altering the setting of an option button
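A minimal Python sketch of a Task Visibility computation using the visibility values from the rules on the last few slides (hidden = 0, exposing = 0.5, direct = 1, suspending = 0, or 0.5 when optional): the average visibility over the enacted steps of one task. The exact aggregation is an assumption based on the slide text.

    # Visibility values per step category, taken from the rules above; the
    # optional-suspending entry covers first/last steps of optional extensions.
    VISIBILITY = {"hidden": 0.0, "exposing": 0.5, "suspending": 0.0,
                  "optional-suspending": 0.5, "direct": 1.0}

    def task_visibility(steps):
        """Average visibility (as a percentage) over the enacted steps of a task."""
        return 100.0 * sum(VISIBILITY[s] for s in steps) / len(steps)

    # Hypothetical task: type a value (direct), open a menu (exposing),
    # use an unprompted shortcut (hidden), then work through a mandatory dialog (suspending).
    print(task_visibility(["direct", "exposing", "hidden", "suspending"]))   # 37.5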
34. Layout Uniformity
- Measures selected aspects of the spatial arrangement of interface components without taking into account what those components are or how they are used
- Neither task sensitive nor context sensitive
- Visual Coherence addresses the meaning and use
- Assesses the uniformity or regularity of the user interface layout
- Usability is hindered by highly disordered or visually chaotic arrangements
- Complete uniformity is not the goal
- The user needs to be able to distinguish different features and different parts of the interface
- Computed from the number of different heights, widths, top-edge alignments, left-edge alignments, bottom-edge alignments, and right-edge alignments of visual components (a sketch follows this slide)
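A minimal Python sketch of a Layout Uniformity-style score: count the distinct heights, widths, and edge alignments among the widgets and scale that count between the fully uniform case (6 distinct values) and the worst case (6 per component). Constantine and Lockwood's published formula uses a somewhat different normalization, so treat this one as an illustrative stand-in; the widget rectangles are hypothetical.

    def layout_uniformity(widgets):
        """widgets: list of (left, top, width, height) rectangles; returns a 0-100 score."""
        n = len(widgets)
        distinct = sum(len(s) for s in (
            {w for (_, _, w, _) in widgets},        # widths
            {h for (_, _, _, h) in widgets},        # heights
            {x for (x, _, _, _) in widgets},        # left edges
            {y for (_, y, _, _) in widgets},        # top edges
            {x + w for (x, _, w, _) in widgets},    # right edges
            {y + h for (_, y, _, h) in widgets},    # bottom edges
        ))
        return 100.0 * (1 - (distinct - 6) / (6 * n - 6))

    # Three identically sized, left-aligned fields in a column vs. three arbitrarily placed ones.
    column = [(10, 10, 80, 20), (10, 40, 80, 20), (10, 70, 80, 20)]
    scattered = [(5, 12, 60, 18), (40, 47, 90, 25), (15, 80, 30, 22)]
    print(layout_uniformity(column), layout_uniformity(scattered))   # ~66.7 vs. 0.0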
35. Visual Coherence
- A well-designed screen or window "hangs together"
- consolidate related things, separate unrelated things
- A semantic or context-sensitive measure of how closely an arrangement of visual components matches the semantic relationships among those components (a sketch follows this slide)
- Group/separate visual components using empty space, lines, boxes, colors, etc.
- Semantic clusters must be discovered
- Use a glossary, domain object model, entity model, data dictionary, etc.
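An illustrative Python sketch of a coherence-style score: the fraction of semantically related widget pairs that also end up in the same visual group (box, pane, etc.). This captures the idea on the slide but is not Constantine and Lockwood's exact Visual Coherence formula; the widget names, clusters, and groups are hypothetical.

    from itertools import combinations

    # Semantic clusters (from a domain model) and the visual group each widget landed in
    semantic_cluster = {"name": "customer", "address": "customer", "phone": "customer",
                        "subtotal": "order", "tax": "order", "total": "order"}
    visual_group = {"name": "box1", "address": "box1", "phone": "box2",
                    "subtotal": "box2", "tax": "box2", "total": "box2"}

    def visual_coherence(semantic_cluster, visual_group):
        """Percentage of semantically related pairs that are also visually grouped together."""
        related = [(a, b) for a, b in combinations(semantic_cluster, 2)
                   if semantic_cluster[a] == semantic_cluster[b]]
        grouped = sum(1 for a, b in related if visual_group[a] == visual_group[b])
        return 100.0 * grouped / len(related)

    print(visual_coherence(semantic_cluster, visual_group))   # ~66.7: 'phone' sits in the wrong box and lowers the score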
36. Metrics in Practice
- Use the numbers as a guide, not a requirement
- Focus on deriving the best design, not on maximizing the scores
- Quantitative comparisons are no substitute for thought, careful design, systematic review, and judicious testing
- To improve usability in response to specific feedback, construct custom, easily understood, and easily used metrics focused on the specific issues
37. Five Rules of Usability (1-3)
- Access Rule: The system should be usable, without help or instruction, by a user who has knowledge and experience in the application domain but no prior experience with the system
- Efficacy Rule: The system should not interfere with or impede efficient use by a skilled user who has substantial experience with the system
- Progression Rule: The system should facilitate continuous advancement in knowledge, skill, and facility, and accommodate progressive change in usage as the user gains experience with the system
38. Five Rules of Usability (4-5)
- Support Rule: The system should support the real work that users are trying to accomplish by making it easier, simpler, faster, or more fun, or by making new things possible
- Context Rule: The system should be suited to the real conditions and actual environment of the operational context within which it will be deployed and used
39. Six Principles of Usability (1-3)
- Structure Principle: Organize the user interface purposefully, in meaningful and useful ways that put related things together and separate unrelated things, based on clear, consistent models that are apparent and recognizable to users
- Simplicity Principle: Make simple, common tasks simple to do, communicating clearly and simply in the user's own language and providing good shortcuts that are meaningfully related to longer procedures
- Visibility Principle: Keep all needed tools and materials for a given task visible without distracting the user with extraneous or redundant information; What You See Is What You Need (WYSIWYN)
40. Six Principles of Usability (4-6)
- Feedback Principle: Through clear, concise, and unambiguous communication, keep the user informed of actions or interpretations, changes of state or condition, and errors or exceptions, as these are relevant and of interest to the user in performing tasks
- Tolerance Principle: Be flexible and tolerant, reducing the cost of mistakes and misuse by allowing undoing and redoing, while also preventing errors wherever possible by tolerating varied inputs and sequences and by interpreting all reasonable actions reasonably
- Reuse Principle: Reduce the need for users to rethink, remember, and rediscover by reusing internal and external components and behaviors, maintaining consistency with purpose rather than merely arbitrary consistency