1
Multimedia Evaluation

2
Introduction
  • In this lecture we will look at a central issue
    in human-computer interaction
  • that of establishing the Usability of products
  • by Evaluation

3
How is Usability defined?
  • Usability of multimedia applications (a reminder
    from last week)
  • Does the user feel in control of the
    application?
  • To what extent can the user achieve their goals
    using the application?
  • How far does the product appear to assist the
    user?
  • How easy is the application to learn?
  • How does the user respond emotionally?

4
Dimensions of Usability
  • Effectiveness
  • Efficiency
  • Satisfaction

5
Evaluation overview
  • Evaluation is required to find out how well a
    multimedia interface works for a user
  • - In terms of whether it is usable and
    acceptable
  • To do this an evaluation framework is required

1. Issues to be measured
2. Instruments to collect data
3. Testing episode
4. Results analysis
5. Report
6
1. Issues Measured
  • Measurements can be user-related, e.g. -
  • - Attitude, user satisfaction (positive or
    negative)
  • - Knowledge, recall (what has been learned)
  • - Goal related (aim to achieve a task,
    effectiveness)
  • - Usability criteria based (is the system
    usable)
  • - Learnability
  • - Performance based (efficiency)
  • - Error rates

7
Issues measured (continued)
  • Measurements can be system-related, e.g. -
  • - Software structure
  • - Capture paths taken through the system,
    features used
  • - Response, or time delays
  • - Highlight navigation problems (lost in
    hyperspace)

8
2. Instruments
  • The data collection technique used in the testing
    episode.
  • Example approaches that involve end users are -
  • i) Semi-structured interviews
  • ii) Questionnaires
  • iii) Incident diary, self-reporting
  • iv) Feature checklist
  • v) Focus groups
  • vi) Think aloud protocol
  • vii) Experiments
  • viii) Usability Laboratories
  • But it is possible for experts to role-play
    users and evaluate without them -
  • cognitive walkthroughs, guidelines, heuristics

9
i) Semi-structured interview
  • Semi-structured interview
  • - A qualitative / retrospective method
  • - Uses an agenda of questions
  • - Can focus on specific issues of interest
  • - Time intensive, only suitable for small
    numbers
  • Good for discussing interface options, what's
    good / bad, and suggesting improvements
  • Could also use an interview to pilot a
    questionnaire

10
ii) Questionnaires
  • Can take a qualitative (worded) approach, or a
    quantitative (numeric) one
  • - Structured questions, answers typically (not
    exclusively) in the form of
  • - agree / disagree / neutral, yes / no,
    1/2/3/4/5
  • i.e. some sort of rating scale
  • - Requires little time to administer once
    designed, the user can fill it in on their own,
    suitable for large numbers (a scoring sketch
    follows below)
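Not in the original slides: a minimal Python sketch of how rating-scale
answers might be aggregated per question. The question labels and scores
are hypothetical illustration data, not results from any real study.

    # Aggregate 1-5 rating-scale answers per question (illustrative data).
    from statistics import mean, median

    responses = {
        "ease_of_learning":   [4, 5, 3, 4, 2],
        "feeling_of_control": [3, 3, 4, 2, 3],
    }

    for question, scores in responses.items():
        print(f"{question}: mean={mean(scores):.1f}, median={median(scores)}")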

11
My preference
  • These scales are all subjective
  • One person's 6 is another's 4, etc.
  • The middle ground always causes problems
  • The scale can be cut down to a two- or
    three-point scale -
  • Mandatory
  • Nice to have
  • Not needed

12
MUMMS
  • MUMMS - Measuring the Usability of Multi-Media
    Systems
  • (Human Factors Research Group (HFRG), University
    College, Cork)
  • Contains a number of subscales for measuring
    end users' perceived quality of systems, including
    the extent to which the user feels they are in
    control of the pace. A new scale, currently
    called "excitement", is being considered - the
    extent to which users feel drawn into the
    application's world (fascination).
  • http://www.ucc.ie/hfrg/questionnaires/mumms/info.html

13
iii) Incident diary
  • Incident diary, or self-reporting method
  • - Quantitative / on-the-spot (as long as it's
    not done retrospectively)
  • - A structured diary for logging incidents
  • To catch interface problems that have been missed
    by other instruments, or that cannot be simulated
    in a lab environment
  • As an alternative to think-aloud protocols where
    these are uneconomical (too many subjects) or
    impractical (the investigator can't be there)
  • Good for finding problems with the interface,
    time delays, and occurrences of being lost in
    hyperspace (a diary-record sketch follows the
    example below)

First draft - "Fill in every time Netscape
doesn't work."Second draft - "Fill in every time
Netscape says 'The server does not have entry'
".
14
iv) Feature checklist
  • Feature checklist
  • - Quantitative / retrospective
  • - Examines features used
  • - Usage, knowledge required, need
  • - Takes 2-15 minutes
  • Good for finding out which hypermedia facilities
    are used, and node access

15
v) Focus groups
  • Focus groups
  • - Qualitative / retrospective method
  • - Organised as a group discussion
  • - Works on the concept of human triggers
    (someone says something, others pick up)
  • - Time intensive, normally about an hour
  • Good for interface options, what's good / bad,
    and suggesting improvements

16
vi) Think aloud protocol
  • Think aloud protocol
  • - Qualitative / on-the-spot method
  • - Use software, and record spoken user views as
    they use the application to perform a task
  • - Can reveal how, not just where, people get
    stuck
  • - Normally requires about an hour (any longer
    and there's a loss of concentration etc.)
  • Good for finding out how a system is used, and
    problems related to the system

17
vii) Experiments
  • Quantitative / on-the-spot
  • Good for gathering lots of different types of
    information
  • Gaining empirical evidence to support a claim or
    hypothesis
  • Typically lasts 1 to 3 hours
  • - However, may need to run an experiment
    several times
  • Experiment design
  • To begin an experiment we need an experimental
    design (very laborious)
  • - We need to create a design plan
  • - Form a hypothesis, choose subjects,
    select variables
  • - And pilot the experimental design

18
Variables
  • Manipulated variables (independent)
  • - Adjusted to find different effects, e.g.
    interface styles, amount of help, number of
    options, icon designs
  • - Each variable can be given a value known as a
    level
  • e.g. in testing menu designs, we may have 3, 5,
    or 7 items on the menus, therefore the
    independent variable has 3 levels (see the
    sketch below)
  • Measured variables (dependent)
  • - Are the variables measured in the experiment,
    e.g. the speed of menu selection, errors made
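A minimal sketch of the menu example, with one independent variable
(menu size, 3 levels) and one dependent variable (selection time). The
timing values are invented placeholders:

    # One independent variable with 3 levels; one dependent variable
    # measured at each level. Timings are invented placeholders.
    menu_sizes = [3, 5, 7]            # levels of the independent variable
    selection_times = {               # dependent variable, in seconds
        3: [1.2, 1.4, 1.1],
        5: [1.6, 1.5, 1.8],
        7: [2.1, 1.9, 2.3],
    }
    for size in menu_sizes:
        times = selection_times[size]
        print(f"{size} items: mean time {sum(times) / len(times):.2f}s")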

19
Hypotheses
  • Hypothesis - a testable statement
  • - A prediction for the outcome of the
    experiment, stated as an adjustment of the
    independent variable that is thought to affect
    the dependent variable; the experiment aims to
    support or refute this prediction
  • E.g. students taking module X using a multimedia
    application will score higher grade points than
    those taking it by conventional means (a test
    sketch follows below)
  • - Otherwise the null hypothesis holds - that
    there is no effect from adjusting the
    independent variable
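For the grade-point example, the hypothesis could be tested by comparing
the two groups' scores with an independent-samples t-test. A sketch using
SciPy (an assumption - any statistics package would do; the scores are
invented placeholders):

    # Compare grade points of two independent groups; a small p-value
    # argues against the null hypothesis of no effect. Invented data.
    from scipy import stats

    multimedia   = [72, 68, 75, 70, 80, 66]
    conventional = [65, 70, 62, 68, 64, 69]

    t, p = stats.ttest_ind(multimedia, conventional)
    print(f"t = {t:.2f}, p = {p:.3f}")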

20
Experimental method
  • We also need to decide on an experimental
    method, either
  • - Between-groups or within-groups (an
    assignment sketch follows this list)
  • Between-groups
  • - People assigned randomly to at least 2
    conditions
  • - Has a control group to check changes, i.e. an
    experimental condition with manipulation and a
    control without manipulation. Subjects are
    limited to one group
  • - Therefore needs lots of people
  • Within-groups
  • - Subjects attempt each condition
  • - Adds problems of learning / transfer effects
  • - Needs fewer people
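A sketch of random assignment for a between-groups design, where each
subject ends up in exactly one condition (the participant IDs are
placeholders):

    # Randomly split subjects into an experimental and a control group;
    # each subject appears in exactly one group (between-groups design).
    import random

    subjects = [f"P{i}" for i in range(1, 13)]   # placeholder IDs
    random.shuffle(subjects)
    half = len(subjects) // 2
    experimental, control = subjects[:half], subjects[half:]
    print("experimental:", experimental)
    print("control:", control)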

21
viii) Usability laboratory
  • Typical layout
  • Office environment with 4-5 desks, computers
    etc. in soundproof room (Test Room), video
    cameras etc.
  • Observation room with 2-way mirror
  • Control room

22
Aims of usability labs
  • Represent real users performing typical tasks in
    a typical environment
  • Allow prediction of real-world usability
  • Allow developers to observe usability problems
    first hand
  • Aid in formative feedback for interface design
  • A lab test can be designed to
  • Select from competing products, systems or design
    prototypes
  • Diagnose problems, using a Formative approach,
    (Typically co-operative evaluation, think aloud,
    etc., aimed at re-design)
  • Verify acceptance goals,
  • measuring performance against acceptance
    criteria

23
3. Testing episode
  • Design the test
  • Use the instrument(s) to obtain results
  • Support for results capture
  • - Paper and pencil (limited and hard work)
  • - Audio recording (an improvement, but still
    limited)
  • - Video recording (advanced)
  • - User notebooks (good for unsupervised
    users)
  • Computer logging
  • - Excellent for multimedia applications
  • - Node access, errors (and where made), time
    spent at nodes etc. (a logging sketch follows
    below)
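A minimal sketch of computer logging for node access and dwell time; the
node names are hypothetical:

    # Log node visits with timestamps, then derive time spent per node -
    # useful for spotting navigation problems. Node names are hypothetical.
    import time

    visit_log = []   # (node, entry_time) pairs, in visit order

    def enter_node(node):
        visit_log.append((node, time.time()))

    enter_node("home"); time.sleep(0.1)
    enter_node("gallery"); time.sleep(0.1)
    enter_node("home")

    # Dwell time for every node except the last one visited
    for (node, t0), (_, t1) in zip(visit_log, visit_log[1:]):
        print(f"{node}: {t1 - t0:.2f}s")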

24
4. Results
  • Analyse the results obtained
  • Quantitative - what's going on; statistical
    analysis
  • E.g. instruments -
  • - Experiments, questionnaires, feature
    checklists, incident diaries
  • Qualitative - explains what's happening
  • E.g. instruments -
  • - Think-aloud, focus groups, semi-structured
    interviews

25
Summary
  • We've seen a range of possible methods for
    collecting data to evaluate usability.
  • Each needs to be considered for its
    appropriateness at the product's stage of
    development, its utility (i.e. how easy it is to
    use and to analyse the results from) and,
    obviously, its cost relative to the projected
    benefits.