Title: Research Methods
Research Methods
- Collecting, Processing and Analysing Data
Aims of the Session
- The purpose of this session is to:
- Alert you to the different types of methodology available to you in your research
- Make you aware of the different techniques that you might use in collecting, presenting and analysing data
- Discuss the different kinds of problems that you might encounter in pursuing your research.
Contents
- Developing Your Research Questions
- The Different Types of Research
- Selecting Appropriate Research Methods
- Robustness of Methods
- Structuring Your Methodology
- Data Analysis
- Problems with The Research Process
- Summary
1. Developing your Research Questions
- This section of the presentation examines what
you need to do in order to focus your research.
The Purpose of Research
- The purpose of research is to contribute to a current academic debate, and possibly to advance knowledge in some manner. This means that the research that you undertake has to be:
- Embedded in a recognisable field of study, taking account of, and drawing on, past research
- Of interest to other researchers working in the same field, and possibly to the wider community
- Generalisable to more than one individual experience or circumstance.
Research Questions
- An important way of ensuring that research is well-focussed and purposeful is to ensure that it provides clear, unambiguous (and hopefully informative) answers to particular questions.
- These are called your Research Questions, and they can be answered either via the Literature Review (in which case the answers will have already been discovered by others, but will be novel to you), or via the practical work that you undertake, in which case you could well provide novel and original answers.
- This second type of Research Question will drive your practical work.
Examples of the Different Research Questions
- Preliminary Questions intended to be answered by the Literature Review:
- What are the available technologies for creating a database-driven website for use in furniture retail?
- Which currently existing e-commerce websites for furniture retailers demonstrate examples of best practice?
- Focussed Questions to be answered by the Practical Research:
- Can commercially available web development tools be used to create an effective and professional-looking e-commerce website for a furniture retailer?
- What are the most important aspects of a furniture retail e-commerce website from the customer's point of view?
Typical Research Structure
The process of research is well-documented. This diagram more or less describes the activities you need to undertake. What we will do in this session is look at some of the elements, and how they fit together.

Conduct Literature Review → Select Research Questions → Devise Methodology and Research Instruments → Apply Methods and Instruments → Perform Statistical Analysis → Test Hypotheses and Draw Conclusions
Getting Started
The first thing that you will do is to make sure you are well-informed, and to pose pertinent questions which your practical research will answer.

Conduct Literature Review → Select Research Questions → Devise Methodology and Research Instruments → Apply Methods and Instruments → Perform Statistical Analysis → Test Hypotheses and Draw Conclusions
Defining Your Field of Study (1)
- You undertake a Literature Review in a particular field in order to ensure that your research is embedded within that field, and that you are taking account of the methods, issues, results, theories and conventions which apply.
- At the end of the literature review, you will have narrowed the field down to a relatively small topic within the field, and will have some unanswered questions which your practical research will try to investigate.
- These focussed questions form the basis of your Practical Research.
Defining Your Field of Study (2)
- Your Research Questions are crucial, as they effectively define both the content of your practical research and the manner in which you carry it out:
- Your research methodology is designed specifically to attempt to answer these questions
- The data that you collect will be focussed on issues relevant to these questions
- Your analysis of the data will seek to provide answers to the questions
- Your conclusions will summarise the answers.
Practical Research Questions
- Typical research questions might be:
- To what extent is the business community in Wales aware of the potential of Bluetooth?
- Is the software currently available for teaching arithmetic to 5-year-olds appropriate and effective?
- Are there differences in the way that men and women approach the task of writing software?
- How can historical events be modelled effectively using VRML?
Defining Your Population
- When framing your Research Questions, you need to be clear about what set of objects, people or events forms the background population in your study.
- Are you saying, for example, that the CAL software you produce is designed for all English-speaking people, all men, all Afro-Caribbeans, all children under 5, all those who have been diagnosed as dyslexic, or simply for Afro-Caribbean boys under 5 with specific learning difficulties?
- If you are investigating whether on-line learning is effective, is your population students, University students, UK University students, Liverpool Hope students, Liverpool Hope Computing students or Liverpool Hope BSc Computing students?
Research Questions: References (SW Library)
- Lewis, Ian. So you want to do research! A guide for beginners on how to formulate research questions. 2nd ed. Edinburgh: Scottish Council for Research in Education, 1997. (SCRE publication 2 ..). 1860030327
2. The Different Types of Research
- Here we look at the different options available
to us in carrying out the research.
Your Research Focus
- The main focus of your research can be:
- Product-based research, focusing on producing a piece of hardware or software (or the designs for them) which is at the cutting edge of a discipline, drawing on other researchers' ideas, best practice and what is feasible. In doing this, you may need to explore how the product will enmesh with current systems, and existing and future technologies.
- People-based research, focusing on the people who interact with the hardware or software, looking at issues such as usability, user behaviour, compatibility of software with current user systems and other HCI issues.
- Both of these approaches are legitimate, and it is possible that in carrying out your research you might need to use elements of each one.
Product v. People: Which focus?
- The focus of your research is decided by you.
- It will depend upon how confident you are in creating a product at the cutting edge, or how comfortable you will be as a researcher in dealing with people.
- When you frame your research questions, you need to ensure that their focus leads you into the kind of research that you want to do.
- There is no right answer, but you may find that your research will be best carried out by using a main focus of one element, with a subsidiary focus of another. For example, producing a piece of software which securely encodes personal data as a self-encrypting and decrypting file stored on an ID card may well need trialling with real people.
Approaches to Research
- There are two main approaches to doing research:
- Quantitative Research looks for hard numerical data; it proceeds by counting, measuring and summarising. The goal is to look for statistical significance in the results.
- Qualitative Research takes a soft approach to data, and has a more descriptive feel. It attempts to get to the heart of the matter, exploring individual cases in detail, and seeking the reasons for behaviour and explanations for events.
- Both of these approaches are legitimate, and it is possible to combine elements of each.
Quantitative v. Qualitative: Which approach?
- The approach you use will depend upon your topic and your research questions.
- It will also depend upon how comfortable you as a researcher feel about using these methods.
- There is no right answer here, and, as we shall see in the rest of this presentation, there may be good reasons for adopting a variety of methods which encompass both quantitative and qualitative approaches.
Types of Research
- There are five main types of research that you might consider:
- Experimental Research
- Survey Research
- Evaluative Research
- Observational Research
- Developmental Research
- All five of these types can incorporate both quantitative and qualitative approaches.
Experimental Research
- This is normally quantitative, but can take two forms:
- An attempt to produce a piece of hardware, software or a combination of both, which is at the cutting edge of a discipline.
- An attempt to investigate and document the performance of a particular piece of technology in specific circumstances.
- This might involve:
- Creating hardware or software applications
- Devising detailed tests and evaluation procedures
- Carrying out rigorous testing
- Evaluating performance or usability
Survey Research
- This research can be qualitative or quantitative; in the widest sense, you are interviewing people. This might involve:
- An unstructured interview
- A semi-structured interview
- A structured interview based on a questionnaire (face to face, or by telephone)
- An administered questionnaire
- A postal questionnaire
Evaluative Research
- This is primarily qualitative. Here you are trying to assess whether something is of value, whether it meets its specifications, or whether it is fit-for-purpose. This might involve:
- Developing a list of criteria on which to make judgements
- Finding a suitable set of evaluators
- Asking the evaluators to examine the object against each of the criteria, to judge to what extent it conforms to expectations
- Weighing the positives and the negatives, and coming to overall conclusions
- Matching these judgements against similar judgements made elsewhere in the literature or in real life.
Observational Research
- This research normally uses a qualitative approach; in the widest sense, you are recording people's behaviour. This might involve:
- Participating in a task or situation
- Making field notes of experiences
- Asking another person to keep a log or diary of their experiences
- Creating and using an Observation Schedule
- Making a check-list of occurrences of particular events or items.
Developmental Research
- This type of research is primarily quantitative, but will almost certainly use qualitative approaches; broadly, you will be developing a new application (in its widest sense) for a particular situation. This might involve:
- Discussing with, or surveying, possible users
- Creation of a novel algorithm, application or other piece of software for a particular situation or environment
- Making detailed notes of your experiences as a developer
- Discussing with other developers their experiences and their solutions to particular problems
- Testing the algorithm or software. This might involve using a code-checker, or a formal testing procedure
- Evaluating the software with a group of potential users.
Research Possibilities
Other Forms of Research
- Historical and Documentary Research proceeds by scrutinising existing materials, both written and artefacts, using them as sources of evidence.
- Action Research is normally conducted in an educational or political context. Action is taken, monitored, evaluated and then modified for the next cycle.
- Ethnographic Research consists of an in-depth study of a cultural phenomenon, in order to generate new theory.
- Case Study Research selects a whole range of research methods in scrutinising one particular context or situation.
Research methods 1: References (SW Library)
- Crabtree, Benjamin F. Doing qualitative research. London: Sage, 1992. (Research Methods for Primary Care 3). 0803943121
- Creswell, John W. Research design: qualitative and quantitative approaches. Thousand Oaks, Calif.; London: Sage, 1994. 0803952554
- Creswell, John W. Qualitative inquiry and research design: choosing among five traditions. Thousand Oaks, Calif.; London: SAGE, 1998. 0761901434
Research methods 2: References (SW Library)
- Miller, Delbert Charles. Handbook of research design and social measurement. 3rd ed. New York: David McKay Co. Inc, 1977. m0859739
- Research methods in education and the social sciences. Block 3B: Research design. Milton Keynes: Open University Press, 1983. (DE304, Block 3B). 0335074235
- Yin, Robert K. Case study research: design and methods. Rev. ed. Newbury Park; London: Sage, 1989. (Applied social research methods series v.5). 080393470x
3. Selecting Appropriate Research Methods
- The next few slides discuss how you might go about selecting your research methods from those available.
Selecting Your Methodology
- Your research methodology consists of:
- Research Methods (experiment, survey etc.)
- Research Instruments (questionnaire, tests etc.)
- Analytical Tools (statistics, inductive or deductive methods)
- When selecting the methodology, you need to be aware of:
- The Research Questions you are trying to answer
- The Population you are trying to generalise to.
Factors to Consider
Undertake Literature Review: How has previous research in this area been done?
Select Research Questions: What methods have been used?
Devise Methodology and Research Instruments: What research instruments have been devised?
Apply Methods and Instruments: How will the methods and instruments be applied?
Perform Statistical Analysis: What statistical tests can be carried out?
Test Hypotheses and Draw Conclusions: What research hypotheses can be tested?
Appropriate methodology
- Do your Research Questions involve impressions, attitudes, opinions, beliefs or knowledge held by people?
- If so, then survey research is appropriate.
- Do your Research Questions involve behaviour, actions, reactions to events, circumstances or objects?
- If so, then observational research is appropriate.
Appropriate methodology
- Do your Research Questions involve the reliability or robustness of hardware, software, systems or infrastructure?
- If so, then an evaluative study is appropriate.
- Do your Research Questions involve the testing of hardware or software at the technical level (speed, accuracy, security etc.)?
- If so, then experimentation is appropriate.
A Mix of Methods
- It may be that your research questions overlap some of these categories, or that different questions address more than one category.
- If so, you should consider a mix of methods that ensures that you cover all eventualities. This may bring added benefits (see Triangulation).
Mixed Methods: References (SW Library)
- Mixing methods: qualitative and quantitative research / edited by Julia Bra. Aldershot: Avebury, 1995. 1859721168
- Tashakkori, Abbas. Mixed methodology: combining qualitative and quantitative approaches. Thousand Oaks, Calif.; London: Sage, 1998. (Applied social research methods series v.46). 0761900705
4. Robustness of Methods
- As well as ensuring that your questions are well-focussed, and your methods relevant and appropriate, you need to ensure that your methods are also Reliable and Valid.
Reliability and Validity
- Research is Reliable if different methods, researchers and sample groups would have produced the same results.
- Research is Valid if the results produced by the research are accurate portrayals of the situation, explanations are effective, and predictions from the research are actually borne out by observation.
Reliability
- Research can have poor reliability if it is based on one or two cases only, or if personal judgement or opinion is included.
- Reliability can be improved if data collection methods are made more precise, we have controlled experimentation, and we can produce statistical summaries.
Validity
- Research can have poor validity if the data produced is too far removed from the object under study, or the respondent. Using detailed, highly structured research instruments can lead to distortions, by forcing observations into categories where they do not fit. Data summaries and averaging can also lead to distortions, and meaningless generalities. Graphical representations and percentages can be highly selective and produce biased findings.
- Validity can be improved by working directly with individuals or objects, focusing on specific cases, making detailed observations, conducting face-to-face interviews, taking detailed measurements in specific circumstances etc.
Reliability v. Validity
Valid, but not reliable
Reliability v. Validity
Reliable, but not Valid
Methodological Trade-Off
- If you improve validity, you will almost certainly reduce reliability.
- If you improve reliability, it will be at the cost of reducing your validity.
- The trade-off is to balance the two so that the benefits of using particular methods outweigh the losses incurred.
Triangulation
- Triangulation takes its name from the navigational method of positioning a ship at sea by making two independent observations.
- The purpose here is to use two distinct methodologies, independent of one another, to confirm that the effects which we are observing are real, and not artefacts of the research process.
Triangulation
- Triangulation attempts to counter the methodological trade-off by using a mix of methodologies.
- If you are using highly structured, statistical or measurement-based research, you supplement this with detailed observations or face-to-face interviewing.
- If your research is mainly based on individuals or on single items, you ensure that at least part of it has some statistical summaries, structured observations or questionnaires.
Validity and Reliability: References (SW Library)
- Kirk, Jerome. Reliability and validity in qualitative research. Beverly Hills, Calif.: Sage Pubns, 1986. (Qualitative Research Methods Series 1). 0803924704
- Litwin, Mark S. How to measure survey reliability and validity. London: Sage, 1995. (The Survey Kit 7). 0803957041
5. Structuring Your Methodology
- This section looks in detail at the techniques
that you might employ in the Research Activity
itself.
Research Structure
After framing your Research Questions and selecting your methodology, you should test that this is going to work by conducting a small-scale Pilot Study.

Conduct Literature Review → Select Research Questions → Devise Methodology and Research Instruments → Apply Methods and Instruments → Perform Statistical Analysis → Test Hypotheses and Draw Conclusions
5a Pilot Studies
- A pilot study is a set of preliminary investigations and procedures carried out prior to the main research, to ensure that the research is possible and can proceed without hitches.
Pilot Study (1)
- In almost every type of research, you will need to devise or adapt some sort of Research Method or Instrument.
- This might be a measuring procedure, a set of evaluation criteria, an interview procedure or questionnaire, or an observational method or schedule.
Pilot Study (2)
- Your procedure or instrument should be based on best practice from previous research.
- It is unlikely that you will find exactly what you need; you will be forced to adapt or amend it.
- This means that you will need to conduct a Pilot Study to test whether the new instrument is fit-for-purpose.
Pilot Study (3)
- The main purpose of a Pilot Study is to iron out bugs in procedures, or to check that instruments work.
- The size of the pilot study will depend on how inventive you needed to be.
- If your procedures are almost entirely of your own devising, then you will need a fairly extensive pilot study to check them.
- If you have lifted methods from the literature, then your pilot can be quite small.
Pilot Study (4)
- With questionnaires and observation schedules, you will need to check that individual items are giving you expected results.
- With test procedures, you need to check that you can actually do what you have said that you are going to do.
- With evaluation criteria, you need to use these in a limited context, to see that they are workable and effective.
Pilot Study (5)
- As a result of the Pilot Study, you need to evaluate procedures and instruments, making amendments where necessary.
- The Pilot Study stage will be part of your research; you will need to write this up, reporting on how your methods were adapted and improved as a result.
Carrying Out the Research
Here we will examine how particular methods and instruments can be applied in the research situation.

Undertake Literature Review → Select Research Questions → Devise Methodology and Research Instruments → Apply Methods and Instruments → Perform Statistical Analysis → Test Hypotheses and Draw Conclusions
5b A detailed look at some Research methods
- Experimental Design
- Evaluative Research
- Observational Research
- Survey Research
- Developmental Research
5c.1 Experimental Design
- This section looks at the different ways in
which you can conduct experiments. Note that you
do not need to be doing pure experimental
research to adopt these methods.
Experimental Designs
- You may need to think about experimental design even if you are doing types of research other than Experiments.
- These designs occur where you have made some change, and are trying to find out its effect.
- The net result is that you are comparing one thing, or one group, with another thing or group.
Experimental Designs
- The terminology for experimental designs comes from agricultural experiments.
- We have different treatments which we apply to different groups.
- We control the groups for different factors.
- Experiments normally involve independent and
dependent variables - Independent variables are factors that can be
controlled for, like temperature, file sizes,
age, gender etc. - Dependent variables are those factors which will
change as a result of altering the independent
variables. - For example, download times will increase as a
result of increasing file sizes. - Download time dependent variable,
- File size independent variable
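The download-time example can be sketched in code. The measurements below are invented for illustration; the slope of a simple least-squares fit estimates how the dependent variable responds to changes in the independent variable.

```python
# Hypothetical experiment data (invented for illustration):
# file size is the independent variable we control,
# download time is the dependent variable we measure.
file_sizes_mb = [1, 2, 4, 8]
download_times_s = [0.9, 1.8, 3.7, 7.5]

n = len(file_sizes_mb)
mean_x = sum(file_sizes_mb) / n
mean_y = sum(download_times_s) / n

# Least-squares slope: change in download time per extra MB of file size.
slope = sum((x - mean_x) * (y - mean_y)
            for x, y in zip(file_sizes_mb, download_times_s)) \
        / sum((x - mean_x) ** 2 for x in file_sizes_mb)
```

With these invented numbers the slope comes out close to one second per megabyte, which is the kind of quantitative summary experimental research looks for.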
Pre-Test/Post-Test Control Group Design
- This is the classic experimental design.
- It allows you to split your sample into two distinct parts (A and B).
- You give the same test to the two groups before you start.
- You treat one of the groups.
- You apply the same test afterwards.

Group A (treated): Pre-test → Treatment → Post-test
Group B (untreated): Pre-test → Post-test
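A sketch of how this design is typically analysed, using invented scores: each subject's gain is the post-test score minus the pre-test score, and the treatment effect is estimated by comparing the mean gains of the two groups.

```python
# Invented pre-test / post-test scores for the two groups.
group_a_pre, group_a_post = [52, 48, 55, 60], [68, 63, 70, 74]   # treated
group_b_pre, group_b_post = [50, 53, 49, 58], [51, 55, 50, 60]   # control

def mean_gain(pre, post):
    """Average improvement from pre-test to post-test."""
    return sum(after - before for before, after in zip(pre, post)) / len(pre)

gain_a = mean_gain(group_a_pre, group_a_post)   # treated group's mean gain
gain_b = mean_gain(group_b_pre, group_b_post)   # control group's mean gain

# Estimated treatment effect; a t-test on the gain scores would be the
# usual next step to check statistical significance.
effect = gain_a - gain_b
```

The control group's gain acts as the benchmark: only the difference between the two mean gains is attributed to the treatment.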
Control Group
- The Control Group (B) is one which is the same in all respects, except for the fact that we make no changes.
- We can use the test measurements on the control group as a benchmark for any changes we make to the treatment group (A).
Example 1: Server Testing
- Suppose we wish to check whether a firewall is effective in blocking external attacks.
- Set up System A and System B on two different servers, running near-equivalent internal and external programs.
- Devise a test which includes a full range of possible attacks and apply it to both A and B. Take a series of measurements and observations.
- Incorporate the firewall into System A.
- Reapply the test to both systems, which are again running near-equivalent programs.
Example 2: Software Design
- Suppose we wish to find out whether anthropomorphic agents are useful in communicating with novice users of a website.
- Set up Website A and Website B with near-equivalent structure, but different content.
- Devise a procedure which asks two equivalent sets of users to make explorations of the websites, and apply it to both A and B. Make a series of observations.
- Incorporate Agents into Website A.
- Re-apply the procedure, asking two more equivalent sets of users to make the same explorations of both websites. Again make observations.
Example 3: On-Line Learning
- Suppose we wish to find out whether training is effective in helping students cope with the demands of on-line learning.
- Set up Group A and Group B with near-equivalent members.
- The two groups undertake a short programme of learning on-line, which incorporates a short evaluation and a knowledge test.
- Give training to Group A.
- The two groups undertake a further short programme of learning on-line, again incorporating a short evaluation and knowledge test.
Factorial Designs
- This is where we examine the effects of two or more independent factors simultaneously.
- For two factors, we would need 4 groups:
- Group A (Control: no treatment)
- Group B (Factor 1 treatment only)
- Group C (Factor 2 treatment only)
- Group D (Factors 1 and 2 treatments)
- Clearly, this is going to increase the complexity and size of the research, but it has the added benefit of producing verifiable results in cases where two factors interact to produce interesting effects.
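With invented group means, the four-group layout can be summarised like this: a main effect averages one factor's effect across both levels of the other factor, and the interaction asks whether one factor's effect changes when the other is present.

```python
# Invented mean outcomes for the four groups of a 2x2 factorial design,
# keyed by (factor 1 applied?, factor 2 applied?).
means = {
    (False, False): 10.0,  # Group A: control
    (True,  False): 14.0,  # Group B: factor 1 only
    (False, True):  12.0,  # Group C: factor 2 only
    (True,  True):  19.0,  # Group D: both factors
}

# Main effect of factor 1: its average effect across both levels of factor 2.
effect_1 = ((means[(True, False)] - means[(False, False)]) +
            (means[(True, True)] - means[(False, True)])) / 2

# Interaction: does factor 1's effect differ when factor 2 is present?
interaction = ((means[(True, True)] - means[(False, True)]) -
               (means[(True, False)] - means[(False, False)]))
```

A non-zero interaction is exactly the "two factors interact" case the slide mentions, and it can only be detected because all four groups were run.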
Other Design Variants
- Post-Test Only Control Group Design
- Here we assume that both groups are the same (but do not test that assumption). We simply apply the treatment to one group, and apply the test to both groups.
- Matched Pairs Design
- The individuals we use in the test groups (subjects) are matched for characteristics which are likely to affect the outcome (gender, age, level of education, ethnicity etc.).
Experimental Designs: Reliability Issues
- For high reliability, we need to build in strict controls over each of the independent variables affecting the outcomes of the experiment.
- We will also need to ensure that any measurements taken of the dependent variables are as accurate as possible.
- We also need to ensure that our methodology is clear and replicable.
Experimental Designs: Validity Issues
- For high validity, we need to conduct the experiment in as natural a setting as possible, and in as close a match as possible to the circumstances in which the events or objects would normally operate.
- If a sample is being used, then the sample (whether time, events, people or objects) should be as representative of the background population as possible, and we should make detailed observations of how the events unfold as well as the final measurements.
Experimental Design: References (SW Library)
- Campbell, Donald T. Experimental and quasi-experimental designs for research. Boston; London: Houghton Mifflin, 1963. 0395307872
- Field, Andy. How to design and report experiments / Andy Field, Graham Hole. London: SAGE, 2003. 0761973826
- Miller, Steve. Experimental design and statistics. London: Methuen, 1975. (Essential Psychology A8). m0805407
5c.2 Evaluative Research
- This section looks at the methods and issues surrounding evaluation. You may need to use such techniques if evaluation is implicit in your research questions.
Evaluative Research
- Evaluative Research covers those cases where you are attempting to compare whether one procedure or object is better or more effective than another, or to determine whether a particular procedure or object is fit-for-purpose.
- Evaluation normally needs to be done against a set of criteria which have been established as valid and reliable in this context. You would normally produce an evaluation form to be completed by respondents.
User Evaluation
- Most evaluative research will attempt to involve the potential users of the software at some point.
- This will provide very useful information on how easy the software is to use for different types of user, what issues arise in day-to-day use of the software, and whether or not the users feel that the software is intuitive, user-friendly and well-designed.
- It is possible to use a qualitative form of user evaluation, where a user is asked to use a piece of software and is then interviewed about the experience; this should be used with care, as while the information it provides is very useful, it can only be indicative.
Using Expert Judges
- Using Expert Judges is often an extremely useful method in evaluating a piece of software.
- Individual users may not have the experience, insight or knowledge to provide detailed information on every aspect of a piece of software.
- One of the main issues in using this method is to establish the credibility of the experts, and the relevance of their experience to the software to be evaluated.
- Expert Judges may also be used qualitatively, by asking them to evaluate a piece of software according to whatever criteria they feel to be appropriate; this in general is far more valid than asking individual users to comment.
Heuristic Evaluation
- Heuristic evaluation is a usability engineering method for finding the usability problems in a user interface design so that they can be attended to as part of an iterative design process.
- The method involves a small set of evaluators examining the interface and judging its compliance with recognized usability principles (the "heuristics").
- The evaluation is performed initially by individual evaluators working alone. Only after all evaluations have been completed are the evaluators allowed to communicate and have their findings aggregated.
- This procedure ensures independent and unbiased evaluations from each evaluator.
- http://www.useit.com/papers/heuristic/heuristic_evaluation.html
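The aggregation step described above can be sketched as follows. The evaluators' findings are invented, and the problem descriptions are illustrative only.

```python
# Invented findings from three evaluators who worked independently.
evaluator_findings = [
    {"no visible system status", "inconsistent labels"},
    {"inconsistent labels", "no undo"},
    {"no visible system status", "no undo", "poor error messages"},
]

# Findings are pooled only after every evaluation is complete,
# preserving the independence of each evaluator's judgement.
all_problems = set().union(*evaluator_findings)

# How many evaluators found each problem: issues reported by several
# evaluators are good candidates for attention first.
frequency = {p: sum(p in findings for findings in evaluator_findings)
             for p in all_problems}
```

Note that the union typically contains more problems than any single evaluator found, which is the usual argument for using several evaluators rather than one.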
Evaluation Criteria
- To establish criteria for evaluation, we need to break the topic down into individual elements, and state the different sub-topics on which the item is to be evaluated.
- Alongside this, we will normally state specific questions which will need to be answered in order to judge the item against the criterion.
- The answers to the questions may involve measurements, counts, assessments on subjective scales, statements of fact or possibly even opinion.
Example: Website Evaluation Criteria
- Design
- Is the use of colour acceptable?
- Are the elements in harmony?
- Navigation
- Do the links work?
- How many links per page?
- Content
- How many words and graphics are on the page?
- Is the text interesting and informative?
- Interactivity
- What interactive features are used?
- Do these improve communication with the user?
- Coding
- What scripting is used? Is it clearly annotated?
Example: Website Evaluation Form
- 1. Design
- Colour-safe palette used? Yes / No
- Harmony of elements on page: good / neutral / poor
- 2. Navigation
- Number of links on the page: ____
- Number of working links: ____
- 3. Content
- Total size of graphics files on page: ____ MB
- Is the text: boring / dull / neutral / interesting / fascinating
- 4. Interactivity
- Tick and/or name all interactive features used:
- rollovers / dynamic images / image maps / ______, ______
- 5. Coding
- What DTD has been used? _______________________
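A completed form like this can be represented as a simple data structure for analysis. The responses below are invented, and the field names are illustrative rather than prescribed by the form.

```python
# One hypothetical completed evaluation form.
response = {
    "design": {"colour_safe_palette": True, "harmony": "good"},
    "navigation": {"links_total": 24, "links_working": 22},
    "content": {"graphics_size_mb": 1.8, "text_rating": "interesting"},
    "coding": {"dtd": "HTML 4.01 Transitional"},
}

# A derived measure from the form: the proportion of links that work.
nav = response["navigation"]
working_ratio = nav["links_working"] / nav["links_total"]
```

Encoding forms this way makes it straightforward to combine many respondents' answers into counts and averages later on.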
Evaluation Criteria: Validity Issues
- For criteria to be valid, each of the criteria elements needs to have face or content validity: the question posed by the criterion needs to be relevant to the object, and to relate to a feature of it.
- We also need to be able to answer the questions in an objective manner, without recourse to guessing, or giving an impressionistic response.
Evaluation Criteria: Reliability Issues
- For criteria to be reliable, each of the criteria elements needs to be clear and unambiguous, so that different assessors would interpret the criteria in the same way.
- We also need to ensure that repetition of the evaluation exercise will yield results which are not dissimilar to one another.
Evaluation Methods: References (SW Library)
- Britain, Sandy. A framework for pedagogical evaluation of virtual learning environments. Manchester: Joint Information Systems Committee, 1999. (JISC Technology Applications Programme r..). M0000712EL
- Broadbent, George Ernest. The role of evaluation in user interface design: a practical implementation. Liverpool: University of Liverpool, 1997. p7270369
- Redmond-Pyle, David. Graphical user interface design and evaluation (GUIDE): a practical process. London: Prentice Hall, 1995. 013315193x
- Smeltzer, Nicholas. Critical analysis of the design and evaluation of a computer-based project. Liverpool: University of Liverpool, 2001. M0002704LO
82 5c.3 Observational Research
- This section describes what you need to do if
your research involves making detailed
observations of people, events or objects.
83Observational Research
- In this case you are trying to determine:
- Either: how an individual or group of individuals react, interact or behave in particular circumstances
- Or: how software or hardware performs when used by particular groups of people
84Participant Observation
- In this case, the researcher becomes one of the subjects, and works alongside them, monitoring behaviour and interacting with them.
- Pros: The researcher can fully understand what the issues are, and can achieve real validity in the research.
- Cons: The researcher can influence the research and alter opinions; the method is highly subjective and unreliable; taking notes is difficult, so it relies on good memory.
85Non-Participant Observation
- In this case, the researcher studies the situation apart from the subjects, taking notes, monitoring behaviour and observing interactions. The use of audio and video recording is useful here.
- Pros: The researcher can get an overview of the situation, and achieve objectivity.
- Cons: The researcher can only see resulting behaviours, not what is causing them or why they are happening.
86Observation Schedules
- An observation schedule can be a simple list of things to look for in a particular situation.
- It can also be far more complex: a minute-by-minute count of events such as mouse-clicks or verbal interactions between subjects.
87Observation Schedule: An Example
- Observation of subject using Information Portal
- Subject: M / F   Age: 18-21 / 21-30 / 31-50 / 50+
- Date _________ Time _________
- Selected navigation tool: Mouse / Keyboard / TouchScreen
- First 5 pages visited, in order: __ __ __ __ __
- Time to obtain required information: ___ min ___ sec
- Total number of pages visited: __
- Feedback from subject:
- very positive / positive / neutral / negative / very negative
88Observation Schedules: Some Reliability Issues
- For an observation schedule to be reliable, it should require structured documentation of events.
- This will involve such things as checklists, minute-by-minute categorisation of activity, and numerical data such as frequencies of occurrence and time intervals. Timings should have clear start and end points.
- The schedule should leave little room for subjective judgement.
89Observation Schedules: Some Validity Issues
- For an observation schedule to be valid, it should refer to events which actually happen: each of the events on the sheet should be possible, and likely to occur.
- Timings should be possible to take, and should not interfere with other observations which need to be made.
- There should be room for observations which enrich the data by adding detail to the numbers, offering explanation and illumination.
90Observational Research: References (SW Library)
- Harding, Jacqueline. How to make observations & assessments / Jackie Harding and Liz Meldon-Smith. 2nd ed. London: Hodder & Stoughton, 2000. 034078038X
- Robertson, Kevin. Observation, analysis and video / editors Anne Simpkin and Penny Crisfield. Leeds: National Coaching Foundation, 1999. 1902523164
- Simpson, Mary. Using observations in small-scale research: a beginner's guide / Mary Simpson. Glasgow: Scottish Council for Research in Education, 1995. (SCRE publications 16 130). 1860030122
91 5c.4 Survey-Type Research
- This section describes what you need to do in
order to use human respondents to provide you
with information.
92Survey methods
- There are two distinct elements here:
- The interview techniques that you adopt in order to elicit information from people
- The sampling methods that you adopt in order to select respondents for interview
- You will need to make rational choices for both of these elements, depending upon the focus of your research and the population under study.
93Interviewing
- We interview people face-to-face in order to find out exactly what they think.
- With the right questions, people respond with high-quality information.
- The data that you get can be of high validity, since you have access to respondents' own words.
94Types of Interviews
- Unstructured Interview
- Structured Interview
- Semi-Structured Interview
- Administered Questionnaire
95Open or Closed?
- When interviewing respondents, the main choice facing the researcher is whether to use open questions, which leave the respondent free to answer in any way they think fit, or closed questions, which force the respondent to make particular choices pre-determined by the researcher.
- OPEN: What is your experience of chat rooms?
- CLOSED: Do you think chat rooms should be monitored? (Yes/No/Maybe)
96Open or Closed? Validity & Reliability Issues
- In general, the data from open questions is richer, more illuminating, and more valid, as it can illustrate clearly why particular subjects think the way that they do, or why they behave in particular ways.
- Data from closed questions, however, is more amenable to statistical analysis: it is easier to summarise, spot trends and make comparisons. In general the data is more reliable.
- A good strategy is to use a mixture of both open and closed questions.
97Unstructured Interviews
- Unstructured Interview
- Interviewer has no set agenda; the object is to get the respondent to talk freely about various topics.
- Pros: Can yield high-quality data; you often get real insights into what people really think.
- Cons: Can be difficult for the novice researcher to carry out; you need to be good at steering the conversation without forcing it. Time-consuming, so you can only carry out a small number. Difficult to analyse.
98Semi-Structured Interviews
- Interviewer has a formal list of topics, which sets the agenda; however, the interviewer is free to take these in different orders, or to return to topics at different points.
- Here we include the idea of focus groups, where a facilitator encourages the discussion of a particular topic.
- Pros: Data provided can be almost as good as an unstructured interview, but in a more focussed manner.
- Cons: Can miss important ideas, because the agenda is set beforehand; interviews can also be time-consuming, and difficult to manage and analyse afterwards.
99Structured Interviews
- Interviewer has a list of topics to be taken in a particular order; questions are written, and read out.
- Pros: You can make good comparisons between different respondents; you also have access to respondents' thinking.
- Cons: Unless you have done some preliminary investigations, and extensively piloted the interviews, you may miss lots of important data. Takes time to do properly, so you can only carry out a few.
100Administered Questionnaire
- Interviewer has devised a list of questions which are read out; some multiple choice, some open-ended. Filled in either by respondent or interviewer.
- Pros: Interviews can be quite brief; it is easy to make comparisons, and the data is amenable to statistical analysis.
- Cons: Data may be warped by the choice of questions. Respondents may be forced into giving fabricated answers.
101Questionnaires
- There are many different types of questionnaire, depending upon what you are trying to find out.
- As well as gathering factual data about the person, you may be trying to explore their:
- Knowledge
- Beliefs
- Attitudes
- Opinions
- Behaviour
If you need to design a questionnaire, see for example http://www.leeds.ac.uk/iss/documentation/top/top2/index.html
102Question Types
- Factual Data
- Binary (Yes/No)
- Single Selection from Categories
- Multiple Selection from Categories
- Attitude Scale items
- Focussed items to elicit categories
- Conditional Questions for routing
- Open-ended Questions
- Self-Reporting
103Writing Questions
See http://www.analytictech.com/mb313/principl.htm
- Put Factual Data (demographics) at end
- Put Explanations and Disclaimers at the start
- Keep questions as brief as possible
- Use non-technical language where possible
- Avoid leading respondents towards particular answers
- Use direct questions, no hypotheticals
- Use simple questions, no portmanteaus
- Use a mixture of positively and negatively worded questions
- Verify data by asking the same question in different ways.
104Questionnaire Example
- 1. Which operating system are you currently using?
- Linux / Windows / Other
- 2. How would you rate the operating system?
- very poor / poor / good / very good
- 3. Circle the tasks which you use your computer to do:
- Word processing / Internet Access / Program Development
- 4. Estimate the number of hours you use the computer for each week:
- ______ hours
- 5. Do you use broadband?
- Yes / No
105Protocol
- Whatever your interviewing method, as part of the ethical constraints on Hope Researchers, you are required to do the following:
- Explain to the respondents what the research is about.
- Tell them that they will not be identified by name in the final report, and that any views that they express will be held in confidence.
- Explain to them that if there are questions with which they feel uncomfortable, they do not have to answer.
106Attitude Measurement
- You may wish to incorporate attitude measurement into your questionnaire as either a major or a minor feature.
- There are several different types of scales which can be used to elicit numerical measurements of attitude:
- Likert Scaling
- Thurstone Scaling
- Guttman Scaling
- Semantic Differential Scaling
- See http://www.socialresearchmethods.net/kb/scalgen.htm
107Likert Scales (1932)
- This consists of items like "I would not trust a bank's website to keep my details secure".
- Respondents are asked to use a scale such as: 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree.
- Scales can vary: 0-4, 1-7 etc.
- Some proponents suggest removing the neutral category to force a choice, but this can reduce validity.
- Scores on individual items are totalled to obtain the respondent's score; this is often shown as an average.
108Thurstone Scales (1928)
- This pre-ranks 11 statements with numerical values 1-11; each statement carries a numerical score, for example:
- 1 Teleworking is an impossible concept
- 2 Teleworking may be OK in exceptional cases
- 8 If offered the opportunity, I would try teleworking
- 11 In 20 years time, everyone will be teleworking
- Clearly, lots of preparatory work needs to be done to generate and rank the statements.
- A respondent's score is the average numerical value of all the items they agree with.
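The Thurstone scoring rule above, averaging the pre-assigned values of the agreed statements, can be sketched as follows; the scale values reuse the teleworking examples from the slide, but the particular respondent is invented.

```python
# Hypothetical sketch of Thurstone scoring: each statement carries a
# pre-assigned scale value (1-11); a respondent's score is the mean
# value of the statements they agreed with.
scale_values = {
    "Teleworking is an impossible concept": 1,
    "Teleworking may be OK in exceptional cases": 2,
    "If offered the opportunity, I would try teleworking": 8,
    "In 20 years time, everyone will be teleworking": 11,
}

def thurstone_score(agreed):
    """Mean scale value of the statements the respondent agreed with."""
    values = [scale_values[s] for s in agreed]
    return sum(values) / len(values)

score = thurstone_score([
    "If offered the opportunity, I would try teleworking",
    "In 20 years time, everyone will be teleworking",
])
print(score)  # 9.5
```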
109Guttman Scales (1944)
- This pre-ranks statements in order, very similarly to Thurstone scaling, except that here we attempt to construct the scale so that if a person agrees with item 4 on the scale, they will also agree with items 1, 2 and 3.
- When the questions are administered, the items are muddled, but each retains a ranking.
- The respondent's score is the sum of the ranks associated with the items he or she agreed with.
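The Guttman sum-of-ranks rule can be sketched in the same style; the item labels and their hidden ranks here are invented placeholders, since constructing a real Guttman scale needs the preparatory ranking work described above.

```python
# Hypothetical sketch of Guttman scoring: items are presented in a
# muddled order but each keeps its rank; the score is the sum of the
# ranks of the items agreed with.
item_ranks = {"A": 1, "B": 2, "C": 3, "D": 4}  # hidden cumulative ordering

def guttman_score(agreed):
    """Sum the hidden ranks of the items the respondent agreed with."""
    return sum(item_ranks[item] for item in agreed)

# On a perfect Guttman scale, agreeing with the rank-3 item implies
# agreeing with ranks 1 and 2 as well:
print(guttman_score(["A", "B", "C"]))  # 6
```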
110Osgood's (1957) Semantic Differential Scales
- Respondents are asked to rate an idea or an object against a series of opposing adjectives or descriptions, for example, "Using Learnwise":
- Exciting ___ ___ ___ ___ ___ ___ ___ Dull
- Hard ___ ___ ___ ___ ___ ___ ___ Easy
- Frustrating ___ ___ ___ ___ ___ ___ ___ Stimulating
- The scale asks respondents to tick the box nearest the descriptor that they agree with. Each box has a numerical value, e.g. 1, 2, 3 ... 7.
- The respondent's score can be portrayed graphically, or as a total of numerical values.
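As a sketch, a semantic differential response can be kept both as a labelled profile (for graphical portrayal) and as a numerical total; the adjective pairs follow the slide, but the ticked boxes are invented.

```python
# Hypothetical sketch of recording a semantic differential response;
# the ticked boxes are invented.
pairs = [("Exciting", "Dull"), ("Hard", "Easy"), ("Frustrating", "Stimulating")]
ticks = [2, 5, 6]  # box ticked on each 1-7 scale (1 = left-hand adjective)

# A labelled profile (useful for plotting) and an overall total.
profile = {f"{left}-{right}": tick for (left, right), tick in zip(pairs, ticks)}
total = sum(ticks)

print(profile)  # {'Exciting-Dull': 2, 'Hard-Easy': 5, 'Frustrating-Stimulating': 6}
print(total)    # 13
```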
111Questionnaires & Interviews: Reliability Issues
- For questionnaire results to be reliable, you need closed questions that are precisely framed and unambiguous, with response categories that are well-defined, exhaustive and exclusive. You need to ask the same question in different ways, and you need to collate the information into statistical summaries and expose the data to rigorous statistical testing.
- You need to use as large a sample as possible, and even out any random fluctuations in the data by using summary statistics and hypothesis testing.
112Questionnaires & Interviews: Validity Issues
- For questionnaire results to be valid, you need response categories which enable the respondents to express their views accurately; this can best be done with open questions. You should take time with the respondents and respect the data that they provide for you.
- You need to ensure that the sample is representative of the population, and large enough for the results to be statistically significant.
113Questionnaire References On-Line
- http://www.tardis.ed.ac.uk/kate/qmcweb/qcont.htm
- http://www.leeds.ac.uk/iss/documentation/top/top2.pdf
- http://www.statpac.com/surveys/
- http://www.fao.org/docrep/W3241E/w3241e05.htm#chapter%204%20questionnaire%20design
- http://www.surveysystem.com/sdesign.htm
114Questionnaire References (1)(SW Library)
- Foddy, William. Constructing questions for interviews and questionnaires: theory and practice. Cambridge: Cambridge University Press, 1993. 0521467330
- Fowler, Floyd J. Improving survey questions: design and evaluation. London: Sage, 1995. 0803945833
- Frazer, Lorelle. Questionnaire design and administration: a practical guide / Lorelle Frazer. Brisbane: Wiley, 2000. 0471342920
- Gillham, W. E. C. (William Edwin Charles), 1936-. Developing a questionnaire. London: Continuum, 2000. (Real world research). 0826447953
115Questionnaire References (2) (SW Library)
- Oppenheim, A. N. (Abraham Naftali), 1924-. Questionnaire design, interviewing and attitude measurement / A.N. Oppenheim. New ed. London: Continuum, 2000. 0826451764
- Wengraf, Tom. Qualitative research interviewing: biographic narrative and semi-structured interviews. London: SAGE, 2001. 0803975007
- Young, Pauline V. Scientific social surveys and research: an introduction to the background. Englewood Cliffs, N.J.: Prentice-Hall, 1966. (Prentice-Hall Sociology Series). m0859969
- Youngman, Michael Brendan. Designing and analysing questionnaires. Maidenhead, Berks: TRC Rediguide. (Rediguide 12). m0891542
116Sampling
- The question of how to select the respondents for the sample is always tricky.
- The principle behind sampling is that you should ensure the sample is appropriate, in order for you to generalise your results to the population under study.
- There are essentially three ways in which this is done:
- Random Sampling
- Quota Sampling
- Stratified Random Sampling
117Random Sampling
- This method requires you to get a list of the whole population, as near complete as you can find, then use some random selection method (such as shutting your eyes and stabbing a pen at the list, or allocating using random numbers).
- The idea is that every person in the population has an equal chance of ending up in the sample.
- Most statistical methods assume that you are sampling randomly, and it is the only method which overall is guaranteed to ensure that samples are free from bias, and therefore provide validity.
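A minimal sketch of drawing a simple random sample from a population list, assuming the list is reasonably complete; the population names are invented. Python's `random.sample` draws without replacement, giving every member an equal chance of selection.

```python
# Hypothetical sketch of simple random sampling; the population list
# is invented for illustration.
import random

population = [f"respondent_{n}" for n in range(500)]

random.seed(42)  # fixed seed so the draw is repeatable
sample = random.sample(population, 20)

print(len(sample), len(set(sample)))  # 20 20 -- no one is picked twice
```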
118Quota Sampling
- Here, you identify particular sectors of the population, such as men, women, those under 21, those over 21, employed, unemployed etc., and put quotas on the number of people in each category.
- For example, in a quota sample of 12, we might have:
- 2 males under 21, 2 females under 21
- 4 males over 21, 4 females over 21
- The purpose here is not to get a sample which represents the population in its entirety, but to ensure that views from important sections of the population are represented.
119Stratified Random Sampling
- One of the problems with Random Sampling is that if you take small samples, you can very well end up with a biased sample, by chance.
- In Stratified Random Sampling, you would measure what proportions of the population lie in each category or stratum, and select your sample so that you get precisely those proportions in those strata.
- For example, in your population you might have:
- 15% males under 21, 10% females under 21
- 30% males over 21, 45% females over 21
- If you select a sample of 200, you would need stratified samples of:
- 30 males under 21, 20 females under 21
- 60 males over 21, 90 females over 21
- The allocation of subjects to samples should be done randomly.
- The purpose here is to get a valid sample which represents the population in its entirety.
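The allocation arithmetic in the example above can be sketched as follows; the strata proportions and the sample size of 200 come from the slide, while the rounding step is an assumption for handling non-integer quotas.

```python
# Sketch of allocating a stratified random sample, using the strata
# proportions from the slide and a total sample size of 200.
proportions = {
    "males under 21": 0.15,
    "females under 21": 0.10,
    "males over 21": 0.30,
    "females over 21": 0.45,
}

def stratified_allocation(total, proportions):
    """Number of subjects to draw from each stratum."""
    return {stratum: round(total * p) for stratum, p in proportions.items()}

allocation = stratified_allocation(200, proportions)
print(allocation)
# {'males under 21': 30, 'females under 21': 20, 'males over 21': 60, 'females over 21': 90}

# Within each stratum the subjects themselves should still be chosen
# at random, e.g. random.sample(stratum_members, allocation[stratum]).
```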
120Other Sampling Methods
- Snowball Sampling: This method is used whenever you are dealing with a tricky subject, where people may be engaged in devious or illegal activities (for example hacking, virus creation etc.). In this case, you would use one respondent to suggest the name of another who might be willing to be interviewed; that one would lead to several others, and so on.
- Cluster Sampling: This method might be used if you have a diverse population (such as those living in African urban communities). Here you would randomly select particular cities, and sample within neighbourhoods within those cities.
- Convenience Sampling: This is used mainly for investigative research. It relies on the assumption that people appearing at a particular location or time do so at random (which may not be true). We use this assumption to stand in one place and interview whoever appears.
121Sampling On-Line References
- This is an easy introduction:
- http://www.csm.uwe.ac.uk/pwhite/SURVEY1/node26.html
- This is more complex and technical:
- http://www.socialresearchmethods.net/kb/external.htm
122 5c.5 Developmental Research
- This short section describes what is required
for a project which involves the development of a
piece of software or an application.
123Developmental Methods
- Depending upon the type of developmental work to be undertaken, you should engage in some of the following:
- An analysis of the current situation
- A detailed design for the new piece of software
- Testing & evaluation of the software once it has been developed
124Analysis of the current situation
- The analysis could involve some or all of the following:
- Detailed systems analysis of the context of application
- Examination of the theoretical background to developmental work of this nature, e.g. what are the available technologies, what are the theoretical constraints?
- A Feasibility Study
- A Cost-Benefit Analysis
- A User Needs Analysis
125Design for the software
- The design for the software could involve any of the following, as relevant to the particular project:
- Detailed plan of the software design structure, with Structured English blocks
- Storyboards
- Prototypes
- Detailed design specification, e.g. file sizes & types, screen layouts and resolution etc.
126Evaluation and Testing (1)
- Testing might involve:
- Using a code-checker (see e.g. http://w3schools.com for web pages)
- Designing a detailed testing procedure where each possible input and output is tested methodically
- Testing the software in situ
127Evaluation and Testing (2)
- Evaluation might involve:
- Observing users working with the software, and collecting data on times, reactions etc.
- Evaluation using experts
- User focus group sessions, using research instruments such as evaluation sheets and questionnaires
128Developmental Research: Reliability Issues
- For the results of developmental research to be reliable, another developer working in the same context should have reached the same solution that you did: the solution (and the software) should NOT depend upon your particular skills and knowledge, but rather on the specific requirements of the problem you were trying to solve.
- To do this, you need to ensure that the way the software has been constructed takes account of previous work in this area, and that you have made comparisons with currently existing software: both the existing software in the actual context in which you are working, and similar software in similar situations.
129Developmental Research: Validity Issues
- For the results of developmental research to be valid, you will need to ensure that the piece of software resulting from the research exactly matches the requirements of the context.
- The way to do this is to ensure that sufficient data is collected about the context prior to software design and construction, and that the design and product specification are agreed with clients and other interested parties.
- In addition, it is crucial that the software is tested and evaluated effectively, and that account is taken of the results, in order to ensure that the match with user requirements and contextual constraints has been satisfied.
130 6. Data Analysis
- This section of the presentation looks at the different types of data available, and how they can be analysed.
131Analysis of Research Data
Undertake Literature Review → Select Research Questions → Devise Methodology & Research Instruments → Apply Methods & Instruments → Perform Statistical Analysis → Test Hypotheses & Draw Conclusions
Here we examine how the data produced by the research can be presented effectively and statistically analysed.
132Data Analysis
- 6a Types Of Data
- 6b Extracting Data for Analysis
- 6c Presentation of Data