Transcript and Presenter's Notes

Title: Assoc. Prof. Sami Fethi


1
Department of Business Administration
SPRING 2009-10
The Research Design and Measurements
  • by
  • Assoc. Prof. Sami Fethi

2
The Research Design and Measurements
  • The design problem
  • Problem structure and research design
  • The problem of cause
  • The classic experiment
  • Validity threats
  • Defining measurement
  • Scales of measurement
  • Validity and reliability in measurements
  • Improving your measurements

3
The Research Design and Measurements
  • Research design
  • Research designs are master techniques
    (Kornhauser and Lazarsfeld, 1955).
  • The research design is the overall plan for
    relating the conceptual research problem to
    relevant and practicable empirical research as
    well as data collection and its analysis.
  • The design problem
  • Empirical research is conducted to answer (or
    elucidate) research questions.
  • Poorly formulated research questions lead to a
    misguided research design.
  • An open approach with no research questions is a
    possible research design, but it is a very risky
    approach (Hammersley and Atkinson, 1995).
  • Strategic choice of research design should come
    up with an approach that allows for solving the
    research problem in the best possible way within
    the given constraints.
  • e.g. Time, budgetary and skill constraints.
  • The choice of research design can be conceived as
    the overall strategy to get the information
    wanted. This choice influences subsequent research
    activities, such as data collection and its
    analysis, the 'servant' techniques (Kornhauser and
    Lazarsfeld, 1955).
  • When the design problem is neglected, design
    errors and irrelevant design choices occur.

4
Problem structure and research design
  • Structured and Unstructured
  • A political party wants to conduct a poll to
    examine its share of voters. This is a structured
    problem because the political party knows what
    information is wanted, that is, the percentage of
    voters. Descriptive-structured.
  • Company A's sales have dropped in the last two
    months and the management does not know why, that
    is, it does not know what has caused the decline
    in sales. This is a more unstructured problem.
    Exploratory-unstructured.
  • An advertising company has produced two sets of
    copy and wants to know which is the more effective
    in an advertising campaign. This case is
    structured and involves a cause-effect question.
    Causal-structured.
  • Exploratory research
  • When the research problem is badly understood, a
    more or less exploratory research design is
    adequate.
  • Flexibility in solving the problem is a key
    characteristic of exploratory research.
  • e.g. A detective TV series starts with a phone
    call reporting that somebody has been murdered.
    Who did it? Who is the guilty person? How does the
    detective proceed? S/he tries to collect data and
    find a lead; as new information comes up, the
    picture becomes clearer, and finally the detective
    finds the answer.
  • Exploratory research requires skills, but key
    requirements are often the ability to observe, get
    information and construct explanations.

5
Problem structure and research design
  • Descriptive research
  • In such research, the problem is structured and
    well understood.
  • e.g. Examine the case where a firm wants to look
    at the size of market M. The first step is to
    clarify what is meant by the market, such as
    specifying the potential group of buyers, a
    specific area and a specified time period.
  • Having defined the group (say X) and the time
    period (say one year), the researcher's task is to
    produce this information by conducting a survey if
    relevant secondary data are not available, which
    requires a sampling plan; beyond this point the
    task is to construct questions, that is,
    measurements.
  • Key characteristics of descriptive research are
    structure, precise rules and procedures.
  • In the following cross-table (Table 1), a
    researcher wants to describe smokers by social
    class.

6
Cross-table
Table 1 Cross-table
7
Problem structure and research design
  • Causal research
  • In this case, the problem under scrutiny is
    structured; however, in contrast to descriptive
    research, the researcher is also confronted with
    cause-effect problems.
  • In such research the main tasks are to isolate
    causes and to tell whether, and to what extent,
    the causes result in effects.
  • e.g. Is the medical drug effective? What dose is
    the most effective? Does the advertising help in
    achieving greater market share? See the following
    example for the case of causal research.

8
Weight loss programme
Group                 Diet      Exercise   Education   Control
Weight loss           -5.2 kg   -4.1 kg    -6.1 kg     -1.5 kg
Standard deviation     2.3       1.5        3.5         1.2
Participants           30        30         30          30
Table 2  The data show that all groups on average
have lost weight, but the diet, exercise and
education groups lost more than the control group.
Here diet, exercise and education are seen as
potential causes of weight loss.
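As a minimal illustration (not part of the original slides), the summary statistics in Table 2 can be turned into a rough comparison of each treatment group against the control group using Welch's t-statistic; the figures below are copied from Table 2, and everything else is an illustrative assumption.

import math

# Summary statistics from Table 2: (mean weight loss in kg, standard deviation, n)
groups = {
    "Diet":      (-5.2, 2.3, 30),
    "Exercise":  (-4.1, 1.5, 30),
    "Education": (-6.1, 3.5, 30),
}
control = (-1.5, 1.2, 30)

def welch_t(m1, s1, n1, m2, s2, n2):
    # Welch's t-statistic for the difference between two group means.
    return (m1 - m2) / math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

for name, (m, s, n) in groups.items():
    t = welch_t(m, s, n, *control)
    print(f"{name} vs control: difference {m - control[0]:+.1f} kg, t = {t:.2f}")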
9
The Problem of Cause
  • Cause
  • A dealer has reduced the price of TV sets by 10
    percent and sales increased by 20 percent. Is the
    price reduction a cause of the increased sales?
  • Managers are often preoccupied with success
    factors. Peters and Waterman (1982) claimed that
    being close to the customers is an important
    factor in explaining success. Is closeness to
    customers a cause of success?
  • In order to analyse the relationship between
    cause and effect, we need to use the covariation
    technique.
  • In such a framework, we ask whether price
    reduction covaries with change in sales, or
    closeness to customers with success.
  • In the following table, the effect is not always
    present when the cause is present, i.e. in 80 of
    the cases with a price reduction, no increase in
    sales occurs (see Table 3); alternative causes may
    be at work.

10
Covariation
Table 3  Covariation
11
The importance of theory
  • Cause-effect
  • The question of cause-effect also calls for a
    priori theory in research.
  • The need for theory can be illustrated in the
    following way. Assume that you have two variables
    (X and Y) in your research; the following
    relationships are possible
  • e.g. X → Y (X causes Y)
  • X ← Y (Y causes X)
  • X ↔ Y (mutual causation)
  • X   Y (no relationship)
  • The uses and roles of theory, roughly stated by
    March and Simon (1958), are multiple in research
    and include the following
  • identifying research problems
  • raising questions
  • identifying relevant factors and relationships
    i.e. Variables
  • interpreting observations or data
  • advancing explanations

12
The classic experiment
  • Experiment
  • Even though most business studies are not
    experimental as we cannot control organizational
    behaviour, the classic experimental research
    design is useful for understanding all other
    designs.
  • In the following figure, O denotes observations
    and X is the experimental stimulus. Observations
    are made both before (pre-test) and after
    manipulation of the experimental stimulus
    (post-test). Two groups are included, the
    experimental group and the control group, and R
    indicates randomization.
  • e.g. Some treatment such as a medical drug for
    headache.
  • Here the independent variable is the experimental
    stimulus (treatment), which takes the values 1
    (treatment) and 0 (no treatment) respectively.
  • The researcher has control over the independent
    variable, so s/he can manipulate the various
    experimental conditions. The impact of outside
    factors is assumed to be levelled out through
    randomization.
  • Do we need to use a control group?
  • We need such a group to evaluate whether the drug
    has any effect or not.
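As a concrete illustration (a minimal sketch, not part of the original slides), the following code simulates a pre-test/post-test experiment with randomization and a control group, and estimates the treatment effect as the difference in average change between the two groups; the sample size, true effect and noise levels are assumptions made for the example.

import random
import statistics

random.seed(1)

# Simulate 100 subjects with a pre-test observation (O1), then randomly (R)
# assign each to the experimental group (X = 1) or the control group (X = 0).
subjects = [{"pre": random.gauss(50, 10)} for _ in range(100)]
for s in subjects:
    s["treated"] = random.random() < 0.5                     # randomization
    true_effect = 5.0 if s["treated"] else 0.0               # assumed effect of the stimulus X
    s["post"] = s["pre"] + true_effect + random.gauss(0, 3)  # post-test observation (O2)

def mean_change(group):
    return statistics.mean(s["post"] - s["pre"] for s in group)

experimental = [s for s in subjects if s["treated"]]
control = [s for s in subjects if not s["treated"]]

# Estimated effect: change in the experimental group minus change in the control group.
print("estimated effect:", round(mean_change(experimental) - mean_change(control), 2))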

13
The Problem of Cause: the classic experiment
Figure 1 The classic experiment
14
The case of influenza
  • Influenza
  • In this case, 100 people diagnosed with influenza
    were randomly assigned to two groups, a test group
    that was given an effective drug and a control
    group that was given an ineffective one. After one
    week, they were asked 'Do you feel better?'
  • In the following table, a higher fraction of the
    test group reports feeling better than is the case
    for the control group. In other words, it is very
    likely that the drug has an effect.
  • In this case, the treatment is considered a cause;
    the effective medical drug really can be seen as a
    cause of improvement.
  • More people receiving the treatment feel better
    than those who did not.
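A minimal sketch of this comparison (the counts below are invented for illustration; the actual figures are those reported in the table on the next slide):

# Invented counts of people reporting 'I feel better' after one week.
test_better, test_n = 38, 50        # test group (effective drug)
control_better, control_n = 22, 50  # control group (ineffective drug)

# Compare the fraction reporting improvement in each group.
p_test = test_better / test_n
p_control = control_better / control_n
print(f"test: {p_test:.0%}, control: {p_control:.0%}, difference: {p_test - p_control:.0%}")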

15
Reported improvement in the test and control
groups
Table 3 Reported improvement in the test and
control groups
16
The effects of message and gender
  • Type of message and gender
  • The independent (explanatory) variable can
    definitely take more than two values.
  • e.g. To find out which selling strategy is most
    effective, the strategies can be labelled
    s1 phone call, s2 advertisement, s3 personal
    selling, s4 (per ads).
  • More than one independent (explanatory) variable
    (treatment) can be included.
  • e.g. The selling message uses either one-sided or
    two-sided arguments, and another variable is
    gender, that is, whether the salesperson is a
    woman (1) or a man (2). In this case it is also
    possible to capture an interaction effect: here
    (60 - 50) = 10 and (50 - 40) = 10, so no
    interaction effect is present (see Table 4).
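The interaction check can be written out as a small sketch (not from the slides; which cell carries which percentage is an assumption, only the differences 60 - 50 and 50 - 40 come from the text):

# Illustrative percentages of positive responses by message type and
# salesperson gender (the assignment of values to cells is assumed).
cells = {
    ("two-sided", "woman"): 60,
    ("one-sided", "woman"): 50,
    ("two-sided", "man"): 50,
    ("one-sided", "man"): 40,
}

# Effect of message type within each gender group.
effect_women = cells[("two-sided", "woman")] - cells[("one-sided", "woman")]
effect_men = cells[("two-sided", "man")] - cells[("one-sided", "man")]

# Equal effects in both groups mean there is no interaction effect.
print("effect among women:", effect_women)        # 10
print("effect among men:", effect_men)            # 10
print("interaction:", effect_women - effect_men)  # 0, no interaction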

17
The effects of message and gender
Table 4 The effects of message and gender
18
Validity threats
  • Validity
  • The researcher wants to obtain valid knowledge,
    that is, wants results that are true.
  • e.g. If a study shows that advertisement A is
    more effective than advertisement B, the
    researcher should be confident that this is the
    case (see Scase and Goffee, 1989).
  • There are mainly two types of validity: (1)
    internal validity and (2) external validity. (1)
    is the question of whether the results obtained
    within the study are true. (2) refers to the
    question of whether the findings can be
    generalized.
  • There are mainly four types of threats to
    validity: (1) history, (2) maturation, (3) test
    effect, (4) selection bias (see Cook and
    Campbell, 1979).
  • e.g. (1) A TV store reduces prices by 10 percent
    and sales increase by 20 percent, but what happens
    next month? Events outside the study are a
    potential threat. (2) Patients may also recover
    without treatment, so what is the cause of a
    treated patient's recovery, the medical drug or
    the immune system? (3) The test itself may affect
    the observed response: when people work with a
    specific programme, is their performance caused by
    the programme or by the fact that they are using
    their skills? (4) Selection bias arises when the
    subjects are not assigned randomly, e.g. in
    questions about cigarette advertisements.

19
Selection bias
  • Advertisement
  • In the following table selection bias can be
    clearly observed.
  • e.g. It could be argued that 20 percent of those
    who have seen the advertisement bought, while only
    5 percent of those who did not see the
    advertisement bought. Thus the advertisement has
    contributed 15 percentage points (20 - 5).
  • Is the observed finding valid? It may be, but the
    result may equally well be explained by other
    factors, such as preferences or selective
    perception.

20
Selection Bias
Table 5 Reading of advertisement and purchase
21
Other Designs
  • Other designs
  • When researchers want to study relationships such
    as organizational size and innovativeness, or
    gender and career, they cannot easily manipulate
    the size of an organization or gender. Thus other
    research designs are applied, as follows.
  • Cross-sectional design
  • Table 5 deviates from the classical experiment in
    several ways. There is no control group or
    randomization. The cause (advertisement reading)
    and effect (purchase) variables are also measured
    at the same time. This is a cross-sectional
    research design.
  • e.g. In Table 5, the researcher is confronted with
    several tasks in order to show that the
    advertisement may cause purchase. Here, one may
    control for another variable to capture the
    potential effect of other factors. In Table 6,
    innovativeness is higher in large organizations
    than in smaller ones. Here, industry may be an
    explanatory factor. In Table 7, the control
    variable is type of industry, and size seems to
    have no effect (a small computational sketch of
    such a control follows Table 7).
  • In cross-sectional research, data on the
    independent and dependent variables are gathered
    at the same point in time.
  • Time series
  • Data on the independent and dependent variables
    are gathered over time. The researcher empirically
    investigates whether the independent variable(s)
    can explain the dependent variable.

22
Innovativeness by organizational size
Table 6 Innovativeness by organizational size
23
Control for third variable
Table 7 Control for third variable
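As an illustration of controlling for a third variable (a minimal sketch with invented counts, not the data behind Tables 6 and 7), the code below cross-tabulates innovativeness by organizational size, first overall and then within each industry; in this made-up data the apparent size effect disappears once industry is held constant.

# Invented observations (size, industry, innovative); purely illustrative.
data = (
    [("large", "tech", True)] * 40 + [("large", "tech", False)] * 10 +
    [("large", "retail", True)] * 5 + [("large", "retail", False)] * 20 +
    [("small", "tech", True)] * 8 + [("small", "tech", False)] * 2 +
    [("small", "retail", True)] * 12 + [("small", "retail", False)] * 48
)

def share_innovative(rows):
    # Percentage of innovative organizations among the given rows.
    return 100 * sum(innovative for _, _, innovative in rows) / len(rows)

# Overall cross-table: large organizations appear more innovative.
for size in ("large", "small"):
    rows = [r for r in data if r[0] == size]
    print(f"{size} overall: {share_innovative(rows):.0f}% innovative")

# Controlling for industry: compare sizes within each industry separately.
for industry in ("tech", "retail"):
    for size in ("large", "small"):
        rows = [r for r in data if r[0] == size and r[1] == industry]
        print(f"{size} / {industry}: {share_innovative(rows):.0f}% innovative")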
24
Measurement
  • Defining measurement: GIGO (garbage in, garbage
    out)
  • Problems to be studied in business research are
    almost endless. Studies are empirical, implying
    the gathering and use of data, so empirical
    studies always imply measurement.
  • Measurement can be defined as rules for assigning
    numbers to empirical properties. Numbers enable
    the use of mathematical and statistical techniques
    for descriptive, explanatory and predictive
    purposes, which may reveal new information about
    the items studied.
  • In everyday life, we all make use of measurement;
    for example, a beauty contest can be conceived as
    some sort of measurement. A key element here is
    the mapping of some properties.
  • e.g. Measurement can take the following form: race
    can be coded as white = 1, black = 2, Hispanic = 3,
    other = 4. This assignment is a mapping, as
    illustrated in Figure 1, where gender is mapped
    into 1 (women) and 0 (men).
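A minimal sketch of such a coding rule (the category codes follow the slide; the respondents are invented for illustration):

# Coding rules: mapping nominal properties into numbers.
race_codes = {"white": 1, "black": 2, "Hispanic": 3, "other": 4}
gender_codes = {"woman": 1, "man": 0}

# Applying the mapping to a few invented respondents.
respondents = [("white", "woman"), ("Hispanic", "man"), ("other", "woman")]
coded = [(race_codes[race], gender_codes[gender]) for race, gender in respondents]
print(coded)  # [(1, 1), (3, 0), (4, 1)]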

25
Defining Measurement
Figure 1 Mapping (assignment)
26
Objects, properties and indicators
  • Measuring object or properties
  • In fact we do not measure objects or phenomena;
    rather, we measure specific properties of the
    object or phenomenon.
  • e.g. A medical doctor may be interested in
    measuring properties such as height, weight or
    blood pressure.
  • To map such properties, we use indicators, that
    is, the scores obtained by using our operational
    definitions, for example responses to a
    questionnaire (see Figure 2).

27
Object/phenomenon, properties and indicators
Figure 2 Object/phenomenon, properties and
indicators
28
Levels (scales) of measurement
  • Levels or scales
  • In empirical research, distinctions are often
    made between different levels of measurement or
    scales of measurement.
  • Nominal level (scale). This is the lowest level
    of measurement. At this level numbers are used to
    classify objects or observations.
  • e.g. It is possible to classify a population into
    females (1) and males (0). The same population can
    also be classified according to place of living,
    such as 1 = city centre, 2 = south, 3 = north,
    4 = east, and 5 = west.
  • Ordinal level (scale). Some variables are not
    only classifiable, but also exhibit some kind of
    relationship, allowing for rank order.
  • e.g. If we do not know the exact distance between,
    for example, A and B, but only that one is greater
    than the other, we can construct a scale such as
    the following. In this case, B is more satisfied
    than A, but we cannot say how much more satisfied.
  • e.g. Very dissatisfied (-3) --- A --- B --- (+3)
    Very satisfied

29
Levels (scales) of measurement
  • Interval level (scale)
  • When we know the exact distance between
    observations, and this distance is constant, an
    interval level of measurement has been achieved.
    This means that differences can be compared.
  • i.e. One difference can be compared to another,
    e.g. the temperature rises from 80 C to 100 C.
  • Ratio level (scale)
  • The ratio scale differs from an interval scale in
    that it has a natural zero point, so comparison of
    the absolute magnitude of numbers is legitimate.
  • i.e. A person weighing 200 pounds is said to be
    twice as heavy as one weighing 100 pounds.
  • See Table 8 for more information about the
    properties of the measurement scales.

30
Scales of Measurement
Table 8 Scales of measurement
31
Validity and Reliability in Measurement
  • Random error
  • Measurements often contain errors so when we
    measure something, we want valid measures. In
    order to clarify the notions of validity and
    reliability in measurement, let us focus on the
    following equation
  • X_O = X_T + X_S + X_R
  • where X_O is the observed score, X_T is the true
    score, X_S is systematic bias and X_R is random
    error.
  • In a valid measure the observed score should be
    equal to or close to the true score. Valid
    measures presume reliability, that is, that random
    error is modest. Reliability refers to the
    stability of the measure.
  • e.g. A boy's true height is 175 cm but the scale
    somehow measures 170 cm. A valid measure is also
    reliable, but a reliable measure does not need to
    be valid. The difference is assumed to be a random
    error component, and if the random error is larger
    than expected, the measure is neither valid nor
    reliable.
  • The following figure (Figure 3) illustrates how
    random measurement errors may influence the
    findings, i.e. observed r_XY = 0.8 x 0.5 x 0.5 =
    0.2.

32
Validity and Reliability in Measurement: Random
Errors
In this case, the unobserved correlation (cc)
between X and Y is 0.8. The cc between the concept
and the obtained measure is 0.5 for both X and Y,
and the observed relationship (cc) is 0.2.
Figure 3 Random errors
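To see why random error attenuates the observed relationship, the following is a minimal simulation sketch (not from the slides): two latent constructs correlated at about 0.8 are each measured with enough noise that the construct-measure correlation is about 0.5, and the observed correlation between the measures drops to roughly 0.8 x 0.5 x 0.5 = 0.2. The sample size and noise variance are assumptions chosen to reproduce those values.

import random
import statistics

def corr(a, b):
    # Pearson correlation coefficient between two equal-length lists.
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    return cov / (len(a) * statistics.pstdev(a) * statistics.pstdev(b))

random.seed(0)
n = 50_000

# Latent (true) scores: Y depends on X so that corr(X, Y) is about 0.8.
x_true = [random.gauss(0, 1) for _ in range(n)]
y_true = [0.8 * x + 0.6 * random.gauss(0, 1) for x in x_true]

# Observed scores: true score plus random error with variance 3, which makes
# the construct-measure correlation 1 / sqrt(1 + 3) = 0.5.
x_obs = [x + random.gauss(0, 3 ** 0.5) for x in x_true]
y_obs = [y + random.gauss(0, 3 ** 0.5) for y in y_true]

print("true relationship:", round(corr(x_true, y_true), 2))        # about 0.8
print("construct versus measure:", round(corr(x_true, x_obs), 2))  # about 0.5
print("observed relationship:", round(corr(x_obs, y_obs), 2))      # about 0.2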
33
Multiple indicators
  • Multiple indicators
  • Multiple indicators are often used to capture a
    given construct. For example, attitudes are often
    measured by multiple items combined into a scale.
  • The main reason for using multiple indicators is
    to create a measurement that covers the domain of
    the construct which it purports to measure. Random
    error in measurement is also reduced.
  • e.g. Cronbach's alpha is often reported because it
    is a measure of the intercorrelations between the
    various indicators used to capture the underlying
    construct (a small computational sketch follows at
    the end of this slide).
  • Construct validity
  • It can be defined as the extent to which an
    operationalization measures the concept which it
    claims to measure (Zaltman et al., 1977: 44).
  • It is necessary for meaningful and interpretable
    research findings, and it can be assessed in the
    following ways:
  • (1) face validity, (2) convergent validity,
    (3) divergent validity.
  • (1) To what extent does the measure used seem
    reasonable? (2) To what extent do multiple
    measures or multiple methods for measuring the
    same construct yield similar results? (3) To what
    extent is a construct distinguishable from another
    construct?
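A minimal sketch (not from the slides) of how Cronbach's alpha can be computed from item scores; the five respondents and three items are invented for illustration.

import statistics

# Invented item scores: each row is one respondent, each column one indicator.
items = [
    [4, 5, 4],
    [2, 3, 2],
    [5, 4, 5],
    [3, 3, 4],
    [1, 2, 2],
]

k = len(items[0])  # number of indicators
item_variances = [statistics.pvariance(column) for column in zip(*items)]
total_variance = statistics.pvariance([sum(row) for row in items])

# Cronbach's alpha: k / (k - 1) * (1 - sum of item variances / variance of total score).
alpha = k / (k - 1) * (1 - sum(item_variances) / total_variance)
print(round(alpha, 2))  # about 0.93 for these invented scores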

34
Two methods, two constructs
Table 9 reports that the CCs for X and for Y across
the two methods are 0.82 and 0.79 respectively. As
the CCs for the same construct measured by different
methods are high, and substantially higher than any
between-construct CC, it is reasonable to assume
convergent validity.
Table 9 Two methods, two constructs
35
Improving your measurements
  • Improving your measurements
  • Elaborate the conceptual definitions
  • Develop operational definitions (measurement)
  • Correct and redefine measurement
  • Pre-test the measures for their reliability
  • Use the final measurement instrument