Predictors - PowerPoint PPT Presentation

Transcript and Presenter's Notes

Title: Predictors


1
Predictors
  • Definition: A predictor is any variable used to
    forecast or predict a criterion.
  • I/O psychologists use a variety of predictors.
  • We need to evaluate the quality of these
    predictors.
  • To evaluate predictors we must determine their
    reliability and validity.

2
Predictors
  • Classical Measurement Theory
  • Formula: X = t + e
  • t = true score (e.g., actual ability level)
  • e = measurement error (e.g., guessing, fatigue,
    biases, scoring)
  • X = observed score (what you actually get)
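The X = t + e decomposition can be illustrated with a minimal simulation (the true score, error standard deviation, and seed below are hypothetical values, not from the slides):

```python
import random

# Classical measurement theory: X = t + e.
# A fixed true score plus random error produces a
# different observed score on every administration.
_rng = random.Random(42)  # seeded for reproducibility

def observed_score(true_score, error_sd=5.0):
    """Return X = t + e with normally distributed error e."""
    return true_score + _rng.gauss(0, error_sd)

# Five administrations for the same person (t = 100):
scores = [observed_score(100) for _ in range(5)]
```

Averaging many observed scores converges toward the true score, which is one reason longer or repeated measurements tend to be more reliable.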

3
Predictors
  • There is always some error involved, so a single
    predictor is never a perfect indicator. The goal
    is to have the predictor (e.g., IQ test) relate
    strongly to whatever it is you're trying to
    predict (e.g., job performance). The stronger the
    relationship, the better you will be able to
    predict the criterion of interest.

4
Quality of Predictors - Reliability
  • Reliability: The extent to which a measure is
    consistent or stable.
  • Reliability tells you about the consistency of
    the measure, not its validity. You may have a
    consistent measure, but your measure may be
    inaccurate.
  • Example: A scale that is consistently off.
  • Your measure must be reliable. If your measure
    is unreliable, it is useless.
  • Acceptable level is approximately .70.

5
Types of Reliability
  • Test-retest - stability of a measure over time.
  • Often called the coefficient of stability.
  • 1 test, 2 time periods
  • High reliability means those who score high at
    time 1 will also score high at time 2, and vice
    versa. Scores should be similar across the two
    time periods.
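The coefficient of stability is simply the Pearson correlation between the two administrations. A minimal sketch, using hypothetical scores for five test-takers:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical scores: same test, same five people, two occasions.
time1 = [85, 92, 78, 95, 88]
time2 = [87, 90, 80, 96, 85]

stability = pearson_r(time1, time2)  # high: rank order is preserved
```

Here the rank order of the five people barely changes between occasions, so the coefficient of stability comes out well above the .70 benchmark.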

6
Types of Reliability
  • Internal consistency: Extent to which items (or
    sections) within a test measure the same thing.
  • Measures homogeneous content (if that's what you
    expect)
  • Two methods:
  • Split-half: scores on the odd and even items are
    correlated
  • Calculate coefficients (based on inter-item
    correlations)
  • Cronbach's alpha
  • Kuder-Richardson 20 (KR-20)
  • 1 test, 1 time
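Both internal-consistency methods can be computed directly. The sketch below implements Cronbach's alpha from item and total-score variances, plus the Spearman-Brown step-up conventionally applied to a split-half correlation (the item data are hypothetical):

```python
def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one list of respondent scores per test item."""
    k = len(items)
    totals = [sum(person) for person in zip(*items)]
    item_var = sum(variance(i) for i in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

def spearman_brown(r_half):
    """Project a split-half correlation up to full-test length."""
    return 2 * r_half / (1 + r_half)

# Hypothetical 3-item test answered by five people.
items = [[4, 3, 5, 2, 4],
         [5, 3, 4, 2, 5],
         [4, 2, 5, 3, 4]]
alpha = cronbach_alpha(items)
```

When every item measures exactly the same thing, alpha reaches 1.0; items that do not hang together pull it down.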

7
Types of Reliability (cont.)
  • Equivalent Forms - Extent to which 2 tests
    measure the same thing.
  • Parallel forms of the test are developed and
    administered.
  • The correlation between the scores on each test
    is referred to as the coefficient of equivalence.
  • Not popular. A lot of work. Many tests do not
    have equivalent forms. Intelligence and
    personality tests are two exceptions.
  • 2 tests, 1 time period

8
Types of Reliability (cont.)
  • Inter-rater reliability - Specialized reliability
    for when ratings are used.
  • Extent to which 2 or more raters agree
  • Provides support that the ratings may be
    accurate
  • Example: interview ratings.
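For categorical ratings, raw percent agreement overstates reliability because two raters can agree by chance. One common chance-corrected statistic (not named in the slides) is Cohen's kappa, sketched here with hypothetical interview judgments:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters' labels."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # Agreement expected by chance, from each rater's base rates.
    expected = sum(c1[cat] * c2[cat] for cat in c1) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical hire/no-hire judgments from two interviewers.
r1 = ["hire", "hire", "no", "no", "hire", "no"]
r2 = ["hire", "no",   "no", "no", "hire", "no"]
kappa = cohens_kappa(r1, r2)
```

Kappa of 1.0 means perfect agreement; 0 means agreement no better than chance.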

9
Validity
  • Validity is the extent to which a measure or
    predictor is accurate or correct.
  • Example: An individual's college GPA is a valid
    predictor of his or her future work performance.
    However, an individual's race, gender, or sexual
    preference is not a valid predictor.
  • Validity depends partially on how a test is used;
    it is not just a characteristic of the measure.
  • Example: Emotional stability is considered to be
    a valid predictor of performance for high-risk
    jobs, such as nuclear power plant operator.
    However, it is not very valid for most jobs.

10
Validity (cont.)
  • Types of Validity
  • Criterion-related: extent to which the predictor
    is related to (correlates with) the criterion of
    interest.
  • Concurrent - predictor and criterion are assessed
    at roughly the same time. For example, employees
    are given a test and the test scores are
    correlated with some measure of job performance.
  • Predictive - predictor is assessed at time 1 and
    criterion is assessed at time 2. For example,
    applicants are given a test, but are hired using
    other tests already in use. After 1 year, the
    scores on the test are correlated with some
    measure of job performance.
  • There are advantages and disadvantages to each of
    these methods.
  • Acceptable validity coefficients range from about
    .30 - .40.
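A concurrent criterion-related study reduces to correlating current employees' test scores with their performance ratings and comparing the result against the slides' rough .30-.40 benchmark. A minimal sketch with hypothetical data:

```python
def pearson_r(x, y):
    """Pearson correlation: the validity coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical data: eight employees' test scores and
# their supervisor performance ratings (concurrent design).
test_scores = [72, 85, 60, 90, 78, 65, 88, 70]
performance = [3.1, 4.0, 2.8, 4.2, 3.6, 3.4, 3.9, 2.9]

validity = pearson_r(test_scores, performance)
acceptable = validity >= 0.30  # the slides' rough cutoff
```

A predictive design would use the same computation, but with test scores collected at hiring and performance measured a year later.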

11
Validity (cont.)
  • Types of Validity (cont.)
  • Content: extent to which the measure covers a
    representative sample of the desired content.
  • Example - a test for this class is content valid
    to the extent that it covers a good sample of the
    discussed material. This is why longer tests are
    usually better.

12
Validity (cont.)
  • Construct validity: extent to which a measure is
    an accurate representation of a theoretical
    construct.
  • Example - an IQ test is construct valid to the
    extent that it actually measures intelligence.
  • How do we determine construct validity?
  • Does the test relate to other valid measures of
    the same construct? (convergent validity)
  • Does the test not relate to things we know are
    not related to the construct of interest?
    (divergent validity)

13
Validity (cont.)
  • Face Validity - perceptions of validity, usually
    by those using or affected by a test (e.g.,
    supervisors and applicants).
  • Although this is not actual validity, perceptions
    are important.
  • Individuals' perceptions of validity do not
    always match well with actual validity.

14
Validity (cont.)
  • Reliability and Validity
  • Reliability is a necessary but not a sufficient
    condition for validity. In other words, to the
    degree that a measure is unreliable, it is not
    valid. But being reliable does not automatically
    make a measure valid.
  • For a measure to be useful, it should be both
    reliable and valid.
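Classical test theory makes the "necessary but not sufficient" point quantitative: an observed validity coefficient cannot exceed the square root of the product of the predictor's and criterion's reliabilities. A minimal sketch of that bound:

```python
def max_possible_validity(rel_predictor, rel_criterion=1.0):
    """Classical test theory upper bound on observed validity:
    r_xy <= sqrt(r_xx * r_yy)."""
    return (rel_predictor * rel_criterion) ** 0.5

# A predictor with reliability .49 can never show a validity
# coefficient above .70, no matter how good the construct is.
bound = max_possible_validity(0.49)
```

This is why unreliability caps validity, while perfect reliability still guarantees nothing: the bound only says how high validity *could* go.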

15
Psychological Tests and Inventories
  • Tests contain right or wrong answers.
  • Examples: the ACT, IQ tests, job knowledge tests,
    etc.
  • Inventories have no right or wrong answers.
  • Examples: personality measures, interest
    inventories, etc.
  • We will be referring to both types as tests.
  • Tests are designed to measure individual
    differences.

16
History of Psychological Tests
  • Galton (late 1800s)
  • Heredity (vision, hearing, muscular strength,
    etc.)
  • First large-scale collection of individual
    differences information.
  • Cattell (late 1800s to early 1900s)
  • Developed the first mental test.
  • Measured intelligence by sensory discrimination
    and reaction time.

17
History of Psychological Tests
  • Binet (early 1900s)
  • Appointed by French government to study the
    education of mentally retarded children.
  • Developed tests to determine intelligence.
  • Terman (early 1900s)
  • Continued Binet's work and developed the concept
    of IQ (intelligence quotient).

18
Types of Tests
  • Speed v. Power
  • A speed test is made up of very easy questions.
    The goal is to determine the number of questions
    the person can answer in the time given.
  • A power test is made up of difficult questions
    (or a mix) and has no time limit, or a time limit
    that is extremely generous.
  • Tests are usually a combination, but are more
    power- than speed-focused.
  • If you use a speed test, you should have a good
    reason why. In other words, speed should be part
    of the job.

19
Types of Tests (cont.)
  • Individual vs. Group Administration
  • Individual tests are given to one person at a
    time, usually out of necessity.
  • Group tests are given to multiple people
    simultaneously. This is the most common type of
    test.

20
Types of Tests (cont.)
  • Paper and Pencil vs. Performance
  • Paper and pencil tests require the test-takers to
    answer questions (to display knowledge). Most
    common.
  • Performance tests require the test-taker to
    perform an action (to prove a skill or ability).

21
Ethical Issues
  • Due to the sensitive nature of psychological
    tests and their potential implications, there are
    some guidelines.
  • Some tests place restrictions on who can
    administer or score the measure (e.g., certain
    personality tests).
  • Privacy - avoiding inappropriate use of a test.
  • Confidentiality - restricting access to test
    results.
  • Testing is a very sensitive issue in many
    organizations.
  • Organizations are often concerned about having
    their tests challenged or facing lawsuits based
    on testing disputes.
  • It is important for organizations to validate the
    tests they use.

22
Test Content
  • Intelligence
  • General mental ability (g) - provides an
    understanding of an individual's general level of
    intellectual capacity.
  • Multiple intelligences (e.g., reasoning, music,
    finger dexterity)
  • Most jobs are well predicted by g (validity .40 -
    .60).
  • Low Cost
  • Disadvantage: Racial differences.

23
Test Content
  • Mechanical Aptitude
  • Assesses ability to recognize mechanical
    principles.
  • Bennett Test of Mechanical Comprehension
  • Average validities in the .25 - .35 range.
  • Low Cost

24
Test Content
  • Physical Abilities
  • Measures attributes such as strength and stamina.
  • High Validity
  • Low Cost
  • Disadvantage: Gender differences, and only
    relevant for certain jobs.

25
Test Content
  • Sensory-motor
  • Assesses sensory abilities, such as vision and
    hearing.
  • Small-parts tests for motor ability and
    coordination.
  • Moderate Validity
  • Low Cost

26
Test Content
  • Integrity Tests
  • Designed to detect those who will engage in
    dishonest behavior.
  • Overt vs. personality-based
  • With overt tests, the intent of the test is clear
    to the taker.
  • Personality-based tests focus on predicting
    dishonest behavior using conventional personality
    variables, such as conscientiousness.
  • Low Cost.
  • Difficult to validate due to low occurrence of
    theft.
  • Possibility of faking.

27
Test Content
  • Personality and interest inventories
  • No right or wrong answer.
  • MMPI - clinically based, only appropriate for
    certain jobs.
  • Big Five theory of personality
  • Openness - imagination, curiosity
  • Conscientiousness - responsibility and
    dependability
  • Extraversion - sociability
  • Agreeableness - courtesy, tolerance
  • Neuroticism - emotional stability
  • Most support for job-relevance of
    conscientiousness (moderate validity).
    Extraversion tends to predict sales performance,
    and openness to experience tends to predict
    training success.
  • Possibility of faking
  • Recent research has shown that people can fake
    these tests.

28
Test Content and Administration
  • Test batteries
  • Combination of several tests.
  • Advantages: gives a lot of information.
  • Disadvantages: costly and time-consuming
  • Computer Adaptive Testing (CAT)
  • Tailored to the individual
  • Reduces testing time
  • Costly
  • Knowledge Tests
  • Assess specific knowledge bases
  • Example - certification tests

29
Interviews
  • Most popular selection method
  • Subjective
  • Structured vs. Unstructured (a continuum)
  • Structured
  • Interviewer asks the same questions of all
    applicants.
  • Questions should be based on job analysis.
  • Interviewer records responses.
  • Requires interviewer training.
  • Unstructured
  • No requirements are made on what questions are
    asked or how the interviewer conducts the
    interviews.
  • Which do you think is more valid? Which do you
    think interviewers and applicants like better?

30
Structured Interview Questions
  • What was your first job after completing college?
  • What were your major accomplishments on that job?
  • What were some of the things you did less well,
    things that point to the need for further
    training and development?
  • What did you learn about yourself on that job?
  • What aspects of the job did you find most
    challenging?
  • What sort of work would you like to be doing five
    years from now?

31
Interviews
  • Situational Interviews
  • Applicants are asked how they would respond to
    particular situations.
  • Responses are scored against a preset scale.
  • Mixed support for interviews
  • High face validity.
  • Better support for structured than unstructured,
    both reliability and validity.
  • Many organizations include them to make
    applicants happy.
  • Improving Interviews
  • Interviewer training.
  • Multiple interviews or interviewers (e.g., Civil
    Service).

32
Interviews
  • Negative v. Positive Information
  • Primacy and Recency Effects
  • Primacy - remembering and/or utilizing
    information presented first (applies to general
    impressions).
  • Recency - remembering and/or utilizing
    information presented last (applies to details).
  • Contrast Effects
  • Interviewer Stereotypes and Biases
  • Impression Management

33
Assessment Centers
  • An assessment center is not a place, but rather a
    series of appraisal activities and tests
    (predictors).
  • Usually for promotion into managerial positions.
  • Usually multiple participants and multiple
    raters.
  • Example exercises: leaderless group discussion,
    in-basket exercises, role play, problem-solving
    simulation.
  • Example performance dimensions: leadership,
    communication, decision making, planning,
    interpersonal attributes.
  • Advantages: high criterion and face validity
  • Disadvantages: costly; validity may be
    misleading (self-fulfilling prophecy)

34
Work Samples and Situational Exercises
  • Work Sample
  • High fidelity simulations
  • Better for blue-collar jobs.
  • Assess what a person can do, but not necessarily
    what a person will do (maximal vs. typical
    performance).
  • Costly
  • High face validity
  • If designed properly, high criterion validity
  • Situational Exercises
  • Same idea as work samples.
  • Designed more for white-collar jobs.
  • In-basket exercises and role plays are two good
    examples.

35
Biographical Information (biodata)
  • Biographical information assesses constructs that
    shape our behavior, such as sociability and
    ambition.
  • Wide variation in the types of questions asked.
  • Biases in opportunity.
  • Example: a question about playing high school
    football.
  • Invasiveness and privacy.
  • Depends on the job (e.g., FBI)
  • Faking - answers should be verifiable
  • Despite these problems, biodata typically
    demonstrates high validity and fairness.

36
Other Selection Measures
  • References and Letters of Recommendation
  • Very common
  • Not very valid due to restriction of range.
  • Legal worries have limited the use of these.
  • Drug Testing
  • Imperfect reliability is a big problem.
  • Views tend to depend on the job.
  • Normand, Salyards, and Mahoney (1990)
  • 5,465 job applicants tested for illicit drugs
  • After 1.3 years of employment
  • Users had an absenteeism rate 59.3% higher
  • Users had a 47% higher involuntary turnover rate.

37
Other Selection Measures
  • Polygraph
  • Lie detector
  • Reliability also an issue.
  • Typically used for only certain jobs (FBI).
  • Graphology
  • Handwriting analysis
  • Popular in Europe
  • Almost 0 validity (same as chance).
  • Emotional Intelligence
  • Includes things such as handling one's emotions,
    motivating oneself, and handling relationships.
  • Relatively new, needs to be researched.