Title: The Dual Tasks of Interviewers
1. The Dual Tasks of Interviewers
- Ting Yan
- Colm O'Muircheartaigh
- Jenny Kelly
- Pat Cagney
- Rebecca Jessoe
- NORC at the University of Chicago
- Kenneth Rasinski
- University of Chicago
- Gary Euler
- Centers for Disease Control and Prevention
2. What do interviewers do?
- Recruiting potential respondents
- Introducing survey to potential respondents
- Gaining cooperation
- Screening for eligible respondents
- Administering interviews
- Reading questions
- Recording answers
- Probing
- Providing definitions
3. Desired qualities of interviewers
- When recruiting respondents
- Adaptive and flexible (Converse & Schuman, 1974)
- Tailoring (Groves & McGonagle, 2001; Houtkoop-Steenstra & van den Bergh, 2002; Maynard & Schaeffer, 2002)
- Maintaining interaction (Groves & McGonagle, 2001)
- Interviewers who developed their own approach had lower refusal and higher cooperation rates than those who followed a standard script
- When administering interviews
- Technician-like (Converse & Schuman, 1974)
- Standardized interviewing (Fowler and Mangione, 1990)
- Conflicting?
4. How do interviewers affect survey error?
- Recruiting respondents
- Nonresponse error
- If interviewers consistently attract respondents with certain characteristics
- Administering interviews
- Measurement error
- Interviewer bias
- Interviewer variance
- If interviewers consistently influence responses
in a certain way
5. Research questions
- Is there a relationship between interviewers' performance at recruiting respondents and at administering interviews?
- Are interviewers who are good at recruiting respondents also good at collecting data of good quality?
- How does interviewer experience mediate this relationship, if it exists?
6. Data
- National Immunization Survey (NIS)
- Nationwide, list-assisted random-digit-dialing (RDD) survey conducted by NORC for the Centers for Disease Control and Prevention
- Monitors the vaccination rates of children between the ages of 19 and 35 months
- 2007 Q3 data
- 712 interviewers worked
- 499,490 telephone numbers dialed
- 4,438 interviews obtained
7. Which interviewers were included in the analysis?
- Interviewers who had completed interview(s) on first contact
- 295 interviewers
- 3,114 completes
8. Measures of recruitment task
- (First-contact) refusal rate
- Refusals / first-contact cases
- (First-contact) completion rate
- Completes / first-contact cases
- (First-contact) eligibility rate
- Eligibles / first-contact cases
- Denominator: first-contact cases
- Virgin (fresh) cases, or cases that had been dialed only by autodialers
- Cases that had not been touched by a human interviewer before being sent to the current interviewer
- Refusal conversion rate
- Converted refusals / refusals
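A minimal sketch of how these per-interviewer rates could be computed from first-contact call records. The calls DataFrame, its column names (interviewer_id, first_contact, outcome), and the outcome codes are hypothetical, not the NIS data layout.

```python
import pandas as pd

# Hypothetical call records: one row per dialed case sent to an interviewer.
calls = pd.DataFrame({
    "interviewer_id": [1, 1, 1, 1, 2, 2, 2, 2],
    "first_contact":  [True, True, True, False, True, True, True, True],
    "outcome": ["refusal", "complete", "ineligible", "refusal",
                "refusal", "eligible", "complete", "ineligible"],
})

# Denominator: first-contact (virgin) cases only.
fc = calls[calls["first_contact"]]

per_interviewer = (
    fc.assign(
        refused=fc["outcome"] == "refusal",
        completed=fc["outcome"] == "complete",
        # Assumption: completed interviews also count as eligible cases.
        eligible=fc["outcome"].isin(["eligible", "complete"]),
    )
    .groupby("interviewer_id")[["refused", "completed", "eligible"]]
    .mean()  # mean of booleans = rate per first-contact case
    .rename(columns={"refused": "refusal_rate",
                     "completed": "completion_rate",
                     "eligible": "eligibility_rate"})
)
print(per_interviewer)
```

The refusal conversion rate would be computed the same way, but with refusals (rather than first-contact cases) as the denominator.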
9. Measures of administration task
- Interviewer effect (ρ_int)
- Adherence to standardized interviewing (monitoring data)
- Item nonresponse
- Interview time (cost)
10. Good openers vs. bad openers
- Good openers: 3 out of the 4 rates are above the medians (a classification sketch follows the table)

                            Good Openers   Bad Openers
  # of interviewers              100            195
  Average # of interviews         13              9
  Refusal Rate                  10.46          13.09
  Refusal Conversion Rate        1.34           0.32
  Completion Rate                0.30           0.15
  Eligibility Rate               3.91           2.37
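A minimal sketch of the median-split classification, assuming a per-interviewer table of the four recruitment rates (the rates DataFrame and its values are hypothetical). The slide states the rule as "3 out of 4 rates above the medians"; the sketch assumes the refusal rate counts when it is below its median, since lower refusal is the favorable direction.

```python
import pandas as pd

# Hypothetical per-interviewer recruitment rates (one row per interviewer).
rates = pd.DataFrame({
    "refusal_rate":            [0.09, 0.15, 0.12, 0.08],
    "refusal_conversion_rate": [0.020, 0.000, 0.005, 0.030],
    "completion_rate":         [0.004, 0.001, 0.002, 0.005],
    "eligibility_rate":        [0.045, 0.020, 0.030, 0.050],
}, index=[101, 102, 103, 104])          # interviewer IDs

medians = rates.median()

# Which rates are on the "good" side of the median for each interviewer?
good_side = pd.DataFrame({
    # Assumption: lower-than-median refusal counts as favorable.
    "refusal_rate":            rates["refusal_rate"] < medians["refusal_rate"],
    "refusal_conversion_rate": rates["refusal_conversion_rate"] > medians["refusal_conversion_rate"],
    "completion_rate":         rates["completion_rate"] > medians["completion_rate"],
    "eligibility_rate":        rates["eligibility_rate"] > medians["eligibility_rate"],
})

# Good opener: at least 3 of the 4 rates on the good side of the median.
good_opener = good_side.sum(axis=1) >= 3
print(good_opener)
```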
11. Good openers vs. bad openers (II)
- When experience is introduced
- Median split on # of days worked at NORC

                  Number of Interviewers      Average Number of Interviews Completed
                  Good Openers  Bad Openers   Good Openers  Bad Openers
  Experienced          67            83            15            11
  Inexperienced        33           112            11             8
12. Good openers vs. bad openers (III)
- When experience is introduced
- Median split on # of days worked at NORC

                 Refusal Rate   Refusal Conversion Rate   Completion Rate   Eligibility Rate
                 Good    Bad    Good    Bad               Good    Bad       Good    Bad
  Experienced    10.5    12.5   1.7     0.6               0.32    0.16      4.1     2.2
  Inexperienced  10.5    13.5   0.5     0.1               0.27    0.15      3.6     2.5
13. ρ_int
- ρ_int = intra-interviewer correlation
- Deff_int = 1 + ρ_int(m - 1), where m is the average number of interviews per interviewer
- Hierarchical linear models
- Respondent data as level-1 data
- Interviewer data as level-2 data
- Unconditional model with no explanatory variables at either level
- ρ_int = between-interviewer variance / total variance
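A minimal sketch of how ρ_int and Deff_int could be estimated with the unconditional two-level model, here using statsmodels' MixedLM (a random-intercept model) as a stand-in for the hierarchical linear model. The simulated respondents DataFrame and the outcome family_income are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical two-level data: respondents (level 1) nested in interviewers (level 2).
rng = np.random.default_rng(0)
n_int, m = 50, 12                                  # 50 interviewers, 12 interviews each
interviewer_id = np.repeat(np.arange(n_int), m)
interviewer_effect = rng.normal(0, 0.5, n_int)[interviewer_id]
respondents = pd.DataFrame({
    "interviewer_id": interviewer_id,
    "family_income": 50 + interviewer_effect + rng.normal(0, 2, n_int * m),
})

# Unconditional model: intercept only, with a random intercept per interviewer.
fit = smf.mixedlm("family_income ~ 1", respondents,
                  groups=respondents["interviewer_id"]).fit()

between = fit.cov_re.iloc[0, 0]                    # between-interviewer variance
within = fit.scale                                 # within-interviewer (residual) variance
rho_int = between / (between + within)             # rho_int = between / total variance
deff_int = 1 + rho_int * (m - 1)                   # m = average interviews per interviewer
print(f"rho_int = {rho_int:.4f}, Deff_int = {deff_int:.2f}")
```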
14. ρ_int (II)

  Family Income
                  Good openers   Bad openers
  Experienced         0.0825        0.0337
  Inexperienced       0.2395        0.1053
                      0.1236        0.0786
15. ρ_int (III)

  # of people living in household
                  Good openers   Bad openers
  Experienced         0.0082        0.0408
  Inexperienced       0.0004        0.0139
                      0.0013        0.0239
16. ρ_int (IV)

  # of Vaccines Received (Average)
                  Good openers   Bad openers
  Experienced         0.0003        0.0003
  Inexperienced       0.0041        0.0026
                      0.0234        0.0085
17. Monitoring scores
- Monitoring items
- Reads questionnaire verbatim
- Probes without biasing or leading / probes for Don't Knows
- Reads scales as directed, etc.
- Scores
- 1 = Error
- 2 = No Error
- 3 = Outstanding
- Item-level monitoring score for each interviewer
- Overall summary score for each interviewer
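A minimal sketch of aggregating the monitoring scores into item-level and overall summary scores per interviewer, assuming a long-format table of monitoring observations (the monitoring DataFrame and its item names are hypothetical).

```python
import pandas as pd

# Hypothetical monitoring data: one row per (interviewer, item) observation,
# scored 1 = Error, 2 = No Error, 3 = Outstanding.
monitoring = pd.DataFrame({
    "interviewer_id": [101, 101, 101, 102, 102, 102],
    "item":  ["reads_verbatim", "probes_neutrally", "reads_scales"] * 2,
    "score": [2, 3, 2, 1, 2, 2],
})

# Item-level mean score for each interviewer.
item_scores = monitoring.pivot_table(index="interviewer_id", columns="item",
                                     values="score", aggfunc="mean")

# Overall summary score: mean across all monitored items for each interviewer.
summary_scores = monitoring.groupby("interviewer_id")["score"].mean()

print(item_scores)
print(summary_scores)
```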
18. Monitoring scores (II)
- Good openers on average have higher mean scores than bad openers, but the difference is significant for only one monitoring item
- Reads questionnaire verbatim
- Verifies dates and confirms spelling
- Properly obtains all provider information
- Uses job aids as needed
- Reads scales as directed
- Records open-ended responses verbatim
- Probes without biasing or leading / probes Don't Knows
19. Summary scores across monitoring items
20. Item Nonresponse
- A set of 24 questions everyone had to answer
- Item nonresponse rate = # of times R didn't provide an answer / 24
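A minimal sketch of this item nonresponse rate, assuming answers are stored one column per question with missing answers coded as NaN (the responses DataFrame and its column names are hypothetical).

```python
import numpy as np
import pandas as pd

# Hypothetical responses: one row per respondent, 24 question columns,
# NaN where the respondent did not provide an answer.
rng = np.random.default_rng(1)
question_cols = [f"q{i + 1}" for i in range(24)]
responses = pd.DataFrame(
    np.where(rng.random((6, 24)) < 0.1, np.nan, 1.0),
    columns=question_cols,
)
responses["interviewer_id"] = [101, 101, 101, 102, 102, 102]

# Item nonresponse rate = # of unanswered items / 24, per respondent.
responses["item_nr_rate"] = responses[question_cols].isna().sum(axis=1) / 24

# Average item nonresponse rate per interviewer.
print(responses.groupby("interviewer_id")["item_nr_rate"].mean())
```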
21. Average Interview Duration (cost)
- Time spent completing an interview
- The longer the interview, the higher the cost
22. Provider consent rate
- Chart: provider consent rates of 79.8% vs. 74.9%
23. Conclusions and Discussion
- Good-opener interviewers
- More completes
- Higher refusal conversion, completion, and eligibility rates
- Lower refusal rate
- Good-opener interviewers
- Higher intra-interviewer correlation
- But more adherence to standardized interviewing (higher monitoring scores)
- More missing data
- Are good openers also good at collecting data of good quality?
- No one clear answer
- Depends on which measure of the interviewing tasks is used
- Experience didn't matter much
24. Limitations and Next Steps
- Only used various rates to measure interviewers' performance at the recruitment stage
- Demographic compositions by interviewer status
- Nonresponse error by interviewer status
- Only used proxy measures of data quality
- Direct measures of measurement bias
- Interviewer characteristics and respondent characteristics not considered
- Bringing interviewer and respondent characteristics into the picture
- Examining the effect of matched interviewer and respondent characteristics
25. Thank You!