Title: Validation of Intermediate Measures (VIM) Study Results
Slide 1: Validation of Intermediate Measures (VIM) Study Results
- Michael F. Green, PhD
- UCLA Semel Institute
- Department of Psychiatry and Biobehavioral Sciences, Geffen School of Medicine at UCLA
- Department of Veterans Affairs, VISN 22 Mental Illness Research, Education and Clinical Center (MIRECC)
- October 27, 2009
- Bethesda, MD
Slide 2: Steps in the VIM Study
1. Select key criteria for selection (Selection Criteria Committee)
2. Solicit nominations for intermediate measures (MATRICS-CT)
3. Select and categorize nominated measures for RAND Panel (VIM Committee)
4. Create database on criteria for candidate measures (UCLA staff/faculty)
5. Evaluate measures on criteria with RAND Panel Method (RAND Panelists)
6. Select measures for VIM Study (VIM Committee)
7. Conduct VIM Study (Site PIs and VIM Committee)
8. Review / summarize VIM results (VIM Committee / R. Kern)
9. Public presentation of results (October 2009)
Slide 3: VIM Study Results: Stages of Evaluation
1. Scientific criteria (reliability and validity)
   - Key scientific criteria: test-retest reliability; correlation with cognitive performance
   - Additional scientific criteria
2. Operational criteria: practicality, tolerability, duration
3. Cross-cultural considerations
Slide 4: Process for Selecting Measures From VIM Study
Full Measures:
1. Measures evaluated for scientific criteria (key scientific criteria; additional scientific criteria)
2. Initial selection of measure(s) for further consideration
3. Selected measure(s) evaluated for operational criteria
4. Refined selection of measure(s) for further consideration
Slide 5: Validation of Intermediate Measures (VIM) Study Results - Recruitment
Slide 6: Validation of Intermediate Measures (VIM) Study Results - Recruitment
Includes 27 subjects who did not meet eligibility criteria and 3 who withdrew consent. 3 of these subjects had invalid assessments based on outlier scores and behavioral observations. Final sample: 163.
Slide 7: VIM Study Results: Data Preparation
Data Cleaning:
1. Comprehensive review of all missing data.
2. Review of all behavioral notes for validity.
3. Comprehensive review of all scores outside +/- 2 SD; these scores were judged as valid or not by the local site.
Data Distribution: distributions were examined for all key dependent variables by the VIM Committee; no transformations were needed.
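The +/- 2 SD screen in step 3 can be sketched as follows. This is a minimal illustration on hypothetical scores; in the actual study, flagged scores were additionally weighed against behavioral notes and judged valid or invalid by the local site.

```python
import random
import statistics

# Hypothetical scores for a sample of 163 (not real VIM data).
random.seed(0)
scores = [random.gauss(50, 10) for _ in range(163)]

# Flag any score more than 2 SD from the sample mean for review.
mean = statistics.mean(scores)
sd = statistics.stdev(scores)
flagged = [i for i, s in enumerate(scores) if abs(s - mean) > 2 * sd]

# Each flagged score would then be judged valid or invalid locally.
print(f"{len(flagged)} of {len(scores)} scores flagged for review")
```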
Slide 8: VIM Study Results: Sample Demographics
Note: duration of illness is defined as current age minus age of onset.
Slide 9: VIM Study Results: Degree of Independent Living
Slide 10: VIM Study Results: Clinical Symptoms at Baseline
Slide 11: VIM Study Results: Clinical Stability
No significant differences.
Slide 12: VIM Study Results: Key Measures at Baseline
Note: higher is worse.
Slide 13: VIM Study Results: Correlations with Clinical Symptoms
Note: for CAI and CGI-cognition, higher is worse. p < .05; p < .01.
Slide 14: Process for Selecting Measures From VIM Study
Full Measures:
1. Measures evaluated for scientific criteria (key scientific criteria; additional scientific criteria)
2. Initial selection of measure(s) for further consideration
3. Selected measure(s) evaluated for operational criteria
4. Refined selection of measure(s) for further consideration
Slide 15: VIM Study Results: Test-Retest Reliability
ILS, CAI > TABS
Inter-rater reliability for CAI: ICC = .73; the VIM study used the same rater 88% of the time.
Slide 16: VIM Study Results: Correlation with Cognitive Performance
UPSA > ILS > CGI-cog, CAI; TABS > CGI-cog, CAI
Slide 17: VIM Study Results: Additional Scientific Criteria
Utility as a repeated measure / Correlation with functioning
Slide 18: VIM Study Results: Practicality and Tolerability
Practicality and tolerability rated on a 1-7 scale, where 7 is best.
Tolerability: CAI, TABS > UPSA > ILS
Administration time: ILS > TABS > UPSA > CAI
Slide 19: VIM Study Results: Missing Data
Slide 20: Process for Selecting Measures From VIM Study
Full Measures:
1. Measures evaluated for scientific criteria (key scientific criteria; additional scientific criteria)
2. Initial selection of measure(s) for further consideration
3. Selected measure(s) evaluated for operational criteria
4. Refined selection of measure(s) for further consideration
5. Selected measure(s) evaluated for cultural adaptation (only if measures are not at all culturally adaptable)
Short Forms:
1. Measures evaluated for scientific criteria (key scientific criteria; additional scientific criteria)
2. Initial selection of measure(s) for further consideration
3. Other considerations
4. Refined selection of measure(s) for further consideration
Slide 21: VIM Study Results: Short Forms - Test-Retest Reliability
No significant differences among tests.
Slide 22: VIM Study Results: Short Forms - Correlation with Cognitive Performance
No significant differences among tests.
Slide 23: VIM Study Results: Short Forms - Additional Scientific Criteria
Utility as a repeated measure / Correlation with functioning
Slide 24: VIM Study Results: Difference Between Short and Long Forms
Slide 25: VIM Study Results: Implications of Reduced Reliability
A very simple hypothetical:
- An effect size of d = .5 exists in reality between 2 groups.
- Total sample size needed to achieve power = .80, alpha = .05, two-tailed, at different levels of reliability.
G. Hellemann
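The hypothetical above can be sketched numerically. This is an assumption-laden approximation, not G. Hellemann's exact calculation: it assumes unreliability attenuates the observed effect size as d_obs = d_true * sqrt(reliability), and uses the standard normal-approximation sample-size formula for a two-tailed, two-group comparison.

```python
from math import ceil, sqrt
from statistics import NormalDist

def total_n(d_true, reliability, alpha=0.05, power=0.80):
    """Approximate total sample size (both groups combined), from
    n per group = 2 * ((z_{1-alpha/2} + z_{power}) / d_obs)^2."""
    z = NormalDist().inv_cdf
    d_obs = d_true * sqrt(reliability)  # attenuated effect size
    n_per_group = 2 * ((z(1 - alpha / 2) + z(power)) / d_obs) ** 2
    return 2 * ceil(n_per_group)

# Required total N for a true d = .5 at decreasing reliability.
for rel in (1.0, 0.9, 0.8, 0.7, 0.6):
    print(f"reliability {rel:.1f}: total N = {total_n(0.5, rel)}")
```

Under these assumptions, perfect reliability needs a total N of 126, while reliability of .7 pushes it to 180, which is the slide's point: lower test-retest reliability requires larger samples.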
Slide 26: VIM Study Results: Conclusions
1. The VIM Committee followed a clearly defined process for evaluating study results for 5 measures: TABS, ILS, UPSA, CAI, CGI-cognition.
2. For the full measures, the UPSA was the leading measure because of:
   - good test-retest reliability
   - excellent shared variance with cognitive performance
   - good utility as a repeated measure (no floor / ceiling effects)
   - reasonable tolerability and practicality
3. For the short forms, the TABS and UPSA were the leading measures because of:
   - well-defined short forms
   - moderate shared variance with cognitive performance
   - acceptable utility as a repeated measure
   - but lower test-retest reliability, requiring larger samples
Slide 27: VIM Committee and Site PIs
- VIM Committee
  - Michael F. Green (chair) - UCLA, VISN 22 MIRECC
  - Nina R. Schooler (co-chair) - SUNY Downstate, VISN 5 MIRECC
  - Fred Frese - Northeastern Ohio Universities College of Medicine
  - Wendy Granberry - GSK
  - Philip D. Harvey - Emory University
  - Craig N. Karson - Merck
  - Stephen R. Marder - UCLA, VISN 22 MIRECC
  - Nancy Peters - Sanofi-aventis
  - Michelle Stewart - Pfizer
  - Ellen Stover - NIMH
- VIM Study Site PIs
  - Robert Kern - UCLA, VISN 22 MIRECC
  - Larry Seidman - Beth Israel Deaconess / Harvard
  - John Sonnenberg - Uptown Research
  - William Stone - Beth Israel Deaconess / Harvard
  - David Walling - Collaborative Neuroscience Network