1
Measuring preservice teacher self-efficacy of
technology integration
  • Jeremy Browne
  • Department of Instructional Psychology & Technology
  • Brigham Young University
  • United States
  • browne@byu.edu

2
IPT 286 / 287
  • Technology Integration
  • Not a computer course
  • Required for all preservice teachers
  • 286: Secondary Education
  • 287: Elementary, Early Childhood, and Special Education
  • Aligned with ISTE's NETS-T

3
Fostering Technology Integration
[Diagram: Skills & Knowledge (National Educational Technology Standards; can / can't) and Dispositions (Confidence, Perceived Value; will / won't), together shaping Effective In-Practice Technology Integration]
4
Why Self-Efficacy?
  1. More clearly defined than "confidence"
  2. Well-established measurement methodology
  3. Significant predictor of many in-practice
    behaviors

5
1. Self-Efficacy Defined
  • Self-efficacy is a personal belief about one's
    own ability to perform a given action (Bandura,
    1997; Denzine et al., 2005).
  • Not to be confused with teacher efficacy
    (Tschannen-Moran et al., 1998).

6
2. Self-Efficacy Measures
  • Bandura's (2006) guide for constructing
    self-efficacy scales

7
3. Predictive Power
  • Job-search self-efficacy was a significant
    predictor of interviews, offers, employment
    status, and person-job (P-J) fit perceptions
    (Saks, 2006).
  • Perceived math self-efficacy predicted interest
    in the subject (Özyürek, 2005).
  • Data analysis indicated that perceived
    self-efficacy was a significant predictor of the
    performance of new in-practice teachers
    (Jablonski, 1995).

8
3. Predictive Power
  • Among the six subscales of empowerment,
    professional growth, status, and self-efficacy
    were significant predictors of organizational and
    professional commitment (Bogler & Somech,
    2004).
  • Teachers' perceived self-efficacy and context
    beliefs regarding the use of computer technology
    correlated significantly with reported hours of
    in-class technology use (Whitehead, 2002).

9
Self-Efficacy as a Mediator
  • Self-efficacy mediates responses to distressing
    events (Chwalisz et al., 1992):
  • High self-efficacy → problem-focused coping (PFC)
  • Low self-efficacy → emotion-focused coping (EFC)
  • EFC, not PFC, was associated with higher levels
    of burnout among in-practice teachers.

10
Literature Review
  • Don't reinvent the wheel (find an existing
    measure).
  • Don't reuse a flat tire.
  • MUTEBI (Enochs et al., 1993)
  • Finding: we needed to create our own measure,
    the Technology Integration Confidence Scale
    (TICS).

11
TICS Item Development
  • Begin with the NETS-T
  • Write 4-7 tasks for each standard
  • Review by faculty & students
  • Pen & paper comments
  • Return to step 2

12
Important Deviations
13
TICS v1
  • 28-item TICS
  • Web-based
  • 52 Spring-term preservice teachers
  • Administered at end of term
  • Described in proceedings

14
TICS v2
  • 33 items
  • Expanded coverage of specific NETS-T standards
  • Targeted item revision (e.g., Item 13)
  • Larger sample (200)
  • Pre- and post-course administration
  • New General Self-Efficacy Scale (NGSE; Chen et
    al., 2001) administered concurrently

15
Results: Item Analysis (Pretest)
  • Improvement from TICS v1:
  • Lower means (10 items > 4.0)
  • Higher variances (0 items < .5)
  • NETS-T well represented
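
These criteria can be checked mechanically: on the 5-point scale, item means above 4.0 suggest ceiling effects and variances below .5 suggest restricted spread. A minimal Python sketch, assuming responses arrive as a respondents-by-items array scored 1-5 (the simulated data below is purely illustrative):

```python
import numpy as np

def flag_items(responses, mean_cutoff=4.0, var_cutoff=0.5):
    """Flag items with ceiling-level means or restricted variance.

    responses: 2-D array, rows = respondents, columns = items (scored 1-5).
    Returns indices of items whose mean exceeds mean_cutoff and indices
    of items whose sample variance falls below var_cutoff.
    """
    means = np.nanmean(responses, axis=0)
    variances = np.nanvar(responses, axis=0, ddof=1)
    high_mean = np.where(means > mean_cutoff)[0]
    low_var = np.where(variances < var_cutoff)[0]
    return high_mean, low_var

# Illustrative 5-point responses: 200 respondents x 33 items
rng = np.random.default_rng(0)
sim = rng.integers(1, 6, size=(200, 33)).astype(float)
print(flag_items(sim))
```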

16
Results: Reliability Analysis (Pretest)

NETS-T   N     # items   Alpha   Items needed for α = .80   Items needed for α = .90
I.A      233   6         .84     5                          11
I.B      238   2         .80     2                          5
II       235   7         .90     4                          7
III      231   5         .88     3                          7
IV       234   4         .82     4                          9
V        234   5         .83     5                          10
VI       234   4         .86     3                          6
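
The two projected-items columns follow from the Spearman-Brown prophecy formula: lengthening a k-item scale by a factor m yields α' = mα / (1 + (m - 1)α), so the factor needed to reach a target reliability is m = α_target(1 - α) / (α(1 - α_target)). A minimal Python sketch, using the I.A row as a worked check (rounding up to whole items is an assumption, and projections from the two-decimal alphas shown here can differ by an item from ones computed on unrounded alphas):

```python
import math

def items_needed(n_items, alpha, target):
    """Spearman-Brown projection: shortest scale reaching a target alpha."""
    m = target * (1 - alpha) / (alpha * (1 - target))  # lengthening factor
    return math.ceil(n_items * m)

# NETS-T I.A from the table: 6 items, alpha = .84
print(items_needed(6, 0.84, 0.80))  # -> 5, matching the table
print(items_needed(6, 0.84, 0.90))  # -> 11, matching the table
```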
17
Results: Factor Analysis (Pretest)

NETS-T   # items   % variance (Factor 1)   % variance (Factor 2)
I.A      6         57.7                    --
I.B      2         84.1                    --
II       7         63.8                    --
III      5         68.7                    --
IV       4         65.6                    --
V        5         61.0                    --
VI       4         70.7                    --
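
A dash in the Factor 2 column indicates that a single factor sufficed for that subscale. As a hedged sketch of how such percentages are typically obtained (assuming, since the slide does not say, that they reflect the first eigenvalue of the inter-item correlation matrix divided by the number of items):

```python
import numpy as np

def variance_explained(responses):
    """Percent of total variance carried by the first factor,
    taken from eigenvalues of the inter-item correlation matrix."""
    corr = np.corrcoef(responses, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
    return 100 * eigvals[0] / eigvals.sum()

# Simulated one-factor data: 235 respondents, 7 items (cf. NETS-T II)
rng = np.random.default_rng(1)
theta = rng.normal(size=(235, 1))
items = 0.8 * theta + 0.6 * rng.normal(size=(235, 7))
print(round(variance_explained(items), 1))
```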
18
RSM (Functional)
[Figure: category probability curves for the five response categories, Strongly disagree through Strongly agree]
19
RSM (Functional)
[Figure: category probability curves, continued]
20
RSM (Functioning)
[Figure: category probability curves for a functioning rating scale]
21
RSM (Malfunctioning)
[Figure: category probability curves for a malfunctioning rating scale]
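
Slides 18-21 plotted category probability curves from the rating scale model (RSM), the Rasch-family model for Likert items. Under the RSM, the probability of endorsing category k of an item with location δ and shared thresholds τ_j is P(X = k | θ) ∝ exp(Σ_{j≤k}(θ - δ - τ_j)), with τ_0 = 0. A minimal sketch of the kind of curves shown (δ and the τ values below are illustrative, not TICS estimates); in a functioning scale each category is the most probable response over some stretch of θ, while a malfunctioning scale has categories that are never modal:

```python
import numpy as np

def rsm_probs(theta, delta, taus):
    """Andrich rating scale model category probabilities.

    theta: ability values (1-D array), delta: item location,
    taus: thresholds tau_1..tau_{K-1} (tau_0 = 0 is implicit).
    Returns an array of shape (len(theta), K), K = len(taus) + 1.
    """
    taus = np.concatenate(([0.0], taus))
    # cumulative sum over categories of (theta - delta - tau_j)
    logits = np.cumsum(theta[:, None] - delta - taus[None, :], axis=1)
    expd = np.exp(logits - logits.max(axis=1, keepdims=True))
    return expd / expd.sum(axis=1, keepdims=True)

theta = np.linspace(-4, 4, 9)
# Illustrative ordered thresholds for a 5-category scale
probs = rsm_probs(theta, delta=0.0, taus=np.array([-1.5, -0.5, 0.5, 1.5]))
print(probs.round(2))  # each row sums to 1; the modal category shifts with theta
```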
22
NGSE
23
NETS-T I.A (pre & post)
24
NETS-T I.B (pre & post)
25
NETS-T II (pre & post)
26
NETS-T III (pre & post)
27
NETS-T IV (pre & post)
28
NETS-T V (pre & post)
29
NETS-T VI (pre & post)
30
Evidence of Validity
31
TICS v1 Construct Validity

Results of the item-domain congruence survey: number and percent of judges who classified each item on its intended subscale.

NETS-T   Item number   Number   Percent
II       11            2        40
II       15            2        40
II       25            3        60
II       26            0        0
II       28            1        20
III      9             1        20
III      10            2        40
V        13            2        40
V        16            1        20
32
TICS v1 Content Validity

Item relevancy scores (Aiken's V index): number of judges who classified each item at each relevance level.

Subscale   Item number   Relevant   Somewhat relevant   Somewhat irrelevant   Irrelevant   Aiken's V
II         28            3          0                   2                     0            .730
III        10            2          3                   0                     0            .800
IV         27            2          1                   2                     0            .670
VI         20            2          2                   1                     0            .730
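
Aiken's V rescales the judges' ratings to [0, 1]: score the c ordered categories s = 0 … c - 1 (Irrelevant = 0 up through Relevant = 3), sum across the n judges, and divide by n(c - 1). A minimal Python sketch that reproduces the table's values:

```python
def aikens_v(counts):
    """Aiken's V from rating counts.

    counts: judges per category, ordered from most relevant (score c-1)
    down to least relevant (score 0), matching the table's column order:
    Relevant, Somewhat relevant, Somewhat irrelevant, Irrelevant.
    """
    c = len(counts)                # number of categories
    n = sum(counts)                # number of judges
    scores = range(c - 1, -1, -1)  # Relevant = 3 ... Irrelevant = 0
    s = sum(k * score for k, score in zip(counts, scores))
    return s / (n * (c - 1))

# Item II.28 from the table: 3 Relevant, 0, 2 Somewhat irrelevant, 0
print(round(aikens_v([3, 0, 2, 0]), 3))  # -> 0.733, the table's .730
print(round(aikens_v([2, 3, 0, 0]), 3))  # -> 0.8, matching item III.10
```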
33
Anachronistic View of Validity
  • The Holy Trinity (Guion, 1980)
  • Content Validity
  • Construct Validity
  • Criterion Validity
  • Convergent Validity
  • Discriminant Validity
  • Others:
  • Consequential Validity
  • Face Validity
  • Etc.

34
Modern View of Validity
  • There is no validity but construct validity
    (Messick, 1995; AERA, APA, & NCME, 1999).
  • The traditional "validities" are reassigned as
    sources of validity-supporting evidence.

35
Validity
  • is a property of your interpretation of the test
    data (not of the test or the data).
  • is "an evaluative judgment of the soundness of
    your interpretations and uses of students'
    assessment results" (Nitko & Brookhart, 2006).
  • changes based on purpose.

36
Applying Modern Validity Theory to the TICS
  • Intended purposes:
  1. Establish a baseline preservice teacher profile
  2. Monitor the effects of curricular adjustments
  3. Identify preservice teachers most in need of
     intervention
  4. Predict in-practice technology integration

37
1. Establish a baseline preservice teacher profile
  • Assumes the TICS functions well psychometrically.
  • Internal structure analysis
  • Expert reviews
  • Low correlation with NGSE
    (r < .28, i.e., under 8% of variance explained)
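
The r < .28 bound matters because shared variance goes as r²: .28² ≈ .078, so the TICS shares under 8% of its variance with general self-efficacy, which is the discriminant evidence being claimed. A minimal sketch of that check, with simulated stand-ins for the two score vectors (the variable names are illustrative):

```python
import numpy as np

# tics_total, ngse_total: 1-D arrays of per-respondent scale totals
rng = np.random.default_rng(2)             # simulated stand-ins
tics_total = rng.normal(size=200)
ngse_total = 0.25 * tics_total + rng.normal(size=200)

r = np.corrcoef(tics_total, ngse_total)[0, 1]
print(f"r = {r:.2f}, shared variance = {r**2:.1%}")
```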

38
2. Monitor the effects of curricular adjustments
  • Assumes the TICS is sensitive to changes in
    self-efficacy.
  • Pre-Post analysis
  • Comparisons of scores between IPT 286 and 287

39
3. Identify preservice teachers most in need of
intervention
  • Assumes the TICS can predict in-class performance.
  • RSM information analysis
  • Regression analysis (a sketch follows below):
  • X (predictors): pre-course TICS scores;
    relevant demographics
  • Y (outcome): in-class performance indicators
    (assignment / assessment scores)
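
A hedged sketch of the regression step outlined above, with simulated stand-ins; the predictors and outcome here (a pre-course TICS total, one demographic dummy, a course-performance indicator) are illustrative placeholders, not the study's variables:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
tics_pre = rng.normal(size=n)            # pre-course TICS total (illustrative)
is_elementary = rng.integers(0, 2, n)    # illustrative demographic dummy
performance = 0.5 * tics_pre + 0.2 * is_elementary + rng.normal(size=n)

# Ordinary least squares via numpy (intercept + two predictors)
X = np.column_stack([np.ones(n), tics_pre, is_elementary])
beta, *_ = np.linalg.lstsq(X, performance, rcond=None)
print(dict(zip(["intercept", "tics_pre", "is_elementary"], beta.round(2))))
```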

40
4. Predict in-practice technology integration
  • 5-year longitudinal, mixed methods study

41
4. Predict in-practice technology integration
  • Review of self-efficacy literature

42
Future Directions
  • TICS v2 is showing promise
  • Expanded use
  • Inform the NETS-T refresh
  • Modern validity theory can be applied
    systematically.