Transcript and Presenter's Notes

Title: S.W. 298


1
S.W. 298
  • Instructor: Meekyung Han, Ph.D.
  • mkhan30@sbcglobal.net or
  • mhan@email.sjsu.edu

2
  • Let me get to know you.
  • SYLLABUS ------ ANY QUESTIONS?
  • REVIEW SW 240/242

3
My Buddy
  • Your buddy's program focus/specialization
  • Your buddy's 240/242 project
  • Your buddy's agency and its services
  • Your buddy's role there
  • What is your tentative research question for SW
    298?

4
Review of SW 240
  • Ethics and the researcher
  • Measurement and measuring variables
  • Constructing measurement instruments
  • Survey research approaches
  • Sampling
  • Group research design
  • Single system design
  • Qualitative research
  • Program evaluation

5
Ethics: IRB proposal
  • CONFIDENTIALITY, ANONYMITY
  • POTENTIAL RISKS
  • FREEDOM TO PARTICIPATE OR WITHDRAW
  • INFORMED CONSENT

6
Measurement
  • Conceptual vs. Operational definition
  • Conceptual definition
  • A conceptual definition gives the theoretical
    meaning of a term, but usually NOT in a form
    that can be used for measurement
  • ex. Self-esteem is the personal judgment of
    self-worth
  • Operational definition
  • An operational definition is the explicit
    specification of a variable in such a way that
    its measurement is possible
  • ex. self-esteem is the self-rated score of
    personal worth measured by the Rosenberg
    Self-Esteem Inventory.

7
Measurement and Measuring Variables
  • 1. Variable types and levels of measurement
  • 2. Measurement errors: cultural bias, social
    desirability, error of central tendency
  • 3. Reliability: 1) Definition 2) Types
  • 4. Validity: 1) Definition 2) Types
  • 5. Reliability and Validity

8
Types of Variables
Categorical
  • Nominal Ex) gender / race / occupation
  • Ordinal Ex) ranked data
Continuous
  • Interval Ex) Likert scale
  • Ratio Ex) age / height / weight
9
Constructing Measurement Instruments
  • 1. Questions to ask before measuring
  • Why do we want to make the measurement?
  • Assessment and diagnosis
  • Practice effectiveness
  • What do we want to measure?
  • Consider your focus and research statement,
    especially the variable of interest, and
    operational definitions
  • In what form will the measurement be (who or what
    will do the measuring)?
  • Interviews
  • Questionnaires
  • Self-administered questionnaires
  • observations

10
Use of existing instruments vs. developing your
own!
  • What are the strengths and weaknesses of each
    approach?
  • Where can I find instruments?
  • Library, reference books, internet
  • Research articles where researchers use and
    publish scales
  • If I can't find a good one, should I develop my
    own?
  • Modify an existing one (compromise)
  • Develop your own (task of pilot testing and
    establishing validity and reliability of a new
    instrument)

11
Survey research approaches
  • There are three main ways in which respondents
    are asked to complete the questionnaire
  • Self-administered questionnaires
  • Interview surveys (face-to-face)
  • Telephone surveys (or over the internet)
  • Questions to be considered
  • Costs (stamps, envelopes, printing, etc.)
  • Benefits?
  • Appearance: presentable and interesting
  • Incentives?
  • Bias?
  • Problems with missing data?

12
Sampling
  • Probability Sampling
  • Simple random sampling
  • Systematic sampling
  • Proportionate stratified random sampling
  • Disproportionate stratified sampling
  • Cluster sampling
  • Non-probability Sampling
  • Convenience sampling
  • Purposive (or judgmental) sampling
  • Snowball sampling
  • Quota sampling
  • Selecting informants
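A minimal sketch in Python of three of the probability sampling approaches listed above, assuming a plain list of client IDs as the sampling frame (all names and numbers are illustrative only):

```python
import random

population = list(range(1, 101))  # hypothetical sampling frame of 100 client IDs
strata = {"A": population[:40], "B": population[40:]}  # hypothetical strata

# Simple random sampling: every unit has an equal chance of selection
simple = random.sample(population, 10)

# Systematic sampling: random start, then every k-th unit
k = len(population) // 10
start = random.randrange(k)
systematic = population[start::k]

# Proportionate stratified random sampling: sample each stratum
# in proportion to its share of the population
stratified = []
for name, members in strata.items():
    n = round(10 * len(members) / len(population))
    stratified.extend(random.sample(members, n))

print(simple, systematic, stratified, sep="\n")
```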

13
  • Types of Group Research Design
  • Experimental Designs
  • The classic experimental research design
  • The posttest only control group design
  • The Solomon Four-group design
  • Placebo control design
  • Pre-Experimental Designs
  • One-group pretest-posttest design
  • One-group posttest only design
  • Posttest only design with non-equivalent groups
  • Quasi-Experimental Designs
  • Non-equivalent control group design
  • Time-series or interrupted time-series design
  • Multiple time-series design
  • Advanced Designs
  • Factorial Designs
  • Crossover Designs

14
Single System Designs?
  • Why/When use Single System Design?
  • Often we have no control groups, or we need to
    evaluate the effectiveness of a program or a
    group of individuals or a single person.
  • Useful for giving immediate, inexpensive, and
    practical feedback on whether clients are
    improving.
  • Single systems can include a SINGLE client,
    community, organization, family, couple, setting,
    program, etc. (N=1 research)
  • Provides a bridge between research and practice
  • Types of Single System Designs
  • The case study or B design
  • The AB design
  • The ABA and ABAB design
  • The ABC and ABCD design
  • Multiple baseline design

15
Why use Qualitative research?
  • To investigate in-depth answers to complicated
    questions which may not be answerable through
    quantitative methods
  • To complement quantitative research in expanding
    our understanding of the factors or variables
    measured as well as discovering issues and
    factors that may not have been assessed or
    measurable
  • To understand more of the processes that occur
    among variables or factors of interest
  • Remember the sheet comparing qualitative vs.
    quantitative research

16
Examples of Qualitative Methods
  • Ethnography or Naturalistic Inquiry
  • Biography
  • Phenomenology
  • Grounded Theory
  • Content Analysis
  • Case Study

17
Program Evaluation
  • What is Program evaluation?
  • Program evaluation is the careful collection of
    information about a program, or some aspect of a
    program, in order to make necessary decisions
    about that program
  • The type of evaluation undertaken depends on the
    type of information you want to generate
  • Types of Program evaluation
  • Needs assessment
  • Program implementation (process evaluation)
  • Program outcomes (outcome evaluation)
  • Program impact evaluation

18
Statistics Without Pain: Review of SW 242
19
Errors in Hypothesis Testing
  • Type I Error: reject a true null hypothesis
  • Type II Error: fail to reject a false null
    hypothesis
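A minimal simulation sketch in Python, assuming scipy is available, illustrating the Type I error rate: when the null hypothesis is true, a test at alpha = .05 rejects it in roughly 5% of repeated samples.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
rejections = 0
trials = 2000

for _ in range(trials):
    # Both groups come from the SAME distribution, so the null hypothesis is true
    a = rng.normal(loc=0, scale=1, size=30)
    b = rng.normal(loc=0, scale=1, size=30)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:  # rejecting a true null hypothesis = Type I error
        rejections += 1

print("Observed Type I error rate:", rejections / trials)  # close to 0.05
```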

20
The Tests
  • Chi-Square test
  • T-Test
  • ANOVA
  • Correlation
  • Regression

21
Chi-Square
  • Use when both your DV and IV are categorical
  • The test looks for ASSOCIATIONS
  • Use when you want to see if statistically
    significant differences exist between observed
    and expected frequencies
  • Use to tell if the differences in values are
    systematic or have occurred by chance
  • Note: Can't use this test if the expected
    count in any cell is less than 5 --- it's OK if
    the actual count is less than 5, but not the
    expected count (actual and expected counts are
    highly related)
  • Careful: association does not mean causation.
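A minimal sketch of a chi-square test of association in Python, assuming scipy is available; the 2x2 table of counts is made up for illustration:

```python
from scipy.stats import chi2_contingency

# Hypothetical observed counts: rows = gender, columns = used service (yes / no)
observed = [[20, 15],
            [30, 35]]

chi2, p, dof, expected = chi2_contingency(observed)

print("chi-square =", round(chi2, 3), " p-value =", round(p, 3))
print("expected counts:\n", expected)

# Rule of thumb from the slide: the EXPECTED count in every cell should be >= 5
if (expected < 5).any():
    print("Warning: an expected cell count is below 5; the test may not be valid")
```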

22
T-Test and ANOVA
Very commonly used tests to compare means. The only
difference is that in t-tests you are comparing the
means of only 2 groups.
  • ANOVA
  • Use when you want to compare several means
  • More than two categories of the IV, and a
    Continuous DV
  • Post hoc test: need to use Scheffé or a
    Bonferroni-adjusted alpha level
  • T-Test
  • Use with categorical IVs (2 categories) and
    continuous DV
  • Compare means of two groups: independent
    samples t-test and paired samples t-test
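A minimal sketch in Python, assuming scipy is available, of an independent samples t-test (2 groups) and a one-way ANOVA (3 groups); the scores are made-up illustration data:

```python
from scipy import stats

# Hypothetical outcome scores for clients at three program sites
site_a = [12, 15, 14, 10, 13]
site_b = [18, 17, 16, 19, 15]
site_c = [11, 9, 12, 10, 13]

# t-test: categorical IV with 2 groups, continuous DV
t, p_t = stats.ttest_ind(site_a, site_b)
print("t =", round(t, 2), " p =", round(p_t, 3))

# One-way ANOVA: categorical IV with more than 2 groups, continuous DV
f, p_f = stats.f_oneway(site_a, site_b, site_c)
print("F =", round(f, 2), " p =", round(p_f, 3))
```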

23
Correlation
  • Use when both your DV and IV are continuous
  • Use when you want to see the relationships among
    variables; you can use multiple variables
    together
  • Can have Positive or Negative correlation
  • Pearson vs. Spearman correlation coefficients
  • - Mostly we will use Pearson correlation
    coefficients
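A minimal sketch in Python of Pearson and Spearman correlation coefficients, assuming scipy is available; the paired scores below are illustrative only:

```python
from scipy import stats

hours_of_service = [2, 4, 5, 7, 8, 10]      # hypothetical continuous IV
wellbeing_score = [50, 55, 60, 64, 70, 75]  # hypothetical continuous DV

r, p = stats.pearsonr(hours_of_service, wellbeing_score)       # linear relationship
rho, p_s = stats.spearmanr(hours_of_service, wellbeing_score)  # rank-based relationship

print("Pearson r =", round(r, 2), " p =", round(p, 3))
print("Spearman rho =", round(rho, 2), " p =", round(p_s, 3))
```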

24
Regression
  • Use when you want to separate out the effects of
    many variables on your outcome
  • You can use both categorical and continuous IVs
  • You should use a CONTINUOUS DV
  • Logistic Regression (Probit and Logit): used with
    continuous or categorical IVs and a CATEGORICAL DV
    (We won't do these in this class)
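A minimal multiple regression sketch in Python, assuming the statsmodels package is installed; the variable names and data are illustrative only:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: continuous DV and two IVs (one continuous, one dummy-coded categorical)
wellbeing = np.array([52, 55, 61, 64, 70, 73, 58, 66])
hours = np.array([2, 3, 5, 6, 8, 9, 4, 7])
has_support = np.array([0, 0, 1, 1, 1, 0, 0, 1])

X = sm.add_constant(np.column_stack([hours, has_support]))  # add an intercept term
model = sm.OLS(wellbeing, X).fit()

# Each coefficient separates out the effect of one IV on the outcome,
# holding the other IV constant
print(model.params)   # intercept, hours, has_support
print(model.pvalues)
```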

25
Chi-square output: look at the cell counts and
frequencies (Freq.), and the p-value.
26
When the t-test p-value is less than .05, you need
to use the descriptives table to report (t-test
statistic and p-value).
Levene's test for equal variance: if its p-value is
bigger than .05, then report the first-line t-test
value and p-value.
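A minimal sketch of the decision rule above in Python, assuming scipy is available: run Levene's test first, then report the t-test with or without the equal-variance assumption (the scores are made up):

```python
from scipy import stats

group_a = [10, 12, 11, 13, 9, 14]   # hypothetical scores, group 1
group_b = [15, 17, 14, 18, 16, 19]  # hypothetical scores, group 2

# Levene's test for equality of variances
_, p_levene = stats.levene(group_a, group_b)

# If Levene's p > .05, report the equal-variance t-test (the "first line" of the
# SPSS output); otherwise use the unequal-variance (Welch) version
equal_var = p_levene > 0.05
t, p = stats.ttest_ind(group_a, group_b, equal_var=equal_var)

print("Levene p =", round(p_levene, 3), " equal variances assumed:", equal_var)
print("t =", round(t, 2), " p =", round(p, 3))
```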
27
When the t-test p-value is less than .05, you need
to use the descriptives table to report (t-test
statistic and p-value).
If the t statistic has a negative sign: post-value >
pre-value.
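A minimal paired samples sketch in Python, assuming scipy is available, showing the sign interpretation from the slide: the test works on the (pre - post) differences, so a negative t statistic means the post scores are higher than the pre scores (data are made up):

```python
from scipy import stats

pre = [10, 12, 11, 13, 9, 14]    # hypothetical pre-test scores
post = [15, 17, 14, 18, 16, 19]  # hypothetical post-test scores for the SAME clients

t, p = stats.ttest_rel(pre, post)  # paired samples t-test on pre - post differences

# A negative t indicates post-values are larger than pre-values
print("t =", round(t, 2), " p =", round(p, 3))
```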
28
When the test's p-value is less than .05, you need
to use the descriptives table to report (F-test
statistic and p-value).
29
Post Hoc Tests: pay attention to the p-values to see
which groups are different from each other.
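A minimal post hoc sketch in Python, assuming scipy is available, using Bonferroni-adjusted pairwise t-tests after a significant ANOVA (the groups and scores are illustrative only):

```python
from itertools import combinations
from scipy import stats

groups = {
    "site_a": [12, 15, 14, 10, 13],  # hypothetical group scores
    "site_b": [18, 17, 16, 19, 15],
    "site_c": [11, 9, 12, 10, 13],
}

pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)  # Bonferroni-adjusted alpha for 3 pairwise comparisons

for name1, name2 in pairs:
    t, p = stats.ttest_ind(groups[name1], groups[name2])
    verdict = "different" if p < alpha else "not different"
    print(f"{name1} vs {name2}: p = {p:.3f} -> {verdict} at adjusted alpha {alpha:.3f}")
```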