Title: Indicator Performance: EMAP Western Pilot Study
- David V. Peck
- U.S. EPA
- National Health and Environmental Effects Research Laboratory, Western Ecology Division
- Corvallis, OR
- Presented at: State of Washington Status and Trends Proposal Workshop 2, Olympia, WA
- October 27, 2005
Overview
- What is EMAP-West?
- Indicator evaluation guidelines
- Metric selection / indicator development ("making sausage" of biological indicators?)
- Evaluating indicator performance
  - Signal:noise
    - Chemical variables
    - Habitat variables
    - Biological indicators
  - Responsiveness (least vs. most disturbed)
    - Biological indicators
  - Variance components (design alternatives)
    - Status estimation
    - Trend detection
- Other issues and lessons learned (?)
What is EMAP-West?
- Statistical design
  - Representative sampling of flowing waters at multiple scales
  - 965 survey sites sampled, plus 200 candidate reference sites
- Consistent sampling protocols
  - Crew of 3-4
  - One day per site
What is EMAP-West?
- Focus on biological indicators
  - Fish and amphibians
    - Multimetric (MMI, IBI)
  - Macroinvertebrates
    - Multimetric (MMI, IBI)
    - Predictive model (RIVPACS, O/E)
  - Eventually periphyton
What is EMAP-West?
- Stressor ranking
  - How common are they? (extent)
  - How severe are they? (relative risk)
What is EMAP (Not)?
- Not a census of every stream reach/segment
- A survey, where inferences are made from sampled sites to a much larger target population (like a Gallup poll)
- Assessments based on populations of sites, not individual sites
- Designed to address CWA 305(b) requirements (status); not 303(d) (TMDL) yet
- Complementary to, rather than a replacement for, targeted monitoring programs
DATA QUALITY OBJECTIVE PROCESS FOR ECOLOGICAL INDICATOR EVALUATION

EPA GUIDANCE (DQO process):
- Step 1. State the problem
- Step 2. Identify the decision
- Step 3. Identify inputs to the decision
- Step 4. Identify boundaries to the study
- Step 5. Develop a decision rule
- Step 6. Specify tolerable limits on decision errors
- Step 7. Optimize design

ORD INDICATOR EVALUATION GUIDELINES:
- CONCEPTUAL RELEVANCE
  1. Relevance to the assessment
  2. Relevance to ecological function
- FEASIBILITY OF IMPLEMENTATION
  3. Data collection methods
  4. Logistics
  5. Information management
  6. Quality assurance
  7. Monetary costs
- RESPONSE VARIABILITY
  8. Estimation of measurement error
  9. Temporal variability within the field season
  10. Temporal variability across years
  11. Spatial variability
  12. Discriminatory ability
- INTERPRETATION AND UTILITY
  13. Data quality objectives
  14. Assessment thresholds
  15. Linkage to management actions
Metric Selection (screening sequence)
- Candidate metrics
- Range test (range of at least 0-2) -- failing metrics eliminated
- Signal:noise test (S/N variance ratio of at least 3) -- failing metrics eliminated
- Redundancy test (Pearson correlation coefficient) -- redundant metrics eliminated
- Test and correction for natural variability -- metrics corrected
- Responsiveness test (least vs. most disturbed) -- failing metrics eliminated
- Final metrics: best responsiveness from each metric class, avoiding redundancy
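The screening sequence above can be sketched in Python. Everything here is illustrative: the metric names and data are hypothetical, the S/N cutoff of 3 follows the slide, and the 0.8 redundancy cutoff is an assumption (the slide's threshold was lost in extraction).

```python
from statistics import mean, pstdev

def pearson_r(x, y):
    """Pearson correlation coefficient between two metric vectors."""
    mx, my = mean(x), mean(y)
    cov = mean((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (pstdev(x) * pstdev(y))

def screen_metrics(values, sn, sn_min=3.0, r_max=0.8):
    """Sequential screen: drop metrics with low signal:noise, then drop the
    later member of any highly correlated (redundant) pair.
    values: {metric name: list of site values}; sn: {metric name: S/N ratio}.
    The 0.8 redundancy cutoff is illustrative, not from the study."""
    kept = [m for m in values if sn[m] >= sn_min]
    final = []
    for m in kept:
        if all(abs(pearson_r(values[m], values[k])) < r_max for k in final):
            final.append(m)
    return final
```

A metric that tracks an already-retained metric almost perfectly, or one whose S/N falls below the cutoff, is dropped before the responsiveness step.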
Indicator Development (Multimetric)
- Final metrics
- Score each metric (5th and 95th percentiles of all sites)
- Compute final index value
- Examine behavior of the index with an independent set of sites
- Assign condition thresholds (25th and 5th percentiles of least-disturbed sites)
- Examine signal:noise and responsiveness of the final index
- Final MMI score and condition class of each site, for use in assessment
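A minimal sketch of this scoring scheme follows. The percentile thresholds come from the slide; the 0-10 metric scale and the function names are assumptions.

```python
def score_metric(value, p5, p95, increases_with_disturbance=False):
    """Interpolate a raw metric value between the 5th and 95th percentiles
    of all sites, clamped, on an assumed 0-10 scale. Metrics that rise with
    disturbance are reversed so that higher always means better."""
    s = (value - p5) / (p95 - p5)
    s = min(max(s, 0.0), 1.0)
    if increases_with_disturbance:
        s = 1.0 - s
    return 10.0 * s

def mmi(metric_scores):
    """Final index value: mean of the scored metrics."""
    return sum(metric_scores) / len(metric_scores)

def condition_class(index, ref_p25, ref_p5):
    """Condition thresholds from least-disturbed sites: at or above their
    25th percentile = least disturbed; below their 5th = most disturbed."""
    if index >= ref_p25:
        return "least disturbed"
    if index < ref_p5:
        return "most disturbed"
    return "intermediate"
```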
Evaluating Indicator Performance
- Signal:noise ratio
  - Signal = among-site variation
  - Noise = within-site variation (repeat visits)
  - S/N = (F - 1) / c1
    - where F = F value (among-site vs. within-site)
    - c1 = adjusted average sample size coefficient from the expression for the Type III expected mean square (varies between 1 and 2)
  - Eliminate metrics where S/N is low
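From repeat-visit data this reduces to a one-way random-effects ANOVA. The sketch below (function name assumed) uses the standard unbalanced-design form of c1, which matches the slide's description of an adjusted average sample size:

```python
from statistics import mean

def signal_to_noise(visits):
    """Signal:noise ratio from repeat-visit data via one-way random-effects
    ANOVA. visits: one list of index values per site (revisited sites have
    more than one value). S/N = (F - 1) / c1, where c1 is the adjusted
    average sample size coefficient from the Type III expected mean square."""
    n = [len(v) for v in visits]
    big_n = sum(n)
    k = len(visits)
    grand = sum(sum(v) for v in visits) / big_n
    ss_among = sum(ni * (mean(v) - grand) ** 2 for ni, v in zip(n, visits))
    ss_within = sum((x - mean(v)) ** 2 for v in visits for x in v)
    ms_among = ss_among / (k - 1)
    ms_within = ss_within / (big_n - k)
    c1 = (big_n - sum(ni ** 2 for ni in n) / big_n) / (k - 1)
    return (ms_among / ms_within - 1.0) / c1
```

With exactly two visits at every site, c1 = 2; with a mix of single visits and revisits it falls between 1 and 2, as the slide notes.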
Evaluating Indicator Performance
- Responsiveness
  - Least disturbed vs. most disturbed (F-test)
  - Different approaches:
    - Filters based on variables that are independent of the variable in question; more relevant for stressor variables (e.g., can't use nutrient values in defining filters for nutrients); primarily use reach-scale variables
    - Looking across natural gradients for best and worst sites
    - Delphi approach: get consensus where different approaches give different results
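The least- vs. most-disturbed contrast is an ordinary two-group, one-way ANOVA F statistic; a self-contained sketch (group data in the test are hypothetical):

```python
from statistics import mean

def responsiveness_f(least, most):
    """F statistic contrasting an indicator at least- vs. most-disturbed
    sites (two-group one-way ANOVA). Larger F = better discrimination."""
    groups = [least, most]
    big_n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / big_n
    ms_between = sum(len(g) * (mean(g) - grand) ** 2
                     for g in groups) / (len(groups) - 1)
    ms_within = sum((x - mean(g)) ** 2
                    for g in groups for x in g) / (big_n - len(groups))
    return ms_between / ms_within
```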
Evaluating Indicator Performance
- Variance components
  - Variance structure can affect design decisions (status estimation vs. trend detection)
  - More sites vs. repeat visits; length of time before a change of the desired magnitude can be observed
Indicator Performance: Benthos
- From EMAP-West Statistical Summary
- Table BENTHOS-1: metric descriptions
- Table BENTHOS-2: S/N and F-test results
- Tables BENTHOS-3 and BENTHOS-4: S/N and F-tests for final indicators (MMI/IBI and O/E)
- Figures BENTHOS-1 and BENTHOS-2: results of comparing calibration and validation data sets
Indicator Performance: Vertebrates
- From EMAP-West Statistical Summary
- Table VERTEBRATE-1: metric descriptions
- Table VERTEBRATE-2: S/N and F-test results
- Table VERTEBRATE-3: final metrics included in MMI/IBI
- Table VERTEBRATE-4: performance of MMI
- Figure VERTEBRATE-1: results of comparing calibration and validation data sets
Signal:Noise -- Chemistry
- Table CHEM-1 in handout
- Low values may indicate a natural lack of range (signal) or high variability (noise); index period too long?
- Generally expect to see a decrease in S/N for regions as compared to the West as a whole
Signal:Noise -- Habitat
- Table HABITAT-1: descriptions of variables used in EMAP-West Statistical Summary
- Table HABITAT-2 in handout: results for the entire West (from Statistical Summary)
- Table HABITAT-3: results for a variety of different variables by region
- Some field measurements, some calculated
Variance Components
- SITE
  - Persistent site-to-site differences, due to different landscape/historical contexts (e.g., stream size, gradient) and different levels of human disturbance
- YEAR
  - Concordant year-to-year variation across all sites
  - Caused by regional phenomena such as:
    - Wet/dry years
    - Ocean conditions
    - Major volcanic eruptions
- INTERACTION (site x year)
  - Independent year-to-year variation among sites
  - Driven by local factors
- RESIDUAL
  - The rest of it, including:
    - Temporal or seasonal variation during the sampling window
    - Fine-scale spatial variation
    - Crew-to-crew differences in applying the protocol
    - Measurement error
Variance of a Trend Slope (New Sites Each Year)

Var(trend slope) = [ σ²_year + (σ²_site + σ²_interaction)/Ns + σ²_residual/(Ns × Nv) ] / Σ(Xi − X̄)²

where:
- Xi = year
- Ns = number of sites in the survey
- Nv = number of within-year revisits (typically 2)

(Urquhart and Kincaid. 1999. J. Ag., Biol., and Env. Statistics 4:404-414)

- IMPLICATIONS
  - Effect of site is 0 if sites are revisited across years
  - Year is not sensitive to sample size, and its effect can become dominant (important for trend detection)
  - Residual is affected by within-year revisits
  - Interaction and residual are affected by the number of sites in the survey; therefore, other factors being equal, it is better to add sites to the survey than to revisit sites (important for status estimation)
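These design trade-offs can be explored numerically. The sketch below is an approximation under the definitions above, not the full Urquhart-Kincaid derivation; the `revisit_across_years` switch simply implements the "effect of site is 0 if sites are revisited across years" point, and all variance values in the test are made up.

```python
def trend_slope_variance(var_year, var_site, var_inter, var_resid,
                         n_sites, n_revisits, n_years,
                         revisit_across_years):
    """Approximate variance of an estimated trend slope (after Urquhart and
    Kincaid 1999). With new sites each year the site component inflates the
    slope variance like interaction; with sites revisited across years its
    contribution drops out."""
    site_term = 0.0 if revisit_across_years else var_site
    per_year = (var_year
                + (site_term + var_inter) / n_sites
                + var_resid / (n_sites * n_revisits))
    # Denominator: sum of squared deviations of year indices from their mean.
    ybar = (n_years - 1) / 2
    ssx = sum((y - ybar) ** 2 for y in range(n_years))
    return per_year / ssx
```

Note that the year component is divided by neither Ns nor Nv, which is why adding sites cannot buy down its effect on trend detection.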
Variance Components for Habitat
- Example from Larsen et al. (2004) for PNW salmon-bearing streams
- Year component small (trend)
- Residual + interaction generally small relative to site (status)
- Larsen, D.P., P.R. Kaufmann, T.M. Kincaid, and N.S. Urquhart. 2004. Detecting persistent change in the habitat of salmon-bearing streams in the Pacific Northwest. Can. J. Fish. Aquat. Sci. 61:283-291.
Trend Detection: Habitat Variables
- Example from Larsen et al. (2004) for PNW salmon-bearing streams
- Best trend design: same sites visited each year
- 10-15 years for all but LWD volume
Other Issues / Lessons Learned
- Some level of reconnaissance required for some/all sites to determine target status
- Baker's Law: balance the ease and efficiency of adding field work against the real time required
  - Spending 5 min per transect collecting additional measurement data ≈ 1 hr extra in the field
- Information management: easy to forget how much is involved
  - Balance ease of use in field/lab with database requirements for eventual analysis and interpretation
  - Data turnaround time (esp. biology)
  - Field and laboratory capabilities; compatibility of IM systems
- More than 1 site per day?
  - Usually survey sites are too far apart to make this practical, even if on-site work is reduced
  - Constrained by index period, number of sites, and other required field work
  - Ability to do 1 visit per site per year expands options for indicators, but at a cost
- Streamlining indicator protocols (e.g., intensive habitat)
  - Has to be done cautiously, given that many relevant indicators (e.g., RBS) use several different types of measurements (depth, substrate, slope)
- Biological sampling must be sufficient to give you the right kind of sample (number of individuals, composition)
  - Reach length
  - No. of samples per reach
Other Issues / Lessons Learned
- What can be done is a function of the expertise of field crews, the desire to be comparable (or compatible) with other groups, and the ability to train
  - Benthos: reachwide vs. targeted habitat sampling
  - Field ID of vertebrates vs. collecting everything and doing it in a lab
- Holding times for chemical variables
  - Legal vs. real
  - May limit how remote sites can be
- Vertebrates: collecting permit restrictions
  - Approx. 25% of target stream length in Mountains and Xeric regions (principally PNW) could not be assessed: no permits, too small (naturally fishless?)
- Sample size constraints
  - Population estimation vs. relative risk
  - Estimation: 50 sites for reasonable variance estimates
  - Local variance estimation: more precise, but need at least 4 sites in each sampling category (stratum, ecoregion group, order)
  - Relative risk: need at least 5 sites each in Good and Poor condition categories
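Relative risk, as used for stressor ranking, is a ratio of conditional probabilities from a 2x2 table of site condition classes. A sketch (argument names are illustrative):

```python
def relative_risk(poor_bio_poor_str, good_bio_poor_str,
                  poor_bio_good_str, good_bio_good_str):
    """Relative risk from site counts:
    P(poor biology | stressor poor) / P(poor biology | stressor good).
    Per the slide, at least 5 sites are needed in each of the Good and Poor
    condition categories for a stable estimate."""
    p_given_poor = poor_bio_poor_str / (poor_bio_poor_str + good_bio_poor_str)
    p_given_good = poor_bio_good_str / (poor_bio_good_str + good_bio_good_str)
    return p_given_poor / p_given_good
```

For example, if poor biology occurs at 8 of 10 stressor-poor sites but only 2 of 10 stressor-good sites, the relative risk is 4: poor biological condition is four times as likely where the stressor is in poor condition.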
Other Issues / Lessons Learned
- Where have all the least-disturbed sites gone?
  - Need independent means of assessing the quality of reference sites
  - May be less of an issue in the PNW?
- Sites with a big difference between least and most disturbed appear to get penalized in assessment
  - Catchment disturbance
  - Tolerant taxa prevalence
- Can consider using existing indicators and protocols developed for regions/states
Of Interest?
- EMAP page: www.epa.gov/emap
  - Documents
  - Data (some day for EMAP-West)
- Aquatic Resource Monitoring: www.epa.gov/nheerl/arm
  - Survey design and analysis process and examples
  - Software (R language) for doing analyses
- Coming soon:
  - EMAP-West Statistical Summary
  - EMAP-West Assessment Report
  - EPA Office of Water Wadeable Streams Assessment Report (national scale)