Title: Uncertainty Analysis
1. Uncertainty Analysis and Model Validation, or Confidence Building
2. Conclusions
- Calibrations are non-unique.
- A good calibration (even if ARM ≈ 0) does not ensure that the model will make good predictions.
- Field data are essential in constraining the model so that it can capture the essential features of the system.
- Modelers need to maintain a healthy skepticism about their results.
3. Conclusions
- Head predictions are more robust (consistent among different calibrated models) than transport (particle tracking) predictions.
- An uncertainty analysis is needed to accompany calibration results and predictions.
- Ideally, models should be maintained for the long term and updated to establish confidence in the model. Rather than a single calibration exercise, a continual process of confidence building is needed.
4. Uncertainty in the Calibration
Involves uncertainty in:
- Conceptual model, including boundary conditions, zonation, geometry, etc.
5. Zonation / Kriging
[Figure: example parameter fields defined by zonation and by kriging.]
6. Zonation vs. Pilot Points
To use conventional inverse (parameter estimation) models in calibration, you need to have a pretty good idea of the zonation (of K, for example).
(The new version of PEST with pilot points does not need zonation, as it works with a continuous distribution of parameter values; see the sketch below.)
Also need to identify reasonable ranges for the calibration parameters and weights.
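A minimal sketch of the pilot-point idea (all locations and values hypothetical). PEST itself spreads pilot-point values to the model grid by kriging; for brevity this sketch substitutes simple inverse-distance weighting, but the point is the same: a continuous parameter field with no zonation.

```python
import numpy as np

# Hypothetical pilot points: (x, y) locations with ln(K) values.
# During calibration PEST would adjust these values; kriging (not the
# inverse-distance weighting used here) interpolates them to the grid.
pilot_xy = np.array([[100.0, 200.0], [400.0, 150.0], [250.0, 450.0]])
pilot_lnK = np.log(np.array([5.0, 20.0, 1.0]))  # K in m/d (hypothetical)

def lnK_at(x, y, power=2.0):
    """Interpolate ln(K) at (x, y) from the pilot points."""
    d = np.hypot(pilot_xy[:, 0] - x, pilot_xy[:, 1] - y)
    if np.any(d < 1e-9):              # point coincides with a pilot point
        return pilot_lnK[np.argmin(d)]
    w = 1.0 / d**power                # inverse-distance weights
    return np.sum(w * pilot_lnK) / np.sum(w)

# Continuous K field over a model grid -- no zones required.
xs, ys = np.meshgrid(np.linspace(0.0, 500.0, 6), np.linspace(0.0, 500.0, 6))
K = np.exp(np.vectorize(lnK_at)(xs, ys))
print(K.round(2))
```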
7. Parameter Values
- Field data are essential in constraining the model so that it can capture the essential features of the system.
8. Calibration Targets
Need to establish model-specific calibration criteria and define targets, including associated error.
[Figure: a calibration target plotted as a calibration value (20.24 m) with an error bar for its associated error (±0.80 m); one target is shown with a smaller associated error and one with a relatively large associated error.]
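A small sketch of how a target's associated error might enter the calibration: the error can set the target weight (a common convention is weight = 1/error) and gives a pass/fail check on the residual. Well names, values, and errors below are hypothetical.

```python
# Hypothetical targets: (name, calibration value, associated error), in m.
targets = [
    ("well A", 120.50, 0.24),   # smaller associated error -> larger weight
    ("well B", 118.75, 0.80),   # larger associated error -> smaller weight
]

simulated = {"well A": 120.71, "well B": 119.90}  # hypothetical model output

for name, obs, err in targets:
    resid = simulated[name] - obs
    print(f"{name}: residual {resid:+.2f} m, weight {1.0/err:.2f}, "
          f"within associated error? {abs(resid) <= err}")
```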
9. Examples of Sources of Error in a Calibration Target
- Surveying errors
- Errors in measuring water levels
- Interpolation error
- Transient effects
- Scaling effects
- Unmodeled heterogeneities
10. Importance of Flux Targets
- When recharge rate (R) is a calibration parameter, calibrating to fluxes can help in estimating K and/or R.
- (R was not a calibration parameter in our final project.)
11. In this example, flux information helps calibrate K.
[Figure: 1-D flow between measured heads H1 and H2 with Darcy flux q = KI; K is the unknown.]
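To see why the flux helps: heads alone fix only the gradient I = (H1 - H2)/L, and many K values can reproduce them. A measured flux q pins K down through Darcy's law, q = KI. A sketch with hypothetical numbers:

```python
# Darcy's law: q = K * I, so a measured flux q plus the head gradient gives K.
H1, H2 = 105.0, 100.0   # measured heads (m)        -- hypothetical values
L = 1000.0              # distance between wells (m)
q = 0.05                # measured specific discharge (m/d)

I = (H1 - H2) / L       # hydraulic gradient (dimensionless)
K = q / I               # K = q / I
print(f"I = {I:.4f}, K = {K:.1f} m/d")
```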
12. Here, discharge information helps calibrate R.
[Figure: measured discharge Q from an area receiving recharge R; R is the unknown.]
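By the same logic, at steady state all recharge leaves as discharge, so a measured discharge Q over a known contributing area A fixes R. A sketch with hypothetical numbers:

```python
# Steady-state water balance: Q = R * A, so a measured discharge gives R.
Q = 2.0e5   # measured discharge (m^3/yr)             -- hypothetical values
A = 4.0e6   # area contributing recharge to Q (m^2)
R = Q / A   # recharge rate (m/yr)
print(f"R = {R:.3f} m/yr")
```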
14. In our example, total recharge is known/assumed to be 7.14E8 ft³/year, and discharge = recharge. All water discharges to the playa. Calibration to ET merely fine-tunes the discharge rates within the playa area. Calibration to ET does not help calibrate the heads and K values except in the immediate vicinity of the playa.
15. Smith Creek Valley (Thomas et al., 1989)
Calibration objectives (matching targets):
1. Heads within 10 ft of measured heads; allows for measurement error and interpolation error.
2. Absolute residual mean between measured and simulated heads close to zero (0.22 ft) and standard deviation minimal (4.5 ft).
3. Head difference between layers 1 and 2 within 2 ft of field values.
4. Distribution of ET and ET rates match field estimates.
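A sketch of the residual statistics named in objectives 1 and 2, computed from synthetic heads (the 0.22 ft and 4.5 ft values are from Thomas et al., 1989, and are not reproduced by these made-up numbers):

```python
import numpy as np

measured  = np.array([6012.3, 6008.1, 6021.7, 6015.4, 6010.9])  # ft (synthetic)
simulated = np.array([6013.0, 6007.2, 6020.9, 6016.8, 6010.1])  # ft (synthetic)

resid = simulated - measured
print("max |residual|        :", np.max(np.abs(resid)).round(2),
      "ft (criterion: within 10 ft)")
print("absolute residual mean:", np.mean(np.abs(resid)).round(2),
      "ft (criterion: close to zero)")
print("std dev of residuals  :", np.std(resid).round(2),
      "ft (criterion: minimal)")
```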
16. 724 Project Results
[Figure: includes results from 2006 and 4 other years.]
A good calibration does not guarantee an accurate prediction.
17. Sensitivity analysis to analyze uncertainty in the calibration
- Use an inverse model (automated calibration) to quantify uncertainties and optimize the calibration.
- Perform sensitivity analysis during calibration, using sensitivity coefficients (sketched below).
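A sensitivity coefficient is the partial derivative of a computed value (e.g., head at an observation point) with respect to a parameter, commonly approximated by perturbing the parameter and rerunning the model. A minimal sketch, with a one-line analytical solution standing in for a full forward model run (all numbers hypothetical):

```python
def model_head(K):
    """Stand-in for a forward model run: head at an observation point.
    Uses a 1-D steady recharge solution, h = H0 + R*L**2 / (2*K)."""
    R, L, H0 = 0.001, 1000.0, 100.0   # recharge (m/d), length (m), boundary head (m)
    return H0 + R * L**2 / (2.0 * K)

# Sensitivity coefficient dh/dK, approximated by a 1% perturbation of K.
K0 = 10.0                 # base value of K (m/d)
dK = 0.01 * K0
sens = (model_head(K0 + dK) - model_head(K0)) / dK
print(f"dh/dK ~ {sens:.2f} m per (m/d)")   # negative: higher K lowers heads
```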
18. Steps in Modeling
[Figure: modeling workflow with a calibration loop; sensitivity analysis is performed during the calibration. After Zheng and Bennett.]
19. Uncertainty in the Prediction
- Reflects uncertainty in the calibration.
- Involves uncertainty in how parameter values (e.g., recharge) or pumping rates will vary in the future.
20. Ways to quantify uncertainty in the prediction
- Sensitivity analysis (parameters)
- Scenario analysis (stresses)
- Stochastic simulation
21. Steps in Modeling: Traditional Paradigm
[Figure: modeling workflow; sensitivity analysis is performed after the prediction. After Zheng and Bennett.]
22. New Paradigm for Sensitivity/Scenario Analysis
Multi-model Analysis (MMA): predictions and sensitivity analysis are inside the calibration loop. (From J. Doherty, 2007)
24. Stochastic Simulation
- Stochastic modeling option available in GW Vistas.
- MADE site: Feehley and Zheng, 2000, WRR 36(9).
26. A Monte Carlo analysis considers 100 or more realizations.
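A minimal Monte Carlo sketch: draw many realizations of K from an assumed (here lognormal) distribution, run the forward model for each, and summarize the spread of the prediction. The one-line forward model and all distribution parameters are hypothetical stand-ins for a full model run per realization:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def forward_model(K):
    """Stand-in for a full model run: predicted head for one K realization."""
    R, L, H0 = 0.001, 1000.0, 100.0
    return H0 + R * L**2 / (2.0 * K)

# 200 realizations of K from an assumed lognormal distribution.
K = rng.lognormal(mean=np.log(10.0), sigma=0.5, size=200)
heads = forward_model(K)

print(f"predicted head: mean {heads.mean():.2f} m, "
      f"90% interval [{np.percentile(heads, 5):.2f}, "
      f"{np.percentile(heads, 95):.2f}] m")
```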
28. [Figure: Zheng and Bennett, Fig. 13.2.]
29. [Figure: realizations varying hydraulic conductivity, initial concentrations (plume configuration), and both. Zheng and Bennett, Fig. 13.5.]
30. Reducing Uncertainty
[Figure: hypothetical example comparing the truth with estimates based on hard data only, soft and hard data, and with inverse flow modeling. Zheng and Bennett, Fig. 13.6.]
31. Model Validation
How do we validate a model so that we have confidence that it will make accurate predictions?
Confidence building.
32. Modeling Chronology
- 1960s: Flow models are great!
- 1970s: Contaminant transport models are great!
- 1975: What about uncertainty of flow models?
- 1980s: Contaminant transport models don't work (because of failure to account for heterogeneity).
- 1990s: Are models reliable? Concerns over reliability of predictions arose from efforts to model geologic repositories for high-level radioactive waste.
33. "The objective of model validation is to determine how well the mathematical representation of the processes describes the actual system behavior in terms of the degree of correlation between model calculations and actual measured data" (NRC, 1990).
Hmmmmm. Sounds like calibration. What they really mean is that a valid model will yield an accurate prediction.
34. What constitutes validation? (code vs. model)
- NRC study (1990): Model validation is not possible.
- Oreskes et al. (1994) paper in Science:
  - Calibration: forced empirical adequacy
  - Verification: assertion of truth (possible in a closed system, e.g., testing of codes)
  - Validation: establishment of legitimacy (does not contain obvious errors); confirmation, confidence building
35. How to build confidence in a model
- Calibration (history matching): steady-state calibration(s), transient calibration
- Verification: requires an independent set of field data
- Post-audit: requires waiting for the prediction to occur
- Models as interactive management tools (e.g., the AEM model of The Netherlands)
36. HAPPY MODELING!