Title: Using FLUXNET data to evaluate land surface models
1. Using FLUXNET data to evaluate land surface models
- Ray Leuning and Gab Abramowitz
- 4–6 June 2008
2. Land surface model evaluation framework
Reto Stöckli's Model Farm
3. Schematic diagram of model components from a systems perspective
- system boundary, B
- inputs, u
- initial states, x0
- parameters, θ
- model structure, M
- model states, x
- outputs, y
Errors in each component affect model performance.
Liu, Y. Q. and Gupta, H. V. (2007). Uncertainty in Hydrologic Modeling: Toward an Integrated Data Assimilation Framework. Water Resources Research 43, W07401, doi:10.1029/2006WR005756.
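The listed components fit a standard state-space sketch, written here in the slide's own symbols (the output operator h is my assumption; the slide only lists the components):

```latex
x_{t+1} = M(x_t, u_t; \theta), \qquad x_0 \ \text{given}
y_t = h(x_t, u_t; \theta)
```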
4. Parameter estimation: multiple objective functions possible
5. Parameter estimation: multiple criteria possible, e.g. λE and NEE
The dark line between the two criteria's minima, α and β, represents the Pareto set.
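As an illustration of the idea, a minimal sketch of extracting the Pareto set from candidate parameter sets scored against two criteria (here RMSE of λE and of NEE); the scoring functions are hypothetical placeholders:

```python
import numpy as np

def pareto_set(costs):
    """Return indices of non-dominated candidates.

    costs: (n_candidates, 2) array of (RMSE_lambdaE, RMSE_NEE).
    A candidate is Pareto-optimal if no other candidate is <= in
    both criteria and strictly < in at least one.
    """
    n = costs.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        dominated = (np.all(costs <= costs[i], axis=1) &
                     np.any(costs < costs[i], axis=1))
        if dominated.any():
            keep[i] = False
    return np.where(keep)[0]

# Illustrative use (rmse_le, rmse_nee and candidate_params are hypothetical):
# costs = np.array([[rmse_le(p), rmse_nee(p)] for p in candidate_params])
# pareto = pareto_set(costs)   # the curve between the two minima, alpha and beta
```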
6. Comparing RMSE of models of varying complexity across sites after parameter optimization
[Figure: RMSE of λE and H for each model across sites; the ideal result lies at (0,0).]
Hogue, T. S., Bastidas, L. A., Gupta, H. V., and Sorooshian, S. (2006). Evaluating Model Performance and Parameter Behavior for Varying Levels of Land Surface Model Complexity. Water Resources Research 42, W08430, doi:10.1029/2005WR004440.
7. SOLO neural network: cluster analysis
Abramowitz, G., Gupta, H., Pitman, A., Wang, Y.-P., Leuning, R. and Cleugh, H. A. (2006). Neural Error Regression Diagnosis (NERD): A Tool for Model Bias Identification and Prognostic Data Assimilation. Journal of Hydrometeorology 7, 160–177.
8. Poor model performance is not just due to poor parameter estimation
CABLE with 4 different parameter sets; SOLO cluster analysis.
9. No model or single performance measure is best for all fluxes
Models compared: CABLE, ORCHIDEE, CLM, MLR (multiple linear regression), ANN (artificial neural network).
10. Model comparisons: average seasonal cycle
[Figure panels: NEE, λE and H; global default parameters for each PFT used.]
11. Model comparisons: average daily cycle
[Figure panels: NEE, λE and H; global default parameters for each PFT used.]
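The average-cycle diagnostics in slides 10 and 11 reduce to one grouping operation. A minimal sketch, assuming half-hourly model and observed flux series already aligned in time and spanning whole days:

```python
import numpy as np

def mean_diurnal_cycle(flux, steps_per_day=48):
    """Average a flux series by time of day.

    flux: 1-D array whose length is a whole number of days;
    returns one mean value per time-of-day step (NaN gaps ignored).
    """
    days = flux.reshape(-1, steps_per_day)
    return np.nanmean(days, axis=0)

# Compare model vs observations vs benchmark on one axis
# (obs_le, cable_le, mlr_le are hypothetical arrays):
# for series in (obs_le, cable_le, mlr_le):
#     plt.plot(mean_diurnal_cycle(series))
```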
12. PDFs for NEE, λE and H across 6 sites
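One hedged way to score such a PDF comparison (the slides do not specify a measure) is the overlap coefficient of two histograms built on common bins:

```python
import numpy as np

def pdf_overlap(a, b, bins=50):
    """Overlap coefficient in [0, 1] of the empirical PDFs of a and b."""
    lo = min(a.min(), b.min())
    hi = max(a.max(), b.max())
    pa, edges = np.histogram(a, bins=bins, range=(lo, hi), density=True)
    pb, _ = np.histogram(b, bins=bins, range=(lo, hi), density=True)
    width = edges[1] - edges[0]
    return np.sum(np.minimum(pa, pb)) * width   # integral of min(pa, pb)

# e.g. pdf_overlap(obs_nee, model_nee) -> 1.0 means identical PDFs
```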
13. NEE: perturbed-parameter ensemble simulations
Average diurnal cycle; monthly averages.
14. λE: perturbed-parameter ensemble simulations
Average diurnal cycle; monthly averages.
15. H: perturbed-parameter ensemble simulations
Average diurnal cycle; monthly averages.
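A sketch of how such a perturbed-parameter ensemble might be generated, assuming a run_model(params, forcing) wrapper (hypothetical) around the LSM; the perturbation size and count are illustrative, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

def perturbed_ensemble(run_model, params, forcing, n=50, spread=0.2):
    """Run the model n times, each parameter scaled by a uniform factor
    in [1 - spread, 1 + spread]; returns an (n, n_time) flux array."""
    runs = []
    for _ in range(n):
        factors = rng.uniform(1.0 - spread, 1.0 + spread, size=len(params))
        runs.append(run_model(np.asarray(params) * factors, forcing))
    return np.array(runs)

# Diurnal/monthly envelopes then come from e.g.
# np.percentile(ens, [5, 50, 95], axis=0)
```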
16. Partitioning climate space into 9 SOM nodes
[Figure: 3 × 3 grid of SOM nodes in (S↓, Tair, qair) climate space; night-time nodes marked.]
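A minimal self-organising map sketch for partitioning (S↓, Tair, qair) samples into a 3 × 3 grid of nodes; the training constants are illustrative assumptions, not taken from SOLO:

```python
import numpy as np

def train_som(data, grid=(3, 3), epochs=20, lr0=0.5, sigma0=1.5, seed=0):
    """Online SOM. data: (n_samples, n_features), ideally standardised.
    Returns node weights of shape (grid[0]*grid[1], n_features)."""
    rng = np.random.default_rng(seed)
    gy, gx = np.meshgrid(np.arange(grid[0]), np.arange(grid[1]), indexing="ij")
    coords = np.column_stack([gy.ravel(), gx.ravel()]).astype(float)
    w = data[rng.choice(len(data), size=coords.shape[0], replace=False)].copy()
    total = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in data[rng.permutation(len(data))]:
            frac = step / total
            lr = lr0 * (1.0 - frac)                      # decaying learning rate
            sigma = 0.5 + (sigma0 - 0.5) * (1.0 - frac)  # shrinking neighbourhood
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))  # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            w += lr * np.exp(-d2 / (2.0 * sigma ** 2))[:, None] * (x - w)
            step += 1
    return w

# Assign each timestep to a node (1..9), climate being (n, 3) standardised:
# node = np.argmin(((climate[:, None, :] - w[None]) ** 2).sum(-1), axis=1) + 1
```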
17. NEE PDFs at nodes 7–9 at Tumbarumba
[Figure: NEE PDFs for SOM nodes 7, 8 and 9 in (S↓, Tair, qair) space; night-time node marked.]
18. Suggested set of discussion topics
- Primary objectives
  - Establish a framework that provides standardised data sets and an agreed set of analytical tools for LSM evaluation
  - Analytical tools should provide a wide range of diagnostic information about LSM performance
  - Datasets specifically formatted for LSM execution and evaluation
- Specific objectives
  - To detect and eliminate systematic biases in several LSMs in current use
  - To obtain optimal parameter values for LSMs after biases have been diminished or eliminated
  - To evaluate the correlation between key model parameters and bioclimatic space
19. Tasks for meeting 1
- Discuss what form the LSM evaluation framework should take
  - PILPS style?
  - What will be asked of data providers?
  - What will be asked of LS modellers?
- Agree on a minimal set of LSM flux performance measures (model vs observations vs benchmark)
  - Average diurnal cycle?
  - Average annual cycle (monthly means)?
  - Some type of frequency analysis (wavelet, power spectrum, etc.)?
  - Conditional analysis (SOM node analysis)
  - Overlap of PDFs
  - Multiple-criteria cost function set (mean, RMSE, R², regression gradient and intercept; see the sketch after this list)
- Discuss other LSM outputs and datasets useful for process evaluation
- Discuss ways to include parameter uncertainty in LSM evaluation (cf. Abramowitz et al., 2008)
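A plain sketch of the five candidate measures named above, for one flux; an illustration only, not the framework's agreed implementation:

```python
import numpy as np

def performance_measures(obs, mod):
    """Mean bias, RMSE, R-squared, regression gradient and intercept."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    bias = np.mean(mod - obs)
    rmse = np.sqrt(np.mean((mod - obs) ** 2))
    r = np.corrcoef(obs, mod)[0, 1]
    gradient, intercept = np.polyfit(obs, mod, 1)  # mod ~ gradient*obs + intercept
    return {"mean_bias": bias, "rmse": rmse, "rsq": r ** 2,
            "gradient": gradient, "intercept": intercept}
```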
20. Tasks for meeting 2
- Discuss options for the most effective way to provide these services
  - Will individual groups do benchmarking and evaluation of model states?
  - Preference for an automated web-based interface and data server
- Automatic processing through a website?
  - Abramowitz suggests automating basic LSM performance-measure plots, including benchmarking (as in Abramowitz, 2005).
  - Uploaded output from LSM runs in ALMA-format netCDF could return standard plots to the user and/or be posted on the website.
- Model detective work and improvement to be done by individual groups
21. Data analysis will use
- Several current LSMs
- Quality-controlled FLUXNET datasets
- SOFM (self-organising feature map) analysis
  - to classify bioclimatic data into n² nodes
  - to evaluate model biases for each node, to help the detective work of identifying areas of model weakness
  - to identify upper-boundary surfaces for stocks of C, N and P in global ecosystems as a function of the n² climate nodes
- Benchmarking
  - to compare model predictions at each climate node against multiple linear regression (MLR) estimates (see the sketch after this list)
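A sketch of the MLR benchmark at one climate node: regress the observed flux on the met drivers (here S↓, Tair, qair, an assumption) over the training samples assigned to that node, then predict from the same drivers:

```python
import numpy as np

def mlr_benchmark(met, flux):
    """Fit flux ~ met by least squares; met is (n, 3) [Sdown, Tair, qair].
    Returns a predict(new_met) closure."""
    X = np.column_stack([np.ones(len(met)), met])    # add intercept column
    coef, *_ = np.linalg.lstsq(X, flux, rcond=None)
    def predict(new_met):
        Xn = np.column_stack([np.ones(len(new_met)), new_met])
        return Xn @ coef
    return predict

# Per-node benchmark: fit on samples where node == k, then compare the
# LSM's RMSE against the MLR's RMSE at that node.
```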
22. Tools currently available from Abramowitz
- SOLO (SOFM + MLR) software (Fortran)
- LSMs: the Model Farm of Reto Stöckli plus CABLE
- CSV to ALMA netCDF conversion routine (Fortran)
- Plotting routines in R
- FLUXNET database in CSV and netCDF formats
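The conversion routine listed above is Fortran; purely to illustrate the target layout, here is a hedged Python sketch writing one ALMA-style variable (the variable name Qle and units follow the ALMA convention as commonly used; the CSV layout and start date are assumptions):

```python
import numpy as np
from netCDF4 import Dataset

def csv_to_alma(csv_path, nc_path):
    """Write a half-hourly latent heat series to an ALMA-style netCDF file."""
    qle = np.loadtxt(csv_path, delimiter=",", skiprows=1, usecols=0)  # assumes header row
    with Dataset(nc_path, "w") as nc:
        nc.createDimension("time", len(qle))
        t = nc.createVariable("time", "f8", ("time",))
        t.units = "seconds since 2008-01-01 00:00:00"   # assumed start date
        t[:] = np.arange(len(qle)) * 1800.0             # half-hourly steps
        v = nc.createVariable("Qle", "f4", ("time",))   # ALMA latent heat name
        v.units = "W/m2"
        v[:] = qle
```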