Title: Better Data Assimilation through Gradient Descent
1. Better Data Assimilation through Gradient Descent
- Leonard A. Smith, Kevin Judd and Hailiang Du
- Centre for the Analysis of Time Series
- London School of Economics
2. Outline
- Perfect model scenario (PMS)
- GD method
- GD is NOT 4DVAR
- Results compared with Ensemble KF
- Imperfect model scenario (IPMS)
- GD method with stopping criteria
- GD is NOT WC4DVAR
- Results compared with Ensemble KF
- Conclusion and further discussion
3. Experiment Design (PMS)
4. Ensemble techniques
- Generate ensemble directly, e.g. Particle Filter, Ensemble Kalman Filter
- Generate ensemble from perturbations of a reference trajectory, e.g. SVD on 4DVAR
- Gradient Descent (GD) method
K. Judd and L. A. Smith (2001) Indistinguishable States I: The Perfect Model Scenario, Physica D 151, 125-141.
5-9. Gradient Descent (Shadowing Filter)
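The content of these slides is not in the transcript. As a rough illustration of the shadowing-filter idea of Judd & Smith (2001), the sketch below descends the mismatch cost C(U) = sum_i ||u_{i+1} - f(u_i)||^2 of a pseudo-orbit U initialised at the observations. The fixed step size, iteration count, finite-difference Jacobian and Ikeda parameter are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ikeda(u, a=0.9):
    """Standard Ikeda map (parameter a = 0.9 assumed, not taken from the talk)."""
    x, y = u
    t = 0.4 - 6.0 / (1.0 + x * x + y * y)
    return np.array([1.0 + a * (x * np.cos(t) - y * np.sin(t)),
                     a * (x * np.sin(t) + y * np.cos(t))])

def mismatch_cost(U, f):
    """C(U) = sum_i ||u_{i+1} - f(u_i)||^2 over the pseudo-orbit U."""
    errs = U[1:] - np.array([f(u) for u in U[:-1]])
    return float(np.sum(errs ** 2))

def jacobian(f, u, eps=1e-6):
    """Finite-difference Jacobian of the map f at state u (illustrative choice)."""
    d = len(u)
    J = np.zeros((d, d))
    fu = f(u)
    for k in range(d):
        du = np.zeros(d)
        du[k] = eps
        J[:, k] = (f(u + du) - fu) / eps
    return J

def gradient_descent_shadow(obs, f, step=0.01, n_iter=2000):
    """Descend the mismatch cost of a pseudo-orbit initialised at the observations."""
    U = np.array(obs, dtype=float)
    n = len(U)
    for _ in range(n_iter):
        mis = U[1:] - np.array([f(u) for u in U[:-1]])   # u_{i+1} - f(u_i)
        grad = np.zeros_like(U)
        for i in range(n):
            if i < n - 1:                                # u_i appears inside f(u_i)
                grad[i] -= 2.0 * jacobian(f, U[i]).T @ mis[i]
            if i > 0:                                    # u_i appears as the "next" state
                grad[i] += 2.0 * mis[i - 1]
        U -= step * grad
    return U

# Illustrative usage: noisy observations of a short Ikeda trajectory
rng = np.random.default_rng(0)
truth = [np.array([0.5, 0.5])]
for _ in range(39):
    truth.append(ikeda(truth[-1]))
obs = np.array(truth) + rng.normal(0.0, 0.05, size=(40, 2))
pseudo_orbit = gradient_descent_shadow(obs, ikeda)
print(mismatch_cost(obs, ikeda), "->", mismatch_cost(pseudo_orbit, ikeda))
```

In the authors' papers the descent is run on the full pseudo-orbit rather than on a single initial condition, which is the key point developed on the next slide.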
10. GD is NOT 4DVAR
- Difference in cost function (contrasted in the sketch below)
- Noise model assumption
  - the observational noise model enters the 4DVAR cost function
  - the GD cost function does not depend on a noise model
- Assimilation window: the 4DVAR dilemma
  - difficulty of locating the global minimum with a long assimilation window
  - loss of information about the model dynamics and observations without a long window
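To make the cost-function and noise-model bullets concrete, a schematic comparison in standard notation is given below (x_b, B, R, H are the background state, background and observation error covariances, and the observation operator; s_i are the observations). This is a sketch of the standard formulations, not the formulas on the missing slides.

```latex
% Strong-constraint 4DVAR: the observational noise model enters through R (and B)
J(x_0) = \tfrac{1}{2}\,(x_0 - x_b)^{\mathsf T} B^{-1} (x_0 - x_b)
       + \tfrac{1}{2}\sum_{i=0}^{n} \bigl(H(x_i) - s_i\bigr)^{\mathsf T} R^{-1} \bigl(H(x_i) - s_i\bigr),
\qquad x_{i+1} = f(x_i).

% GD (shadowing filter): the mismatch cost is defined on the whole pseudo-orbit
% U = (u_0, \dots, u_n) and contains no observational noise model
C(U) = \sum_{i=0}^{n-1} \bigl\lVert u_{i+1} - f(u_i) \bigr\rVert^{2}.
```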
11. Methodology
12-15. Form ensemble
- Start from the GD result (the pseudo-orbit).
- Sample the local space: perturb the observations and run GD (sketched below).
- Draw ensemble members from the resulting ensemble trajectories according to likelihood.
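A minimal sketch of the perturb-and-rerun step, assuming a gd_shadow(obs, f) routine like the one sketched after slides 5-9 and Gaussian observational noise; rendering "draw according to likelihood" as simple likelihood weighting is an assumption about the exact scheme.

```python
import numpy as np

def form_ensemble(obs, f, gd_shadow, noise_std, n_candidates=512, n_members=64, seed=0):
    """Perturb the observations, rerun GD on each perturbed set, and draw
    ensemble members according to their likelihood under the assumed
    Gaussian noise model (the weighting scheme here is illustrative)."""
    rng = np.random.default_rng(seed)
    candidates, log_like = [], []
    for _ in range(n_candidates):
        pert_obs = obs + rng.normal(0.0, noise_std, size=obs.shape)
        U = gd_shadow(pert_obs, f)                      # pseudo-orbit from perturbed obs
        candidates.append(U)
        # Gaussian log-likelihood of the original observations given this pseudo-orbit
        log_like.append(-0.5 * np.sum((obs - U) ** 2) / noise_std ** 2)
    log_like = np.asarray(log_like)
    w = np.exp(log_like - log_like.max())
    w /= w.sum()
    members = rng.choice(n_candidates, size=n_members, replace=True, p=w)
    return [candidates[i] for i in members]
```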
16. Ensemble members in the state space
- Compare ensemble members generated by the Gradient Descent method and by the Ensemble Adjustment Kalman Filter method in the state space.
- Low-dimensional example to visualize; higher-dimensional results later.
17. Ikeda Map, std of observational noise 0.05, 512 ensemble members
18. Evaluate ensemble via Ignorance
- Ensemble -> p(.)
- The Ignorance score is defined by S(p, Y) = -log2 p(Y), where Y is the verification.
Ikeda Map and Lorenz96 System; the noise model is N(0, 0.4) and N(0, 0.05) respectively. Lower and Upper are the 90 percent bootstrap resampling bounds of the Ignorance score.
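A minimal sketch of turning the ensemble into p(.) and scoring it, assuming Gaussian kernel dressing with a fixed bandwidth for a scalar verification; the dressing choice and bandwidth are illustrative assumptions, not necessarily the authors' procedure.

```python
import numpy as np

def kernel_density(ensemble, y, sigma):
    """Gaussian kernel dressing: p(y) is the mean of N(y; member, sigma^2)."""
    z = (y - np.asarray(ensemble, dtype=float)) / sigma
    return float(np.mean(np.exp(-0.5 * z ** 2) / (sigma * np.sqrt(2.0 * np.pi))))

def ignorance(ensembles, verifications, sigma):
    """Ignorance score S(p, Y) = -log2 p(Y), averaged over the verifications."""
    scores = [-np.log2(kernel_density(ens, y, sigma))
              for ens, y in zip(ensembles, verifications)]
    return float(np.mean(scores))

# Illustrative usage with made-up numbers (not the results in the talk)
ens_list = [np.array([0.10, 0.12, 0.05, 0.09]), np.array([0.40, 0.35, 0.45, 0.38])]
print(ignorance(ens_list, [0.11, 0.50], sigma=0.05))
```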
19. Imperfect Model Scenario
20-21. Toy model-system pairs
- The imperfect model is obtained by using a truncated polynomial (illustrated in the sketch below).
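The truncated-polynomial equation on the slide did not survive extraction. The sketch below illustrates the idea with the Ikeda map as the system and an imperfect model obtained by truncating the Taylor expansions of cos and sin; the truncation order and map parameter are assumptions, not necessarily those used in the talk.

```python
import numpy as np
from math import factorial

def ikeda_system(u, a=0.9):
    """'Truth' system: the Ikeda map (parameter a = 0.9 assumed)."""
    x, y = u
    t = 0.4 - 6.0 / (1.0 + x * x + y * y)
    return np.array([1.0 + a * (x * np.cos(t) - y * np.sin(t)),
                     a * (x * np.sin(t) + y * np.cos(t))])

def ikeda_imperfect(u, a=0.9, order=5):
    """Imperfect model: cos and sin replaced by Taylor polynomials truncated
    at the given order (the truncation order is an illustrative assumption)."""
    x, y = u
    t = 0.4 - 6.0 / (1.0 + x * x + y * y)
    c = sum((-1) ** k * t ** (2 * k) / factorial(2 * k) for k in range(order // 2 + 1))
    s = sum((-1) ** k * t ** (2 * k + 1) / factorial(2 * k + 1) for k in range(order // 2 + 1))
    return np.array([1.0 + a * (x * c - y * s), a * (x * s + y * c)])

# The pointwise discrepancy between system and model is the source of model error
u0 = np.array([0.3, 0.2])
print(ikeda_system(u0) - ikeda_imperfect(u0))
```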
22-25. Insight of Gradient Descent
26. Implied noise, imperfection error, and distance from the truth
Figure: statistics of the pseudo-orbit as a function of the number of Gradient Descent iterations, for both the higher-dimensional Lorenz96 system-model pair experiment (left) and the low-dimensional Ikeda system-model pair experiment (right).
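One plausible reading of the three plotted quantities, with s_i the observations, u_i the pseudo-orbit, f the model and x_i the true states; these definitions follow the shadowing-filter literature and are stated here as assumptions rather than taken from the slide.

```latex
\text{implied noise: } w_i = s_i - u_i, \qquad
\text{imperfection error: } e_i = u_{i+1} - f(u_i), \qquad
\text{distance from the truth: } d_i = \lVert u_i - x_i \rVert .
```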
27. GD with stopping criteria
- GD minimization with intermediate runs produces more consistent pseudo-orbits.
- Certain criteria need to be defined in advance to decide when to stop, or how to tune the number of iterations.
- The stopping criterion can be built by testing the consistency between the implied noise and the noise model (sketched below),
- or by minimizing another relevant utility function.
28. Imperfection error vs model error
Figure: model error and imperfection error at observational noise level 0.01; the model error is marked "not accessible".
29. Imperfection error vs model error
Figure: imperfection error at observational noise levels 0.002 and 0.05.
30. GD vs WC4DVAR
- WC4DVAR requires a model error assumption; GD does not (schematic below).
- GD provides model error estimates.
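A schematic of the contrast suggested by this slide, in standard weak-constraint 4DVAR notation (Q is the assumed model error covariance, eta_i the model error increments). The GD mismatch cost requires no such assumption, and the residuals of the converged pseudo-orbit can serve as model error estimates. This is a sketch of the standard formulations, not the authors' slides.

```latex
% Weak-constraint 4DVAR: an explicit model error covariance Q must be assumed
J(x_0, \eta) = \tfrac{1}{2}\,(x_0 - x_b)^{\mathsf T} B^{-1}(x_0 - x_b)
 + \tfrac{1}{2}\sum_i \bigl(H(x_i) - s_i\bigr)^{\mathsf T} R^{-1}\bigl(H(x_i) - s_i\bigr)
 + \tfrac{1}{2}\sum_i \eta_i^{\mathsf T} Q^{-1}\eta_i,
\qquad x_{i+1} = f(x_i) + \eta_i.

% GD: no Q is assumed; after descent the residuals of the pseudo-orbit,
% \hat{\eta}_i = u_{i+1} - f(u_i), provide estimates of the model error.
C(U) = \sum_i \bigl\lVert u_{i+1} - f(u_i) \bigr\rVert^{2}.
```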
31. Forming ensemble
- Apply the GD method on perturbed observations.
- Apply the GD method on perturbed pseudo-orbits.
- Apply the GD method on the results of other data assimilation methods. Particle filter?
32. Imperfect model experiment: Ikeda system-model pair, std of observational noise 0.05, 1024 EnKF ensemble members, 64 GD ensemble members
33. Evaluate ensemble via Ignorance
- The Ignorance score is defined by S(p, Y) = -log2 p(Y), where Y is the verification.
System     Ignorance(EnKF)  Ignorance(GD)  Lower(EnKF)  Lower(GD)  Upper(EnKF)  Upper(GD)
Ikeda      -2.67            -3.62          -2.77        -3.70      -2.52        -3.55
Lorenz96   -3.52            -4.13          -3.60        -4.18      -3.39        -4.08
Ikeda system-model pair and Lorenz96 system-model pair; the noise model is N(0, 0.5) and N(0, 0.05) respectively. Lower and Upper are the 90 percent bootstrap resampling bounds of the Ignorance score.
34. Conclusion
- A methodology for applying GD to data assimilation in PMS is demonstrated and shown to outperform the 4DVAR and Ensemble Kalman Filter methods.
- Outside PMS, a methodology for applying GD to data assimilation with a stopping criterion is introduced and shown to outperform the WC4DVAR and Ensemble Kalman Filter methods.
- Applying the GD method with a stopping criterion also produces informative estimates of model error.
No data assimilation without dynamics.
35. Thank you!
- H.L.Du@lse.ac.uk
- Centre for the Analysis of Time Series
- http://www2.lse.ac.uk/CATS/home.aspx