Issues in ensemble weather prediction

Transcript and Presenter's Notes

1
Issues in ensemble weather prediction
NOAA Earth System Research Laboratory
  • Tom Hamill
  • NOAA Earth System Research Lab, Boulder,
    Colorado, USA
  • tom.hamill_at_noaa.gov

2
Success story
ECMWF's Lothar forecast, c/o Tim Palmer. Many
members forecast the storm several days hence;
the deterministic forecast did not.
3
Success: a limited-area ensemble forecast provides
evidence of a signal of severe weather two days
prior.
Tornadoes related to large-scale patterns of
instability and shear, often predictable several
days hence.
48-h SREF forecast valid 21 UTC 7 April 2006:
Prob(MLCAPE > 1000 J kg-1) × Prob(6-km shear >
40 kt) × Prob(0-1 km SRH > 100 m2 s-2) × Prob(MLLCL
< 1000 m) × Prob(3-h conv. pcpn > 0.01 in).
Shaded area: Prob > 5%; max 40%.
c/o David Bright, NOAA SPC, and Jun Du, NCEP/EMC
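The combined probability on this slide is simply a product of per-ingredient ensemble probabilities. A minimal sketch of that computation, for one grid point; the member values and ensemble size are invented for illustration, but the thresholds follow the slide:

```python
import numpy as np

# Hypothetical SREF-style member values at one grid point (values invented).
rng = np.random.default_rng(0)
n_members = 21
mlcape = rng.uniform(500.0, 2500.0, n_members)   # J kg-1
shear_6km = rng.uniform(20.0, 60.0, n_members)   # kt
srh_01km = rng.uniform(50.0, 250.0, n_members)   # m2 s-2
mllcl = rng.uniform(400.0, 1600.0, n_members)    # m
pcpn_3h = rng.uniform(0.0, 0.1, n_members)       # in

def ens_prob(field, threshold, above=True):
    """Ensemble relative frequency of exceeding (or falling below) a threshold."""
    return np.mean(field > threshold if above else field < threshold)

# Product of the five ingredient probabilities, as on the slide.
p_severe = (ens_prob(mlcape, 1000.0)
            * ens_prob(shear_6km, 40.0)
            * ens_prob(srh_01km, 100.0)
            * ens_prob(mllcl, 1000.0, above=False)
            * ens_prob(pcpn_3h, 0.01))
print(f"combined severe probability: {100 * p_severe:.1f}%")
```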
4
Severe event of April 7, 2006
  • First-ever day-2 outlook high risk of severe
    weather issued by the NOAA Storm Prediction
    Center; in the past they have been cautious.
  • > 800 total severe reports, 3 killer tornadoes,
    10 deaths.
  • Diagnostics from SREF and good past SREF
    performance aided forecaster confidence

5
Still, unresolved problems with ensemble weather
predictions
(Figure: horizontal lines indicate the distribution
of climatology; error bars from a block bootstrap.)
  • Verification of ECMWF precipitation reforecasts
    (Fall, 1981-2000) using the 2005 version of the
    ECMWF forecast model, with 15 members.
  • Substantial lack of calibration of precipitation
    forecasts.
  • A similar story can be told for surface
    temperature forecasts; the raw ensemble is still
    subject to significant systematic errors.
  • The variables we don't really care much about
    (Z500) we do pretty well at; the variables we do
    care about (surface weather) are tougher to fix.

6
Topics
  • Limitations of ensemble forecasts: summary of
    ongoing work
  • Sub-optimal sets of initial conditions
  • Model errors
  • Linkages where WGNE can help

7
Initial condition issues
  • Ensemble of initial conditions
  • Should sample distribution of plausible analyses
    (and subsequent forecasts should provide samples
    of forecast uncertainty).
  • Given a limited-size ensemble, forecast-error
    variance is maximally explained if the initial
    conditions project onto analysis-error covariance
    singular vectors (Ehrendorfer and Tribbia, JAS, 1997).
  • Are our methods providing suitable approximations
    to this?
  • Are we perturbing all aspects of the initial
    condition to which there is initial condition
    sensitivity?
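The Ehrendorfer and Tribbia result concerns singular vectors of the tangent-linear propagator. A toy sketch of how such vectors are extracted; the propagator matrix and the analysis-error covariance below are invented for illustration, whereas a real system would use the model's tangent-linear and adjoint operators:

```python
import numpy as np

# Toy tangent-linear propagator M for a 3-variable system (values invented).
M = np.array([[1.2, 0.8, 0.0],
              [0.0, 0.9, 0.5],
              [0.1, 0.0, 0.7]])

# Right singular vectors v_i of M maximize ||M v|| / ||v||: they are the
# initial perturbations with the largest growth in the chosen norm.
U, s, Vt = np.linalg.svd(M)
leading_sv = Vt[0]   # fastest-growing initial perturbation
growth = s[0]        # its amplification factor over the interval

# To target analysis-error covariance SVs instead, whiten with A^{1/2},
# i.e. take the SVD of M @ A_sqrt, where A is the analysis-error
# covariance (assumed known; a diagonal toy example here).
A_sqrt = np.diag(np.sqrt([0.5, 1.0, 2.0]))
_, s_aec, Vt_aec = np.linalg.svd(M @ A_sqrt)
```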

8
Structure of analysis errors
  • Estimated from differences between operational
    analyses and forecasts from NCEP, US Navy, and
    Canada. Here, data for 45º N
  • Note small analysis errors in mid-troposphere,
    larger errors near surface and tropopause.

(Figure: error spectra at leads 0, 12, 24, 36, and
48 h, with the analysis spectrum.)
Ref: Hakim, 2005, MWR, 133, 567-578.
9
Analysis error spectrum
(Figure: analysis-error spectrum and forecast-error
spectra at 12 h and 72 h, against the analysis
spectrum.)
Analysis errors are a larger fraction of the
climatological variance at small scales than at
large scales. Still, there is more total error
in the large scales than in the small scales.
Ref: Hakim, MWR, March 2005.
10
Singular vectors (selecting maximally growing
perturbations)
Properties of singular vectors, sized at initial
time (dashed) and 48 h later (solid). Main point:
analysis errors may be structurally different
from total-energy singular vectors. This suggests
TESVs have large amplitude in the mid-troposphere
and much more power at small scales than analysis
errors, which grow less rapidly. Note 1: ECMWF's
singular vectors are a combination of initial-time
and evolved. Note 2: TESVs have produced, thus
far, generally better forecasts than AECSVs.
(Figure: analysis-error covariance singular vectors
vs. total-energy singular vectors, at initial time
and 48-h evolved.)
Ref: Barkmeijer et al., QJRMS, 1999.
11
The ensemble Kalman filter: a schematic
(This schematic is a bit of an inappropriate
simplification, for the EnKF uses every member to
estimate background-error covariances.)
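As a concrete illustration of how every member contributes to the background-error covariance estimate, here is a minimal perturbed-observation EnKF update for a single scalar observation. This is a sketch with invented numbers; note that the square-root filter discussed on the next slide avoids perturbing the observations:

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(X, y, H, r):
    """Perturbed-observation EnKF analysis for one scalar observation.

    X : (n_state, n_members) background ensemble
    y : observed value
    H : (n_state,) linear observation operator
    r : observation-error variance
    """
    n = X.shape[1]
    Xp = X - X.mean(axis=1, keepdims=True)       # member perturbations
    hX = H @ X                                   # ensemble in observation space
    hXp = hX - hX.mean()
    # Covariances estimated from the ensemble itself (the slide's point).
    pbh = Xp @ hXp / (n - 1)                     # cov(x, Hx)
    hpbh = hXp @ hXp / (n - 1)                   # var(Hx)
    K = pbh / (hpbh + r)                         # Kalman gain
    y_pert = y + rng.normal(0.0, np.sqrt(r), n)  # perturbed observations
    return X + np.outer(K, y_pert - hX)

# 3-variable state, 50 members, observe the first variable.
X = rng.normal(0.0, 1.0, (3, 50))
Xa = enkf_update(X, y=0.5, H=np.array([1.0, 0.0, 0.0]), r=0.25)
```

The analysis ensemble Xa is pulled toward the observation, and its spread in the observed variable shrinks relative to the background.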
12
Will ensemble filters improve the initialization
of ensemble forecasts?
  • Test of an ensemble square-root filter (EnKF;
    Whitaker and Hamill, MWR, 2002) with no perturbed
    observations.
  • 64-member, T126L64 NCEP GFS, May-Jun 2007.
  • All conventional data, cloud-track winds,
    COSMIC GPS, AMSU radiances.
  • Compare 20-member ensemble forecasts with EnKF
    perturbations vs. operational NCEP Ensemble
    Transform (ET) perturbations re-centered on the
    EnKF ensemble mean.
  • The ET technique is an update of the old breeding
    technique that includes some orthogonalization
    of perturbations.

13
Evolution of T850 spread, EnKF and ET
perturbation methods
EnKF spreads are initially more homogeneous. It is
tough here to visualize how spreads evolve with
time, so (next slide)…
Ref: Whitaker and Hamill, presentation at AMS
Annual Meeting, 2008.
14
Evolution of T850 spread, EnKF and ET
perturbation methods
ET spreads grow quickly in the midlatitudes; the
EnKF's do not. This hints at possible dynamical
imbalances in the EnKF initial conditions. Is ET
spread growth due to leakage of the excess in
polar regions?
15
Perturb the land surface?
The land state can be thought of as part of the
initial condition. Why not perturb it? Perturbing
the soil moisture (here, WRF initialized with 2
different soil-moisture analyses) increased
warm-season precipitation forecast spread and
modulated the details of thunderstorm activity.
Likely to have the biggest impact in the warm
season, when insolation is large. In winter,
though, perturb snow cover/depth?
Ref: Sutton et al., MWR, Nov 2006.
16
Conclusions (initial conditions)
  • Each of the existing operational methods falls
    short of optimal in some important ways.
  • Singular vectors don't resemble analysis errors,
    which is detrimental especially to short-range
    forecasts (but they do grow quickly!).
  • Ensemble filters apparently slow the growth of
    spread in ensembles, perhaps due to factors such
    as the introduction of imbalances through the use
    of covariance localization.
  • Comparisons of methods between centers are
    unrevealing, as different centers use different
    models.
  • Clean comparisons between methods are something
    the WMO might facilitate.

17
Ensemble model error
  • Deal with it directly:
  • Ameliorate model bias (increase resolution,
    improve parameterizations, etc.). Won't discuss.
  • Increase spread among ensemble members
    (stochastic backscatter, multi-model,
    multi-parameterization, better treatment of
    lateral boundary conditions).
  • Deal with it indirectly (statistical
    post-processing).

18
Spectral Stochastic Backscatter Scheme (SSBS)
Rationale: A fraction of the dissipated energy is
scattered upscale and acts as streamfunction
forcing for the resolved-scale flow (originally
developed in the context of LES, now adapted to
NWP). Shutts, 2005; Berner et al., 2008a; Berner
et al., 2008b.
Total dissipation rate from numerical dissipation,
convection, and gravity/mountain-wave drag.
Spectral Markov chain: temporal and spatial
correlations prescribed.
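A 1-D toy sketch of such a spectral Markov-chain pattern generator, with each Fourier coefficient evolving as a first-order autoregressive process: the wavenumber-dependent noise amplitude sets the spatial correlation and phi sets the temporal correlation. The spectral slope, autocorrelation, and truncation below are invented for illustration, not the operational SSBS settings:

```python
import numpy as np

rng = np.random.default_rng(2)

n_wave = 32
phi = 0.95                       # temporal autocorrelation per time step
k = np.arange(1, n_wave + 1)
sigma_k = k ** (-5.0 / 6.0)      # assumed spectral slope (illustrative only)

# Evolve each complex spectral coefficient as an AR(1) (Markov) process;
# the sqrt(1 - phi^2) factor keeps the variance stationary.
coeffs = np.zeros(n_wave, dtype=complex)
for _ in range(200):             # spin up to statistical equilibrium
    noise = sigma_k * (rng.normal(size=n_wave) + 1j * rng.normal(size=n_wave))
    coeffs = phi * coeffs + np.sqrt(1.0 - phi**2) * noise

# Transform to grid space to obtain the streamfunction-forcing pattern.
x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
pattern = np.sum(coeffs[:, None] * np.exp(1j * np.outer(k, x)), axis=0).real
```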
19
SSBS results: tropical U850, T255L40 model
(Panels: BSS, RPSS, and rank histogram with
outliers.)
Solid: operational ensemble (OPER). Dashed:
ensemble with stochastic backscatter (SSBS).
Dotted: operational configuration with reduced
initial perturbations (OPER-IPRED).
Berner et al. 2008, JAS, in press, DOI
10.1175/2008JAS2677.1
20
Combination: multiple models and/or multiple
parameterizations
  • One possible approach to operational mesoscale
    guidance is to produce an ensemble forecast using
    a combination of different initial conditions and
    different trigger functions.
  • Dave Stensrud, 1994, from Stensrud and Fritsch,
    MWR, Sep. 1994 (part III).

Photo credit: Monty Python and the Holy Grail.
21
Multi-(whatever) ensembles
  • Potential pluses:
  • Provide different but still plausible
    predictions.
  • Models may have particular strong/weak aspects.
    Leverage the strong, discount the weak.
  • Implicitly samples analysis uncertainty through
    assimilation of somewhat different sets of
    observations and use of different data
    assimilation techniques.
  • Leverage each other's hard work and CPU cycles.
  • Potential minuses:
  • Models may all be developed under a similar set
    of assumptions (e.g., which terms to neglect in
    the equations of motion). What if some of these
    are consistently wrong and the forecasts have
    similar biases?
  • Complex task to share data internationally in
    real time.
  • Must be flexible to use whatever is available,
    given outages / production delays.

22
THORPEX Interactive Grand Global Ensemble
(TIGGE) archive centers and data providers
(Map of TIGGE archive centres and current data
providers: UKMO, CMC, CMA, ECMWF, Meteo-France,
JMA, NCAR, NCEP, KMA, BoM, CPTEC; data exchanged
via IDD/LDM, HTTP, and FTP.)
23
TIGGE: RPSS of Z500
(Panels: Dec 06-Feb 07, Apr-May 2007, Jun-Aug 2007,
Oct-Nov 2007.)
  • Skill of forecasts against own analyses for 4
    different periods using TIGGE data.
  • Models obviously of different quality.
Ref: Park et al., 2008, ECMWF TM 548.
24
TIGGE, Z500, ECMWF + UKMO
  • Trained with bias correction using the last 30
    days of forecasts and analyses.
  • ECMWF analysis as reference.
  • Conclusions:
  • Small benefit from multi-model relative to the
    best model.
  • Positive impact of bias correction at short
    leads, negative at long leads. The reforecast
    seminar will discuss why.
(Curves: ECMWF raw, UKMO raw, multi-model raw,
multi-model bias-corrected.)
Ref: Park et al., 2008, ECMWF TM 548.
25
T850, two-model
  • Trained with bias correction using the last 30
    days of forecasts and analyses.
  • ECMWF analysis as reference.
  • Conclusions:
  • UKMO raw forecasts are so contaminated by
    systematic errors that they add no value.
  • Multi-model calibrated is uniformly beneficial
    (presumably because of large, consistent biases).
(Curves: ECMWF raw, UK raw, multi-model raw,
multi-model bias-corrected.)
Ref: Park et al., 2008, ECMWF TM 548.
26
Dealing with model error indirectly: problems
we'd like to correct through calibration
27
(Figure examples: bias (drizzle over-forecast);
ensemble members too similar to each other;
probabilities too smooth, downscaling needed.)
28
Calibration basic concept
  • Use a time series of past (F, O) pairs to help
    determine the pdf of O given today's F.
  • Major questions:
  • What calibration scheme to use?
  • How much training data?
  • Are gains from calibration worth the drawbacks
    (extra computational expense)?
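A minimal instance of this basic concept: fit a simple linear (MOS-style) correction to past (F, O) pairs, then apply it to today's forecast. All numbers below are synthetic, with a deliberate bias and attenuation built into the fake forecasts:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic training archive of past (forecast, observation) pairs.
# Assumed: forecasts carry a +2 bias and are 20% "flattened" (invented numbers).
truth = rng.normal(15.0, 5.0, 500)
past_fcst = 0.8 * truth + 2.0 + rng.normal(0.0, 1.0, 500)
past_obs = truth + rng.normal(0.0, 0.5, 500)

# Fit a linear correction O = a*F + b by least squares.
a, b = np.polyfit(past_fcst, past_obs, 1)

def calibrate(f):
    """Map a raw forecast to its calibrated value using the trained fit."""
    return a * f + b

raw_today = 20.0
print(f"raw {raw_today:.1f} -> calibrated {calibrate(raw_today):.1f}")
```

Real schemes (logistic regression, NGR, analogs) extend this idea to full predictive distributions, but the training-data question is the same.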

29
Disadvantages to calibration
  • Calibration won't correct the underlying
    problem. We would prefer to achieve unbiased,
    reliable forecasts by doing the numerical
    modeling correctly in the first place.
  • There is no one general approach that works best
    for all applications.
  • Corrections may be model-specific: the
    calibrations for NCEP v 2.0 may not be useful for
    ECMWF, or even NCEP v 3.0.
  • Could constrain model development. Calibration
    is ideally based on a long database of prior
    forecasts (reforecasts, or hindcasts) from the
    same model. Upgrading the model is good for
    improving raw forecasts but may be bad for the
    skill of post-processed forecasts.
  • Users beware: several recently proposed
    calibration techniques are conceptually flawed
    or only work properly in certain circumstances.

30
Analog calibration method using reforecasts
On the left are old forecasts similar to today's
ensemble-mean forecast. For feeding an ensemble
streamflow model, form an ensemble from the
accompanying analyzed weather on the right-hand
side.
Ref: Hamill and Whitaker, MWR, Nov. 2006.
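A sketch of the analog search itself, assuming a simple RMS distance over the forecast grid. The archive sizes and fields below are synthetic, and the published method also involves choices about search region and number of analogs:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical reforecast archive: ~3000 past ensemble-mean precipitation
# forecasts over a small grid, plus the analyzed precipitation that
# verified each one (all values invented).
n_days, n_grid = 3000, 16
archive_fcst = rng.gamma(2.0, 2.0, (n_days, n_grid))
archive_anal = archive_fcst + rng.normal(0.0, 1.0, (n_days, n_grid))

def analog_ensemble(today_fcst, n_analogs=75):
    """Return the analyzed fields from the n_analogs closest past forecasts."""
    rmse = np.sqrt(np.mean((archive_fcst - today_fcst) ** 2, axis=1))
    idx = np.argsort(rmse)[:n_analogs]
    return archive_anal[idx]          # calibrated, downscaled ensemble

members = analog_ensemble(archive_fcst[0])
```

Because the returned members are analyzed fields, the analog ensemble inherits the analysis resolution, which is where the statistical downscaling comes from.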
32
Downscaled analog probability forecasts
33
Very substantial impact on improving forecast
skill: removes state-dependent bias, improves
reliability, permits statistical downscaling.
Verified over 25 years of forecasts; skill scores
use the conventional method of calculation, which
may overestimate skill (Hamill and Juras, 2006).
34
Linkages where WGNE can help
35
HEPEX: an international, cooperative project to
advance ensemble hydrologic predictions. Needs
close linkage with global ensemble-prediction
data-sharing efforts.
36
Example: 1-2 day lead hydrologic forecast for a
basin in Northern Italy
Hydrologic model forced with multi-model weather
ensemble data.
Skill of the hydrologic forecast is tied to the
skill of the precipitation/temperature forecasts.
Here, all forecasts missed the timing of the
rainfall event, so the subsequent hydrologic
forecasts missed the event. Reservoir regulation
and the hydrologic model may also have had effects.
Source: "A meteo-hydrological prediction system
based on a multi-model approach for ensemble
precipitation forecasting," Tommaso Diomede et
al., ARPA-SIM, Bologna, Italy.
37
HEPEX's envisioned Ensemble Hydrological
Prediction System
WMO's hydrological program is not well integrated
with THORPEX, WWRP, WCRP. A WGNE recommendation
would be helpful.
38
Ensemble forecast verification
  • Verification plan drafted under THORPEX/TIGGE,
    but little follow-through.
  • Need:
  • Observations shared, readily accessible.
  • Sensible weather elements, not just Z500.
  • Defined standards on a routine set of
    verification metrics:
  • Against own analyses?
  • Against observations?
  • Dealing with different grid spacings, etc.
  • Develop and freely disseminate a toolbox of
    routines, display packages.
  • Either centralized production of verification
  • A WGNE recommendation to boost the prominence of
    cooperation on ensemble verification would be
    helpful.
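One routine metric such a toolbox would include is the rank histogram. A minimal sketch with a synthetic, statistically reliable ensemble (observations drawn from the same distribution as the members, so the histogram comes out roughly flat):

```python
import numpy as np

rng = np.random.default_rng(5)

def rank_histogram(ensemble, obs):
    """Counts of the rank of each observation within its sorted ensemble.

    ensemble : (n_cases, n_members), obs : (n_cases,)
    A flat histogram indicates a statistically reliable ensemble;
    U-shaped indicates under-dispersion, peaked indicates over-dispersion.
    """
    n_members = ensemble.shape[1]
    ranks = np.sum(ensemble < obs[:, None], axis=1)   # rank 0 .. n_members
    return np.bincount(ranks, minlength=n_members + 1)

# Synthetic reliable case: obs and members share the same distribution.
ens = rng.normal(0.0, 1.0, (5000, 10))
obs = rng.normal(0.0, 1.0, 5000)
counts = rank_histogram(ens, obs)
```

The "outliers" quoted on the SSBS slide correspond to the two extreme bins of this histogram.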