Title: 2002 MM5 36 km Model Evaluation
1. 2002 MM5 36 km Model Evaluation
- Ralph Morris, Sue Kemball-Cook, Yiqin Jia and Chris Emery
- ENVIRON International Corporation, Novato, CA
- (rmorris@environcorp.com)
- Zion Wang
- UCR CE-CERT
- WRAP Regional Modeling Center Workshop
- Tempe, Arizona
- January 28-29, 2004
2. 2002 36 km MM5 Evaluation
- Use existing IA/WI 2002 36 km MM5 Set Up
- National RPO 36 km Grid
- Lambert Conformal Projection (see the projection sketch after this list)
- 164 x 128 x 34
- Invoke Reisner2 w/ Mixed Ice Physics
- Evaluation Methodology
- Synoptic Evaluation
- Statistical Evaluation using METSTAT and surface data (WS, WD, T, RH)
- Evaluation against upper-air met obs
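As an illustration only, the sketch below builds the Lambert Conformal projection with pyproj using the parameters commonly quoted for the national RPO grid (standard parallels 33N/45N, center 40N/97W, spherical earth of radius 6370 km); these values and the example station are assumptions, not taken from the slides, and should be checked against the actual MM5 TERRAIN configuration.

    # Minimal sketch of the national RPO Lambert Conformal projection, assuming
    # commonly used parameters; verify against the actual MM5 TERRAIN setup.
    from pyproj import Proj

    rpo_lcc = Proj(proj="lcc", lat_1=33.0, lat_2=45.0, lat_0=40.0, lon_0=-97.0,
                   a=6370000.0, b=6370000.0)

    dx = 36000.0              # 36 km grid spacing (meters)
    nx, ny, nz = 164, 128, 34 # domain dimensions from the slide

    # Convert a surface station location (Denver, as an example) to projection
    # coordinates, then to fractional grid cells from the projection origin.
    x, y = rpo_lcc(-104.99, 39.74)
    print(f"x = {x/1000:.1f} km, y = {y/1000:.1f} km, "
          f"i = {x/dx:+.1f}, j = {y/dx:+.1f} cells from the projection origin")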
3. METSTAT Evaluation Package
- Average observed and predicted
- Absolute Bias and Error
- RMSE
- Index of Agreement (IOA) (metric definitions sketched after this list)
- Daily and, where appropriate, Hourly Evaluation
- Statistical Performance Benchmarks
- Based on an analysis of more than 30 MM5 and RAMS runs
- Not meant as a pass/fail test, but to put modeling results in the proper perspective
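For reference, a minimal sketch of the listed statistics follows, written from their standard definitions (mean bias, gross error, RMSE, Willmott index of agreement) rather than from the METSTAT code itself; the wind-direction helper and the example numbers are illustrative assumptions.

    # Sketch of the surface statistics METSTAT reports, from their standard
    # definitions rather than the METSTAT source code.
    import numpy as np

    def bias(pred, obs):
        return np.mean(pred - obs)

    def gross_error(pred, obs):
        return np.mean(np.abs(pred - obs))

    def rmse(pred, obs):
        return np.sqrt(np.mean((pred - obs) ** 2))

    def ioa(pred, obs):
        """Index of Agreement (Willmott): 1 = perfect, 0 = no skill."""
        obar = np.mean(obs)
        denom = np.sum((np.abs(pred - obar) + np.abs(obs - obar)) ** 2)
        return 1.0 - np.sum((pred - obs) ** 2) / denom

    def wd_difference(pred_deg, obs_deg):
        """Signed wind-direction difference wrapped to [-180, 180) degrees."""
        return (pred_deg - obs_deg + 180.0) % 360.0 - 180.0

    # Example: hourly paired values for one station and one day (hypothetical)
    obs = np.array([2.1, 3.4, 4.0, 2.8])
    mod = np.array([1.8, 2.9, 3.1, 2.5])
    print(bias(mod, obs), gross_error(mod, obs), rmse(mod, obs), ioa(mod, obs))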
4. Subdomains for Model Evaluation
Subdomains: 1 Pacific NW, 2 SW, 3 North, 4 Desert SW, 5 CenrapN, 6 CenrapS, 7 Great Lakes, 8 Ohio Valley, 9 SE, 10 NE, 11 MidAtlantic
5. Datasets for Met Evaluation
- NCAR dataset ds472 airport surface met observations
- Twice-Daily Upper-Air Profile Obs (120 in US)
- Temperature
- Moisture
6. Example MM5 Performance Plots
- Scatter plots of performance metrics
- Include box for benchmark (plotting sketch after this list)
- Include historical MM5/RAMS simulation results
- WS RMSE vs. WD Gross Error
- Temperature Bias vs. Temperature Error
- Humidity Bias vs. Humidity Error
- Analysis by Month
- Examples for
- January
- March
- July
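A rough idea of how such a scatter plot with a benchmark box could be drawn is sketched below; the benchmark values used (2 m/s WS RMSE, 30 deg WD gross error) are commonly cited MM5/RAMS benchmarks and, like the plotted numbers, are assumptions for illustration rather than values read off the slides.

    # Sketch of a WS RMSE vs. WD gross error scatter plot with a benchmark box,
    # in the style of the slides; benchmark limits and data are assumed values.
    import matplotlib.pyplot as plt
    import matplotlib.patches as patches

    # Hypothetical daily statistics for one subdomain and month
    ws_rmse = [1.6, 2.3, 1.9, 2.8, 2.1]              # m/s
    wd_gross_error = [22.0, 35.0, 28.0, 41.0, 30.0]  # degrees

    fig, ax = plt.subplots()
    ax.add_patch(patches.Rectangle((0, 0), 2.0, 30.0, fill=False,
                                   linestyle="--", label="benchmark"))
    ax.scatter(ws_rmse, wd_gross_error, label="daily statistics")
    ax.set_xlabel("Wind speed RMSE (m/s)")
    ax.set_ylabel("Wind direction gross error (deg)")
    ax.legend()
    plt.show()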
7. January 2002 36 km MM5 Wind Performance
Performance Issues in WRAP Subdomains
8. Wind Performance in North Subdomain
Wind Speed Underprediction Bias
9. Wind Performance SW Region Jan 2002
Positive Wind Direction Bias
10. January 2002 36 km MM5 Temp Performance
Pacific NW has a cold temperature bias
11. Temp Performance, Pacific NW, Jan 2002
Cold bias due to underestimation of daily maximum temperatures and of warmer episode periods (e.g., 1/7, 1/21, 1/25)
12. January 2002 36 km MM5 Humidity Performance
13. March 2002 36 km MM5 Wind Performance
Same WRAP subdomains with performance issues
14. Wind Performance PacificNW Region Mar 2002
15. March 2002 36 km MM5 Temp Performance
PacificNW and DesertSW lie outside of benchmarks
16. March 2002 36 km MM5 Humidity Performance
Overall, WRAP Subdomains indicate a wet cold bias
17. July 2002 36 km MM5 Wind Performance
Many subdomains fall outside of benchmarks: WS too low in DesertSW, North, and SW; positive WD bias in North, PacNW, and DesertSW
18. Wind Performance DesertSW July 2002
Severe Wind Speed Underprediction Bias; Slight Positive Wind Direction Bias
19. July 2002 36 km MM5 Temp Performance
WRAP subdomains show a cold bias in July
20. Temp Performance DesertSW July 2002
Cold temperature bias, especially in the afternoons. Afternoon maximum temperature underestimated by 3-6 degrees C throughout July 2002 (see the daily-maximum sketch below).
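A minimal sketch of how that daily-maximum underestimate could be computed from hourly paired station data is shown below; the DataFrame layout (obs_temp/mod_temp columns on a datetime index) and the synthetic values are assumptions, not a METSTAT output format.

    # Sketch: daily (modeled - observed) bias in the daily maximum 2 m temperature.
    import pandas as pd

    def daily_max_temp_bias(df: pd.DataFrame) -> pd.Series:
        """Daily modeled-minus-observed bias of the daily maximum temperature (deg C)."""
        daily_max = df[["obs_temp", "mod_temp"]].resample("D").max()
        return daily_max["mod_temp"] - daily_max["obs_temp"]

    # Example with two synthetic days of hourly data
    idx = pd.date_range("2002-07-01", periods=48, freq="h")
    df = pd.DataFrame({"obs_temp": 30.0, "mod_temp": 26.5}, index=idx)
    print(daily_max_temp_bias(df))  # -3.5 deg C for each day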
21. Temp Performance Pacific NW July 2002
22. July 2002 36 km MM5 Humidity Performance
Reason for large positive humidity bias in DesertSW subdomain unclear
23. Humidity Performance DesertSW July 2002
Severe Humidity Overestimation Bias: MM5 overstates the Summer Monsoon in the 2002 Desert Southwest
24. Humidity Performance Pacific NW July 2002
25. Months/Subdomains Where MM5 Exceeds Benchmarks
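One way such an exceedance summary could be assembled is sketched below; the benchmark thresholds are commonly cited values and, together with the example statistics, are assumptions rather than numbers taken from this slide.

    # Sketch of flagging which metrics exceed their benchmarks for a given
    # month/subdomain; thresholds are assumed, commonly cited values.
    BENCHMARKS = {
        "ws_rmse": 2.0,         # m/s
        "wd_gross_error": 30.0, # degrees
        "t_gross_error": 2.0,   # K
        "q_gross_error": 2.0,   # g/kg
    }

    def exceedances(monthly_stats: dict) -> list:
        """Return the metrics whose monthly value exceeds its benchmark."""
        return [m for m, value in monthly_stats.items()
                if m in BENCHMARKS and value > BENCHMARKS[m]]

    # Hypothetical July 2002 DesertSW statistics
    print(exceedances({"ws_rmse": 2.6, "wd_gross_error": 24.0,
                       "t_gross_error": 3.1, "q_gross_error": 2.4}))
    # -> ['ws_rmse', 't_gross_error', 'q_gross_error']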
26. Summary: 2002 MM5 Model Performance
- MM5 does a better job in Central and Eastern US
- General cool moist bias in Western US
- Difficulty with Western US orography with 36 km grid?
- May get better performance with higher resolution
- Pleim-Xiu scheme optimized more for eastern US?
- More optimization needed for desert and rocky ground?
- MM5 performs better in winter than in summer
- In summer, forcing from mid-latitude weather systems is weaker, with the diurnal cycle of solar radiation being the main driver
27. Summary: 2002 MM5 Model Performance
- Western US temperature diurnal cycle amplitude is underestimated in summer
- Occurs in tandem with too-wet surface humidity
- At least for January and July 2002, subdomains that fail to meet wind performance benchmarks generally have a low bias in the wind speeds
- Most statistical measures within benchmarks of past applications
- In Desert SW, temperature underestimation and humidity overestimation bias suggest MM5 overstates summer monsoon effects
28. Comparisons of Upper-Air Soundings
- Model able to simulate temperature profile more accurately than dew point profile, which is smoother than observed (see the profile-comparison sketch after this list)
- Partly due to coarse resolution?
- MM5 has more difficulty predicting temp/dew point in the PBL than above the PBL
- Not surprising given the nudging approach
- Model performs better at 00Z (4 pm PST) than at 12Z (4 am PST)
- MM5 has an easier time simulating the fully developed convective boundary layer than the nocturnal boundary layer
- MM5 frequently does not match surface pressure
- May be a resolution issue
- MM5 overestimates how close the lower troposphere is to saturation
- Overstates cloudiness
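A minimal sketch of the profile comparison behind these points is given below: the MM5 profile is interpolated to the observed sounding pressure levels and differenced. The pressure and temperature arrays are hypothetical placeholders for the raob data and the MM5 profile extracted at the sounding site.

    # Sketch: interpolate a model profile to observed sounding levels and
    # compute model-minus-observed errors (temperature shown; dew point is analogous).
    import numpy as np

    def profile_error(p_obs, v_obs, p_mod, v_mod):
        """Interpolate model values to observed pressure levels and return mod - obs.

        np.interp needs increasing x, so interpolate on the reversed (increasing)
        pressure arrays and reverse the result back.
        """
        v_mod_at_obs = np.interp(p_obs[::-1], p_mod[::-1], v_mod[::-1])[::-1]
        return v_mod_at_obs - v_obs

    p_obs = np.array([1000.0, 925.0, 850.0, 700.0, 500.0])   # hPa (hypothetical)
    t_obs = np.array([12.0, 8.0, 4.0, -5.0, -22.0])          # deg C
    p_mod = np.array([1005.0, 950.0, 900.0, 800.0, 600.0, 500.0])
    t_mod = np.array([11.0, 8.5, 5.0, -1.0, -13.0, -21.0])

    print(profile_error(p_obs, t_obs, p_mod, t_mod))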
29. Example of MM5 Modeling Smoother Dew Point Profiles Than Observed
Midland AFB, TX, January 7, 2002 12Z (6 am LST); MM5 red, observations black. Shallow nocturnal inversion not captured by MM5.
30. Example of Better MM5 Performance Above the PBL Than Within It
North Platte, NE, January 7, 2002 12Z (6 am LST); nocturnal inversion not captured. MM5 red, observations black; temperature on right, dew point on left.
31. Example of Better MM5 Performance at 00Z (left, 4 pm LST) Than at 12Z (right, 4 am LST), Spokane, WA
32. Example of Upper-Air Positive WD and Low WS Bias (as seen in METSTAT surface analysis)
Oakland, CA, January 7, 2002 12Z (4 am LST). Red MM5 flags show a stronger easterly wind component and fewer barbs than the black observed flags.
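For illustration, the sketch below plots observed and modeled winds as barbs against pressure, the kind of comparison the caption describes; the u/v values (in knots) and the simple two-column layout are hypothetical, not the Oakland sounding data.

    # Sketch: observed vs. modeled wind barbs plotted against pressure.
    import numpy as np
    import matplotlib.pyplot as plt

    p = np.array([1000, 925, 850, 700, 500])                       # hPa
    u_obs, v_obs = np.array([5, 8, 12, 20, 35]), np.array([2, 3, 5, 8, 10])
    u_mod, v_mod = np.array([-3, 2, 7, 15, 30]), np.array([1, 2, 4, 7, 9])

    fig, ax = plt.subplots()
    ax.barbs(np.zeros_like(p), p, u_obs, v_obs, barbcolor="black")  # observed
    ax.barbs(np.ones_like(p), p, u_mod, v_mod, barbcolor="red")     # MM5
    ax.set_xlim(-1, 2)
    ax.set_xticks([0, 1])
    ax.set_xticklabels(["observed", "MM5"])
    ax.set_ylim(1050, 450)            # pressure decreasing upward
    ax.set_ylabel("Pressure (hPa)")
    plt.show()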
33. Example of MM5 Overstating the Saturation Level Relative to Observations
Key West, FL, January 7, 2002 12Z (8 am LST). Near the surface, the MM5 temperature and dew point come together, indicating saturation, whereas the observed values stay apart.
34. Spatial Distribution of Upper-Air Met Fields: 500 mb Heights
- Observed vs. Predicted, January 4, 2002 at 00Z
- Reasonable agreement, not surprising given nudging above PBL
35. Spatial Distribution of Upper-Air Met Fields: 500 mb Heights
- Observed vs. Predicted, July 2, 2002 at 00Z
- Reasonable agreement, not surprising given nudging above PBL
36. Comparison of GOES Visible Satellite Image and MM5-Estimated Low Cloud Fractions, July 21, 2002 18Z
37. Comparison of GOES Infrared Satellite Image and MM5-Estimated Middle and High Cloud Fractions, July 21, 2002 18Z
38. Evaluation of the 2002 MM5 36 km Simulation: Preliminary Conclusions
- Surface temperature and humidity performance falls within benchmarks for much of the year and most subdomains
- Model has a marked cold, wet bias, especially in the west
- Surface winds are less accurate and fail to meet benchmarks for the entire year in some subdomains
- PacificNW, North and DesertSW
- Low WS and positive WD bias also reflected in upper-air evaluation
- Orographic effects may not be simulated correctly using the 36 km grid
- Pleim-Xiu may not be optimized for drier conditions and different land use categories in the western US
39. Evaluation of the 2002 MM5 36 km Simulation: Preliminary Conclusions
- MM5 performs better in winter than in summer
- Weaker large-scale forcing in summer
- Model fails to capture daily maximum temperature
- May be related to wet bias
- MM5 has difficulty getting the PBL structure right, especially the nocturnal PBL height
- May be important for AQ modeling
- Dew point performance issues raise questions on whether clouds will be formed at the right place and time
- Affects solar radiation and aqueous-phase chemistry
40. Preliminary Recommendations: 2002 MM5 Modeling for WRAP
- Run MM5 PX for July and January 2002 using a 12 km grid to determine whether higher resolution improves model performance
- If performance issues persist, may want to consider sensitivity tests
- LSM Scheme
- PBL Scheme
- Nudging Data and Assumptions
- Other