1
Regional Haze Modeling: Recent Modeling Results
for VISTAS and WRAP
University of California, Riverside
October 27, 2003, CMAS Annual Meeting, RTP, NC
2
Modeling Team Participants
  • UC Riverside: Gail Tonnesen, Zion Wang, Chao-Jung
    Chien, Mohammad Omary, Bo Wang
  • Ralph Morris et al., ENVIRON Corporation
  • Zac Adelman et al., Carolina Environmental
    Program
  • Tom Tesche et al., Alpine Geophysics
  • Don Olerud, BAMS

3
Acknowledgments
  • Western Regional Air Partnership: John Vimont,
    Mary Uhl, Kevin Briggs, Tom Moore
  • VISTAS: Pat Brewer, Jim Boylan, Shiela Holman

4
Topics
  • Model Performance Evaluation
  • WRAP 1996 Model Performance Evaluation
  • VISTAS 2002 Sensitivity Results
  • CMAQ Benchmarks

5
WRAP Modeling
  • 1996 Annual Modeling
  • 36 km grid for the western US, 95 x 85 grid cells,
    18 vertical layers
  • MM5 by Olerud et al.

6
WRAP Emissions Updates
  • Corrections to point sources
  • MOBILE6 beta for WRAP states
  • Monthly corrections for NH3 based on EPA/ORD
    inverse modeling.
  • Updated non-road model
  • Typical fires used for results shown here
  • 1996 NEI for non-WRAP states

7
WRAP - CMAQ revisions
  • v0301, released in March 2001
  • Used as the base case and for all sensitivity cases
    in WRAP's Section 309 simulations.
  • v0602, released in June 2002
  • v4.2.2, released in March 2003
  • v4.3, released in Sept. 2003

8
Comparisons based on IMPROVE evaluation
9
Model Performance Metrics
  • How well does the model reproduce the mean, modal,
    and variational characteristics of the observations?
  • Using observations to normalize model error and
    bias can lead to misleading conclusions (see the
    sketch below)
  • if the observation is very small, the normalized
    bias or error becomes very large
  • model underprediction is bounded below by -100%
  • so model overprediction is weighted more heavily
    than underprediction
  • We used Mean Normalized Error and Bias in the
    Section 309 work
  • This is a poor metric for clean conditions
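As a small illustration of this pitfall, here is a Python sketch with made-up concentrations (not values from this study): a single large overprediction at a very clean site dominates the mean normalized bias, while the fractional bias stays within its symmetric bounds.

def mean_normalized_bias(pred, obs):
    # MNB: average of (P - O) / O; unbounded above, bounded below by -1
    return sum((p - o) / o for p, o in zip(pred, obs)) / len(obs)

def mean_fractional_bias(pred, obs):
    # MFB: average of 2 * (P - O) / (P + O); bounded by +/-2
    return sum(2.0 * (p - o) / (p + o) for p, o in zip(pred, obs)) / len(obs)

# Hypothetical paired values (ug/m3); the 0.1 -> 1.0 pair mimics a clean site.
obs  = [0.1, 0.1, 5.0, 5.0]
pred = [1.0, 0.05, 6.0, 4.0]

print(f"MNB = {mean_normalized_bias(pred, obs):+.2f}")  # about +2.13, driven by one clean-site pair
print(f"MFB = {mean_fractional_bias(pred, obs):+.2f}")  # about +0.23, within the +/-2 limits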

10
Recommended Performance Metrics
  • Use fractional error and bias
  • fractional bias is bounded by symmetric limits of
    ±2; fractional error is bounded by 2
  • Normalized Mean Error and Bias
  • divide the sum of the errors (or biases) by the sum
    of the observations
  • Coefficient of determination (R2)
  • describes how much of the variability in the model
    predictions is explained by their relationship to
    the ambient observations, i.e. how closely the
    predictions track the observations

11
Statistical measures used in model performance
evaluation
Measure, mathematical expression, and notation:
  • Accuracy of unpaired peak (Au): (Pu,peak - Opeak) / Opeak, where Opeak
    is the peak observation and Pu,peak is the unpaired peak prediction
    within 2 grid cells of the peak observation site
  • Accuracy of paired peak (Ap): (P - Opeak) / Opeak, where P is the peak
    prediction paired in time and space with the peak observation
  • Coefficient of determination (R2):
    [Σ(Pi - avg(P))(Oi - avg(O))]² / [Σ(Pi - avg(P))² · Σ(Oi - avg(O))²],
    where Pi and Oi are the prediction and observation at time and
    location i, and avg(P), avg(O) are their arithmetic averages over
    i = 1, 2, ..., N
  • Normalized Mean Error (NME): Σ|Pi - Oi| / ΣOi, reported as %
  • Root Mean Square Error (RMSE): sqrt[(1/N) Σ(Pi - Oi)²]
  • Fractional Gross Error (FE): (2/N) Σ|Pi - Oi| / (Pi + Oi), reported as %
12
Statistical measures used in model performance
evaluation
Measure, mathematical expression, and notation:
  • Mean Absolute Gross Error (MAGE): (1/N) Σ|Pi - Oi|
  • Mean Normalized Gross Error (MNGE), also called Mean Normalized Error
    (MNE): (1/N) Σ|Pi - Oi| / Oi, reported as %
  • Mean Bias (MB): (1/N) Σ(Pi - Oi)
  • Mean Normalized Bias (MNB): (1/N) Σ(Pi - Oi) / Oi, reported as %
  • Mean Fractionalized Bias (Fractional Bias, MFB):
    (2/N) Σ(Pi - Oi) / (Pi + Oi), reported as %
  • Normalized Mean Bias (NMB): Σ(Pi - Oi) / ΣOi, reported as %
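As a minimal sketch (plain Python, not the UCR evaluation software itself), the paired measures tabulated on the two slides above can be computed from prediction/observation lists as follows; normalized and fractional quantities are returned in percent.

import math

def performance_metrics(pred, obs):
    """Statistical measures for paired predictions Pi and observations Oi."""
    n = len(obs)
    diff = [p - o for p, o in zip(pred, obs)]
    p_bar = sum(pred) / n
    o_bar = sum(obs) / n

    # Coefficient of determination (squared correlation of the paired values)
    cov = sum((p - p_bar) * (o - o_bar) for p, o in zip(pred, obs))
    var_p = sum((p - p_bar) ** 2 for p in pred)
    var_o = sum((o - o_bar) ** 2 for o in obs)

    return {
        "MAGE": sum(abs(d) for d in diff) / n,
        "MB":   sum(diff) / n,
        "RMSE": math.sqrt(sum(d * d for d in diff) / n),
        "MNGE": 100.0 * sum(abs(d) / o for d, o in zip(diff, obs)) / n,
        "MNB":  100.0 * sum(d / o for d, o in zip(diff, obs)) / n,
        "NME":  100.0 * sum(abs(d) for d in diff) / sum(obs),
        "NMB":  100.0 * sum(diff) / sum(obs),
        "FE":   100.0 * sum(2 * abs(d) / (p + o) for d, p, o in zip(diff, pred, obs)) / n,
        "MFB":  100.0 * sum(2 * d / (p + o) for d, p, o in zip(diff, pred, obs)) / n,
        "R2":   cov ** 2 / (var_p * var_o),
    }

# Example with hypothetical 24-hour sulfate values (ug/m3)
print(performance_metrics(pred=[2.1, 0.8, 4.0], obs=[1.8, 1.2, 3.5]))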
13
Statistical measures used in model performance
evaluation
  • In addition
  • Mean observation
  • Mean prediction
  • Standard deviation (SD) of observation
  • Standard deviation (SD) of prediction
  • Correlation variance

14
  • Expanded the Model Evaluation Software to include
  • Ambient data evaluation for the air quality
    monitoring networks
  • IMPROVE (24-hour average PM)
  • CASTNet (weekly average PM and gas)
  • STN (24-hour average PM)
  • AQS (hourly gas)
  • NADP (weekly total deposition)
  • SEARCH
  • 17 statistical measures in the model performance
    evaluation
  • All performance metrics can be analyzed in an
    automated process for the selected model and data
    at six aggregation levels (see the sketch below)
  • allsite_daily, allsite_monthly, allsite_yearly
  • onesite_daily, onesite_monthly, onesite_yearly
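A hedged sketch of how the all-site / one-site aggregation levels might be driven with pandas; the paired table, its column names, and the site codes below are hypothetical stand-ins, not the UCR tool's actual interface.

import pandas as pd

# Hypothetical paired model/observation records: one row per site, date, species.
paired = pd.DataFrame({
    "site":    ["SITE1", "SITE1", "SITE2", "SITE2"],
    "date":    pd.to_datetime(["1996-07-03", "1996-07-06", "1996-07-03", "1996-07-06"]),
    "species": ["SO4"] * 4,
    "obs":     [1.8, 2.4, 1.1, 0.9],
    "model":   [2.1, 2.0, 1.5, 1.2],
})

def nmb(group):
    # Normalized Mean Bias (%) over one group of paired records
    return 100.0 * (group["model"] - group["obs"]).sum() / group["obs"].sum()

# allsite_daily: one statistic per day, pooling all sites
allsite_daily = paired.groupby("date").apply(nmb)

# onesite_monthly: one statistic per site and month
onesite_monthly = paired.groupby(["site", paired["date"].dt.to_period("M")]).apply(nmb)

print(allsite_daily)
print(onesite_monthly)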

15
Community Model Evaluation Tool?
  • Facilitate model evaluation.
  • Benefit from shared development of tool.
  • Share monitoring data.
  • UCR software available at website
  • www.cert.ucr.edu/aqm

16
WRAP 1996 Evaluation, CMAQ v4.3
17
WRAP 1996 Evaluation, CMAQ v4.3
18
WRAP 1996 Evaluation, CMAQ v4.3
19
WRAP 1996 Evaluation, CMAQ v4.3
20
WRAP 1996 cases in progress
  • New fugitive dust emissions model
  • New NH3 emissions model
  • Actual Prescribed Ag burning emissions
  • 2002 annual simulations being developed.

21
VISTAS Model 12 km Domain
  • 34-layer MM5 by Olerud
  • 1999 NEI
  • CMAQ v3

22
VISTAS Sensitivity Cases
  • 3 Episodes Jan 2002, July 1999, July 2001
  • Sensitivity Cases
  • MM5 MRF and ETA-MY,
  • PBL height, Kz_min, Layer collapsing
  • CB4-2002
  • SAPRC99
  • CMAQ-AIM
  • GEOS-CHEM for BC
  • NH3 emissions

23
VISTAS Key Findings
  • NO3 overpredicted in winter, underpredicted in
    summer
  • Thornton et al. N2O5 update had a small benefit:
    July MNB improved from -50% to -45%
  • SO4 performance reasonably good
  • Problems with PBL height
  • Kz_min = 1 improved performance
  • Investigating PBL height corrections
  • Minor differences between 19 and 34 layers

24
Benchmarks
  • Athlon MP 2000 (1.66 GHz)
  • Opteron 246 (2.0 GHz)
  • 32 bit code
  • 64 bit code
  • Compare 1, 4 and 8 CPUs.
  • Ported CMAQ to 64-bit SuSE Linux
  • Pointer and memory-allocation changes for 64 bit

25
Test Case for benchmarks
  • VISTAS 12 km domain
  • 168 x 177 grid cells, 19 vertical layers
  • Benchmarks for CMAQ 4.3
  • One day simulation, CB4, MEBI
  • Single-CPU run times (hours:minutes; see the
    speedup sketch below)
  • Athlon 2 GHz: 14:10
  • Opteron 32-bit, 2 GHz: 12:49
  • Opteron 64-bit, 2 GHz: 10:57
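Reading those times as hours:minutes, a quick back-of-the-envelope speedup comparison (plain Python, values copied from the slide above):

# Single-CPU run times interpreted as (hours, minutes)
runtimes = {
    "Athlon 2 GHz":          (14, 10),
    "Opteron 32-bit, 2 GHz": (12, 49),
    "Opteron 64-bit, 2 GHz": (10, 57),
}

minutes = {name: h * 60 + m for name, (h, m) in runtimes.items()}
baseline = minutes["Athlon 2 GHz"]

for name, t in minutes.items():
    print(f"{name}: {t} min, speedup vs Athlon = {baseline / t:.2f}x")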

26
(No Transcript)
27
(No Transcript)
28
(No Transcript)
29
Optimal Cost Configuration
  • Small cluster (< 8 CPUs): use Athlon
  • Large cluster (> 16 CPUs): use Opterons?

30
Conclusions
  • Major improvements in the WRAP 1996 model
  • WRAP 2002 annual modeling underway
  • VISTAS Sensitivity Studies
  • still have problems with NO3
  • need a better NH3 inventory
  • need more attention to PBL heights in MM5
  • Community model evaluation tool?