1. How reliable are ONS indicators?
- Paul Smith
- Methodology Group
- Office for National Statistics
2. Outline
- What is reliability?
- Quality attributes
- Designing in quality
- Examples
- trade-offs of quality attributes
- trade-offs of quality and cost
- Conclusion
3. What is reliability?
- Resists numerical definition
- Fitness for purpose
- Purpose is defined by the user
- Many statistics have multiple purposes
4. Quality Attributes
- ONS concentrating on quality measures
- National Statistics quality measurement and reporting framework
- Relevance
- Completeness
- Timeliness
- Accessibility
- Accuracy
- Comparability
- Coherence
- Derived from Eurostat quality framework
5. Relevance
- Statistical concepts should be those that users need
- Concepts
- international standards (e.g. ILO unemployment)
- How do we know what users need?
- triennial reviews, NS quality reviews, user groups
- the same (only more)
6. Completeness
- Domains for which estimates are available should reflect users' needs
- industrial classification domains
- SIC2003
- regional domains less well served
- Allsopp review
- may need survey redesigns
- microdata: the ultimate freedom for defining domains
7. Timeliness
- How soon after the reference period are results produced?
- what other attributes of quality are affected by speed?
- Is the frequency right for decisions based on the results?
- Dates pre-announced
8. Accessibility and clarity
- Data clearly presented
- Accessible in a variety of formats
- paper
- web
- electronic
- Unpublished data available on request
- Metadata
- Assistance for users
9. Accuracy
- Sampling errors
- random sampling (see the sketch after this list)
- Non-sampling errors
- non-response error
- coverage error
- measurement error
- processing error
- modelling errors
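A minimal sketch of the first of these: estimating a standard error under simple random sampling without replacement. The data, function name and population size are all invented for illustration; real ONS surveys use more complex designs.

```python
import math

def srs_standard_error(sample, population_size):
    """Estimated standard error of the sample mean under simple
    random sampling without replacement, including the finite
    population correction. Illustrative only."""
    n = len(sample)
    mean = sum(sample) / n
    s2 = sum((y - mean) ** 2 for y in sample) / (n - 1)
    fpc = 1 - n / population_size
    return math.sqrt(fpc * s2 / n)

# Hypothetical data: turnover (in thousands) for 10 sampled
# businesses drawn from a register of 500.
sample = [120, 95, 210, 80, 150, 99, 175, 60, 130, 110]
print(srs_standard_error(sample, population_size=500))
```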
10. Accuracy (2)
- non-response error
- ideal: measure bias due to non-response
- indicator: response rates by questionnaire/activity (see the sketch below)
- coverage error
- ideal: measure bias due to under-/over-coverage
- requires supplementary survey
- measurement error
- ideal: measure bias due to measurement error
- indicator: questionnaire problems, proxy response rate
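As a small illustration of the response-rate indicator mentioned above, a sketch that computes response rates by questionnaire type; the record layout and field names are hypothetical.

```python
from collections import Counter

def response_rates(records):
    """Response rate by questionnaire type: responders / eligible.
    'records' is a list of (questionnaire, responded) pairs; the
    layout is made up for illustration."""
    eligible = Counter(q for q, _ in records)
    responded = Counter(q for q, r in records if r)
    return {q: responded[q] / eligible[q] for q in eligible}

records = [("short", True), ("short", False), ("long", True),
           ("long", True), ("long", False), ("short", True)]
print(response_rates(records))
```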
11. Accuracy (3)
- processing error
- ideal: measure error due to poor processing (miskeys, bad scans etc.). Can measure this with some effort!
- modelling errors
- ideal: how sensitive are the results to the choice of model/methods? (see the sketch below)
- Do the models reflect the data structures (are model assumptions valid)?
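To make the model-sensitivity question concrete, a sketch comparing two common estimators on the same made-up sample: a ratio estimator, which assumes turnover is roughly proportional to an auxiliary size variable, and a simple expansion estimator, which assumes nothing. The gap between them is one crude indicator of sensitivity to the choice of model.

```python
def ratio_estimate(y, x, x_total):
    """Ratio estimator: assumes y is roughly proportional to x,
    and uses the known register total of x."""
    return x_total * sum(y) / sum(x)

def expansion_estimate(y, pop_size, n):
    """Simple expansion estimator: no auxiliary model."""
    return pop_size * sum(y) / n

# Hypothetical: turnover y for 5 sampled firms, register
# employment x, with invented register totals.
y = [50, 80, 30, 120, 60]
x = [10, 15, 6, 25, 12]
print(ratio_estimate(y, x, x_total=1500))        # 7500.0
print(expansion_estimate(y, pop_size=100, n=5))  # 6800.0
```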
12. Comparability...
- across space and time
- Consistent approach and methods in different areas/regions
- Breaks in time series accompanied by
- description of change
- estimate of effect of change
- clear labelling of change
- back series estimated on a consistent basis
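One simple way to put a back series on a consistent basis is ratio linking at the break: scale the old series so the two agree in an overlap period. A sketch under that assumption; actual re-basing practice varies by series.

```python
def link_back_series(old_series, new_series, link_period):
    """Ratio-link an old series onto the basis of a new one:
    scale old values by the ratio of the two series in the
    overlap period. A common, simple splicing approach."""
    factor = new_series[link_period] / old_series[link_period]
    linked = {t: v * factor for t, v in old_series.items()
              if t < link_period}
    linked.update(new_series)
    return linked

old = {2000: 98.0, 2001: 101.0, 2002: 103.0}   # invented values
new = {2002: 100.0, 2003: 102.5}
print(link_back_series(old, new, link_period=2002))
```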
13. Coherence
- Coherence is promoted by common
- definitions
- classifications
- methods
- sources
- Do different sources tell a similar story?
- National Accounts balancing
- Changes from provisional to final estimates unbiased (see the sketch below)
- Different periodicities
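A minimal sketch of checking the "revisions unbiased" point: compute a one-sample t statistic for the mean provisional-to-final revision. A mean revision near zero is consistent with unbiasedness. The figures are invented.

```python
import math

def revision_bias_t(provisional, final):
    """One-sample t statistic for the mean revision
    (final minus provisional). Illustrative check only."""
    revs = [f - p for p, f in zip(provisional, final)]
    n = len(revs)
    mean = sum(revs) / n
    s2 = sum((r - mean) ** 2 for r in revs) / (n - 1)
    return mean / math.sqrt(s2 / n)

prov = [0.4, 0.3, 0.6, 0.2, 0.5, 0.4]  # hypothetical growth rates
fin = [0.5, 0.3, 0.5, 0.4, 0.6, 0.4]
print(revision_bias_t(prov, fin))
```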
14. Designing in quality
- ONS actively seeking information on how statistics are used
- NS theme groups, quality reviews
- triennial reviews
- Quality assurance procedures for all new methodology
15. Some examples
16. What do users want?
- What do users look at first when new data are published?
- Has it changed from last time?
- Decisions usually based on changes in statistics
- has policy X had an effect?
- have we reached a turning point in the economy?
- should we change interest rates?
17. Stability of variance estimates
- Variance estimates are often themselves variable
- What to publish? Users prefer an indicator of accuracy
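One pragmatic answer is to publish an accuracy indicator based on variance estimates smoothed over several periods, so it is steadier than the raw period-by-period values. A sketch, with an arbitrary window length and made-up data:

```python
def smoothed_cvs(estimates, variances, window=6):
    """Coefficient of variation per period, using a trailing
    moving average of the variance estimates so the published
    accuracy indicator is more stable than the raw variances.
    The window length is an arbitrary choice for illustration."""
    cvs = []
    for t, est in enumerate(estimates):
        start = max(0, t - window + 1)
        avg_var = sum(variances[start:t + 1]) / (t + 1 - start)
        cvs.append(100 * avg_var ** 0.5 / est)
    return cvs

levels = [200, 205, 198, 210, 207, 202, 211, 215]  # invented
raw_var = [9.0, 16.0, 4.0, 25.0, 6.0, 12.0, 8.0, 30.0]  # noisy
print([round(cv, 2) for cv in smoothed_cvs(levels, raw_var)])
```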
19. Non-response error and coherence vs accuracy
- LFS weighting compensates for differential non-response (see the sketch below)
- Ensures consistency with population estimates...
- ...at detailed regional level
- Can potentially adjust to many variables
- Variance made up of variability due to data and variability due to weights
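LFS weighting calibrates to population totals; the core mechanism can be sketched with iterative proportional fitting (raking), shown here on a toy sample with hypothetical margins. This illustrates the idea, not the actual LFS weighting system.

```python
def rake(weights, margins, memberships, iterations=20):
    """Iterative proportional fitting: adjust design weights so
    weighted sample totals match known population margins (for
    the LFS, things like age by sex by region). 'memberships'
    maps each margin category to the sample indices in it."""
    w = list(weights)
    for _ in range(iterations):
        for category, target in margins.items():
            idx = memberships[category]
            current = sum(w[i] for i in idx)
            for i in idx:
                w[i] *= target / current
    return w

# Toy sample of 4 people with invented population margins.
weights = [1.0, 1.0, 1.0, 1.0]
margins = {"male": 120, "female": 130, "north": 110, "south": 140}
memberships = {"male": [0, 1], "female": [2, 3],
               "north": [0, 2], "south": [1, 3]}
print([round(x, 1) for x in rake(weights, margins, memberships)])
```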
20. Existing balance
- LFS - many constraints, lots of consistency
- probably suboptimal accuracy at national level
- different impact on regional accuracy and national accuracy
- Business surveys - constraints in strata
- one constraint per stratum - accuracy within strata
- too much stratification?
- trade-off bias and variance in strata with small sample sizes
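The small-stratum trade-off can be made concrete by comparing mean squared errors: a direct stratum estimate is unbiased but has high variance when the sample is small, while a pooled estimate has lower variance but carries bias equal to the stratum-to-pool difference. A sketch with invented summary figures:

```python
def mse_choice(stratum_mean, stratum_var, pooled_mean, pooled_var):
    """Compare mean squared errors of a direct stratum estimate
    (unbiased, noisy) and a pooled estimate (steadier, biased).
    All inputs are hypothetical summary figures."""
    mse_direct = stratum_var                  # unbiased: MSE = variance
    bias = pooled_mean - stratum_mean
    mse_pooled = pooled_var + bias ** 2       # variance + squared bias
    return "pooled" if mse_pooled < mse_direct else "direct"

print(mse_choice(stratum_mean=100, stratum_var=64,
                 pooled_mean=106, pooled_var=9))  # -> "pooled"
```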
21. Constraints and variance
22. RSI - stratum minimum sample size
23. Variance of changes
- To support users, we want variances of changes
- If the population and sample stayed the same, calculating them would be easy
- but things are not so simple
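The complication is the covariance term. For estimates y_t and y_{t-1}, Var(y_t - y_{t-1}) = V_t + V_{t-1} - 2*rho*sqrt(V_t * V_{t-1}), and the correlation rho depends on how much the sample (and population) overlaps between periods, which rotation, births and deaths all disturb. A sketch with made-up inputs:

```python
import math

def variance_of_change(var_t, var_prev, corr):
    """Variance of a period-on-period change. With overlapping
    samples the two estimates are positively correlated, so the
    change is more accurate than under independent samples.
    'corr' must be estimated from the overlapping units; the
    values below are invented."""
    return var_t + var_prev - 2 * corr * math.sqrt(var_t * var_prev)

print(variance_of_change(var_t=25.0, var_prev=25.0, corr=0.0))  # 50.0
print(variance_of_change(var_t=25.0, var_prev=25.0, corr=0.8))  # 10.0
```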
24. Population and sample dynamics
25. Illustrative example: variances of changes
- Monthly production inquiry 2000 (not seasonally adjusted)
26. Sampling error for the IoP
- Variance calculation not straightforward for complex statistics
- Kokic (1998): a bootstrap method for estimating the sampling error of (the level of) the non-seasonally adjusted IoP (see the sketch below)
- Work includes the effects of the sampling variability in the estimates of
- turnover
- adjustment for inventories
- deflation
- Component sampling errors needed as an input
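The sketch below shows only the generic bootstrap idea; it is not Kokic's (1998) method, which is built around the survey design and the turnover, inventories and deflation components listed above.

```python
import random

def bootstrap_se(sample, statistic, reps=1000, seed=42):
    """Generic bootstrap standard error: resample the data with
    replacement, recompute the statistic, and take the standard
    deviation of the replicates."""
    rng = random.Random(seed)
    n = len(sample)
    rep_stats = []
    for _ in range(reps):
        resample = [sample[rng.randrange(n)] for _ in range(n)]
        rep_stats.append(statistic(resample))
    mean = sum(rep_stats) / reps
    return (sum((s - mean) ** 2 for s in rep_stats) / (reps - 1)) ** 0.5

data = [4.1, 3.8, 5.0, 4.4, 4.9, 3.5, 4.2, 4.7]  # hypothetical
print(bootstrap_se(data, statistic=lambda xs: sum(xs) / len(xs)))
```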
27. Illustrative IoP variance estimates, not seasonally adjusted
28. Decisions and variability
- Economic decisions are based on movements in published series
- The reliability of a movement is affected by the variability
- Take an LFS example (from David Steel's work in the mid-1990s)
- How fast is turning point detection?
- How reliable is turning point detection?
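A toy Monte Carlo version of the turning-point question: given a true path with a peak and additive sampling noise, how often does the observed one-month change after the peak actually point downwards? This conveys the flavour of the analysis only; it is not a reconstruction of the original LFS work.

```python
import random

def detection_rate(true_path, noise_sd, trials=2000, seed=1):
    """Share of simulated series in which the observed change
    immediately after the true peak is negative. The path and
    noise level are invented."""
    peak = true_path.index(max(true_path))  # assumed not the last point
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        obs = [x + rng.gauss(0, noise_sd) for x in true_path]
        if obs[peak + 1] < obs[peak]:
            hits += 1
    return hits / trials

path = [100, 102, 104, 105, 104, 102, 100]  # true peak in month 4
print(detection_rate(path, noise_sd=2.0))
```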
29. Artificial LFS - unemployment
30. Artificial LFS: 3-month change measured in May
31. Artificial LFS: 1-month change measured in May
32. How reliable is the change?
33. How reliable are ONS indicators?
- Depends
- on the various dimensions of quality
- on the use of the data
- ONS already produces some quality measures
- Quality measurement and reporting framework leading to development of further measures
- Not possible for ONS to evaluate all the uses of data
- Important for users to understand the quality measures and documentation which is produced