Advanced Techniques for Modeling Loss Given Default


1
Advanced Techniques for Modeling Loss Given
Default
Craig Friedman (craig_friedman_at_sandp.com), Sven Sandow (sven_sandow_at_sandp.com)
Risk Solutions Group, Standard & Poor's
2
  • Introduction
  • Measuring Probabilistic Model Performance from an
    investor's perspective
  • Building Probabilistic Models for use by an
    investor
  • The Maximum Expected Utility Ultimate Recovery
    Model
  • Conclusion

3
Introduction: Principal References
  • Friedman, C. and Sandow, S., "Model Performance
    Measures for Expected Utility Maximizing
    Investors," International Journal of Theoretical
    and Applied Finance, Summer 2003.
  • Friedman, C. and Sandow, S., "Learning
    Probabilistic Models: An Expected Utility
    Maximization Approach," working paper, 2003.
  • Friedman, C. and Sandow, S., "Recovery Rates of
    Defaulted Debt: A Maximum Expected Utility
    Approach," working paper, 2003.

4
Introduction: Two Credit Modeling Problems
  • 1) A Probability of Default Problem
  • Find prob(default | x)

5
Introduction: Two Credit Modeling Problems
2) A Recovery Distribution Problem: Find
pdf(recovery | x)
6
Introduction: Our Main Goal
  • Find good models
  • To do so, we must have a way to measure model
    performance
  • The models will be used by investors to make
    investment decisions; performance should be
    measured accordingly

7
Performance Measures
  • Popular Performance Measures for Credit Risk
  • Utility Theory Basics
  • Our Paradigm
  • Information Theoretic Interpretation
  • An Important Class of Utility Functions

8
Performance Measures: Popular Performance Measures -
Classification Statistics for PD Models
  • Procedure:
  • 1. Choose a cutoff probability
  • 2. Calculate classification errors
  • Percentage of low PD companies (below cutoff)
    that default
  • Percentage of high PD companies (above cutoff)
    that don't default
  • 3. Compare models based on the above errors
    (see the sketch after this list)
  • This approach
  • converts a PD model to a classification model and
    measures performance of the classification model
  • ignores all differences and distinctions between
    PDs for the group of companies above (or below)
    the cutoff
  • throws away lots of information
  • is used by many market participants
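A minimal sketch of the cutoff procedure above, in Python; the variable names and sample numbers are illustrative, not from the slides:

```python
import numpy as np

def classification_errors(pd_scores, defaulted, cutoff):
    """Classification errors for a PD model at a single cutoff.

    pd_scores: model-implied probabilities of default, one per company
    defaulted: 1 if the company actually defaulted, 0 otherwise
    cutoff:    PD threshold separating "low PD" from "high PD"
    """
    pd_scores = np.asarray(pd_scores, dtype=float)
    defaulted = np.asarray(defaulted, dtype=int)

    low_pd = pd_scores < cutoff   # classified as "safe"
    high_pd = ~low_pd             # classified as "risky"

    # Share of low-PD (below cutoff) companies that default
    missed_defaults = defaulted[low_pd].mean() if low_pd.any() else 0.0
    # Share of high-PD (above cutoff) companies that do not default
    false_alarms = (1 - defaulted[high_pd]).mean() if high_pd.any() else 0.0
    return missed_defaults, false_alarms

# The cutoff turns the PD model into a classifier and discards all
# information about how far each PD is from the cutoff.
pds = [0.01, 0.02, 0.30, 0.65, 0.90]
outcomes = [0, 1, 0, 1, 1]
print(classification_errors(pds, outcomes, cutoff=0.50))
```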

9
Performance Measures: Popular Performance Measures -
Rank-Based Measures for PD Models
  • Rank-based measures are more sophisticated.
  • They avoid some of the shortcomings of
    classification statistics discussed above
  • Examples: Receiver Operating Characteristic curve,
    Accuracy Ratio; closely related ideas: Gini
    coefficient, power curve, etc.
  • Idea:
  • Make a continuum of cutoff probabilities
  • Use the results, indexed by cutoff.
  • We still measure PD model performance by
    considering the performance of a bunch of
    classification models
  • Does this approach prevent terrible models from
    slipping through the cracks?

10
Performance Measures: Popular Performance Measures -
Rank-Based Measures for PD Models
  • Some problems are fixed. Others remain.
  • Weird transformations of models all have the same
    ROC curves, ROC curve areas, and AR scores
    (see the sketch after this list)!
  • [Chart: some "probabilities" are less than zero,
    others are greater than 1; extreme upward
    distortion; extreme downward distortion]
  • Are the ranks of the PDs enough? Or do we need
    the actual values of the probabilities to make
    well-informed investing and lending decisions?
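A small sketch supporting the claim above: any rank-preserving distortion of the PDs, even one pushing "probabilities" below zero or above one, leaves the ROC area unchanged. scikit-learn is assumed available, and the simulated data are illustrative:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
true_pd = rng.uniform(0.0, 0.2, size=5000)        # "true" PDs
defaulted = rng.uniform(size=5000) < true_pd      # simulated outcomes

# Three models with identical ranks but very different levels:
model_good = true_pd                    # well-calibrated
model_up   = 5.0 * true_pd + 0.3        # upward distortion, some "PDs" > 1
model_down = true_pd - 0.15             # downward distortion, some "PDs" < 0

for name, scores in [("calibrated", model_good),
                     ("upward distortion", model_up),
                     ("downward distortion", model_down)]:
    print(name, roc_auc_score(defaulted, scores))
# All three AUCs are identical: rank-based measures cannot see the distortion.
```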

11
Performance Measures: Popular Performance Measures -
Rank-Based Measures for PD Models
  • Ranks are not enough to make sound lending
    decisions.
  • Example: A loan officer compares Loan A with
    Loan B, loans which are identical in all respects
    except that A has a lower PD than B, and B has a
    higher return (in the absence of default) than A.
  • For sufficiently low PD levels, the loan officer
    will prefer Loan B, for its higher return.
  • For sufficiently high PD levels, the loan officer
    will prefer Loan A, since it is less likely to
    default.
  • PD levels matter! (A worked example follows this
    list.)
  • Rank-based measures are difficult to generalize
    beyond 2-outcome models.
  • They are not consistent with the preferences of
    any expected utility maximizing investor.
  • They can lead to disastrous model selection
    (examples available).
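A worked version of the loan-officer example; the spreads and PD levels below are illustrative assumptions, not numbers from the slides:

```python
def expected_return(pd, spread, loss_given_default=1.0):
    """One-period expected excess return of a loan:
    earn `spread` if no default, lose `loss_given_default` on default."""
    return (1.0 - pd) * spread - pd * loss_given_default

spread_a, spread_b = 0.02, 0.04   # Loan B pays a higher spread than Loan A

# Low-PD regime: B's extra spread dominates, so the officer prefers B.
print(expected_return(pd=0.001, spread=spread_a),
      expected_return(pd=0.002, spread=spread_b))

# High-PD regime (same ranking of PDs, A still below B): A is now preferred.
print(expected_return(pd=0.05, spread=spread_a),
      expected_return(pd=0.10, spread=spread_b))
# The preference between the two loans flips with the PD *levels*,
# which rank-based measures ignore.
```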

12
Performance Measures: Utility Theory Basics
  • Utility functions assign values (utilities) to
    random wealth levels
  • (The power-2 utility is used by Morningstar to
    rank funds.)
  • Utility functions characterize the investor's
    risk aversion.
  • Rational investors maximize their expected
    utility (from Utility Theory).

13
Performance Measures: Utility Theory Basics
  • Utility Theory is based on reasonable
    assumptions, for example:
  • More is preferred to less (the utility function
    is a strictly increasing function of wealth)
  • The slope of the utility function decreases as
    wealth increases (a gift of $1 provides more
    utility when your wealth is low than when it is
    high)
  • Utility Theory is one of the pillars of modern
    financial theory. (A sketch of two such utilities
    follows.)
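A minimal sketch of two utility functions with the properties listed above: logarithmic utility and a power (CRRA) utility. The form U(W) = -1/W used here is an assumed parameterization of the "power-2" utility mentioned on the earlier slide, not necessarily Morningstar's exact definition:

```python
import numpy as np

def log_utility(w):
    """U(W) = log(W): strictly increasing and concave."""
    return np.log(w)

def power_utility(w, gamma=2.0):
    """CRRA power utility U(W) = W**(1-gamma) / (1-gamma); gamma=2 gives -1/W.
    (Assumed form of the 'power-2' utility mentioned on the slide.)"""
    return w ** (1.0 - gamma) / (1.0 - gamma)

w = np.linspace(0.5, 2.0, 7)
for u in (log_utility, power_utility):
    du = np.diff(u(w))
    assert np.all(du > 0)           # more wealth is preferred to less
    assert np.all(np.diff(du) < 0)  # marginal utility falls as wealth rises
print("both utilities are increasing and concave on this grid")
```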

14
Performance Measures: Our Paradigm
  • Our Model Performance Measures are
  • Natural Extensions of the Axioms of Utility
    Theory
  • Familiar, in important special cases
  • Enterprise-wide
  • PD, late payment, etc.
  • Recovery, dilution, aggregate default rate
    distribution, etc.
  • Multi-Horizon PD
  • Default correlation modeling
  • others
  • Consistent with our approach to Model Formulation

15
Performance Measures: Our Paradigm
  • Assumptions:
  • Investor with utility function
  • Market with an odds ratio for each state (AAA bonds
    cost more than CCC bonds!)
  • Investor believes the model and invests to maximize
    expected utility (a consequence of Utility
    Theory)
  • Paradigm: We base our model performance measure
    on an (out-of-sample) estimate of expected
    utility (see the sketch after this list).
  • Accurate models allow for effective investment
    strategies
  • Inaccurate models induce over-betting and
    under-betting
  • Our performance measures have a financial
    interpretation
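One concrete instantiation of this paradigm, as a sketch: a log-utility investor in a simple "horse race" market with a payoff odds ratio for each state bets the model probabilities (the Kelly allocation), and each candidate model is scored by the investor's out-of-sample average utility. The odds, data, and names below are illustrative assumptions, not the authors' exact setup:

```python
import numpy as np

def avg_log_utility(model_probs, outcomes, odds):
    """Out-of-sample expected-utility estimate for a log-utility investor.

    model_probs[i, y]: model probability of state y for observation i
    outcomes[i]:       realized state for observation i
    odds[y]:           payoff per unit bet on state y

    A log-utility investor who believes the model bets fraction p(y|x) of
    wealth on state y, so realized utility per period is
    log( p(outcome|x) * odds[outcome] ).
    """
    p_realized = model_probs[np.arange(len(outcomes)), outcomes]
    return np.mean(np.log(p_realized * odds[outcomes]))

# Two-state example (no default / default) with illustrative odds.
odds = np.array([1.1, 8.0])                   # payoffs for states 0 and 1
outcomes = np.array([0, 0, 1, 0, 1])
accurate = np.array([[0.8, 0.2]] * 5)
overconfident = np.array([[0.99, 0.01]] * 5)  # induces over-betting on state 0
print(avg_log_utility(accurate, outcomes, odds))
print(avg_log_utility(overconfident, outcomes, odds))
# The more accurate model earns the higher out-of-sample expected utility.
```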

16
Performance Measures: Our Paradigm
  • Given a benchmark model, we can construct a
    relative performance measure based on our
    paradigm
  • The benchmark model can be
  • An industry standard model
  • The non-informative model
  • The non-informative model is so simple that we
    can construct a single relative performance
    measure, without the effort of building a complex
    benchmark model.

17
Performance Measures: Our Paradigm
  • Investor has utility function U(W).

18
Performance Measures: Our Paradigm
19
Performance Measures: Our Paradigm
20
Performance Measures: Our Paradigm
21
Performance Measures: Information-Theoretic
Interpretation
  • Entropy is a measure of the uncertainty of a
    random variable
  • High-entropy probability measure: H = log(10).
    Low-entropy probability measure: H = 0.
    (A numerical check follows this list.)
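A quick numerical check of the two entropy figures above, using natural logarithms so that the uniform measure on 10 states has H = log(10):

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) = -sum p log p (natural log), with 0*log(0) = 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

high = np.full(10, 0.1)              # uniform over 10 states
low = np.array([1.0] + [0.0] * 9)    # point mass: no uncertainty
print(entropy(high), np.log(10))     # both approximately 2.3026
print(entropy(low))                  # 0.0
```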

22
Performance Measures: Information-Theoretic
Interpretation
  • Kullback-Leibler Relative Entropy is a measure of
    the discrepancy from one probability measure to
    another
  • Large discrepancy: D(p||q) = log(10).
    Small discrepancy: D(p||q) is approximately 0.
    (A numerical check follows this list.)
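The analogous check for the Kullback-Leibler relative entropy; the probability vectors are chosen to reproduce D(p||q) = log(10) and D(p||q) ≈ 0:

```python
import numpy as np

def kl_divergence(p, q):
    """D(p || q) = sum_y p(y) log( p(y) / q(y) ), natural log."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / q[nz]))

# Large discrepancy: p concentrates on a state to which q gives weight 0.1.
p = np.array([1.0] + [0.0] * 9)
q = np.full(10, 0.1)
print(kl_divergence(p, q), np.log(10))   # both approximately 2.3026

# Small discrepancy: nearly identical measures.
p2 = np.array([0.51, 0.49])
q2 = np.array([0.50, 0.50])
print(kl_divergence(p2, q2))             # approximately 0 (about 2e-4)
```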

23
Performance Measures: Information-Theoretic
Interpretation
  • Entropy is the difference between fantasy and
    optimality
  • By analogy

24
Performance Measures: Information-Theoretic
Interpretation
  • We define the Generalized Relative Entropy (GRE),
    a measure of discrepancy between probability
    measures
  • GRE is convex in p and non-negative. GRE is zero
    if and only if p = q

25
Performance Measures: Information-Theoretic
Interpretation
  • By putting U(W) = log(W), we recover entropy and
    Kullback-Leibler relative entropy
  • We have the information-theoretic interpretations

26
Performance Measures: Important Class of
Utility Functions
  • Often, we don't know/trust the odds ratios
  • Note that for U(W) = log(W)

27
Performance Measures: Important Class of
Utility Functions

28
Performance Measures: Important Class of
Utility Functions

29
Performance Measures: Important Class of
Utility Functions
  • Difference in expected utility
  • Estimated wealth growth rate pickup (for a
    certain type of investor) from using model 2
    rather than model 1
  • Logarithm of the likelihood ratio (deviance,
    Akaike Information Criterion)
  • Performance measure that generates an optimal (in
    the sense of the Neyman-Pearson Lemma) decision
    surface
  • Difference between
  • relative entropy from empirical probs to model 1
    probs
  • relative entropy from empirical probs to model 2
    probs
  • (See the identity sketched after this list.)
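A sketch of the identity behind the last bullets, in LaTeX notation; the symbols p_1, p_2 for the two models and \tilde{p} for the empirical out-of-sample measure are ours:

    \Delta_{2,1}
      \;=\; \frac{1}{N}\sum_{i=1}^{N}
            \log\frac{p_2(y_i \mid x_i)}{p_1(y_i \mid x_i)}
      \;=\; D\!\left(\tilde{p}\,\middle\|\,p_1\right)
            - D\!\left(\tilde{p}\,\middle\|\,p_2\right)

That is, the average log-likelihood ratio (the wealth growth rate pickup for a log-utility investor) equals the difference of the relative entropies from the empirical measure to the two models.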

30
Performance Measures: Important Class of
Utility Functions
  • The error term from using the approximation
    is two orders of magnitude smaller than the
    deviation from homogeneous expected returns.

31
Performance Measures: An Important Class of
Utilities
  • Morningstar uses the power utility with power 2.
  • This is how a member of our family approximates
    Morningstar's utility function

32
Performance Measures: pdf(y), prob(Y ≤ y | x),
pdf(y | x)
  • Please see the paper
  • There are a few twists, but the results are
    basically the same.
  • For extension of these ideas to measure
    regression model performance, please see

33
Maximum Expected Utility Models: Introduction
  • Maximize Model performance measures relevant for
    an INVESTOR who relies on the models to make
    INVESTMENT DECISIONS
  • Are flexible enough to accurately reflect the
    data
  • Do Not Over-fit the data
  • We learn/build models based on a coherent model
    learning theory specifically designed for
    investors

34
Maximum Expected Utility Models: Introduction
  • Balance
  • Consistency with the Data
  • Consistency with Prior Beliefs
  • Result: a one-hyperparameter family of models, each
    of which is associated with a given level of
    consistency with the data. Each model
  • asymptotically maximizes expected utility over
    a potentially rich family of models
  • is robust: it maximizes outperformance of the
    benchmark model under the most adverse true
    measure (more later).
  • Choose the optimal hyperparameter value by
    maximizing expected utility on an out-of-sample
    data set (see the sketch after this list).
  • In this talk, we discuss our approach in the
    simplest setting: Discrete Probability Models.
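A minimal sketch of the hyperparameter-selection step under logarithmic utility, where the fitting problem becomes a regularized likelihood maximization. scikit-learn's L2-regularized logistic regression is used here only as a stand-in for the actual MEU optimization; the data are simulated and all names are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression  # stand-in fitter

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))                       # explanatory variables
pd_true = 1 / (1 + np.exp(-(X @ [1.0, -0.5, 0.3, 0.0, 0.0]) + 2.0))
y = (rng.uniform(size=2000) < pd_true).astype(int)   # simulated defaults

X_train, y_train = X[:1500], y[:1500]
X_out, y_out = X[1500:], y[1500:]                    # out-of-sample set

best_C, best_score = None, -np.inf
for C in [0.01, 0.1, 1.0, 10.0]:                     # the single hyperparameter
    model = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    p_out = model.predict_proba(X_out)[np.arange(len(y_out)), y_out]
    score = np.mean(np.log(p_out))    # out-of-sample expected log utility
    if score > best_score:            # (up to a term depending only on odds)
        best_C, best_score = C, score
print(best_C, best_score)
```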

35
Maximum Expected Utility Models: Formulation
  • Model feature means are deterministic quantities
  • Sample feature means are observations of a random
    vector
  • Central Limit Theorem: the random vector has a
    Gaussian distribution (asymptotically)
  • Equally consistent model measures lie on the
    level sets of this Gaussian (one formalization is
    sketched after this list)
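One way to formalize these bullets, in LaTeX notation; the symbols are ours, not necessarily the slides': with feature vector f, sample mean \bar{f} = \frac{1}{N}\sum_i f(x_i, y_i), and estimated covariance \hat{\Sigma} of the sample mean, the model measures judged equally consistent with the data lie on the ellipsoids

    \left(E_p[f] - \bar{f}\right)^{\top} \hat{\Sigma}^{-1}
    \left(E_p[f] - \bar{f}\right) = \text{const},

the level sets of the asymptotic Gaussian; relaxing the equality to \le \alpha^2 gives a family of consistency constraints indexed by a single hyperparameter.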

36
Maximum Expected Utility Models: Formulation
37
Maximum Expected Utility Models: Formulation
  • We define the notion of Dominance (of one model
    measure over another).

38
Maximum Expected Utility Models: Formulation
39
Maximum Expected Utility Models: Formulation -
Primal Problem
40
Maximum Expected Utility Models: Formulation -
Robustness
41
Maximum Expected Utility Models: Formulation
42
Maximum Expected Utility Models: Formulation
43
Maximum Expected Utility Models: Dual Problem
44
Maximum Expected Utility Models: Dual Problem
  • Which is familiar; see, for example:

45
Maximum Expected Utility Models: Summary of
Approach
46
Maximum Expected Utility Models: Losing the O's
47
Maximum Expected Utility Models: More General
Context
48
Maximum Expected Utility Models: Applications and
Performance
  • We can use the same methodology to model
    conditional
  • Default Probabilities (Friedman and Huang, 2003)
  • Recovery Rate Distributions (Friedman and Sandow,
    2003)
  • Aggregate Default Rate Distributions (Sandow et
    al., 2003)
  • Late Payment Probabilities
  • Default Time Densities
  • Dilution Distributions
  • Asset Price Distributions
  • To date, our models have outperformed benchmark
    models based on industry standard approaches
    (e.g., PD, Ultimate Recovery) under a variety of
    performance measures

49
Recovery Model: Motivation
  • Two major factors affect credit risk
  • Probability of default (We model prob(default = 1
    given x).)
  • Probability distribution over recoveries given
    default (RGD)
  • In the past, there has been little modeling effort
    for RGD
  • The best-known model is Moody's LossCalc™
  • Modeling of expected recovery one month after
    default and confidence intervals
  • No explicit modeling of full probability
    distributions
  • No modeling of ultimate recovery

50
Recovery Model: Data
  • Standard & Poor's LossStats™ Database
  • Contains discounted ultimate recovery rates
  • for more than 1800 bonds and loans
  • which defaulted and emerged since 1988
  • Contains a variety of bond/loan characteristics
  • Seniority of debt
  • Debt below class
  • Debt above class
  • Collateral type
  • Outstanding debt

51
Previous Recovery Research by S&P
  • Empirical research by Van de Castle, Keisman
    (Credit Week, June 16, 1999) and Bos, Kelhoffer,
    Keisman (Credit Week, August 7, 2002): RGD
    depends strongly on
  • Seniority/debt cushion
  • Quality of collateral
  • Economic environment
  • Effect of economy can be captured by aggregate
    default rates (see also E. Altman, B. Brady, et
    al., 2002)

52
Recovery Modeling Approach: Conditional
Probabilities
  • p(r | x): probability density of recovery rate r
    conditioned on the vector x of explanatory
    variables, which are
  • Collateral quality
  • Debt below class
  • Debt above class
  • Aggregate default rate
  • Others if necessary
  • Recoveries are mostly between 0 and 1.2.
  • There is a large number of defaults with complete
    or zero recovery (a toy mixed distribution is
    sketched after this list).
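A toy illustration of a conditional recovery distribution of the kind described above: point masses at zero and full recovery plus a discretized continuous part on [0, 1.2]. This parameterization is purely illustrative and is not the MEU model's:

```python
import numpy as np

def recovery_distribution(p_zero=0.15, p_full=0.20, mean=0.45, sd=0.25,
                          r_max=1.2, n_grid=2400):
    """Toy recovery distribution: point masses at r = 0 and r = 1 plus a
    discretized Gaussian-shaped bump on [0, r_max]. Illustrative only."""
    grid = np.linspace(0.0, r_max, n_grid + 1)
    bump = np.exp(-0.5 * ((grid - mean) / sd) ** 2)
    cont = (1.0 - p_zero - p_full) * bump / bump.sum()   # continuous weights
    return grid, cont, {"zero": p_zero, "full": p_full}

grid, cont, points = recovery_distribution()
expected_recovery = points["full"] * 1.0 + np.sum(grid * cont)
print(expected_recovery)          # mean recovery implied by the toy model
print(points, cont.sum())         # point masses plus continuous mass of 0.65
```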

53
Recovery Model: Approach
  • Maximum Expected Utility Model
  • Global features
  • Point features

54
Recovery Model: Performance
  • Model Performance Measure: Delta = gain in
    expected logarithmic utility (wealth growth rate)
    with respect to a non-informative model (see the
    sketch after this list)
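A sketch of how Delta can be estimated on out-of-sample data, taking the non-informative benchmark to be an unconditional recovery density; the Gaussian placeholder densities, simulated data, and names are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

def delta_log_utility_gain(model_logpdf, benchmark_logpdf, recoveries, X):
    """Delta: out-of-sample gain in expected log utility (wealth growth rate)
    of the conditional model over a non-informative benchmark.

    model_logpdf(r, x):   log p_model(r | x) for one observation
    benchmark_logpdf(r):  log p_benchmark(r), ignoring x
    """
    model_ll = np.mean([model_logpdf(r, x) for r, x in zip(recoveries, X)])
    bench_ll = np.mean([benchmark_logpdf(r) for r in recoveries])
    return model_ll - bench_ll

# Toy usage with Gaussian placeholder densities (illustrative only).
rng = np.random.default_rng(2)
X = rng.normal(size=200)
recoveries = np.clip(0.5 + 0.2 * X + 0.1 * rng.normal(size=200), 0.0, 1.2)
model = lambda r, x: norm.logpdf(r, loc=0.5 + 0.2 * x, scale=0.12)
benchmark = lambda r: norm.logpdf(r, loc=recoveries.mean(),
                                  scale=recoveries.std())
print(delta_log_utility_gain(model, benchmark, recoveries, X))  # positive Delta
```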

55
Recovery Model Results: Probability density
versus RGD and collateral
56
Recovery Model Results: Probability density
versus RGD and collateral
57
Recovery Model Results: Point probabilities
versus collateral
58
Recovery Model Results: Moments versus collateral
59
Recovery Model Results: Probability density
versus RGD and debt above
60
Recovery Model Results: Probability density
versus RGD and debt above
61
Recovery Model Results: Point probabilities
versus debt above
62
Recovery Model Results: Moments versus debt above
63
Recovery Model Results: Probability density
versus RGD and debt below
64
Recovery Model Results: Probability density
versus RGD and debt below
65
Recovery Model Results: Point probabilities
versus debt below
66
Recovery Model Results: Moments versus debt below
67
Recovery Model Results: Probability density
versus RGD and aggregate default rate
68
Recovery Model Results: Probability density
versus RGD and aggregate default rate
69
Recovery Model Results: Point probabilities
versus aggregate default rate
70
Recovery Model Results: Moments versus aggregate
default rate
71

Conclusion
  • We have utilized Utility Theory to construct
    Model Performance measures from an investor's
    perspective
  • We have built Probabilistic Models that are
    approximately optimal with respect to the above
    performance measures. These models
  • Are numerically robust (convex programming)
  • Are theoretically robust (best worst-case
    measure)
  • Are flexible
  • Do not overfit
  • Perform well in practice
  • We have described the Maximum Expected Utility
    Ultimate Recovery Model

72

References
  • References available on request.
  • craig_friedman_at_sandp.com
  • sven_sandow_at_sandp.com