Title: Lecture 6 Resolution and Generalized Inverses
1 Lecture 6: Resolution and Generalized Inverses
2 Syllabus
Lecture 01 Describing Inverse Problems
Lecture 02 Probability and Measurement Error, Part 1
Lecture 03 Probability and Measurement Error, Part 2
Lecture 04 The L2 Norm and Simple Least Squares
Lecture 05 A Priori Information and Weighted Least Squares
Lecture 06 Resolution and Generalized Inverses
Lecture 07 Backus-Gilbert Inverse and the Trade-Off of Resolution and Variance
Lecture 08 The Principle of Maximum Likelihood
Lecture 09 Inexact Theories
Lecture 10 Nonuniqueness and Localized Averages
Lecture 11 Vector Spaces and Singular Value Decomposition
Lecture 12 Equality and Inequality Constraints
Lecture 13 L1, L∞ Norm Problems and Linear Programming
Lecture 14 Nonlinear Problems: Grid and Monte Carlo Searches
Lecture 15 Nonlinear Problems: Newton's Method
Lecture 16 Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals
Lecture 17 Factor Analysis
Lecture 18 Varimax Factors, Empirical Orthogonal Functions
Lecture 19 Backus-Gilbert Theory for Continuous Problems; Radon's Problem
Lecture 20 Linear Operators and Their Adjoints
Lecture 21 Fréchet Derivatives
Lecture 22 Exemplary Inverse Problems, incl. Filter Design
Lecture 23 Exemplary Inverse Problems, incl. Earthquake Location
Lecture 24 Exemplary Inverse Problems, incl. Vibrational Problems
3 Purpose of the Lecture
Introduce the idea of a generalized inverse, the data and model resolution matrices, and the unit covariance matrix. Quantify the spread of resolution and the size of the covariance. Use the minimization of the spread of resolution and/or the size of the covariance as the guiding principle for solving inverse problems.
4 Part 1: The Generalized Inverse, the Data and Model Resolution Matrices, and the Unit Covariance Matrix
5 All of the solutions are of the form
m^est = M d + v
6 Let's focus on this matrix M:
m^est = M d + v
7 Rename it the generalized inverse and use the symbol G^-g:
m^est = G^-g d + v
8 (Let's ignore the vector v for a moment.) The generalized inverse G^-g operates on the data to give an estimate of the model parameters: if d^pre = G m^est, then m^est = G^-g d^obs.
9 Generalized inverse G^-g: if d^pre = G m^est then m^est = G^-g d^obs. It sort of looks like a matrix inverse, except that M ≠ N (G is not square), and G G^-g ≠ I and G^-g G ≠ I.
10 So actually, the generalized inverse is not a matrix inverse at all.
11 d^pre = G m^est and m^est = G^-g d^obs; plug one equation into the other:
d^pre = N d^obs with N = G G^-g
the data resolution matrix
12 Data Resolution Matrix, N
d^pre = N d^obs
How much does d_i^obs contribute to its own prediction?
13 If N = I:
d^pre = d^obs, so d_i^pre = d_i^obs
d_i^obs completely controls its own prediction.
14 The closer N is to I, the more d_i^obs controls its own prediction.
15 The straight-line problem
16 [Figure: d^pre = N d^obs for the straight-line problem.] Only the data at the ends control their own prediction.
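The end-point behavior can be reproduced numerically. A minimal sketch, assuming a straight-line fit at five hypothetical, equally spaced z values:

```python
import numpy as np

# Straight-line fit d_i = m1 + m2 * z_i at five points (hypothetical z values).
z = np.linspace(0.0, 4.0, 5)
G = np.column_stack([np.ones_like(z), z])   # N=5 data, M=2 model parameters

# Least-squares generalized inverse G^-g = (G^T G)^-1 G^T
Gg = np.linalg.inv(G.T @ G) @ G.T
N = G @ Gg                                  # data resolution matrix

# The diagonal of N is largest at the two end points: the data at the
# ends control their own prediction the most.
print(np.round(np.diag(N), 3))              # [0.6 0.3 0.2 0.3 0.6]
```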
17 d^obs = G m^true and m^est = G^-g d^obs; plug one equation into the other:
m^est = R m^true with R = G^-g G
the model resolution matrix
18 Model Resolution Matrix, R
m^est = R m^true
How much does m_i^true contribute to its own estimated value?
19 If R = I:
m^est = m^true, so m_i^est = m_i^true
m_i^est reflects m_i^true only.
20 Else, if R ≠ I:
m_i^est = … + R_{i,i-1} m_{i-1}^true + R_{i,i} m_i^true + R_{i,i+1} m_{i+1}^true + …
m_i^est is a weighted average of all the elements of m^true.
21 The closer R is to I, the more m_i^est reflects only m_i^true.
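A small under-determined sketch (with a hypothetical 2×4 matrix G) shows R acting as a weighted average:

```python
import numpy as np

# Hypothetical under-determined problem: 2 data, 4 model parameters.
G = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])

# Minimum-length generalized inverse G^-g = G^T (G G^T)^-1
Gg = G.T @ np.linalg.inv(G @ G.T)
R = Gg @ G                                  # model resolution matrix

# Row 0 of R is [0.5, 0.5, 0, 0]: m1^est is the average of m1^true and
# m2^true, not m1^true alone, because R != I.
print(np.round(R, 3))
```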
22 Discrete version of the Laplace transform:
d_i = ∫ e^(-c_i z) m(z) dz
With large c, d is a shallow average of m(z); with small c, d is a deep average of m(z).
23 [Figure: m(z) is multiplied by the kernel e^(-c_hi z) or e^(-c_lo z) and integrated over z to give d_hi or d_lo; the high-c kernel weights shallow z, the low-c kernel weights all depths.]
24 [Figure: m^est = R m^true for this problem.] The shallowest model parameters are best resolved.
25 Covariance associated with the generalized inverse:
[cov m] = G^-g [cov d] (G^-g)^T
The unit covariance matrix divides by σ_d² to remove the effect of the overall magnitude of the measurement error:
[cov_u m] = σ_d^-2 G^-g [cov d] (G^-g)^T
which reduces to G^-g (G^-g)^T when [cov d] = σ_d² I.
26 Unit covariance for the straight-line problem: the off-diagonal term of [cov_u m] is proportional to Σ_i z_i, so the model parameters (intercept and slope) are uncorrelated when this term is zero, which happens when the data are centered about the origin.
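Both points can be checked numerically. A sketch assuming the straight-line G and the least-squares generalized inverse, for which [cov_u m] = G^-g (G^-g)^T = (G^T G)^-1:

```python
import numpy as np

# Unit covariance [cov_u m] = G^-g (G^-g)^T for the straight-line problem,
# which for least squares reduces to (G^T G)^-1 (hypothetical z values).
def unit_cov(z):
    G = np.column_stack([np.ones_like(z), z])
    Gg = np.linalg.inv(G.T @ G) @ G.T
    return Gg @ Gg.T

z = np.linspace(0.0, 4.0, 5)
print(np.round(unit_cov(z), 3))             # off-diagonal nonzero: correlated
print(np.round(unit_cov(z - z.mean()), 3))  # centered data: uncorrelated
```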
27 Part 2: The spread of resolution and the size of the covariance
28 A resolution matrix has small spread if only its main diagonal has large elements, i.e. if it is close to the identity matrix.
29 Dirichlet Spread Functions
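Assuming the standard Dirichlet definition spread(R) = Σ_i Σ_j (R_ij − δ_ij)², i.e. the squared distance of R from the identity, a minimal sketch:

```python
import numpy as np

# Dirichlet spread: spread(R) = sum_ij (R_ij - delta_ij)^2, the squared
# Frobenius distance of a resolution matrix from the identity.
def dirichlet_spread(R):
    return float(np.sum((R - np.eye(R.shape[0])) ** 2))

R = np.array([[0.8, 0.2],
              [0.2, 0.8]])
print(dirichlet_spread(R))                  # about 0.16: close to identity
print(dirichlet_spread(np.eye(2)))          # 0.0: perfect resolution
```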
30 A unit covariance matrix has small size if its diagonal elements are small: error in the data then corresponds to only small error in the model parameters (ignoring correlations).
32 Part 3: Minimization of the spread of resolution and/or the size of covariance as the guiding principle for creating a generalized inverse
33 Over-determined case: note that for simple least squares
G^-g = (G^T G)^-1 G^T
the model resolution R = G^-g G = (G^T G)^-1 G^T G = I is always the identity matrix.
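A quick numerical check of this claim, with a hypothetical full-rank 6×3 matrix G:

```python
import numpy as np

# Over-determined sketch: with a full-column-rank G (more data than model
# parameters), the least-squares R = (G^T G)^-1 G^T G is exactly the identity.
rng = np.random.default_rng(0)
G = rng.standard_normal((6, 3))             # hypothetical N=6, M=3
Gg = np.linalg.inv(G.T @ G) @ G.T
R = Gg @ G
print(np.allclose(R, np.eye(3)))            # True
```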
34 This suggests that we try to minimize the spread of the data resolution matrix, N: find the G^-g that minimizes spread(N).
35 Spread of the k-th row of N:
spread_k(N) = Σ_i (N_ki − δ_ki)²
Now compute its derivative with respect to the elements of G^-g.
36 First term of the derivative
37 Second term of the derivative; the third term is zero
38 Putting it all together:
G^-g = (G^T G)^-1 G^T
which is just simple least squares.
39 The simple least-squares solution minimizes the spread of the data resolution and has zero spread of the model resolution.
40 Under-determined case: note that for the minimum-length solution
G^-g = G^T (G G^T)^-1
the data resolution N = G G^-g = G G^T (G G^T)^-1 = I is always the identity matrix.
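The mirror-image check, with a hypothetical full-rank 3×6 matrix G:

```python
import numpy as np

# Under-determined sketch: with a full-row-rank G (more model parameters
# than data), the minimum-length N = G G^T (G G^T)^-1 is exactly the identity.
rng = np.random.default_rng(0)
G = rng.standard_normal((3, 6))             # hypothetical N=3, M=6
Gg = G.T @ np.linalg.inv(G @ G.T)
N = G @ Gg
print(np.allclose(N, np.eye(3)))            # True
```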
41 This suggests that we try to minimize the spread of the model resolution matrix, R: find the G^-g that minimizes spread(R).
42 Minimization leads to G^-g [G G^T] = G^T, so
G^-g = G^T (G G^T)^-1
which is just the minimum-length solution.
43 The minimum-length solution minimizes the spread of the model resolution and has zero spread of the data resolution.
44 General case: minimize the weighted sum
α1 spread(N) + α2 spread(R) + α3 size([cov_u m])
which leads to
α1 [G^T G] G^-g + α2 G^-g [G G^T] + α3 G^-g = (α1 + α2) G^T
45 This is a Sylvester equation, so it has an explicit solution for G^-g in terms of matrices.
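SciPy provides a direct solver for Sylvester equations of the form A X + X B = C. A minimal sketch with hypothetical random A, B, C (in the resolution problem these would be built from G and the α weights):

```python
import numpy as np
from scipy.linalg import solve_sylvester

# A Sylvester equation A X + X B = C has an explicit matrix solution;
# solve_sylvester computes it without iterating.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))             # hypothetical coefficient matrices
B = rng.standard_normal((4, 4))
C = rng.standard_normal((3, 4))
X = solve_sylvester(A, B, C)
print(np.allclose(A @ X + X @ B, C))        # True
```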
46 Special case 1: α1 = 1, α2 = 0, α3 = ε²
[G^T G + ε² I] G^-g = G^T, so G^-g = [G^T G + ε² I]^-1 G^T
damped least squares
47 Special case 2: α1 = 0, α2 = 1, α3 = ε²
G^-g [G G^T + ε² I] = G^T, so G^-g = G^T [G G^T + ε² I]^-1
damped minimum length
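In fact the two damped forms give the same generalized inverse, by the push-through identity (G^T G + ε² I)^-1 G^T = G^T (G G^T + ε² I)^-1. A quick numerical check with a hypothetical 4×6 G:

```python
import numpy as np

# Push-through identity: damped least squares and damped minimum length
# produce the identical generalized inverse G^-g.
rng = np.random.default_rng(0)
G = rng.standard_normal((4, 6))             # hypothetical N=4, M=6
eps2 = 0.1
dls = np.linalg.inv(G.T @ G + eps2 * np.eye(6)) @ G.T   # damped least squares
dml = G.T @ np.linalg.inv(G @ G.T + eps2 * np.eye(4))   # damped minimum length
print(np.allclose(dls, dml))                # True
```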
48 So:
- no new solutions have arisen
- just a reinterpretation of previously derived solutions
49 Reinterpretation:
- instead of solving for estimates of the model parameters,
- we are solving for estimates of weighted averages of the model parameters,
- where the weights are given by the model resolution matrix
50 A criticism of Dirichlet spread() functions, when m represents m(x), is that they don't capture the sense of being localized very well.
51 These two rows of the model resolution matrix have the same spread, but the left case is better localized. [Figure: two plots of R_ij versus column index j, each with the row index i marked.]
52 We will take up this issue in the next lecture.