Title: Lecture 11: Vector Spaces and Singular Value Decomposition
1. Lecture 11: Vector Spaces and Singular Value Decomposition
2. Syllabus
- Lecture 01: Describing Inverse Problems
- Lecture 02: Probability and Measurement Error, Part 1
- Lecture 03: Probability and Measurement Error, Part 2
- Lecture 04: The L2 Norm and Simple Least Squares
- Lecture 05: A Priori Information and Weighted Least Squares
- Lecture 06: Resolution and Generalized Inverses
- Lecture 07: Backus-Gilbert Inverse and the Trade-Off of Resolution and Variance
- Lecture 08: The Principle of Maximum Likelihood
- Lecture 09: Inexact Theories
- Lecture 10: Nonuniqueness and Localized Averages
- Lecture 11: Vector Spaces and Singular Value Decomposition
- Lecture 12: Equality and Inequality Constraints
- Lecture 13: L1, L∞ Norm Problems and Linear Programming
- Lecture 14: Nonlinear Problems: Grid and Monte Carlo Searches
- Lecture 15: Nonlinear Problems: Newton's Method
- Lecture 16: Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals
- Lecture 17: Factor Analysis
- Lecture 18: Varimax Factors, Empirical Orthogonal Functions
- Lecture 19: Backus-Gilbert Theory for Continuous Problems; Radon's Problem
- Lecture 20: Linear Operators and Their Adjoints
- Lecture 21: Fréchet Derivatives
- Lecture 22: Exemplary Inverse Problems, incl. Filter Design
- Lecture 23: Exemplary Inverse Problems, incl. Earthquake Location
- Lecture 24: Exemplary Inverse Problems, incl. Vibrational Problems
3. Purpose of the Lecture
- View m and d as points in the spaces of model parameters and data
- Develop the idea of transformations of coordinate axes
- Show how transformations can be used to convert a weighted problem into an unweighted one
- Introduce the Natural Solution and the Singular Value Decomposition
4. Part 1: The Spaces of Model Parameters and Data
5. what is a vector?
- algebraic viewpoint: a vector is a quantity that is manipulated (especially, multiplied) via a specific set of rules
- geometric viewpoint: a vector is a direction and length in space
6. what is a vector?
- algebraic viewpoint: a vector is a quantity that is manipulated (especially, multiplied) via a specific set of rules
- geometric viewpoint: a vector is a direction and length in space
here, a column-vector; in our case, in a space of very high dimension
7. (No Transcript)
8. forward problem
- d = G m
- maps an m onto a d
- maps a point in S(m) to a point in S(d)
9. Forward Problem: Maps S(m) onto S(d)
10. inverse problem
- m = G^-g d
- maps a d onto an m
- maps a point in S(d) to a point in S(m)
11. Inverse Problem: Maps S(d) onto S(m)
12. Part 2: Transformations of Coordinate Axes
13. coordinate axes are arbitrary
given M linearly-independent basis vectors m^(i), we can write any vector m as ...
14. (No Transcript)
15. ... as a linear combination of these basis vectors: m = Σ_i α_i m^(i)
16. ... as a linear combination of these basis vectors
the components of m in the new coordinate system are m'_i = α_i
17. might it be fair to say that the components of a vector are a column-vector?
18. matrix formed from basis vectors: M_ij = v_j^(i)
19. transformation matrix T
20. transformation matrix T
same vector, different components
21. Q: does T preserve length? (in the sense that m'^T m' = m^T m)
A: only when T^T = T^-1, that is, when T is orthogonal
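The length-preservation condition is easy to check numerically. A minimal numpy sketch (the rotation matrix and the stretch matrix are just illustrative choices of orthogonal and non-orthogonal T):

```python
import numpy as np

# A rotation is an orthogonal transformation: T^T = T^-1
theta = 0.3
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

m = np.array([1.0, 2.0])
m_new = T @ m           # components of the same vector in the new axes

# T^T T = I, so length is preserved: m'^T m' = m^T m
assert np.allclose(T.T @ T, np.eye(2))
assert np.isclose(m_new @ m_new, m @ m)

# A non-orthogonal transformation (a stretch) does not preserve length
S = np.diag([2.0, 1.0])
assert not np.isclose((S @ m) @ (S @ m), m @ m)
```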
22. transformation of the model space axes
d = G m = G I m = (G Tm^-1)(Tm m) = G' m', with G' = G Tm^-1 and m' = Tm m
d = G m  →  d = G' m'
same equation, different coordinate system for m
23. transformation of the data space axes
d' = Td d, so d' = (Td G) m = G' m, with G' = Td G
d = G m  →  d' = G' m
same equation, different coordinate system for d
24. transformation of both data space and model space axes
d' = Td d = (Td G Tm^-1)(Tm m) = G' m', with G' = Td G Tm^-1 and m' = Tm m
d = G m  →  d' = G' m'
same equation, different coordinate systems for d and m
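The double transformation can be verified numerically. A sketch with illustrative choices of Tm (any invertible matrix) and Td:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((4, 3))
m = rng.standard_normal(3)
d = G @ m                                    # original equation: d = G m

Tm = np.diag([1.0, 2.0, 3.0])                # model-space transform: m' = Tm m
Td, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # data-space transform: d' = Td d

Gp = Td @ G @ np.linalg.inv(Tm)              # G' = Td G Tm^-1
mp = Tm @ m
dp = Td @ d

# Same equation in the new coordinates: d' = G' m'
assert np.allclose(Gp @ mp, dp)
```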
25. Part 3: How Transformations Can Be Used to Convert a Weighted Problem into an Unweighted One
26. when are transformations useful?
remember this?
27. when are transformations useful?
remember this?
massage this into a pair of transformations
28. m^T Wm m
Wm = D^T D, or Wm = Wm^(1/2) Wm^(1/2) = (Wm^(1/2))^T Wm^(1/2)
OK since Wm is symmetric
m^T Wm m = m^T D^T D m = (D m)^T (D m) = m'^T m', with m' = Tm m and Tm = D
29. when are transformations useful?
remember this?
massage this into a pair of transformations
30. e^T We e
We = We^(1/2) We^(1/2) = (We^(1/2))^T We^(1/2)
OK since We is symmetric
e^T We e = e^T (We^(1/2))^T We^(1/2) e = (We^(1/2) e)^T (We^(1/2) e) = e'^T e', with Te = We^(1/2)
31. we have converted weighted least-squares into unweighted least-squares
minimize E' + L' = e'^T e' + m'^T m'
32. steps
- 1. compute transformations: Tm = D = Wm^(1/2) and Te = We^(1/2)
- 2. transform data kernel and data to the new coordinate system: G' = Te G Tm^-1 and d' = Te d
- 3. solve G' m' = d' for m' using an unweighted method
- 4. transform m' back to the original coordinate system: m = Tm^-1 m'
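The four steps can be sketched in numpy. Diagonal weight matrices are assumed for simplicity (so the square roots are elementwise), and step 3 uses the damped unweighted form minimize e'^T e' + m'^T m'; the result is checked against the direct weighted solution:

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((6, 4))
d = rng.standard_normal(6)
We = np.diag(rng.uniform(0.5, 2.0, 6))   # data weights (diagonal for simplicity)
Wm = np.diag(rng.uniform(0.5, 2.0, 4))   # model weights (diagonal for simplicity)

# Step 1: compute transformations Tm = Wm^(1/2), Te = We^(1/2)
Te = np.sqrt(We)
Tm = np.sqrt(Wm)

# Step 2: transform data kernel and data: G' = Te G Tm^-1, d' = Te d
Gp = Te @ G @ np.linalg.inv(Tm)
dp = Te @ d

# Step 3: solve the unweighted (damped) problem for m'
mp = np.linalg.solve(Gp.T @ Gp + np.eye(4), Gp.T @ dp)

# Step 4: transform back: m = Tm^-1 m'
m = np.linalg.inv(Tm) @ mp

# Check against the direct weighted damped least-squares solution
m_direct = np.linalg.solve(G.T @ We @ G + Wm, G.T @ We @ d)
assert np.allclose(m, m_direct)
```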
33. steps (repeated from above); the transformation steps (1, 2, and 4) are extra work
34. steps (repeated from above); the extra work is accepted to allow a simpler solution method in step 3
35. Part 4: The Natural Solution and the Singular Value Decomposition (SVD)
36. G m = d
suppose that we could divide up the problem like this ...
37. G m = d
only mp can affect d, since G m0 = 0
38. G m = d
G mp can only affect dp, since no m can lead to a d0
39. (No Transcript)
40. (figure labels): determined by a priori information / determined by data / determined by mp / not possible to reduce
41. natural solution
determine mp by solving dp - G mp = 0; set m0 = 0
42. what we need is a way to do this partition of G m = d
43. Singular Value Decomposition (SVD)
44. singular value decomposition
G = U Λ V^T, with U^T U = I and V^T V = I
45. suppose only p of the λs are non-zero
46. suppose only p of the λs are non-zero
then G = Up Λp Vp^T, using only the first p columns of U and the first p columns of V
47. Up^T Up = I and Vp^T Vp = I, since the vectors are mutually perpendicular and of unit length
Up Up^T ≠ I and Vp Vp^T ≠ I, since the vectors do not span the entire space
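These semi-orthogonality relations are easy to confirm numerically. A sketch using an illustrative rank-2 matrix:

```python
import numpy as np

# An illustrative rank-deficient G: 4x3 with rank p = 2
G = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.],
              [0., 1., 1.]])

U, lam, VT = np.linalg.svd(G)
p = int(np.sum(lam > 1e-10))     # number of non-zero singular values (p = 2)
Up, Vp = U[:, :p], VT[:p, :].T

# Up^T Up = I and Vp^T Vp = I: columns mutually perpendicular, unit length
assert np.allclose(Up.T @ Up, np.eye(p))
assert np.allclose(Vp.T @ Vp, np.eye(p))
# ... but Up Up^T != I and Vp Vp^T != I: the columns do not span the full space
assert not np.allclose(Up @ Up.T, np.eye(4))
assert not np.allclose(Vp @ Vp.T, np.eye(3))
```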
48. the part of m that lies in V0 cannot affect d, since Vp^T V0 = 0
so V0 is the model null space
49. the part of d that lies in U0 cannot be affected by m, since Λp Vp^T m is multiplied by Up, and U0^T Up = 0
so U0 is the data null space
50. The Natural Solution
m_est = Vp Λp^-1 Up^T d
51. the part of m_est in V0 has zero length
52. the error has no component in Up
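Both properties of the natural solution m_est = Vp Λp^-1 Up^T d can be checked directly. A numpy sketch with an illustrative rank-deficient G and data vector:

```python
import numpy as np

G = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.],
              [0., 1., 1.]])        # illustrative 4x3 matrix of rank 2
d = np.array([1., 2., 1., 0.5])

U, lam, VT = np.linalg.svd(G)
p = int(np.sum(lam > 1e-10))
Up, Vp, lam_p = U[:, :p], VT[:p, :].T, lam[:p]

# Natural solution: m_est = Vp Λp^-1 Up^T d
m_est = Vp @ np.diag(1.0 / lam_p) @ Up.T @ d

# The part of m_est in the model null space V0 has zero length
V0 = VT[p:, :].T
assert np.allclose(V0.T @ m_est, 0)

# The prediction error e = d - G m_est has no component in Up
e = d - G @ m_est
assert np.allclose(Up.T @ e, 0)
```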
53. computing the SVD
54. determining p
use a plot of λi vs. i; however, a case with a clear division between λi > 0 and λi = 0 is rare
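In numpy the SVD is a single call, and choosing p then amounts to picking a cutoff on the singular values. A sketch where G is built with known singular values, so the spectrum shows a steep drop rather than a clean division (the relative cutoff of 1e-3 is just an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(2)

# Build G with known singular values: two significant, the rest tiny
Uq, _ = np.linalg.qr(rng.standard_normal((6, 6)))
Vq, _ = np.linalg.qr(rng.standard_normal((5, 5)))
lam_true = np.array([10.0, 5.0, 1e-5, 1e-6, 0.0])
G = Uq[:, :5] @ np.diag(lam_true) @ Vq.T

U, lam, VT = np.linalg.svd(G)

# In practice one would plot lam vs. index; the small values are not
# exactly zero, only much smaller than the significant ones.
p = int(np.sum(lam > 1e-3 * lam[0]))   # keep values above a relative cutoff
assert p == 2
```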
55. (No Transcript)
56. Natural Solution
57. (No Transcript)
58. resolution and covariance
59. resolution and covariance
large covariance if any of the λp are small
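For the natural solution the generalized inverse is G^-g = Vp Λp^-1 Up^T, which gives the standard results: model resolution R = Vp Vp^T, data resolution N = Up Up^T, and (for uncorrelated data of variance σd^2) unit covariance σd^2 Vp Λp^-2 Vp^T, so small singular values inflate the variance. A numpy sketch with an illustrative rank-2 G:

```python
import numpy as np

G = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.],
              [0., 1., 1.]])        # illustrative 4x3 matrix of rank 2

U, lam, VT = np.linalg.svd(G)
p = int(np.sum(lam > 1e-10))
Up, Vp, lam_p = U[:, :p], VT[:p, :].T, lam[:p]

Gg = Vp @ np.diag(1.0 / lam_p) @ Up.T   # generalized inverse G^-g

R = Gg @ G                              # model resolution matrix
N = G @ Gg                              # data resolution matrix
assert np.allclose(R, Vp @ Vp.T)
assert np.allclose(N, Up @ Up.T)

# Covariance of m_est for uncorrelated data with variance sigma_d^2:
# variances scale as 1/lambda^2, so small singular values blow them up.
sigma_d = 1.0
cov_m = sigma_d**2 * Vp @ np.diag(1.0 / lam_p**2) @ Vp.T
```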
60. Is the Natural Solution the best solution?
- Why restrict the a priori information to the null space when the data are known to be in error?
- A solution that has slightly worse error but fits the a priori information better might be preferred ...