Title: Lecture 10 Nonuniqueness and Localized Averages
Slide 1: Lecture 10: Nonuniqueness and Localized Averages
Slide 2: Syllabus
Lecture 01: Describing Inverse Problems
Lecture 02: Probability and Measurement Error, Part 1
Lecture 03: Probability and Measurement Error, Part 2
Lecture 04: The L2 Norm and Simple Least Squares
Lecture 05: A Priori Information and Weighted Least Squares
Lecture 06: Resolution and Generalized Inverses
Lecture 07: Backus-Gilbert Inverse and the Trade-off of Resolution and Variance
Lecture 08: The Principle of Maximum Likelihood
Lecture 09: Inexact Theories
Lecture 10: Nonuniqueness and Localized Averages
Lecture 11: Vector Spaces and Singular Value Decomposition
Lecture 12: Equality and Inequality Constraints
Lecture 13: L1, L∞ Norm Problems and Linear Programming
Lecture 14: Nonlinear Problems: Grid and Monte Carlo Searches
Lecture 15: Nonlinear Problems: Newton's Method
Lecture 16: Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals
Lecture 17: Factor Analysis
Lecture 18: Varimax Factors, Empirical Orthogonal Functions
Lecture 19: Backus-Gilbert Theory for Continuous Problems; Radon's Problem
Lecture 20: Linear Operators and Their Adjoints
Lecture 21: Fréchet Derivatives
Lecture 22: Exemplary Inverse Problems, incl. Filter Design
Lecture 23: Exemplary Inverse Problems, incl. Earthquake Location
Lecture 24: Exemplary Inverse Problems, incl. Vibrational Problems
Slide 3: Purpose of the Lecture
- Show that null vectors are the source of nonuniqueness
- Show why some localized averages of model parameters are unique while others aren't
- Show how nonunique averages can be bounded using prior information on the bounds of the underlying model parameters
- Introduce the Linear Programming problem
Slide 4: Part 1: Null vectors as the source of nonuniqueness in linear inverse problems
Slide 5: Suppose two different solutions exactly satisfy the same data:
$G m^{(1)} = d$ and $G m^{(2)} = d$, with $m^{(1)} \neq m^{(2)}$.
Since there are two, the solution is nonunique.
Slide 6: Then the difference between the solutions satisfies
$G\,(m^{(1)} - m^{(2)}) = G m^{(1)} - G m^{(2)} = d - d = 0$
Slide 7: The quantity $m^{\text{null}} = m^{(1)} - m^{(2)}$ is called a null vector; it satisfies $G\, m^{\text{null}} = 0$.
Slide 8: An inverse problem can have more than one null vector: $m^{\text{null}(1)}, m^{\text{null}(2)}, m^{\text{null}(3)}, \ldots$
Any linear combination of null vectors is a null vector: $\alpha\, m^{\text{null}(1)} + \beta\, m^{\text{null}(2)} + \gamma\, m^{\text{null}(3)}$ is a null vector for any $\alpha$, $\beta$, $\gamma$.
Slide 9: Suppose that a particular choice of model parameters $m^{\text{par}}$ satisfies $G\, m^{\text{par}} = d^{\text{obs}}$ with error $E$.
Slide 10: Then $m^{\text{gen}} = m^{\text{par}} + \sum_i \alpha_i\, m^{\text{null}(i)}$ has the same error $E$ for any choice of the $\alpha_i$,
Slide 11: since $e = d^{\text{obs}} - G\,m^{\text{gen}} = d^{\text{obs}} - G\,m^{\text{par}} - \sum_i \alpha_i\, G\,m^{\text{null}(i)} = d^{\text{obs}} - G\,m^{\text{par}}$, each $G\,m^{\text{null}(i)}$ being zero.
Slide 12: Since the $\alpha_i$ are arbitrary, the solution is nonunique.
Slide 13: Hence, an inverse problem is nonunique if it has null vectors.
Slide 14: Example: consider the inverse problem
$G m = \begin{bmatrix} \tfrac{1}{4} & \tfrac{1}{4} & \tfrac{1}{4} & \tfrac{1}{4} \end{bmatrix} m = d_1$
A solution with zero error is $m^{\text{par}} = [d_1, d_1, d_1, d_1]^T$.
Slide 15: The null vectors are easy to work out:
$m^{\text{null}(1)} = [1, -1, 0, 0]^T$, $m^{\text{null}(2)} = [0, 1, -1, 0]^T$, $m^{\text{null}(3)} = [0, 0, 1, -1]^T$
Note that $G$ times any of these vectors is zero.
Slide 16: The general solution to the inverse problem is
$m^{\text{gen}} = m^{\text{par}} + \alpha_1\, m^{\text{null}(1)} + \alpha_2\, m^{\text{null}(2)} + \alpha_3\, m^{\text{null}(3)}$
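A quick numerical check of this construction (a minimal MATLAB sketch; the variable names and the datum value are illustrative choices):

% the averaging data kernel from the example above
G = [1/4, 1/4, 1/4, 1/4];
d1 = 1.0;                       % an arbitrary value for the one datum
mpar = [d1; d1; d1; d1];        % particular solution: G*mpar = d1
Nmat = null(G);                 % orthonormal null-space basis (3 columns,
                                % not necessarily the basis listed above)
alpha = randn(size(Nmat,2),1);  % arbitrary coefficients
mgen = mpar + Nmat*alpha;       % one member of the general solution
disp(G*mgen - d1);              % ~0: every such mgen fits the datum exactly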
Slide 17: Part 2: Why some localized averages are unique while others aren't
Slides 18-19: Let's denote a weighted average of the model parameters as $\langle m \rangle = a^T m$, where $a$ is the vector of weights. The weight vector $a$ may or may not be localized.
Slide 20: Examples:
$a = [0.25, 0.25, 0.25, 0.25]^T$ (not localized)
$a = [0.90, 0.07, 0.02, 0.01]^T$ (localized near $m_1$)
Slides 21-22: Now compute the average of the general solution:
$\langle m \rangle = a^T m^{\text{gen}} = a^T m^{\text{par}} + \sum_i \alpha_i\, a^T m^{\text{null}(i)}$
If the term $a^T m^{\text{null}(i)}$ is zero for all $i$, then $\langle m \rangle$ does not depend on the $\alpha_i$, so the average is unique.
Slide 23: An average $\langle m \rangle = a^T m$ is unique if the average of every null vector, $a^T m^{\text{null}(i)}$, is zero.
Slide 24: If we just pick an average out of a hat because we like it (it's nicely localized), chances are that it will not zero all the null vectors, so the average will not be unique.
Slides 25-26: Relationship to the model resolution matrix $R$: an average is unique when $a^T$ is a linear combination of the rows of the data kernel $G$, that is, $a^T = c^T G$ for some $c$, since then $\langle m \rangle = c^T G m = c^T d$ is fixed by the data. (The rows of $R = G^{-g} G$ are exactly such combinations.)
Slide 27: If we just pick an average out of a hat because we like it (it's nicely localized), it's not likely that it can be built out of the rows of $G$, so it will not be unique.
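One way to test this numerically (a sketch, assuming the $G$ and weight vector $a$ of the preceding slides):

% is a' a linear combination of the rows of G?
% project a onto the row space of G and inspect the residual
c = (G') \ a;                % least-squares solve of G'*c = a
r = a - G'*c;                % residual of the projection
isUnique = norm(r) < 1e-10;  % true only when <m> = a'*m is unique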
Slide 28: Suppose we pick an average that is not unique; is it of any use?
Slide 29: Part 3: Bounding localized averages even though they are nonunique
Slide 30: We will now show that if we can put weak bounds on $m$, they may translate into stronger bounds on $\langle m \rangle$.
Slides 31-32: With $a = [\tfrac{1}{3}, \tfrac{1}{3}, \tfrac{1}{3}, 0]^T$, we get
$\langle m \rangle = a^T m^{\text{gen}} = d_1 + \tfrac{1}{3}\alpha_3$
which is nonunique, since $\alpha_3$ is arbitrary.
Slide 33: But suppose $m_i$ is bounded: $0 \le m_i \le 2d_1$. Then the smallest possible $\alpha_3$ is $-d_1$ and the largest is $+d_1$.
Slides 34-35: With $-d_1 \le \alpha_3 \le d_1$, the average is bounded: $\tfrac{2}{3} d_1 \le \langle m \rangle \le \tfrac{4}{3} d_1$. The bounds on $\langle m \rangle$ are tighter than the bounds on the $m_i$.
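Spelling out the arithmetic (using the null-vector basis listed on slide 15): the fourth component of the general solution is $m_4 = d_1 - \alpha_3$, so the bound $0 \le m_4 \le 2d_1$ forces $-d_1 \le \alpha_3 \le d_1$. Substituting into $\langle m \rangle = d_1 + \tfrac{1}{3}\alpha_3$ gives $d_1 - \tfrac{1}{3}d_1 \le \langle m \rangle \le d_1 + \tfrac{1}{3}d_1$: a band of half-width $\tfrac{1}{3}d_1$, three times tighter than the half-width $d_1$ allowed for any single $m_i$.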
Slide 36: The question is how to do this in more complicated cases.
Slide 37: Part 4: The Linear Programming problem
Slide 38: The Linear Programming problem: find the $x$ that minimizes $z = f^T x$ subject to the constraints $Ax \ge b$ and $x \ge 0$.
Slide 39: In the Linear Programming problem, flipping the sign of $f$ switches minimization to maximization, and flipping the signs of $A$ and $b$ switches the constraint to $Ax \le b$.
Slide 40: In business: $x$ is the quantity of each product, $f$ the unit profit, and $z = f^T x$ the total profit, which is maximized. The constraint $x \ge 0$ means no negative production; the remaining constraints express the physical limitations of the factory, government regulations, etc. One cares about both the profit $z$ and the product quantities $x$.
Slide 41: In our case: $x \to m$, $f \to a$, and $z \to \langle m \rangle$. We solve the problem twice, first minimizing and then maximizing $\langle m \rangle$. The bounds on $m$ supply the inequality constraints, the constraint $x \ge 0$ is not needed, and $Gm = d$ supplies equality constraints. We care only about $\langle m \rangle$, not about $m$ itself.
Slide 42: In MATLAB:
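The slide's code did not survive extraction; what follows is a minimal sketch, assuming $G$, dobs, the weight vector a, and bound vectors mlb, mub are already defined (linprog is from the Optimization Toolbox):

% bound <m> = a'*m subject to G*m = dobs and mlb <= m <= mub
A = []; b = [];                           % no general inequality constraints
[mmin, zmin] = linprog( a, A, b, G, dobs, mlb, mub);  % smallest <m>
[mmax, zmax] = linprog(-a, A, b, G, dobs, mlb, mub);  % largest <m>
avglo = zmin;                             % lower bound on <m>
avghi = -zmax;                            % upper bound (undo the sign flip)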
Slide 43: Example 1. A simple data kernel: one datum, stating that the sum of the $m_i$ is zero. Bounds: $|m_i| \le 1$. Average: the unweighted average of $K$ of the model parameters.
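A sketch of how Example 1 could be set up; the variable names and the sizes $M = 20$, $K = 19$ are illustrative choices, matching the numbers quoted two slides below:

M = 20; K = 19;
Geq = ones(1,M); deq = 0;             % one datum: the sum of the mi is zero
mlb = -ones(M,1); mub = ones(M,1);    % bounds |mi| <= 1
a = [ones(K,1)/K; zeros(M-K,1)];      % unweighted average of K parameters
[~, lo] = linprog( a, [], [], Geq, deq, mlb, mub);   % smallest <m>
[~, hi] = linprog(-a, [], [], Geq, deq, mlb, mub);   % largest <m>
hi = -hi;                             % undo the sign flip
fprintf('%.3f <= <m> <= %.3f\n', lo, hi);   % about +/- 0.053 here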
Slide 45: Intuitively: if you know that the sum of 20 things is zero, and that each of the things is bounded by $\pm 1$, then you know that the average of 19 of the things is bounded by about $\pm 0.05$, since their sum equals minus the twentieth thing and so can be no larger than 1 in magnitude.
Slide 46: For $K > 10$, $\langle m \rangle$ has tighter bounds than the individual $m_i$.
Slide 47: Example 2. A more complicated data kernel: datum $d_k$ is a weighted average of the first $5k/2$ $m$'s. Bounds: $0 \le m_i \le 1$. Average: a localized average of 5 neighboring model parameters.
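A possible setup for Example 2 (a sketch only: the slides do not show the kernel's exact weights, so uniform weights over the first floor(5k/2) parameters are assumed, and the problem sizes and the center index i0 of the 5-point average are my own illustrative choices):

M = 100; N = 40;
G = zeros(N,M);
for k = 1:N
    w = min(floor(5*k/2), M);   % row k averages the first floor(5k/2) m's
    G(k,1:w) = 1/w;             % assumed uniform weights
end
mtrue = linspace(0,1,M)';       % a true model that increases with depth
dobs = G*mtrue;
mlb = zeros(M,1); mub = ones(M,1);       % bounds 0 <= mi <= 1
i0 = 50;                                 % center of the localized average
a = zeros(M,1); a(i0-2:i0+2) = 1/5;      % average of 5 neighboring parameters
[~, lo] = linprog( a, [], [], G, dobs, mlb, mub);  % smallest <m>
[~, hi] = linprog(-a, [], [], G, dobs, mlb, mub);  % largest <m>
hi = -hi;                                % undo the sign flip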
Slide 48: [Figure. Panel (A): the model $m_i$ plotted against depth $z_i$; panel (B): the data kernel $G$, with row index $i$ and column index $j$, together with the observed data $d^{\text{obs}}$, the true model $m^{\text{true}}$, and the averaging width $w$.]
Slide 49: Same figure; $G$ is complicated, but reminiscent of a Laplace transform kernel.
Slide 50: Same figure; the true $m_i$ increases with depth $z_i$.
Slide 51: Same figure, with the minimum-length solution added.
Slide 52: Same figure, with the upper and lower bounds on the solution added.
Slide 53: Same figure, with the upper and lower bounds on the average added.