Title: Engineering Applications
1 Engineering Applications
- Dr. Darrin Leleux
- Lecture 8: Matrices
- Chapter 3
2 Introduction
- 3.5 Orthogonal and Triangular Matrices
- 3.6 Systems of Linear Equations
- 3.7 MATLAB Matrix Functions
- 3.8 Linear Transformations
- Homework 3
3 Orthogonal and Triangular Matrices
- An orthogonal matrix Q is a square matrix that has the property Q^-1 = Q^T
- Thus the following is true: Q^T Q = Q Q^T = I
- A good example of an orthogonal matrix is given on the slide (a plane rotation matrix); see the verification sketch below
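A minimal MATLAB sketch verifying the defining property Q^-1 = Q^T; the rotation angle and matrix below are assumed example values, not taken from the slides:

  % Verify orthogonality for a plane rotation matrix (assumed example)
  theta = pi/6;
  Q = [cos(theta) sin(theta); -sin(theta) cos(theta)];
  disp(Q'*Q)                 % prints the 2x2 identity matrix
  disp(Q*Q')                 % also prints the identity matrix
  disp(norm(inv(Q) - Q'))    % numerically zero, so Q^-1 = Q^T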
4 Theorem 3.1
- The n x n matrix Q is orthogonal if and only if
the columns (and rows) of Q form an orthonormal
system
5 Orthogonal Transformations
- Multiplication by an orthogonal matrix preserves the length of a vector and the value of the inner product
- Thus it follows that ||Qx|| = ||x|| and (Qx)^T (Qy) = x^T y (pg 121; see the sketch below)
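A short sketch of the length and inner-product preservation; the rotation matrix and vectors below are arbitrary values chosen for illustration:

  % Orthogonal Q preserves lengths and inner products
  theta = pi/4;
  Q = [cos(theta) sin(theta); -sin(theta) cos(theta)];
  x = [3; 4];  y = [-1; 2];
  [norm(x), norm(Q*x)]          % the two lengths are equal
  [x'*y, (Q*x)'*(Q*y)]          % the two inner products are equal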
6 Upper and Lower Triangular Matrices
- A matrix is upper triangular if all its elements below the diagonal are zero
- A matrix is lower triangular if all its elements above the diagonal are zero
- A matrix is diagonal if all its elements not on the diagonal are zero
- Pg 122 Theorems (checked numerically in the sketch below)
- Theorem 3.2: for a triangular matrix, det(A) = a11 a22 ... ann
- Theorem 3.3: det(AB) = det(A) det(B)
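A quick numerical check of Theorems 3.2 and 3.3; the example matrices are arbitrary choices, not from the text:

  % Theorem 3.2: determinant of a triangular matrix is the product of its diagonal
  U = [2 5 1; 0 3 4; 0 0 6];
  det(U)                        % equals 2*3*6 = 36
  % Theorem 3.3: det(AB) = det(A) det(B)
  A = [1 2; 3 4];  B = [0 1; 5 2];
  [det(A*B), det(A)*det(B)]     % both entries equal 10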
7 LU Factorization
- Suppose the matrix A can be factored into the product of a lower triangular matrix L and an upper triangular matrix U so that A = LU
- det(A) = det(L) det(U)
- Thus A^-1 = (LU)^-1 = U^-1 L^-1
- Example 3.11, pg. 123
- Doolittle method
- The MATLAB lu command performs LU decomposition (see the sketch below)
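A minimal sketch of the lu command on an arbitrary example matrix; note that MATLAB generally returns a permuted factorization P*A = L*U because of pivoting:

  A = [4 2 1; 2 5 3; 1 3 6];     % arbitrary example matrix
  [L, U, P] = lu(A);             % L lower triangular, U upper triangular, P permutation
  norm(P*A - L*U)                % numerically zero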
10 Systems of Linear Equations
- 10I1 - 9I2 + 0I3 = 100
- -9I1 + 20I2 - 9I3 = 0
- 0I1 - 9I2 + 15I3 = 0
The system of equations can be written in the form Ax = b; a MATLAB sketch follows below.
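A minimal MATLAB sketch of this system in the form Ax = b, using the coefficients as reconstructed above and the backslash operator to solve for the currents:

  A = [10 -9  0;
       -9 20 -9;
        0 -9 15];
  b = [100; 0; 0];
  I = A\b                        % currents I1, I2, I3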
11 Terminology
- A solution is a set of n scalars x1, x2, ..., xn that satisfy the equations.
- A linear system of equations is consistent if it has a solution
- If there is no solution, the system is called inconsistent
12 Possible states of a system of linear equations
- Consistent, with a unique (one) solution
- Consistent, with infinitely many possible solutions
- Inconsistent, with no solutions
13 Terminology (continued)
- The linear system Ax = 0 is called homogeneous; it always has x = 0 as a solution
- If b ≠ 0, it is called a nonhomogeneous system
- If n > m, the system has more unknowns than equations and is called underdetermined
- If n < m, the system has more equations than unknowns and is called overdetermined
- To solve overdetermined systems, use methods such as least-squares, treated in Chapter 7 (a brief sketch follows below)
- For underdetermined systems there are an infinite number of solutions
- Example 3.12, pg. 126
n = number of variables, m = number of equations
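A brief sketch of an overdetermined case; the values are assumed for illustration, and in MATLAB the backslash operator returns the least-squares solution of such a system:

  A = [1 1; 1 2; 1 3];           % m = 3 equations, n = 2 unknowns
  b = [2; 3; 5];
  x = A\b                        % least-squares solution (Chapter 7 topic)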
14 Solution by Matrix Inverse
- Let A be an n x n matrix. Then the system Ax = b has a unique solution if and only if A is invertible (nonsingular).
- x = A^-1 b
- Theorem 3.4
- Using the inverse involves more computations and potentially more roundoff error
- Example: solving 7x = 21 with the matrix inverse is akin to computing x = 7^-1 * 21 = 0.142857 * 21 = 2.99997 instead of x = 3 on a hypothetical machine
- Direct solutions of systems of equations are better than solving for the inverse (see the sketch below)
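A small sketch contrasting the two approaches on an arbitrary system; both give essentially the same answer, but the direct solve is cheaper and less prone to roundoff:

  A = [10 -9 0; -9 20 -9; 0 -9 15];  b = [100; 0; 0];
  x1 = inv(A)*b;                 % solution via the inverse (not recommended)
  x2 = A\b;                      % direct solution by Gaussian elimination
  norm(x1 - x2)                  % small difference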
15 Solution by Elementary Row Operations (Gaussian Elimination)
- Form the augmented matrix with b as the fourth column
- Replace a row of the matrix by a nonzero multiple of that row
- Replace a row by the sum of that row and a multiple of another row
- Interchange two rows of the matrix
- Theorem 3.5: Equivalence of systems, pg 128
- Forms a reduced matrix (the rref sketch below performs these operations)
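A sketch of these row operations carried out automatically by rref on the augmented matrix; the system below is an arbitrary example:

  A = [1 2 1; 2 5 3; 1 0 8];  b = [4; 10; 9];
  R = rref([A b])                % reduced augmented matrix
  x = R(:, end)                  % last column is the solution (A is nonsingular)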
16 Example 3.13
17 Rank of a Square Matrix
- The rank of a matrix A is the number of linearly independent rows (or columns)
- Remember from Ch. 2, the determinant was used to determine linear independence
- The rows of a non-singular matrix are linearly independent
- Rank can also be defined as the number of non-zero rows of the reduced matrix
- Also applies to non-square matrices
18 Rank Theorems
- Theorem 3.6: Rank of a nonsingular matrix
- The n x n matrix A is nonsingular if and only if the rank of A is n
- Theorem 3.7: Solution and rank of the augmented matrix
- The nonhomogeneous system of equations Ax = b has a solution if and only if rank(A) = rank([A b]) (see the sketch below)
- Theorem 3.8: Inverse Matrix
- If an n x n matrix A can be converted to the n x n identity matrix I by a sequence of elementary operations, then the inverse A^-1 is equal to the result of applying the same sequence of elementary operations to I
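A numerical sketch of Theorems 3.6 and 3.7 on an arbitrary singular example:

  A = [1 2; 2 4];                % singular, so rank(A) = 1, not 2 (Theorem 3.6)
  rank(A)
  b1 = [3; 6];  rank([A b1])     % equals rank(A): system is consistent (Theorem 3.7)
  b2 = [3; 7];  rank([A b2])     % equals 2 > rank(A): system has no solution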
19 MATLAB Matrix Functions
- det: Determinant
- inv: Inverse
- rank: Rank
- rref: Reduced matrix (reduced row echelon form)
- rrefmovie: Step-by-step reduction
- A\b: Solve Ax = b
- lu: LU decomposition
20 EXAMPLE 3.14
pg 132
23 Pg. 135: finding A^-1 with row reductions (sketched below)
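A sketch of that procedure (Theorem 3.8) using rref on the augmented matrix [A I]; the example matrix is an arbitrary nonsingular choice:

  A = [2 1 1; 1 3 2; 1 0 0];
  R = rref([A eye(3)]);          % row-reduce the augmented matrix [A I]
  Ainv = R(:, 4:6);              % right half is the inverse
  norm(Ainv - inv(A))            % numerically zero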
24 MATLAB Solution of Systems of Linear Equations
- Gaussian elimination with partial pivoting
- From the previous discussion:
- det(A) = det(L) det(U)
- A^-1 = U^-1 L^-1
- Ax = b can be rewritten as Ax = LUx = b
- Define z = Ux; thus Ax = L(Ux) = Lz = b
- First solve Lz = b
- Lastly solve Ux = z (see the sketch below)
- Example 3.16, pg. 136
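A minimal sketch of the two-step LU solve described above; A and b are arbitrary, and since MATLAB's lu pivots, the permutation P must be applied to b:

  A = [4 2 1; 2 5 3; 1 3 6];  b = [1; 2; 3];
  [L, U, P] = lu(A);
  z = L\(P*b);                   % forward substitution: solve L z = P b
  x = U\z;                       % back substitution: solve U x = z
  norm(A*x - b)                  % numerically zero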
25 so that x = [1, 2, 3]^T
26 Gaussian Elimination and Numerical Errors
- cond(A) gives the condition number of the matrix A (a measure of how close it is to singular)
- A condition number near 1 indicates a well-conditioned matrix
- A matrix is ill-conditioned if a small error in the data causes a large relative error in the solution
- Large condition numbers indicate an ill-conditioned matrix
- log10(cond(A)) gives the estimated loss in precision (see the sketch below)
- Example 3.17, pg. 138
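A short sketch of cond and the digits-lost estimate on a nearly singular example matrix (assumed values):

  A = [1 1; 1 1.0001];           % nearly singular, hence ill-conditioned
  c = cond(A)                    % large condition number
  digits_lost = log10(c)         % estimated loss of decimal digits of precision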
27 3.8 Linear Transformations
- An operation f is linear if the following is true: f(αa + βb) = α f(a) + β f(b) (a numerical check is sketched below)
- Differentiation and integration are linear
- So are transformations such as the Laplace and Fourier transforms
- Example 3.18, pg. 140
- Theorem 3.9, pg. 141
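A small numerical check of the linearity condition for the transformation f(x) = A*x; all values below are arbitrary choices:

  A = [2 1; 0 3];                % any matrix defines a linear transformation
  a = [1; 2];  b = [4; -1];
  alpha = 2;  beta = -3;
  lhs = A*(alpha*a + beta*b);
  rhs = alpha*(A*a) + beta*(A*b);
  norm(lhs - rhs)                % zero: f(alpha*a + beta*b) = alpha*f(a) + beta*f(b)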
28 Transformations in the Plane
Example, pg 143: the transformation matrix is an orthogonal matrix (see the sketch below)
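A sketch of a plane transformation by an orthogonal (rotation) matrix; the angle and point are assumed values, not the ones in the pg. 143 example:

  theta = pi/3;
  R = [cos(theta) -sin(theta); sin(theta) cos(theta)];   % orthogonal rotation matrix
  p = [2; 1];
  R*p                            % the rotated point; norm(R*p) equals norm(p)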
29 3-D rotations
Pg. 144 for rotation matrices. Order of rotations is important, since the matrices do not commute (see the sketch below).
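A sketch showing that 3-D rotation matrices do not commute, so the rotation order matters; the angles are arbitrary:

  a = pi/4;  b = pi/3;
  Rx = [1 0 0; 0 cos(a) -sin(a); 0 sin(a) cos(a)];   % rotation about the x-axis
  Ry = [cos(b) 0 sin(b); 0 1 0; -sin(b) 0 cos(b)];   % rotation about the y-axis
  norm(Rx*Ry - Ry*Rx)            % nonzero: the two orders give different rotations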
30 Homogeneous Transformations
- The homogeneous coordinate representation provides for rotation, translation, scaling and perspective transformation
- Transformation of an n-D vector in an (n+1)-D space (see the sketch below)
- Example 3.19
31 Jack B. Kuipers, Quaternions and Rotation Sequences, Princeton: Princeton University Press, 1999.