LA Chapter 7 by SGLee
Chapter 7
Dimension and Subspaces
7.1 Properties of bases and dimensions
7.2 Basic spaces of a matrix
7.3 Rank-Nullity theorem
7.4 Rank theorem
7.5 Projection theorem
*7.6 Least squares solutions, https://youtu.be/GwHh5lh5wEs
7.7 Gram-Schmidt orthonormalization process
7.8 QR-Decomposition; Householder transformations
7.9 Coordinate vectors
7.10 Exercises
(English Textbook) http://matrix.skku.ac.kr/2015-Album/Big-Book-LinearAlgebra-Eng-2015.pdf
(e-book : Korean) http://matrix.skku.ac.kr/2015-Album/BigBook-LinearAlgebra-2015.pdf
http://matrix.skku.ac.kr/LA-Sage/
Linear Algebra http://matrix.skku.ac.kr/LinearAlgebra.htm
Credu http://matrix.skku.ac.kr/Credu-CLA/index.htm
OCW http://matrix.skku.ac.kr/OCW-MT/index.htm
Matrix Calculator http://matrix.skku.ac.kr/2014-Album/MC-2.html
Graph Theory http://matrix.skku.ac.kr/2014-Album/Graph-Project.html
Matrix Theory http://matrix.skku.ac.kr/MT2010/MT2010.htm
JCF http://matrix.skku.ac.kr/JCF/index.htm
Every vector space has a basis, and the basis is a key concept for understanding the vector space.
In particular, a basis provides a tool to compare the sizes of different vector spaces with infinitely many elements.
By understanding the size and structure of a vector space, one can visualize the space and efficiently use the data contained within it.
In this chapter, we discuss bases and dimensions of vector spaces and then study their properties.
We also study fundamental vector spaces associated with a matrix such as row space, column space, and nullspace, along with their properties.
We then derive the Dimension Theorem describing the relationship between the dimensions of those spaces.
In addition, the orthogonal projection of vectors in R^2 and R^3 will be generalized to vectors in R^n,
and we will study a standard matrix associated with an orthogonal projection, which is a linear transformation.
This matrix representation of an orthogonal projection will be used to study the Gram-Schmidt orthonormalization process and the QR-decomposition.
It will be shown that there are many different bases for R^n, but the number of elements in every basis for R^n is always n.
We also show that every nontrivial subspace of R^n has a basis, and study how to compute an orthogonal basis from a given basis.
Furthermore, we show how to represent a vector as a coordinate vector relative to a given basis,
which is not necessarily the standard basis, and
find a matrix that maps a coordinate vector relative to one basis to the coordinate vector relative to another basis.
7.1 Properties of bases and dimensions
Lecture Movie : https://youtu.be/eePPvXLiffo http://youtu.be/or9c97J3Uk0 ,
Lab : http://matrix.skku.ac.kr/knou-knowls/cla-week-9-sec-7-1.html
Having learned about standard bases, we now discuss the concept of the dimension of a vector space.
Previously, we learned that an axis representing time can be added to the 3-dimensional physical space.
We will now study the mathematical meaning of dimension.
In this section, we define a basis and the dimension of R^n using the concept of linear independence and study their properties.
Basis of a vector space
Definition [Basis]
If a subset S = {x1, x2, ..., xn} of a vector space V satisfies
(1) S is linearly independent, and
(2) S spans V (that is, span(S) = V),
then S is called a basis for V.
(1) If V is the set of all points on a line going through the origin, then any nonzero vector in V forms a basis for V.
(2) If a subset V of R^3 represents a plane going through the origin,
then any two nonzero vectors in V that are not scalar multiples of each other form a basis for V. ■
Let S = {e1, e2, e3}, where e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1). Since S
is linearly independent and spans R^3,
S is a basis for R^3. ■
In general, {e1, e2, ..., en}
is a basis for R^n, and it is called the standard basis for R^n.
How can we show the linear independence of vectors in R^n?
A set of vectors {x1, x2, ..., xk}
in R^n
is linearly independent if the only solution of c1 x1 + c2 x2 + ... + ck xk = 0 is c1 = c2 = ... = ck = 0.
Let A = [x1 x2 ... xk],
where the xi's are column vectors,
and let c = (c1, c2, ..., ck)^T.
If the homogeneous linear system Ac = 0 has the unique solution c = 0,
then the columns of the matrix A are linearly independent.
In particular, for k = n,
det(A) ≠ 0 implies the linear independence of the columns of A.
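For instance, the determinant test above can be carried out in Sage as follows; the three vectors here are hypothetical ones chosen only for illustration.

# Hypothetical vectors in R^3 (illustration only)
v1 = vector(QQ, [1, 2, 1])
v2 = vector(QQ, [0, 1, 3])
v3 = vector(QQ, [2, 1, 0])
A = column_matrix(QQ, [v1, v2, v3])   # matrix with v1, v2, v3 as its columns
print(A.det())          # a nonzero determinant means the columns are linearly independent
print(A.det() != 0)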
Theorem 7.1.1
Vectors x1, x2, ..., xn in R^n
are linearly independent if and only if
det([x1 x2 ... xn]) ≠ 0.
For A = [x1 x2 ... xn],
consider c1 x1 + c2 x2 + ... + cn xn = 0.
This gives us the linear system Ac = 0, where c = (c1, c2, ..., cn)^T.
This linear system always has the trivial solution c = 0.
Furthermore, c = 0 is the only solution if and only if
det(A) ≠ 0.
Therefore x1, x2, ..., xn are linearly independent if and only if
det(A) ≠ 0. ■
By Theorem 7.1.1, the following three vectors in R^3
are linearly independent because the determinant of the matrix having them as columns is nonzero. ■
● http://matrix.skku.ac.kr/RPG_English/7-TF-linearly-independent.html
9 ■
We can also use a built-in function of Sage to check whether a set of vectors is linearly independent or not.
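For example, the linear_dependence method of a free module returns the relations of linear dependence among a list of vectors; an empty result means the vectors are linearly independent. The vectors below are again hypothetical.

V = QQ^3
v1 = vector(QQ, [1, 2, 1]); v2 = vector(QQ, [0, 1, 3]); v3 = vector(QQ, [2, 1, 0])
deps = V.linear_dependence([v1, v2, v3])   # list of dependence relations
print(len(deps) == 0)                      # True means the set is linearly independent
print(matrix([v1, v2, v3]).rank() == 3)    # the rank test gives the same conclusion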
Show that the set B consisting of the three given vectors is a basis for R^3.
To show that B is a basis for R^3,
we need to show that B
is linearly independent and that it spans R^3.
http://sage.skku.edu or http://mathlab.knou.ac.kr:8080/
Since the computed determinant above is nonzero, B
is linearly independent.
We now show that B spans R^3. Let v
be any vector in R^3.
Consider the vector equation expressing v as a linear combination of the vectors of B.
Note that if this equation has a solution, then v is spanned by B,
and so span(B) = R^3.
The vector equation can be written, more explicitly, as a linear system in the three unknown coefficients. (1)
Hence we need to show that the linear system (1) has a solution in order to show that B spans R^3.
Indeed, the coefficient matrix of the linear system (1) is invertible,
so the linear system (1) has a solution. Therefore B is a basis for R^3. ■
Theorem 7.1.2
Let S = {x1, x2, ..., xn} be a basis for a vector space V. Then any set of m vectors in V with m > n is linearly dependent.
http://matrix.skku.ac.kr/CLAMC/chap7/Page6.htm
Since S = {x1, x2, ..., xn} is a basis for V,
each vector in T = {y1, y2, ..., ym} (with m > n)
can be written as a linear combination of x1, x2, ..., xn.
That is, there are scalars aij such that
yj = a1j x1 + a2j x2 + ... + anj xn (j = 1, 2, ..., m). (2)
We now consider a formal equation with unknowns c1, c2, ..., cm:
c1 y1 + c2 y2 + ... + cm ym = 0.
Then, from (2), we get
(a11 c1 + a12 c2 + ... + a1m cm) x1 + ... + (an1 c1 + an2 c2 + ... + anm cm) xn = 0.
Since x1, x2, ..., xn are linearly independent,
a11 c1 + a12 c2 + ... + a1m cm = 0, ..., an1 c1 + an2 c2 + ... + anm cm = 0.
Hence we get a homogeneous linear system in c1, c2, ..., cm. (3)
The homogeneous linear system (3) has m unknowns,
c1, c2, ..., cm, and n
linear equations.
Since n < m, the linear system (3) must have a nontrivial solution. Therefore,
T is linearly dependent. ■
Theorem 7.1.3
If S = {x1, x2, ..., xn} and T = {y1, y2, ..., ym} are both bases for a vector space V, then n = m.
The proof of this theorem follows from Theorem 7.1.2.
There are infinitely many bases for R^n.
However, all the bases have the same number of vectors.
Definition [Dimension]
If a vector space V has a basis consisting of n vectors, then n is called the dimension of V and is denoted by dim V = n.
The dimension of the trivial vector space {0} is defined to be 0.
Note that dim R^n = n. If a subspace W of R^n
is the trivial subspace,
W = {0}, then
dim W = 0.
Theorem 7.1.4
For a set S = {x1, x2, ..., xn} of n vectors in R^n:
(1) If S is linearly independent, then S is a basis for R^n.
(2) If S spans R^n, then S is a basis for R^n.
The determinant of the matrix having the given vectors
in R^n
as its column vectors is nonzero.
Hence the set is linearly independent.
By Theorem 7.1.4, it is a basis for R^n. ■
Theorem 7.1.5
If S = {x1, x2, ..., xn} is a basis for a vector space V, then every vector v in V can be expressed uniquely as a linear combination of the vectors in S.
Since S = {x1, x2, ..., xn}
spans V,
a vector v
in V
can be written as a linear combination of the vectors in S.
Suppose
v = c1 x1 + c2 x2 + ... + cn xn and
v = d1 x1 + d2 x2 + ... + dn xn.
By subtracting the second equation from the first one, we get
(c1 - d1) x1 + (c2 - d2) x2 + ... + (cn - dn) xn = 0.
Since S is linearly independent,
c1 - d1 = c2 - d2 = ... = cn - dn = 0.
Therefore ci = di for each i,
and v
can be written as a unique linear
combination of the vectors in S. ■
[Remark] A basis for R^n is often defined as a set satisfying the conditions of Theorem 7.1.4 or Theorem 7.1.5.
Let . Then
.
However, the vector can also be written as follows:
and
.
This is possible because is not a basis for
. ■
7.2 Basic spaces of a matrix
Lecture Movie : https://youtu.be/BL9Bhj2ufHg http://youtu.be/KDM0-kBjRoM
Lab : http://matrix.skku.ac.kr/knou-knowls/cla-week-9-sec-7-2.html
Associated with an m×n matrix A,
there are four important vector spaces: the row space, the column space, the null space, and the eigenspaces.
These vector spaces are crucial for studying the algebraic and geometric properties of the matrix A,
as well as the solution space of a linear system having A as its coefficient matrix.
In this section, we study the relationship between the column space and the row space of A and how to find a basis for the null space of A.
Eigenspace and null space
Definition [Solution space, Null space]
The solution set of the homogeneous linear system Ax = 0, where A is an m×n matrix, is a subspace of R^n called the solution space of the system.
It is also called the null space of A and is denoted by Null(A) = {x in R^n : Ax = 0}.
The eigenspace of a square matrix A corresponding to an eigenvalue λ is the null space of λI - A, that is, the solution space of (λI - A)x = 0.
Basis and dimension of a solution space
Let A
be an m×n
matrix. For the given augmented matrix [A | 0]
of the homogeneous linear system Ax = 0,
by the Gauss-Jordan elimination, we can get its RREF, [R | 0].
Suppose that the matrix R has r
nonzero rows.
(1) If r = n, then the only solution to Ax = 0
is x = 0. Hence the dimension of the solution space is zero.
(2) If r < n, then, permitting column exchanges, we can transform [R | 0]
so that the leading 1's occupy the first r columns.
Then the linear system is equivalent to r equations expressing the leading variables x1, ..., xr in terms of the remaining variables.
Here, x_{r+1}, ..., xn are n - r
free variables. Hence, for any real numbers t1, t2, ..., t_{n-r},
setting x_{r+1} = t1, ..., xn = t_{n-r},
any solution can be written as a linear combination of n - r vectors as follows:
x = t1 v1 + t2 v2 + ... + t_{n-r} v_{n-r}.
Since t1, ..., t_{n-r} are arbitrary, taking one of them equal to 1 and the others 0 shows that v1, ..., v_{n-r}
are also solutions to the linear system. Hence, the previous linear combination shows that the solution space is
span{v1, v2, ..., v_{n-r}}.
This implies that {v1, v2, ..., v_{n-r}} spans the solution space of Ax = 0.
In addition, it can be shown that {v1, v2, ..., v_{n-r}} is linearly independent.
Therefore {v1, v2, ..., v_{n-r}} is a basis for the null space Null(A)
of A,
and the dimension of the null space is n - r.
Definition [Dimension of Null space]
For an m×n matrix A, the dimension of the null space Null(A) is called the nullity of A and is denoted by nullity(A).
For the following matrix A, find a basis for the null space of A
and the nullity of A.
The RREF of the augmented matrix
for
is
.
Hence the general solution is
.
Therefore the vectors appearing in the general solution form a basis for the null space of A,
and nullity(A) = 2. ■
Find a basis for the solution space of the following homogeneous linear system and its dimension.
Using Sage, we can find the RREF of the coefficient matrix A:
[1 3 0 5]
[0 0 1 2]
[0 0 0 0]
Hence the linear system is equivalent to
x1 + 3x2 + 5x4 = 0,  x3 + 2x4 = 0.
Since x2 and x4
are free variables, letting x2 = s, x4 = t
for real numbers s, t, the solution can be written as
x = (-3s - 5t, s, -2t, t) = s(-3, 1, 0, 0) + t(-5, 0, -2, 1).
Hence we get the following basis and nullity:
{(-3, 1, 0, 0), (-5, 0, -2, 1)},  nullity(A) = 2. □
① Finding a basis for a null space
Free module of degree 4 and rank 2 over Integer Ring
Echelon basis matrix:
[ 1 3 4 -2]
[ 0 5 6 -3]
② Computation of nullity
2 ■
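The output above can be reproduced along the following lines. Since the original coefficient matrix of the example is not reproduced here, the sketch below reuses the displayed RREF as a stand-in matrix A; it has the same null space.

# Stand-in for the coefficient matrix: the RREF shown above
A = matrix(ZZ, [[1, 3, 0, 5], [0, 0, 1, 2], [0, 0, 0, 0]])
print(A.right_kernel())               # (1) a basis for the null space of A
print(A.right_kernel().dimension())   # (2) nullity(A)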
Column space and row space
Definition
For a given m×n matrix A, the vectors obtained from the rows of A
are called row vectors, and the vectors obtained from the columns of A
are called column vectors. The subspace of R^n spanned by the row vectors
is called the row space of A, denoted Row(A), and the subspace of R^m spanned by the column vectors
is called the column space of A, denoted Col(A).
dim Row(A) and dim Col(A) are called the row rank and the column rank of A, respectively.
Theorem 7.2.1
If two matrices A and B are row equivalent, then they have the same row space: Row(A) = Row(B).
http://www.millersville.edu/~bikenaga/linear-algebra/matrix-subspaces/matrix-subspaces.html
Note that the nonzero rows in the RREF of A
form a basis for the row space of A.
The same result can be applied to the column space of A by working with the rows of A^T.
For the following set S of vectors, find a basis for span(S),
which is a subspace of R^5:
Note that the subspace span(S) is equal to the row space of the matrix A whose rows are the vectors of S.
By Theorem 7.2.1, it is also equal to the row space of the RREF of A.
Therefore the collection of nonzero row vectors of the RREF of A
is a basis for span(S).
Free module of degree 5 and rank 3 over Integer Ring
Echelon basis matrix:
[ 1 0 7 0 -39]
[ 0 1 -3 0 31]
[ 0 0 0 1 -7] ■
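The commands behind such an output look roughly as follows; the matrix A below is a hypothetical one, since the vectors of S are not reproduced here.

# Hypothetical matrix whose rows are the given vectors
A = matrix(QQ, [[1, 0, 1, 0, 2],
                [0, 1, 1, 1, 0],
                [1, 1, 2, 2, 3]])
print(A.row_space())   # row space of A with its echelon basis matrix
print(A.rref())        # the nonzero rows of the RREF form a basis for Row(A)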
Find a basis for the column space of the following matrix A:
The column space of A is equal to the row space of A^T.
By Theorem 7.2.1, it is also equal to the row space of the RREF of A^T:
Therefore the set of nonzero rows of the RREF of A^T is a basis for the column space of A.
Free module of degree 4 and rank 3 over Integer Ring
Echelon basis matrix:
[ 1 0 0 -1]
[ 0 1 0 1]
[ 0 0 1 0] ■
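A corresponding computation for the column space can be sketched as follows, again with a hypothetical matrix A.

# Hypothetical matrix (illustration only)
A = matrix(QQ, [[1, 0, 2],
                [0, 1, 1],
                [1, 1, 3],
                [2, 1, 5]])
print(A.column_space())       # column space of A
print(A.transpose().rref())   # nonzero rows of rref(A^T) give a basis for Col(A)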
Theorem 7.2.2
For any m×n matrix A, the row space and the column space of A have the same dimension: dim Row(A) = dim Col(A).
For the proof of Theorem 7.2.2, see http://mtts.org.in//expository-articles
This common value of the column rank and the row rank of A
is called the rank of A
and is denoted by rank(A).
[Remark] Relationship between vector spaces associated with a matrix
● rank(A) = dim Row(A) = dim Col(A) = rank(A^T)
● Row(A) = Col(A^T)
● Col(A) = Row(A^T)
For a nonzero vector a in R^n, the set W = {x in R^n : a · x = 0}
is a hyperplane of R^n.
It is easy to see that W is a subspace of R^n. ■
(1) If a is a nonzero vector in R^2, then W = {x in R^2 : a · x = 0}
is a line in the plane passing through the origin and perpendicular to the vector a.
(2) Let a be a nonzero vector in R^3. Then W = {x in R^3 : a · x = 0}
is the plane in R^3 passing through the origin and perpendicular to the vector a. ■
7.3 Dimension theorem (Rank-Nullity Theorem)
Lecture Movie: https://youtu.be/H1mFmDT5pUY http://youtu.be/ez7_JYRGsb4
Lab: http://matrix.skku.ac.kr/knou-knowls/cla-week-9-sec-7-3.html
In Section 7.2, we studied the vector spaces associated with a matrix A.
In this section, we study the relationship between the size of the matrix A and the dimensions of the associated vector spaces.
Rank
Definition |
[Rank] |
||
|
|
|
|
|
The rank of a matrix |
|
|
|
|
||
|
Let A
be an m×n
matrix. If rank(A) = r,
then the RREF of A
can be written, after permuting columns if necessary, in a block form with an r×r identity block in the upper-left corner and zero rows below.
Hence rank(A) = r
and nullity(A) = n - r.
Theorem 7.3.1 [Rank-Nullity theorem]
For any m×n matrix A,
rank(A) + nullity(A) = n.
Let U be the RREF of A, and let the number of leading 1's in
U be r.
Then rank(A) = r.
But the dimension of the solution space of Ux = 0 is
n - r, which is the number of free variables in it.
Since Ax = 0 and
Ux = 0 are equivalent, the dimension of the solution space of
Ax = 0 is also
n - r, which is nullity(A).
So rank(A) + nullity(A) = r + (n - r) = n. ■
The Rank-Nullity Theorem can be written as follows in terms of a linear transformation:
If A is the standard matrix for a linear transformation T : R^n → R^m,
then
dim(Im T) = rank(A),
dim(Ker T) = nullity(A).
Hence
dim(Im T) + dim(Ker T) = n.
http://matrix.skku.ac.kr/sglee/krf-1/linearalgebra/multimediaproject/8week/img/pf5-2-5.gif
The RREF of is
.
Hence rank()
. Since
,
the dimension of the solution space for is equal to nullity(
)
. ■
Compute the rank and nullity of the matrix A given below.
The RREF of A can be computed as follows:
[1 0 3 7 0]
[0 1 1 3 0]
[0 0 0 0 1]
[0 0 0 0 0]
Hence rank(A) = 3,
and by Theorem 7.3.1,
nullity(A) = 5 - rank(A) = 2. ■
● http://matrix.skku.ac.kr/RPG_English/7-B2-rank-nullity.html
3
2 ■
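The two numbers above can be obtained as follows. The original matrix A is not reproduced here, so the sketch uses the displayed RREF as a stand-in; it has the same rank and nullity.

# Stand-in matrix: the RREF displayed above
A = matrix(QQ, [[1, 0, 3, 7, 0],
                [0, 1, 1, 3, 0],
                [0, 0, 0, 0, 1],
                [0, 0, 0, 0, 0]])
print(A.rank())                       # rank(A) = 3
print(A.right_kernel().dimension())   # nullity(A) = (number of columns) - rank(A) = 2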
Theorem 7.3.2
A linear system Ax = b has a solution if and only if b is in the column space of A.
Let A = [A1 A2 ... An],
x = (x1, x2, ..., xn)^T,
and let b be a vector in R^m. Then the linear system Ax = b
can be written as
x1 A1 + x2 A2 + ... + xn An = b. (1)
Hence we have the following:
Ax = b has a solution.
⟺ There exist x1, x2, ..., xn
satisfying the linear system (1).
⟺ b is a linear combination of the columns of A.
⟺ b is in Col(A). ■
The given linear system can be written in the matrix form Ax = b.
Since b is in the column space of A, Theorem 7.3.2 implies that the linear system has a solution. ■
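The criterion of Theorem 7.3.2 is easy to check with Sage. The system below is a hypothetical one, used only to show the commands.

# Hypothetical system Ax = b
A = matrix(QQ, [[1, 2], [2, 4], [1, 1]])
b = vector(QQ, [3, 6, 2])
print(b in A.column_space())              # True means Ax = b is consistent
print(A.rank() == A.augment(b).rank())    # equivalent rank test
print(A.solve_right(b))                   # one particular solution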
Definition [Hyperplane]
Let a be a nonzero vector in R^n (that is, a ≠ 0).
Then the set W = {x in R^n : a · x = 0} of all vectors orthogonal to a is called a hyperplane of R^n.
Note that, regarding a as a 1×n matrix,
nullity(a) = n - 1,
since the equation a · x = 0
has n
variables and one equation.
The orthogonal complement of the vector a, that is, the set of all vectors orthogonal to a, is a hyperplane of R^n
(an (n - 1)-dimensional subspace of R^n).
Theorem 7.3.3
Let W be an (n - 1)-dimensional subspace of R^n. Then W is a hyperplane; that is, W = {x in R^n : a · x = 0} for some nonzero vector a.
Let A be the (n - 1)×n matrix whose rows form a basis for W. Since rank(A) = n - 1,
by the Rank-Nullity Theorem, nullity(A) = 1.
Thus Null(A) = span{a}
for a nonzero vector a.
Therefore every vector of W is orthogonal to a, and
W = {x in R^n : a · x = 0}. ■
Note: The Four Fundamental Subspaces
http://www.itshared.org/2015/06/the-four-fundamental-subspaces.html
7.4 Rank theorem
Lecture Movie : https://youtu.be/eYVZiWwB89A http://youtu.be/8P7cd-Eh328
Lab : http://matrix.skku.ac.kr/knou-knowls/cla-week-9-sec-7-4.html
In this section, we study theorems that relate the rank of a matrix A
to the dimensions of the subspaces associated with A.
Theorem 7.4.1 [Rank theorem]
For any m×n matrix A, the row rank of A equals the column rank of A.
We have seen that there exist an invertible m×m matrix P
and an invertible n×n
matrix Q
such that PAQ has the block form
in which the upper-left block is
an r×r
identity matrix for some r,
and the rest of the matrix is zero. For this matrix, it is obvious that row rank = column rank = r.
The strategy is to reduce an arbitrary matrix to this form. See the details in the following.
http://ocw.mit.edu/courses/mathematics/18-701-algebra-i-fall-2010/study-materials/MIT18_701F10_rrk_crk.pdf ■
Theorem 7.4.2
For any m×n matrix A, rank(A) ≤ min{m, n}.
Since dim Row(A) ≤ m,
dim Col(A) ≤ n,
and
rank(A) = dim Row(A) = dim Col(A),
it follows that rank(A) ≤ min{m, n}. ■
Theorem 7.4.3 [Rank theorem]
Given an m×n matrix A:
(1) dim Row(A) + nullity(A) = n (that is, rank(A) + nullity(A) = n).
(2) dim Col(A) + nullity(A^T) = m (that is, rank(A) + nullity(A^T) = m).
(1) follows from Theorem 7.3.1.
(2) follows from the fact that Row(A^T) = Col(A)
and rank(A) = rank(A^T),
along with replacing A
in (1) by A^T. ■
Theorem 7.4.4
For an n×n square matrix A, A is invertible if and only if
rank(A) = n (equivalently, nullity(A) = 0).
If A
is invertible, then Ax = 0
has only the trivial solution, and hence Null(A) = {0},
giving nullity(A) = 0. By the Rank-Nullity Theorem, we have
rank(A) = n - nullity(A) = n.
This argument can be reversed. ■
Find the rank and nullity of the following matrix:
Using Gaussian Elimination,
REF(
).
Hence rank(A) = 3, and the Rank-Nullity Theorem gives
nullity(A) = 4 - 3 = 1.
3
1 ■
Theorem 7.4.5
For an m×n matrix A:
(1) Null(A^T A) = Null(A)
(2) Null(A A^T) = Null(A^T)
(3) Col(A A^T) = Col(A)
(4) Row(A^T A) = Row(A)
We prove only (1) here.
If Ax = 0, then (A^T A)x = A^T(Ax) = 0, so Null(A) is contained in Null(A^T A).
Conversely, if (A^T A)x = 0, then x^T A^T A x = ||Ax||^2 = 0, so Ax = 0. This implies Null(A^T A)
is contained in
Null(A).
Hence Null(A^T A) = Null(A).
The others can be shown similarly. ■
Theorem 7.4.6
rank(A^T A) = rank(A) = rank(A A^T).
This follows from Theorem 7.4.5 together with the Rank-Nullity Theorem.
Theorem 7.4.7
Multiplying a matrix A by an invertible matrix does not change its rank:
rank(PA) = rank(AQ) = rank(A) for invertible matrices P and Q of suitable sizes.
This follows from Theorem 7.4.6.
Theorem 7.4.8
Suppose A is an m×n matrix with rank(A) = r. Then:
(1) Every submatrix C of A satisfies rank(C) ≤ r.
(2) A has an r×r submatrix of rank r.
(1) Suppose the submatrix C
is obtained by taking s
rows and t
columns of A.
Since deleting rows cannot increase the rank and deleting columns cannot increase the rank,
rank(C) ≤ rank(A) = r.
(2) Since the rank of A is
r, there are r
linearly independent rows of A.
Then the matrix B consisting of these r
linearly independent rows has rank equal to r.
We now form a matrix C by taking r
linearly independent columns of B.
Then C is an r×r
submatrix of A
whose rank is equal to r. ■
Main Theorem of Inverse Matrices
Theorem 7.4.9 [Invertible Matrix Theorem]
For an n×n matrix A, the following statements are equivalent.
(1) A is invertible.
(2) det(A) ≠ 0.
(3)
(4)
*(5)
(6) For any b in R^n, the linear system Ax = b has a unique solution.
(7) Ax = 0 has only the trivial solution x = 0.
(8) The column vectors of A are linearly independent.
(9) The column vectors of A span R^n.
*(10) A has a left inverse (a matrix B with BA = I).
(11) rank(A) = n.
(12) The row vectors of A are linearly independent.
(13) The row vectors of A span R^n.
*(14) A has a right inverse (a matrix C with AC = I).
(15)
(16)
(17)
(18)
We first prove the following chain of implications:
① (10) ⇒ (7) ⇒ (8) ⇒ (11) ⇒ (10)
(10) ⇒ (7): Suppose A
has a left inverse B
such that BA = I. If x
satisfies Ax = 0, then
multiplying by B
gives x = BAx = B0 = 0.
Hence Ax = 0 has the unique solution x = 0.
(7) ⇒ (8): Suppose Ax = 0
has only the trivial solution.
If Aj denotes the j-th column vector of A
and c1 A1 + c2 A2 + ... + cn An = 0, then
A(c1, c2, ..., cn)^T = 0, so c1 = c2 = ... = cn = 0.
Hence the set of the column vectors of A
is linearly independent.
(8) ⇒ (11): Suppose the column vectors of A
are linearly independent. Then rank(A),
which is equal to the maximum number of linearly independent columns of A, is equal to
n.
(11) ⇒ (10): Suppose rank(A) = n. Then the rows of A
are linearly independent, so rank(A^T) = n as well.
Let ej be the j-th standard basis vector of R^n. Then the linear systems
A^T yj = ej (j = 1, 2, ..., n)
are consistent for all j, since rank(A^T)
= n =
rank([A^T | ej]).
Letting yj be a solution to the j-th linear system, the matrix B whose j-th row is yj^T satisfies BA = I, so B
is a left inverse of A.
② (1) ⇒ (6) ⇒ (14) ⇒ (2) ⇒ (1)
(1) ⇒ (6): Suppose A
is invertible. Then, for any
vector b in R^n,
A(A^(-1) b) = b.
Hence Ax = b has a solution x = A^(-1) b.
For the uniqueness of the solution, suppose y is another solution. Then
Ay = b, so y = A^(-1) A y = A^(-1) b = x.
Therefore Ax = b has a unique solution.
(6) ⇒ (14): Suppose that for each b,
the linear system Ax = b
has a unique solution.
If we take b to be ej,
the j-th standard basis vector, then the linear system
Ax = ej
also has a unique solution. If xj is the solution to this linear system,
then the matrix C = [x1 x2 ... xn] satisfies AC = I, so C is a right inverse of A.
(14) ⇒ (2): Suppose A
has a right inverse C
such that AC = I. Then
det(A) det(C) = det(AC) = det(I) = 1.
Hence det(A) ≠ 0.
(2) ⇒ (1): Suppose det(A) ≠ 0. If we let
B = (1/det(A)) adj(A), then it can be shown that
AB = BA = I.
Hence A is invertible. ■
7.5 Projection Theorem
Lecture Movie : http://youtu.be/GlcA4l8SmlM, http://youtu.be/Rv1rd3u-oYg
Lab : http://matrix.skku.ac.kr/knou-knowls/cla-week-10-sec-7-5.html
In Chapter 1, we studied the orthogonal projection in R^2 and R^3, where the vectors and their projections can be visualized.
In this section, we generalize the concept of projection to R^n.
We also show that the projection is a linear transformation and find its standard matrix,
which will be crucial to study the Gram-Schmidt Orthogonalization and the QR-Decomposition*.
Orthogonal Projection in R^n
Projection onto a 1-dimensional subspace of R^n
Theorem 7.5.1 [Projection]
For any nonzero vector x in R^n, every vector y in R^n can be written uniquely as
y = w1 + w2,
where w1 = ((y · x)/(x · x)) x is a scalar multiple of x and w2 = y - w1 is orthogonal to x.
The proof of the above theorem is similar to that in the case of orthogonal projection in R^2 and
R^3.
In the above theorem, the vector w1
is called the orthogonal projection of y
onto x
and is
denoted by proj_x y.
The vector w2 = y - proj_x y is called the orthogonal component of y
perpendicular to x.
Definition [Orthogonal projection on a line]
The transformation P : R^n → R^n defined by P(y) = proj_x y, for a fixed nonzero vector x,
is called the orthogonal projection of R^n onto the line spanned by x.
It can be shown that the orthogonal projection P
is a linear transformation.
(http://www.math.lsa.umich.edu/~speyer/417/OrthoProj.pdf)
Theorem 7.5.2
Let P : R^n → R^n be the orthogonal projection onto the line spanned by a unit vector u. Then the standard matrix of P
is [P] = u u^T (an n×n matrix).
Note that [P] is symmetric and [P]^2 = [P].
For the proof of this theorem, see the website:
Using the above theorem, find the standard matrix of the orthogonal projection in R^2
onto a line passing through the origin.
(Compare this with the corresponding standard matrix
in Chapter 6.)
This is a problem of finding the orthogonal projection of a vector onto the subspace spanned by a vector
.
Hence we take as a unit vector
on the line
.
Since the slope of the line is ,
and
.
Therefore, by the previous theorem,
. ■
Find the standard matrix for the orthogonal projection
in
onto the subspace spanned by the vector
.
,
Hence, . ■
Projection onto a subspace W
of R^n
Theorem 7.5.3
Let W be a subspace of R^n. Then every vector y in R^n can be written uniquely as
y = w1 + w2, where w1 is in W and w2 is in the orthogonal complement of W.
In this case w1 is called the orthogonal projection of y onto W and is denoted by proj_W y.
http://www.math.lsa.umich.edu/~speyer/417/OrthoProj.pdf
Theorem 7.5.4
Let W be a subspace of R^n and let A be a matrix whose columns form a basis for W.
If P : R^n → R^n is the orthogonal projection onto W, then its standard matrix is
[P] = A (A^T A)^(-1) A^T.
A rigorous proof uses facts from Sec 7.7 and *Sec 7.8.
http://www.math.lsa.umich.edu/~speyer/417/OrthoProj.pdf ■
Find the standard matrix for the orthogonal projection in R^3 onto the plane x - 4y + 2z = 0.
The general solution to x - 4y + 2z = 0
is
(x, y, z) = s(4, 1, 0) + t(-2, 0, 1)
(s, t real numbers).
Thus {(4, 1, 0), (-2, 0, 1)} is a basis for the solution space of the plane.
Hence, by taking A to be the 3×2 matrix with these basis vectors as columns, the standard matrix is
[P] = A (A^T A)^(-1) A^T.
Since A^T A = [[17, -8], [-8, 5]] and
(A^T A)^(-1) = (1/21)[[5, 8], [8, 17]],
[P] = A (A^T A)^(-1) A^T = (1/21)[[20, 4, -2], [4, 5, 8], [-2, 8, 17]]. ■
[20/21 4/21 -2/21]
[ 4/21 5/21 8/21]
[-2/21 8/21 17/21] ■
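The matrix above can be reproduced with the formula of Theorem 7.5.4. The plane and basis used below are recovered from the displayed matrix (the plane x - 4y + 2z = 0 with basis {(4, 1, 0), (-2, 0, 1)}), so treat them as an assumption rather than the author's original data.

# Columns of A: the assumed basis vectors of the plane x - 4y + 2z = 0
A = matrix(QQ, [[4, -2],
                [1,  0],
                [0,  1]])
P = A * (A.transpose() * A).inverse() * A.transpose()   # standard matrix of the projection
print(P)
print(P.is_symmetric(), P^2 == P)   # symmetric and idempotent, as noted below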
The standard matrix [P]
for an orthogonal projection is symmetric and idempotent
([P]^2 = [P]).
[Remark] Simulation of the projection of two vectors
● http://www.geogebratube.org/student/m9503
7.6 * Least squares solutions
Lecture Movie : https://youtu.be/GwHh5lh5wEs https://youtu.be/BC9qeR0JWis
Lab : http://matrix.skku.ac.kr/knou-knowls/cla-week-10-sec-7-6.html
Previously, we studied how to solve the linear system Ax = b when the linear system has a solution.
In this section, we study how to find an optimal solution using projection when the linear system does not have any solution.
● Details can be found in the following websites:
http://www.seas.ucla.edu/~vandenbe/103/lectures/ls.pdf
● Least square solutions with GeoGebra
<Simulations> http://www.geogebratube.org/student/m12933
● Least square solutions with Sage
<Simulations> http://matrix.skku.ac.kr/2012-album/11.html
7.7 Gram-Schmidt Orthonormalization process
Lecture Movie: http://youtu.be/gt4-EuXvx1Y
, http://youtu.be/EBCi1nR7EuE
Lab: http://matrix.skku.ac.kr/knou-knowls/cla-week-10-sec-7-7.html
Every basis of R^n has n
elements, but there are many different bases. In this section,
we show that every nontrivial subspace of R^n has a basis and
how to find an orthonormal basis from a given basis.
[Remark]
The nontrivial subspaces of R^n, like R^n itself, have bases.
There are many different bases for the same subspace.
Orthogonal set and orthonormal set
Definition
For vectors v1, v2, ..., vk in R^n, the set S = {v1, v2, ..., vk} is called an orthogonal set
if every pair of distinct vectors in S is orthogonal, that is, vi · vj = 0 whenever i ≠ j.
If, in addition, every vector in S is a unit vector, then S is called an orthonormal set.
The above definition can be summarized as follows:
S = {v1, v2, ..., vk} is an orthogonal set.
⟺ vi · vj = 0
(i ≠ j)
S is an orthonormal set.
⟺ vi · vj = δij
(δij: the Kronecker delta)
(1) The standard basis {e1, e2, ..., en} for R^n
is orthonormal.
(2) In , let
.
Then is orthogonal, but not orthonormal.
(3) In , let
.
Then the set is orthonormal.
(4) If S = {v1, v2, ..., vk} is an orthogonal set of nonzero vectors, then
{v1/||v1||, v2/||v2||, ..., vk/||vk||} is an orthonormal set. ■
Orthogonality and Linear independence
Theorem 7.7.1
Let S = {v1, v2, ..., vk} be an orthogonal set of nonzero vectors in R^n. Then S is linearly independent.
For S = {v1, v2, ..., vk},
suppose
c1 v1 + c2 v2 + ... + ck vk = 0.
Then, for each i
(i = 1, 2,
...,
k),
(c1 v1 + c2 v2 + ... + ck vk) · vi = 0 · vi = 0.
That is, c1 (v1 · vi) + c2 (v2 · vi) + ... + ck (vk · vi) = 0.
Since, for j ≠ i,
vj · vi = 0, we have
ci (vi · vi) = 0.
Since vi ≠ 0 implies
vi · vi ≠ 0, we have
ci = 0.
Therefore, S is linearly independent. ■
Orthogonal Basis and Orthonormal Basis
Definition [Orthonormal basis]
Let S be a basis for a subspace W of R^n.
If S is an orthogonal set, then S is called an orthogonal basis for W.
If S is an orthonormal set, then S is called an orthonormal basis for W.
The sets in (1) and (3) of the example above are orthonormal bases of the corresponding spaces,
and the set in (2) is an orthogonal basis.
Theorem 7.7.2
Let S = {v1, v2, ..., vn} be a basis for a subspace W of R^n, and let x be a vector in W.
(1) If S is an orthonormal basis, then
x = c1 v1 + c2 v2 + ... + cn vn,
where ci = x · vi (i = 1, 2, ..., n).
(2) If S is an orthogonal basis, then the same holds with ci = (x · vi)/(vi · vi).
We prove (1) only. Since S = {v1, v2, ..., vn}
is a basis for W,
each vector x in W can be expressed as a linear combination of the vectors in S
as follows:
x = c1 v1 + c2 v2 + ... + cn vn.
For each i, we have
x · vi = c1 (v1 · vi) + c2 (v2 · vi) + ... + cn (vn · vi).
Since S is orthonormal,
vj · vi = δji. Hence
ci = x · vi. ■
Write as a linear combination of the vectors in
that is the orthonormal basis for in (3)
Let . Then, by Theorem 7.7.2,
. Hence
,
,
.
. ■
Theorem 7.7.3 (General form of Theorem 1.3.1 in R^n)
(1) Suppose {u1, u2, ..., uk} is an orthonormal basis for a subspace W of R^n. Then, for any y in R^n,
proj_W y = (y · u1) u1 + (y · u2) u2 + ... + (y · uk) uk.
(2) If {u1, u2, ..., uk} is an orthogonal basis for W, then
proj_W y = ((y · u1)/(u1 · u1)) u1 + ... + ((y · uk)/(uk · uk)) uk.
Let be a subspace of
spanned by the two vectors
in an orthonormal set
. Find the orthogonal projection of
onto
and
the orthogonal component of perpendicular to
.
.
The orthogonal component of perpendicular to
is
. ■
Gram-Schmidt orthonormalization process
Theorem 7.7.4
Let W be a nonzero subspace of R^n. Then W has an orthonormal basis.
[Gram-Schmidt Orthonormalization]
We first derive an orthogonal basis {u1, u2, ..., uk} for W
from a given basis {x1, x2, ..., xk}
as follows:
[Step 1] Take u1 = x1.
[Step 2] Let W1 be the subspace spanned by u1,
and let
u2 = x2 - proj_{W1} x2 = x2 - ((x2 · u1)/(u1 · u1)) u1.
[Step 3] Let W2 be the subspace spanned by u1
and u2,
and let
u3 = x3 - proj_{W2} x3 = x3 - ((x3 · u1)/(u1 · u1)) u1 - ((x3 · u2)/(u2 · u2)) u2.
[Step 4] Repeat the same procedure to obtain uk,
where
uk = xk - proj_{W_{k-1}} xk.
It is clear that {u1, u2, ..., uk} is orthogonal. By taking
qi = ui/||ui|| (i = 1, 2, ..., k),
we get an orthonormal basis {q1, q2, ..., qk} for W. ■
The above process of producing an orthonormal basis from a given basis is called the Gram-Schmidt orthonormalization process.
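A minimal Sage sketch of the process is given below, applied to a hypothetical basis of R^3; it only illustrates the steps above, not the computation of any particular example in this section.

def gram_schmidt_orthonormal(vectors):
    # Subtract from each x its projection onto the span of the previously
    # produced orthogonal vectors, then normalize everything at the end.
    ortho = []
    for x in vectors:
        u = x - sum((((x * w) / (w * w)) * w for w in ortho), 0 * x)
        ortho.append(u)
    return [u / u.norm() for u in ortho]

# Hypothetical basis of R^3
xs = [vector(QQ, [1, 1, 0]), vector(QQ, [1, 0, 1]), vector(QQ, [0, 1, 1])]
qs = gram_schmidt_orthonormal(xs)
print(matrix(qs))                            # rows are the orthonormal vectors
print(matrix(qs) * matrix(qs).transpose())   # identity matrix confirms orthonormality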
[Remark] Simulation for Gram-Schmidt Orthonormalization
● http://www.geogebratube.org/student/m58812
Use the Gram-Schmidt Orthonormalization to find an orthonormal basis for
from the two linearly independent vectors
and
.
We first find orthogonal vectors ,
as follows:
[Step 1]
[Step 2]
.
Finally where
,
is an orthonormal basis. ■
Let . Use the Gram-Schmidt Orthonormalization to find an orthonormal basis
for
using the basis
for
.
We first find orthogonal vectors :
[Step 1] Take .
[Step 2]
[Step 3]
By normalizing , we get
,
,
.
Therefore,
□
① Computation for an orthogonal basis
[ 1 1 0]
[-1/2 1/2 2]
[-2/9 2/9 -1/9]
② Normalization
[ 1/2*sqrt(2) 1/2*sqrt(2) 0]
[-1/3*sqrt(1/2) 1/3*sqrt(1/2) 4/3*sqrt(1/2)]
[ -2/3 2/3 -1/3]
Therefore, we get an orthonormal basis
.
We can verify if is orthonormal as follows:
③ Checking for orthonormality
[1 0 0] [1 0 0]
[0 1 0] [0 1 0]
[0 0 1] [0 0 1] ■
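The original basis for this example is not reproduced above, but the displayed output is consistent with the basis {(1, 1, 0), (0, 1, 2), (1, 2, 1)}. Assuming that basis, the computation can be redone with Sage's built-in gram_schmidt method as follows.

A = matrix(QQ, [[1, 1, 0], [0, 1, 2], [1, 2, 1]])   # rows: the assumed basis vectors
G, M = A.gram_schmidt()                        # (1) rows of G form an orthogonal basis
print(G)
Q = matrix([v / v.norm() for v in G.rows()])   # (2) normalization
print(Q)
print(Q * Q.transpose())                       # (3) identity matrices confirm orthonormality
print(Q.transpose() * Q)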
Let . Use the Gram-Schmidt Orthonormalization to find an orthonormal basis
for a subspace of
for which
is a basis.
■
7.8 * QR-Decomposition; Householder Transformations
Lecture Movie : http://www.youtube.com/watch?v=crMXPi2lgGs
Lab : http://matrix.skku.ac.kr/knou-knowls/cla-week-10-sec-7-8.html
If an m×n matrix A
has n
linearly independent columns, then the Gram-Schmidt orthogonalization can be used to decompose the matrix A
in the form A = QR,
where the columns of Q
are the orthonormal vectors obtained by applying the Gram-Schmidt orthogonalization to the columns of A,
and R
is an upper triangular matrix.
The QR-decomposition is widely used to compute numerical solutions of linear systems, least-squares problems, and eigenvalue and eigenvector problems.
In this section, we briefly introduce the QR-decomposition.
● Details can be found in the following websites:
● http://www.math.ucla.edu/~yanovsky/Teaching/Math151B/handouts/GramSchmidt.pdf
● https://inst.eecs.berkeley.edu/~ee127a/book/login/l_mats_qr.html
● http://www.ugcs.caltech.edu/~chandran/cs20/qr.html
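As a quick illustration (not taken from the text), Sage can compute a numerical QR-decomposition once a matrix is converted to the real double field; the matrix below is hypothetical.

A = matrix(RDF, [[1, 1, 0], [0, 1, 2], [1, 2, 1]])   # hypothetical matrix with independent columns
Q, R = A.QR()                       # numerical QR-decomposition over RDF
print(Q)                            # the columns of Q are orthonormal
print(R)                            # R is upper triangular
print((Q * R - A).norm() < 1e-12)   # A = QR up to rounding error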
7.9 Coordinate vectors
Lecture Movie : http://youtu.be/M4peLF7Xur0, http://youtu.be/tdd7gbtCCRg
Lab : http://matrix.skku.ac.kr/knou-knowls/cla-week-10-sec-7-9.html
In a finite-dimensional vector space, a basis is closely related to a coordinate system.
We have so far used the coordinate system associated with the standard basis of R^n.
In this section, we introduce coordinate systems based on non-standard bases.
We also study the relationship between coordinate systems associated to different bases.
If β = {v1, v2, ..., vn}
is an ordered basis for R^n,
then any vector x in R^n
is uniquely expressed as a linear combination of the vectors in β
as follows:
x = c1 v1 + c2 v2 + ... + cn vn. (1)
Then c1, c2, ..., cn are called the coordinates of the vector x
relative to the basis β.
Definition [Coordinate vectors]
The scalars c1, c2, ..., cn in (1) are called the coordinates of x relative to the ordered basis β, and the vector
[x]_β = (c1, c2, ..., cn)
is called the coordinate vector of x relative to β.
The vector in
can be expressed as follows relative to the standard basis
for
:
.
Therefore
. ■
Let .
For find the coordinate vector
relative to the basis
for
.
From
,
we get the linear system .
By solving this linear system, we get .
. ■
As described above, finding the coordinate vector relative to a basis is equivalent to solving a linear system.
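For instance, with a hypothetical ordered basis β = {(1, 1, 0), (0, 1, 2), (1, 2, 1)} of R^3 and a hypothetical vector x, the coordinate vector [x]_β is found by solving the system whose coefficient matrix has the basis vectors as its columns.

# Columns of B are the (hypothetical) basis vectors of beta
B = column_matrix(QQ, [[1, 1, 0], [0, 1, 2], [1, 2, 1]])
x = vector(QQ, [2, 4, 3])
c = B.solve_right(x)   # coordinate vector [x]_beta, since B*c = x
print(c)
print(B * c == x)      # the linear combination reproduces x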
Theorem 7.9.1
Let β be an ordered basis for R^n, let x and y be vectors in R^n, and let k be a scalar. Then:
(1) [x + y]_β = [x]_β + [y]_β
(2) [k x]_β = k [x]_β
In general we have
[k1 x1 + k2 x2 + ... + km xm]_β = k1 [x1]_β + k2 [x2]_β + ... + km [xm]_β.
Change of Basis
Let β1 = {u1, u2, ..., un}
and β2 = {v1, v2, ..., vn}
be two different ordered bases for R^n.
In the following, we consider a relationship between [x]_β1 and
[x]_β2.
Letting x = c1 u1 + c2 u2 + ... + cn un,
the coordinate vector of x
relative to β1
is
[x]_β1 = (c1, c2, ..., cn),
and the coordinate vector of x
relative to β2
can be expressed as
[x]_β2 = c1 [u1]_β2 + c2 [u2]_β2 + ... + cn [un]_β2.
Let [uj]_β2 be the coordinate vector of uj
relative to β2,
and let the matrix P
be
P = [ [u1]_β2  [u2]_β2  ...  [un]_β2 ] (the matrix whose columns are these coordinate vectors).
Then we have
[x]_β2 = P (c1, c2, ..., cn)^T,
that is, [x]_β2 = P [x]_β1. (2)
In the equation (2), the matrix P
transforms the coordinate vector [x]_β1
to another coordinate vector
[x]_β2.
Hence the matrix P is called the transition matrix from the ordered basis β1
to the ordered basis β2
and is denoted by P_(β1→β2)
. Therefore,
[x]_β2 = P_(β1→β2) [x]_β1.
This transformation is called a change of basis.
Note that the change of basis does not modify the vector itself,
but it changes its coordinate vectors. The following example illustrates this.
Let be the standard basis for
and
.
For the two different ordered bases ,
:
(1) Find the transition matrix from basis
to basis
.
(2) Suppose . Find the coordinate vector
.
(3) For , show that equation (2) holds.
(1) Since , we need to compute the coordinate vectors for
relative to
. Since
,
. Hence
.
(2)
(3) Since and also
,
.
It can be easily checked that . ■
For and
,
,
,
let and
, both of which are bases for
. Find
.
Since , we first find the coordinate vectors for
relative to
. Letting
,
we get the following three linear systems:
Note that all of the above linear systems have as their coefficient matrix.
Hence we can solve the linear systems simultaneously using the RREF of the coefficient matrix.
That is, by converting the augmented matrix in its RREF,
we can find the values of ,
,
at the same time:
has the RREF
.
Therefore, the required transition matrix is
P = [[-1, 2, 1], [1, 1, 1], [2, 1, 2]],
which appears as the right block of the RREF computed below. □
● http://matrix.skku.ac.kr/RPG_English/7-MA-transition-matrix.html
[ 1 0 0|-1 2 1]
[ 0 1 0| 1 1 1]
[ 0 0 1| 2 1 2] ■
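The same computational pattern can be written out in Sage. The two bases below are hypothetical; the point is the procedure of row-reducing the augmented matrix whose left block holds the target basis and whose right block holds the old basis.

# Hypothetical ordered bases beta1 and beta2 of R^3 (as columns of B1 and B2)
B1 = column_matrix(QQ, [[1, 1, 0], [0, 1, 2], [1, 2, 1]])
B2 = column_matrix(QQ, [[1, 0, 0], [1, 1, 0], [1, 1, 1]])
M = B2.augment(B1, subdivide=True)   # [ B2 | B1 ]
print(M.rref())                      # the right block is the transition matrix from beta1 to beta2
P = B2.inverse() * B1                # the same matrix, computed directly
print(P)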
Theorem 7.9.2
Suppose P is the transition matrix from an ordered basis β1 to an ordered basis β2 for R^n.
Then P is invertible, and P^(-1) is the transition matrix from β2 to β1.
For the two bases for
in
, compute the following:
(1) The transition matrix from basis
to basis
.
(2) The coordinate vector relative to basis
for given
.
(1) Since the transition matrix from to
is
, by Theorem 7.9.2, we have
.
(2) .
[ 1 0 0|-1/2 3/2 -1/2]
[ 0 1 0| 0 2 -1]
[ 0 0 1| 1/2 -5/2 3/2] ■
Ch. 7 Exercises
● http://matrix.skku.ac.kr/LA-Lab/index.htm
● http://matrix.skku.ac.kr/knou-knowls/cla-sage-reference.htm
http://matrix.skku.ac.kr/LA-Lab/7-1/7-1.htm
http://matrix.skku.ac.kr/LA-Lab/7-2/7-2.htm
http://matrix.skku.ac.kr/LA-Lab/7-3/7-3.htm
http://matrix.skku.ac.kr/LA-Lab/7-4/7-4.htm
http://matrix.skku.ac.kr/LA-Lab/7-5/7-5.htm
http://matrix.skku.ac.kr/LA-Lab/7-6/7-6.htm
http://matrix.skku.ac.kr/LA-Lab/7-7/7-7.htm
http://matrix.skku.ac.kr/LA-Lab/7-8/7-8.htm
http://matrix.skku.ac.kr/LA-Lab/7-9/7-9.htm
About the Author
https://www.researchgate.net/profile/Sang_Gu_Lee
https://scholar.google.com/citations?user=FjOjyHIAAAAJ&hl=en&cstart=0&pagesize=20
http://orcid.org/0000-0002-7408-9648
http://www.scopus.com/authid/detail.uri?authorId=35292447100
http://matrix.skku.ac.kr/sglee/vita/LeeSG.htm
Made by SGLee http://matrix.skku.ac.kr/sglee/