SKKU-LA-CH-3-SGLee
Chapter 3
Matrix and Matrix Algebra
3.1 Matrix operation
3.2 Inverse matrix
3.3 Elementary matrices
3.4 Subspaces and linear independence
3.5 Solution set of a linear system and matrix
3.6 Special matrices
*3.7 LU-decomposition
Matrices are widely used as tools for transmitting digital sound and images over the internet, as well as for solving linear systems. We define the addition and product of two matrices.
These operations are tools for solving various linear systems. The matrix product is also an excellent tool for dealing with function composition.
In the previous chapter, we found the solution set of a linear system using Gaussian elimination.
In this chapter, we define the addition and scalar multiplication of matrices and introduce algebraic properties of matrix operations.
These will be used to describe the relation between the solution set and the matrix. Then, using Gaussian elimination, we show how to find the inverse matrix.
Furthermore, we investigate concepts such as linear independence and subspaces, which are necessary for understanding the structure of a linear system.
Finally we introduce some interesting special matrices.
3.1 Matrix operation
Reference video: https://youtu.be/C56kVi-AZW8 (http://youtu.be/DmtMvQR7cwA)
Practice site: http://matrix.skku.ac.kr/knou-knowls/CLA-Week-3-Sec-3-1.html
This chapter introduces the definition of the addition and scalar multiplication of matrices and the algebraic properties of the matrix operations. Although many of the properties are identical to those of the operations on real numbers, some properties are different. Matrix operations generalize the operations on real numbers.
Definition [Equality of Matrices]
Two matrices of the same size are equal if all of their corresponding entries are equal.
To define equal matrices, the size of two matrices should be the same.
For what values of the two matrices
,
are equal?
For the two matrices to be equal, each pair of corresponding entries must be equal; solving the resulting equations gives the required values. ■
Definition [Addition and scalar multiplication of matrices]
Given two matrices of the same size and a real number, the sum of the two matrices and the scalar multiple of a matrix by the real number are defined entrywise.
To define addition, the size of two matrices should be the same.
For , what is , , ?
,
. □
● http://matrix.skku.ac.kr/RPG_English/3-MA-operation.html
● http://matrix.skku.ac.kr/RPG_English/3-MA-operation-1.html
http://sage.skku.edu or http://mathlab.knou.ac.kr:8080
[ 1 3 0] [ 2 4 -8] [-1 -1]
[-3 4 4] [-4 2 6] [-2 -2] ■
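The entrywise definitions above can be sketched in plain Python (the matrices here are hypothetical illustrations, not the ones from the worked example):

```python
def mat_add(A, B):
    # The sum A + B is defined only when A and B have the same size;
    # each entry of the result is the sum of the corresponding entries.
    assert len(A) == len(B) and len(A[0]) == len(B[0])
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

def scalar_mul(k, A):
    # The scalar multiple kA multiplies every entry of A by k.
    return [[k * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[0, -1], [5, 2]]
print(mat_add(A, B))     # [[1, 1], [8, 6]]
print(scalar_mul(3, A))  # [[3, 6], [9, 12]]
```

Sage's built-in `matrix` objects support the same operations directly through `+` and scalar `*`.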
Definition [Matrix product]
Given two compatible matrices, their product is defined entrywise by the row-by-column rule: each entry of the product is a sum of products of entries from a row of the first matrix and a column of the second.
For two matrices to be compatible for multiplication, the number of columns of the first must equal the number of rows of the second.
The resulting matrix has as many rows as the first matrix and as many columns as the second.
[Remark] Meaning of the matrix product
Write the first factor in terms of its rows and the second factor in terms of its columns. Then the (i, j) entry of the product is the inner product of the i-th row vector of the first factor with the j-th column vector of the second factor.
[King Sejong's 'ㄱ' rule]
Let , . Then □
● http://matrix.skku.ac.kr/RPG_English/3-MA-operation-1-multiply.html
http://sage.skku.edu or http://mathlab.knou.ac.kr:8080
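The row-by-column rule can be sketched in plain Python (hypothetical matrices; in Sage the `*` operator performs this product natively):

```python
def mat_mul(A, B):
    # (AB)_ij = sum over k of A_ik * B_kj; this requires the number of
    # columns of A to equal the number of rows of B.
    assert len(A[0]) == len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2, 3],
     [4, 5, 6]]          # 2 x 3
B = [[1, 0],
     [0, 1],
     [1, 1]]             # 3 x 2
print(mat_mul(A, B))     # 2 x 2 result: [[4, 5], [10, 11]]
```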
Using matrix product, one can express a linear system easily. Let us consider the following linear system
and let
, ,
be the coefficient matrix, the unknown vector and the constant vector respectively.
Then we can express the linear system as
Theorem 3.1.1
Let the matrices below be of proper sizes (so that the operations are well defined) and let the scalars be real numbers. Then the following hold. (1) (commutative law of addition) (2) (associative law of addition) (3) (associative law of multiplication) (4) (distributive law) (5) (distributive law) (6) (7) (8) (9)
The proofs of the above facts are easy, and readers are encouraged to work them out.
Check the associative law of the matrix product. Since , we have
Since , we have . Hence, . ■
The properties of matrix operations are similar to the well-known properties of operations on real numbers, with one notable exception: for matrices, the commutative law of multiplication does not hold in general.
Suppose that we are given the following matrices . , . Then is defined but is not defined. Similarly is a matrix but is a matrix, and hence . Also, although and are matrices, as we can see below, we have . ■
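The failure of commutativity is easy to see even for square matrices of the same size; a minimal sketch with hypothetical 2 x 2 matrices:

```python
def mat_mul(A, B):
    # Row-by-column matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]
print(mat_mul(A, B))  # [[2, 1], [1, 1]]
print(mat_mul(B, A))  # [[1, 1], [1, 2]]
# The two products differ, so AB != BA in general.
```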
[Remark] Computer simulation
[Matrix product] (The commutative law does not hold.)
Definition [Zero matrix]
A zero matrix is a matrix all of whose entries are 0; it is denoted by O.
Theorem 3.1.2
For any matrix and a zero matrix of a proper size, the following hold. (1) (2) (3) (4)
Note: unlike for real numbers, the zero-product and cancellation properties fail for matrices. It is possible to have AB = O with A ≠ O and B ≠ O; similarly, AB = AC with A ≠ O does not imply B = C.
Let . Then . But and . Also but . ■
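The failure of cancellation can be checked concretely; the hypothetical matrices below (which also appear in this chapter's exercises) satisfy AB = AC although B ≠ C:

```python
def mat_mul(A, B):
    # Row-by-column matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[-2, 3], [2, -3]]
B = [[-1, 3], [2, 0]]
C = [[-4, -3], [0, -4]]
print(mat_mul(A, B))  # [[8, -6], [-8, 6]]
print(mat_mul(A, C))  # [[8, -6], [-8, 6]]  -> AB == AC even though B != C
```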
We should first define scalar matrices.
Definition [Identity matrix]
A scalar matrix of order n with diagonal entries all 1's is called the identity matrix of order n and is denoted by I_n.
Let be an matrix and the identity matrix . It is easy to see that
.
Let . Then ,
.
http://sage.skku.edu or http://mathlab.knou.ac.kr:8080
[ 4 -2 3]
[ 5 0 2]
[ 4 -2 3]
[ 5 0 2]
[0 0]
[0 0] ■
Definition [Power of a square matrix]
Let A be a square matrix. The k-th power of A is defined as the product of k copies of A.
Theorem 3.1.3
If A is a square matrix and r, s are nonnegative integers, then A^r A^s = A^(r+s).
Let . Find , , and confirm that .
http://sage.skku.edu or http://mathlab.knou.ac.kr:8080
[ 6 -8]
[ 20 -10]
[-16 -12]
[ 30 -40]
[1 0]
[0 1]
True ■
In the set of real numbers, we have (a + b)^2 = a^2 + 2ab + b^2.
However, the commutative law does not hold for the matrix product, and thus we only have the following:
(A + B)^2 = A^2 + AB + BA + B^2.
When AB = BA, we recover (A + B)^2 = A^2 + 2AB + B^2.
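A quick numerical check of the expansion, using hypothetical noncommuting matrices: (A + B)^2 agrees with A^2 + AB + BA + B^2 but not with A^2 + 2AB + B^2.

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]
S = mat_add(A, B)
lhs = mat_mul(S, S)                                  # (A+B)^2
AB, BA = mat_mul(A, B), mat_mul(B, A)
A2, B2 = mat_mul(A, A), mat_mul(B, B)
correct = mat_add(mat_add(A2, AB), mat_add(BA, B2))  # A^2 + AB + BA + B^2
naive = mat_add(mat_add(A2, mat_add(AB, AB)), B2)    # A^2 + 2AB + B^2
print(lhs == correct)  # True
print(lhs == naive)    # False, since AB != BA here
```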
Definition [Transpose matrix]
For an m x n matrix A, the transpose of A is the n x m matrix, denoted A^T, defined by (A^T)_ij = A_ji.
The transpose of is obtained by interchanging the rows and columns of .
Find the transpose of the following matrices.
,
,
. □
[ 1 4] [ 5 -3 2] [3]
[-2 5] [ 4 2 1] [0]
[ 3 0] [1] ■
Theorem 3.1.4
Let A and B be matrices of appropriate sizes and k a scalar. The following hold. (1) (A^T)^T = A (2) (A + B)^T = A^T + B^T (3) (AB)^T = B^T A^T (4) (kA)^T = k A^T.
Let . Show that (3) of Theorem 3.1.4 is true.
Since , .
Also, . Thus .
■
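Property (3) of Theorem 3.1.4, the reversed order in the transpose of a product, can be spot-checked with hypothetical matrices:

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    # Interchange the rows and columns of A.
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
lhs = transpose(mat_mul(A, B))                   # (AB)^T
rhs = mat_mul(transpose(B), transpose(A))        # B^T A^T
print(lhs == rhs)  # True
```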
Definition [Trace]
The trace of a square matrix is defined as the sum of its main diagonal entries.
Theorem 3.1.5
If A, B are square matrices of the same size and k is a scalar, then (1) (2) (3) (4) (5)
We prove item (5) only and leave the rest as an exercise.
. ■
Let . Show that (5) of Theorem 3.1.5 is true.
37
37 ■
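One frequently used trace identity, tr(AB) = tr(BA), can be spot-checked in plain Python (hypothetical matrices):

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(A):
    # Sum of the main diagonal entries.
    return sum(A[i][i] for i in range(len(A)))

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(trace(mat_mul(A, B)))  # 69
print(trace(mat_mul(B, A)))  # 69, the same value
```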
3.2 Inverse matrix
Reference video: https://youtu.be/naFiYy4RTxA (http://youtu.be/GCKM2VlU7bw)
Practice site: http://matrix.skku.ac.kr/knou-knowls/CLA-Week-3-Sec-3-2.html
In this section, we introduce the inverse of a square matrix, which plays the role of the multiplicative inverse of a real number.
We investigate the properties of inverse matrices.
Most properties of inverses of real numbers carry over to matrix inverses, but you will see that some do not.
Definition [Invertible matrix]
A square matrix A of order n is called invertible (or nonsingular) if there is a square matrix B such that AB = BA = I_n. This matrix B, if it exists, is called the inverse matrix of A. If no such matrix exists, A is called noninvertible (or singular).
Let . Note that the third row of has all zeroes. Thus for any matrix
the third row of is . Therefore there does not exist such that , that is, is singular.
False ■
Theorem 3.2.1
If A is an invertible square matrix of order n, then the inverse of A is unique.
Suppose that are inverses of . Then as
,
we have
Thus an inverse of is unique. ■
A necessary and sufficient condition for a 2 x 2 matrix A = [[a, b], [c, d]] to be invertible is that ad - bc ≠ 0. Hence one has
A^(-1) = (1/(ad - bc)) [[d, -b], [-c, a]].
It is straightforward to check
■
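The standard 2 x 2 inverse formula, A^(-1) = (1/(ad - bc)) [[d, -b], [-c, a]] when ad - bc ≠ 0, can be sketched with exact fractions (hypothetical example matrix):

```python
from fractions import Fraction

def inverse_2x2(A):
    # A = [[a, b], [c, d]] is invertible iff ad - bc != 0, and then
    # A^(-1) = (1/(ad - bc)) * [[d, -b], [-c, a]].
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    f = Fraction(1, det)
    return [[f * d, -f * b], [-f * c, f * a]]

A = [[1, 2], [3, 4]]
print(inverse_2x2(A))  # [[-2, 1], [3/2, -1/2]] as Fractions
```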
Theorem 3.2.2
If A and B are invertible square matrices of order n and k is a nonzero scalar, then the following hold. (1) A^(-1) is invertible and (A^(-1))^(-1) = A. (2) AB is invertible and (AB)^(-1) = B^(-1) A^(-1). (3) kA is invertible and (kA)^(-1) = (1/k) A^(-1). (4) A^m is invertible and (A^m)^(-1) = (A^(-1))^m.
(2) .
(3) and (4): Just check that the products of the matrices are the identity matrix. ■
Theorem 3.2.3
If A is an invertible matrix, then so is A^T, and (A^T)^(-1) = (A^(-1))^T.
. ■
Let . Check that .
Since ,
, we have
. Also since
we have
. ■
3.3 Elementary matrices
Reference video: https://youtu.be/pcnFDa8K8ZY (http://youtu.be/GCKM2VlU7bw)
Practice site: http://matrix.skku.ac.kr/knou-knowls/CLA-Week-3-Sec-3-3.html
In the previous section, we defined an inverse of square matrices.
In this section, we shall discuss how to find an inverse of square matrices by using elementary row operations and elementary matrices.
Definition [Elementary matrix]
An n x n matrix is called an elementary matrix if it can be obtained from I_n by performing a single elementary row operation (ERO). A permutation matrix is one obtained by exchanging rows of I_n.
Listed below are the three types of elementary matrices (Type 1, 2, 3) and the operations that produce them.
: Interchange the 2nd and the 3rd rows.
: Multiply the 2nd row by 3.
: Add 2 times the 1st row to the 2nd row.
[1 0 0 0] [ 1 0 0 0] [1 0 0 7]
[0 0 1 0] [ 0 1 0 0] [0 1 0 0]
[0 1 0 0] [ 0 0 -3 0] [0 0 1 0]
[0 0 0 1] [ 0 0 0 1] [0 0 0 1] ■
[Property of elementary matrix] The product of an elementary matrix on the left and any matrix is the matrix that results when the corresponding same row operation is performed on .
[Type 1]
[Type 2]
[Type 3]
[1 2 3] [1 2 3] [1 2 3]
[0 1 3] [3 5 7] [3 3 3]
[1 1 1] [0 1 3] [0 1 3] ■
[Remark] The inverse of an elementary matrix is elementary.
Since , [Type 1]
Since , [Type 2]
Since , [Type 3]
[1 0 0] [ 1 0 0] [ 1 0 0]
[0 0 1] [ 0 1/3 0] [ 0 1 0]
[0 1 0] [ 0 0 1] [ 0 -4 1]
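The left-multiplication property can be replayed in plain Python with hand-built elementary matrices (hypothetical 3 x 3 example):

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]

# Type 1: obtained from I3 by interchanging the 2nd and 3rd rows.
E1 = [[1, 0, 0],
      [0, 0, 1],
      [0, 1, 0]]
# Type 3: obtained from I3 by adding 2 times the 1st row to the 2nd row.
E3 = [[1, 0, 0],
      [2, 1, 0],
      [0, 0, 1]]

print(mat_mul(E1, A))  # rows 2 and 3 of A are interchanged
print(mat_mul(E3, A))  # 2 * (row 1) has been added to row 2 of A
```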
Finding the inverse of an invertible matrix.
We investigate the method to find the inverse of an invertible matrix using elementary matrices.
First consider equivalent statements of an invertible matrix (its proof will be treated in Chapter 7).
Theorem 3.3.1 [Equivalent statements]
For any n x n matrix A, the following are equivalent. (1) A is invertible. (2) A is row equivalent to I_n (i.e., the RREF of A is I_n). (3) A can be expressed as a product of elementary matrices. (4) Ax = 0 has only the trivial solution x = 0.
Theorem 3.3.2 [Computation of an inverse]
[Remark] Finding an inverse using Gauss-Jordan elimination
[Step 1] For a given A, augment I_n on the right side to form the n x 2n matrix [A | I_n]. [Step 2] Compute the RREF of [A | I_n]. [Step 3] Let [C | D] be the RREF obtained in Step 2. Then the following hold. (i) If C = I_n, then A^(-1) = D. (ii) If C ≠ I_n, then A is not invertible, so A^(-1) does not exist.
Find the inverse of Consider . Then
and, its RREF is given as follows.
Since , .
∴ ■
Find the inverse of the matrix below. It follows in a similar way to the previous example.
Since the left block of the RREF is not the identity matrix, the inverse does not exist. ■
Find the inverse of
● http://matrix.skku.ac.kr/RPG_English/3-MA-Inverse_by_RREF.html
[ 1 0 0 | 8/15 -19/15 2/15]
[ 0 1 0 | 1/15 -23/15 4/15]
[ 0 0 1 | 4/15 -2/15 1/15]
We can extract the inverse of the matrix by slicing the above augmented matrix.
Aug[:, 3:6]
[ 8/15 -19/15 2/15]
[ 1/15 -23/15 4/15]
[ 4/15 -2/15 1/15]
Thus . ■
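The [A | I] to [I | A^(-1)] procedure of the Remark can be sketched in plain Python with exact fractions (hypothetical 2 x 2 example; Sage's `.rref()` and slicing, as above, do the same job):

```python
from fractions import Fraction

def invert(A):
    """Invert A by row-reducing the augmented matrix [A | I] to [I | inverse]."""
    n = len(A)
    # Build [A | I] with exact rational arithmetic.
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Find a pivot at or below row `col`.
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            raise ValueError("matrix is not invertible")
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]          # scale the pivot to 1
        for r in range(n):                        # clear the rest of the column
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]                 # right half is the inverse

A = [[2, 1], [1, 1]]
print(invert(A))  # [[1, -1], [-1, 2]]
```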
3.4 Subspaces and Linear Independence
Reference video: https://youtu.be/bFh4MM9sJek (http://youtu.be/HFq_-8B47xM)
Practice site: http://matrix.skku.ac.kr/knou-knowls/CLA-Week-4-Sec-3-4.html
In this section, we define a linear combination, a spanning set, a linear (in)dependence and a subspace of .
We will also learn how to solve the system of linear equations by using the fact
that solutions for a system of homogeneous linear equations form a subspace of .
* Note that R^n with the standard addition and scalar multiplication is also called a vector space over R, and its elements are called vectors.
Definition [Subspace]
Let W be a nonempty subset of R^n. Then W is called a subspace of R^n if W satisfies the following two conditions.
(1) u, v in W implies u + v in W (closed under addition) (2) u in W and k in R implies ku in W (closed under scalar multiplication)
Every subspace of R^n contains the zero vector.
{0} and R^n are subspaces of R^n, where 0 denotes the origin. They are called the trivial subspaces. ■
The first subset satisfies the two conditions for a subspace and hence is a subspace. On the other hand, the second subset does not satisfy the conditions, so it is not a subspace.
, but ■
All subspaces of R^2 are one of the following: 1. the zero subspace {0} 2. lines through the origin 3. R^2. All subspaces of R^3 are one of the following: 1. the zero subspace {0} 2. lines through the origin 3. planes through the origin 4. R^3. ■
Show that a subset is a subspace of.
For , , the following hold. (ⅰ) (ⅱ) Therefore, is a subspace of . ■
Let denote the set of all matrices over .
For , show that is a subspace of . (This is called a solution space or null space of .)
Clearly, so that and . Since for , , , we can obtain that and . This implies and . Therefore, is a subspace of . ■
Definition [Linear combination]
If a vector can be expressed as a sum of scalar multiples of given vectors, then it is called a linear combination of those vectors.
Let be vectors of . Can be a linear combination of and ?
The answer depends on whether there exist in such that
.
From this observation, we can obtain
□
One can easily show that the above system has no solution.
[1 0 0]
[0 1 0]
[0 0 1]
Since this system of linear equations has no solution, no such scalars exist. Consequently, the given vector is not a linear combination of the other vectors. ■
Show that the set of all linear combinations of is a subspace of .
Let , . Then there exist such that . Hence , and . This implies . Hence, is a subspace of . ■
In , we saw that for a subset ,
the set of all linear combinations of is a subspace of .
We say is the subspace of spanned by . In this case, we say spans , and S is a spanning set of . We denote it
or .
In particular, if every vector in can be expressed as a linear combination of , then spans . That is,
(i) Show that is a spanning set of . (ii) Show that is a spanning set of .
Definition [Column space and row space]
Let A be an m x n matrix. The columns of A span a subspace of R^m, called the column space of A and denoted by Col(A). Similarly, the row space of A is the subspace of R^n spanned by the rows of A, denoted by Row(A).
For
determine whether spans or not.
This is a question whether there exist , , such that a given vector is written as
.
(Using column vectors)
□
[1 0 1]
[0 1 1]
[0 0 0]
Since the RREF has a zero row, the corresponding linear system is not consistent for every right-hand side. Therefore the given set does not span the whole space. ■
Definition [Linearly independent and linearly dependent]
If the only scalars for which a linear combination of the vectors equals the zero vector are all zero,
then the vectors (or the subset) are called linearly independent. If they are not linearly independent, they are called linearly dependent.
If is linearly dependent, there exists at least one nonzero scalar
in such that
.
The unit vectors of
are linearly independent. This is because
.
Show that for , is linearly independent.
For any ,
Thus , and is linearly independent. ■
Show that if in are linearly independent, then are also linearly independent.
For any , . Since are linearly independent,
Therefore are linearly independent. ■
For
in , Show that is linearly dependent.
For any , if , then
□
[ 1 0 -1]
[ 0 1 1]
[ 0 0 0]
This means that the above equations reduce to two equations in three variables. Since there are more variables than equations, there are nontrivial solutions. One of them is given by , , . Therefore nonzero scalars exist, and the set is linearly dependent. ■
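Dependence can also be detected by computing the rank: vectors are linearly dependent exactly when the matrix having them as rows has rank less than the number of vectors. A sketch with a hypothetical dependent triple (here v3 = 2*v2 - v1):

```python
from fractions import Fraction

def rank(rows):
    # Gaussian elimination on copies of the vectors, in exact arithmetic.
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

v1, v2, v3 = [1, 2, 3], [4, 5, 6], [7, 8, 9]
print(rank([v1, v2, v3]))      # 2, which is less than 3
print(rank([v1, v2, v3]) < 3)  # True -> linearly dependent
```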
Theorem 3.4.1
For a set of vectors, the following hold. (1) The set is linearly dependent if and only if some element in it can be expressed as a linear combination of the other elements. (2) If the set contains the zero vector, then it is linearly dependent. (3) If a subset is linearly dependent, then the whole set is also linearly dependent. If the whole set is linearly independent, then every subset is also linearly independent.
(1) (⇒) If the set is linearly dependent, then there exist scalars, at least one of them nonzero, such that the corresponding linear combination is the zero vector.
Without loss of generality, if then,
so that can be expressed as a linear combination of the other vectors in
() Without loss of generality, we can write
so that
Hence, is linearly dependent since .
Proofs of the rest are left as an exercise. ■
In other words, a set is linearly independent when no vector in it can be written as a linear combination of the other vectors in the set.
In R^n, a linearly independent set contains at most n vectors.
Theorem 3.4.2 (For the proof, see Theorem 7.1.2)
In R^n, any set of more than n vectors is always linearly dependent.
For in , we can easily check that is linearly dependent from Theorem 3.4.2. ■
[Remark] Lines and planes (from the viewpoint of subspaces)
(1) Note that the span of a nonzero vector in R^n is a subspace containing the zero vector; it is a line through the origin. Adding a fixed vector to every point of this span yields a line through that vector and parallel to the original one; in other words, it is a translate of the span. (2) In general, the set obtained by adding a fixed vector to every element of a subspace of R^n is a translate of a subspace that passes through the origin.
3.5 Solution set and matrices
Reference video: https://youtu.be/E9HHrchqXus (http://youtu.be/daIxHJBHL_g )
Practice site: http://matrix.skku.ac.kr/knou-knowls/CLA-Week-4-Sec-3-5.html
In this section, we first state the relationship between invertibility of matrices and solutions to systems of linear equations, and then consider homogeneous systems.
Theorem 3.5.1 [Relation between an invertible matrix and its solution]
If an n x n matrix A is invertible and b is a vector in R^n, then the system Ax = b has the unique solution x = A^(-1)b.
The following system can be written as .
where . It is easy to show that is invertible,
and .
Thus the solution of the above system is given by
.
That is . □
x= (-1, 1, 0), x= (-1, 1, 0) ■
[Remark] The homogeneous linear system
can be written as Ax = 0.
The solution x = 0 is called the trivial solution, and any nonzero solution is called a nontrivial solution. Since a homogeneous linear system always has the trivial solution, there are two cases as follows.
(1) It has only the trivial solution. (2) It has infinitely many solutions (i.e., it has nontrivial solutions as well).
Theorem 3.5.2 [Nontrivial solutions of a homogeneous system]
A homogeneous system with m equations and n variables, where m < n (i.e., the number of variables is greater than the number of equations), has nontrivial solutions.
Since the existence of multiple solutions (provided that there is any solution at all) depends only on
the coefficient matrix and since a homogeneous system always has at least one solution (namely the trivial one),
multiple solutions for a linear system are possible only if the corresponding homogeneous system has multiple solutions.
But the homogeneous system has multiple solutions if and only if it has a non-trivial solution.
The homogeneous linear system
has the following augmented matrix and its RREF.
A=
[1 1 1 1 0]
[1 0 0 1 0]
[1 2 1 0 0]
RREF(A)=
[ 1 0 0 1 0]
[ 0 1 0 -1 0]
[ 0 0 1 1 0]
The corresponding system of equations is
Let (: a real number). Then the solution to (2) is
.
The solution is trivial if , and nontrivial if . ■
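The reduction above can be replayed with a small exact-arithmetic RREF routine in plain Python (a sketch, not the Sage code the book uses); the input rows below are those of the augmented matrix shown above, and the result reproduces the printed RREF.

```python
from fractions import Fraction

def rref(M):
    # Reduced row echelon form via Gauss-Jordan elimination.
    M = [[Fraction(x) for x in row] for row in M]
    lead = 0
    for r in range(len(M)):
        piv = None
        while lead < len(M[0]) and piv is None:
            piv = next((i for i in range(r, len(M)) if M[i][lead] != 0), None)
            if piv is None:
                lead += 1
        if piv is None:
            break
        M[r], M[piv] = M[piv], M[r]
        p = M[r][lead]
        M[r] = [x / p for x in M[r]]              # scale the pivot to 1
        for i in range(len(M)):                   # clear the pivot column
            if i != r:
                f = M[i][lead]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        lead += 1
    return M

# Augmented matrix of the homogeneous system from the example above.
A = [[1, 1, 1, 1, 0],
     [1, 0, 0, 1, 0],
     [1, 2, 1, 0, 0]]
for row in rref(A):
    print(row)
# [1, 0, 0, 1, 0]
# [0, 1, 0, -1, 0]
# [0, 0, 1, 1, 0]
```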
Definition [The associated homogeneous system of linear equations]
Given a linear system Ax = b, the system Ax = 0 is called the associated homogeneous system of linear equations.
Consider a system of linear equations.
The associated homogeneous linear system is as the following:
Since the system is too large to solve comfortably by hand, let us use Sage.
The RREF of the augmented matrix of the above system is as follows :
[ 1 0 0 4 2 0 0]
[ 0 1 0 0 0 0 0]
[ 0 0 1 2 0 0 0]
[ 0 0 0 0 0 1 1/3]
Thus the above system reduces to
, , , .
Note that and are free variables.
Let , . Then we have
, .
Consider the augmented matrix of RREF of its associated homogeneous linear system.
[1 0 0 4 2 0 0]
[0 1 0 0 0 0 0]
[0 0 1 2 0 0 0]
[0 0 0 0 0 1 0]
It is easy to see that the solution to this system is given by
, . ■
Comparing geometrically the solutions of the system with those of its associated homogeneous system,
the solution set of the original system is
the solution set of the homogeneous system
translated by the vector below.
We call this vector a particular solution; it can be obtained by setting the free variables to zero.
[Remark] Relation between the solution set of the linear system and that of the associated homogeneous linear system
If x0 is a particular solution to Ax = b and W is the solution space of Ax = 0, then for every w in W we have A(x0 + w) = Ax0 + Aw = b. Thus x0 + W = { x0 + w : w in W } is the solution set of Ax = b.
Geometrically, the solution set of Ax = b is the translation of the solution set of Ax = 0 by a particular solution.
Since it does not contain the zero vector (when b ≠ 0), it is not a subspace of R^n.
Theorem 3.5.3 [Invertible Matrix Theorem]
For an n x n matrix A, the following are equivalent. (1) The RREF of A is I_n. (2) A is a product of elementary matrices. (3) A is invertible. (4) x = 0 is the unique solution to Ax = 0. (5) Ax = b has a unique solution for any b. (6) The columns of A are linearly independent. (7) The rows of A are linearly independent.
[Remark] The vectors of the solution space of Ax = 0 are orthogonal to the rows of A.
Consider the homogeneous system Ax = 0 in n variables. If the system has m linear equations, then A is an m x n matrix, and the product Ax can be rewritten using inner products: each entry of Ax is the inner product of a row of A with x. Thus each of these inner products is 0 whenever x is a solution to Ax = 0. That is, the vectors in the solution space of Ax = 0 are all orthogonal to the row vectors of A.
Consider the system of linear equations:
, , .
It is easy to check that the given vector is a nontrivial solution of this system.
Let us verify that is orthogonal to row vectors of the coefficient matrix of the above system.
0
0
0
Thus is orthogonal to row vectors of the coefficient matrix .
[Remark] Line, plane, hyperplane
(1) A line in the xy-plane is the solution set of one linear equation in two variables.
(2) A plane in xyz-space is the solution set of one linear equation in three variables.
Note: the solution set of one linear equation in n variables forms a hyperplane in R^n. If the constant term is 0, it is a hyperplane passing through the origin. Such a hyperplane can be considered as the solution set of a·x = 0 for a nonzero vector a. This solution set is called the orthogonal complement of a (or perp of a), and a is called a normal vector of the hyperplane.
3.6 Special matrices
Reference video: https://youtu.be/FNRT0d_c9Pg (http://youtu.be/daIxHJBHL_g)
Practice site: http://matrix.skku.ac.kr/knou-knowls/CLA-Week-4-Sec-3-6.html
We saw various properties of matrix operations. In this section, we introduce special matrices and consider some of their crucial properties.
Diagonal matrix: a square matrix whose entries off the main diagonal are all 0.
A diagonal matrix with its main diagonal entries can be written as diag
diag
Identity matrix: the matrix with its main diagonal entries all 1’s, denoted by
Scalar matrix:
,
The following are all diagonal matrices. and are scalar matrices.
and are written as and
.
[ 2 0] [-3 0 0]
[ 0 -1] [ 0 -2 0]
[ 0 0 1] ■
Consider the following matrix.
If and ,
.
For a general matrix, multiplying by a diagonal matrix on the left multiplies each row by the corresponding diagonal entry, and multiplying by a diagonal matrix on the right multiplies each column by the corresponding diagonal entry.
, ,
In other words, the power of a diagonal matrix is the same as the diagonal matrix with the powers of the entries of the main diagonal. □
http://sage.skku.edu or http://mathlab.knou.ac.kr:8080
D^(-1)=
[ 1 0 0]
[ 0 -1/3 0]
[ 0 0 1/2]
D^5=
[ 1 0 0]
[ 0 -243 0]
[ 0 0 32] ■
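The diagonal-matrix arithmetic above reduces to entrywise arithmetic on the diagonal. A sketch in plain Python, using a diagonal (1, -3, 2) chosen to be consistent with the printed output:

```python
from fractions import Fraction

def diag_power(d, k):
    # For D = diag(d1, ..., dn): D^k = diag(d1^k, ..., dn^k).
    # For negative k this uses exact fractions; k = -1 gives the inverse.
    if k >= 0:
        return [x ** k for x in d]
    return [Fraction(1, x) ** (-k) for x in d]

d = [1, -3, 2]              # main diagonal entries
print(diag_power(d, 5))     # [1, -243, 32]
print(diag_power(d, -1))    # [1, -1/3, 1/2] as Fractions
```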
Definition [Symmetric and skew-symmetric matrices]
If a square matrix A satisfies A^T = A, then A is called a symmetric matrix. If A^T = -A, then A is called a skew-symmetric matrix.
In the following matrices, and are symmetric matrices and is a skew-symmetric matrix.
● http://matrix.skku.ac.kr/RPG_English/3-SO-Symmetric-M.html
If A is a square matrix, prove the following. (1) A + A^T is a symmetric matrix. (2) A - A^T is a skew-symmetric matrix. (1) Since (A + A^T)^T = A^T + A = A + A^T, the matrix A + A^T is symmetric. (2) Since (A - A^T)^T = A^T - A = -(A - A^T), the matrix A - A^T is skew-symmetric. ■
[Remark]
A given square matrix can be written uniquely as the sum of a symmetric matrix and a skew-symmetric matrix.
For any square matrix A, (A + A^T)/2 is a symmetric matrix, (A - A^T)/2 is a skew-symmetric matrix, and A is their sum. ■
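The decomposition in the remark can be computed directly; a sketch with a hypothetical 2 x 2 example:

```python
from fractions import Fraction

def transpose(A):
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

def sym_skew_parts(A):
    # A = S + K with S = (A + A^T)/2 symmetric and K = (A - A^T)/2 skew-symmetric.
    n = len(A)
    T = transpose(A)
    S = [[Fraction(A[i][j] + T[i][j], 2) for j in range(n)] for i in range(n)]
    K = [[Fraction(A[i][j] - T[i][j], 2) for j in range(n)] for i in range(n)]
    return S, K

A = [[1, 2], [4, 3]]
S, K = sym_skew_parts(A)
print(S)  # [[1, 3], [3, 3]], symmetric
print(K)  # [[0, -1], [1, 0]], skew-symmetric; S + K recovers A
```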
Upper triangular matrix: A square matrix whose entries under the main diagonal are all zeros
Lower triangular matrix: A square matrix whose entries above the main diagonal are all zeros
In general, triangular matrices are as follows.
Theorem 3.6.1 [Properties of triangular matrices]
Let A and B be lower triangular matrices of the same size. (1) AB is a lower triangular matrix. (2) If A is an invertible matrix, then A^(-1) is a lower triangular matrix. (3) If the main diagonal entries of A are all 1's, then the main diagonal entries of A^(-1) are all 1's.
Let A be a square matrix. If there exists a positive integer k such that A^k = O (such an A is called nilpotent), then I - A is invertible and (I - A)^(-1) = I + A + A^2 + ... + A^(k-1). This is because
(I - A)(I + A + A^2 + ... + A^(k-1)) = I - A^k = I. ■
● http://matrix.skku.ac.kr/LA-Lab/index.htm
● http://matrix.skku.ac.kr/knou-knowls/cla-sage-reference.htm
[Solution] Section 3-1 http://youtu.be/LaAAruKbGyc
Section 3-2 http://youtu.be/-MPszmMNvLE
Section 3-3 http://youtu.be/ceI80eXp6xU
Section 3-4 http://youtu.be/s7jxVvVAel4
Section 3-5 http://youtu.be/IygHFdWacds
Section 3-6 http://youtu.be/rYBsPkeVhQ0
Indicate whether the statement is true (T) or false (F). Justify your answer.
(a) If three nonzero vectors form a linearly independent set, then each vector in the set can be expressed as a linear combination of the other two.
False
(b) The set of all linear combinations of two vectors and in is a plane.
False
(c) If u cannot be expressed as a linear combination of and , then the three vectors are linearly independent.
False
(d) A set of vectors in that contains is linearly dependent.
True
(e) If {, , } is a linearly independent set, then so is the set {, , } for every nonzero scalar .
True
Note :
(a) If three vectors are linearly independent, none of them can be written as a linear combination of the other two.
(b) If the two vectors are linearly dependent, the set of their linear combinations is a line (or the origin), not a plane.
(c) If u = (1, 0), v = (0, 1), w = (0, 2), then u cannot be written as a linear combination of v and w; however, v and w are linearly dependent.
(e) If the only solution of the dependence relation for the original set is the trivial one, the same holds for the scaled set, so it is linearly independent.
When , confirm the following. .
i)
ii)
■
Sage) http://math1.skku.ac.kr/home/math2013/297/
■
When , confirm that and that .
,
but ■
Sage) http://math3.skku.ac.kr/home/pub/20
A=matrix(2, 2, [-2, 3, 2, -3])
B=matrix(2, 2, [-1, 3, 2, 0])
C=matrix(2, 2, [-4, -3, 0, -4])
print("AB=")
print(A*B)
print("AC=")
print(A*C)
AB=
[ 8 -6]
[-8 6]
AC=
[ 8 -6]
[-8 6] ■
When , compute the following.
∴ Answer is ■
Sage)
■
Show that is the inverse of . And confirm that .
====
∴=
=
=
==
∴= ■
If , show that .
∴ If , then . ■
Solved by 주영은, 김원경, Refinalized by 서승완, 이나을, Final OK by SGLee
Find the elementary matrix corresponding to each elementary row operation.
(1)
(2)
(3)
(Elementary matrix)
(1) : Interchange the 2nd and the 3rd rows on
(2) : Multiply the 2nd row by 2.
(3) : Add –2 times the 1st row to the 3rd row. ■
Double checked by Sage) http://math3.skku.ac.kr/home/pub/55 by 주영은
#elementary_matrix=matrix([[1,0,0], [0,1,0],[0,0,1]])
E1 = elementary_matrix(3, row1=1, row2=2)
E2 = elementary_matrix(3, row1=1, scale=2)
E3 = elementary_matrix(3, row1=2, row2=0, scale=-2)
print("E1 =")
print(E1)
print("E2 =")
print(E2)
print("E3 =")
print(E3)
E1 =
[1 0 0]
[0 0 1]
[0 1 0]
E2 =
[1 0 0]
[0 2 0]
[0 0 1]
E3 =
[ 1 0 0]
[ 0 1 0]
[-2 0 1]
Note) In Sage, indices start at 0. ■
Using elementary operations, find the inverse of the following matrix.
(1) (2)
(1) = → → =. =
(2) =
→ →
= . . ■
Let and be any matrix.
(1) What is and confirm how affects on .
(2) What is and confirm how affects on .
Let = ,
(1) = =
It acts on the 3rd row of .
(2) = =
It acts on the 1st column of . ■
Determine if is a subspace of .
Show 1) is closed under the addition.
2) is closed under the scalar multiplication.
1)
2)
Therefore, is not a subspace of . ■
Determine if is a subspace of .
Show 1) is closed under the addition.
2) is closed under the scalar multiplication.
1)
2)
Therefore, is a subspace of . ■
Find a vector equation and a parameterized equation of the subspace spanned by the following vectors.
(a) ,
(b) ,
(a) , , where , in ℝ.
(b) , , , , . ■
Give a solution by finding the inverse of the coefficient matrix of the system.
Set the coefficient matrix .
Use ERO to get :
Ans) = ■
Sage ) Find Inverse
[ 2/3 -5/3 4/3]
[ 0 -1 1]
[ 1 -5 4]
Sage ) Find the solution set
x= (5/3, 1, 4) x= (5/3, 1, 4)
Determine if the homogeneous system has a nontrivial solution.
Let = : Augmented matrix
=
: RREF()
(3, 0, -2, 1) is one of the solutions of the given homogeneous system of equations.
Therefore the system has a nontrivial solution. ■
Check if the following matrix is invertible. If so,
find its inverse by using a property of special matrices.
The matrix is a diagonal matrix.
Therefore .
Let , , .
, , .
=>
=>
∴ The inverse matrix of is
■
Sage)
[ 1/2 0 0] [ 0 -1/5 0] [ 0 0 1/3]
Find the product by using a property of special matrices.
, : diagonal matrices
1) = : , ,
was multiplied on the left.
2) = : ,
was multiplied on the right.
∴ The answer is . ■
Double checked by Sage)
http://math3.skku.ac.kr/home/pub/56 by. 주영은
A=matrix([[2,0,0], [0,-1/2,0], [0,0,-5]])
B=matrix([[2,4], [-4,2], [3,2]])
C=matrix([[2,0], [0,-1/2]])
print A*B*C
[ 8 -4]
[ 4 1/2]
[-30 5] : OK
http://math3.skku.ac.kr/home/pub/58 by 김원경 -(Use Diagonal)
A=diagonal_matrix([2,-1/2,-5])
B=matrix([[2,4], [-4,2], [3,2]])
C=diagonal_matrix([2,-1/2])
print A*B*C
[ 8 -4]
[ 4 1/2]
[-30 5] : OK ■
Determine so that is skew-symmetric matrix.
For the matrix to be skew-symmetric, its transpose must equal its negative, which gives the following conditions.
The answer is
. ■
If satisfies and ,
show that can be expressed as follows.
What is the value of ?
=>
■
Let be a square matrix. Explain why the following hold.
(1) If contains a row or a column consisting of 0's, is not invertible.
(2) If contains the same rows or columns, is not invertible.
(3) If contains a row or column which is a scalar multiple of another row or column of , then is not invertible.
(1) is not invertible. det=0
det
( contain a row or a column of all zeros)
is not invertible.
(2) If a matrix has two equal rows (or columns), subtracting one from the other produces a matrix with a row (or column) of all 0's. Since this operation does not change the determinant, the determinant is 0, so the matrix is not invertible.
is not invertible.
(3) If a matrix has a row (or column) that is a scalar multiple of another, subtracting that scalar multiple produces a matrix with a row (or column) of all 0's. Since this operation does not change the determinant, the determinant is 0, so the matrix is not invertible.
is not invertible. ■
Let be an square matrix. Discuss what condition is needed to have .
If there is an inverse matrix ,
So there must be an inverse matrix of the matrix . ■
Note: this may fail to hold if the matrix is not invertible.
Find matrices , and explain the relation with ERO.
■
Decide if the following 4 vectors are linearly independent.
, , ,
Ex)
Ans) are linearly dependent. ■
Checked by Sage
http://math1.skku.ac.kr/home/pub/2491
A=matrix([[4,2,6,4],[-5,-2,-3,-1],[2,1,3,5],[6,3,9,6]])
print A.rref()
[ 1 0 -3 0]
[ 0 1 9 0]
[ 0 0 0 1]
[ 0 0 0 0] ■
If and have a solution, prove that has a solution.
Let and be solutions of and respectively.
=> and
=>
=> is a solution of
Therefore, if both and have a solution, then has a solution. ■
Suppose is an invertible matrix of order .
If in is orthogonal to every row of , what is ?
Justify your answer.
For in is orthogonal to every row of ,
=0,=0=0 =0
Null()
is a solution of .
Prove that a necessary and sufficient condition for a diagonal matrix to be invertible is
that there is no zero entry in the main diagonal.
This holds because the determinant of a diagonal matrix is the product of its diagonal entries, which is nonzero if and only if every diagonal entry is nonzero. ■
If is invertible and symmetric, so is .
, and .
=> => => is symmetric. ■
Version 2
Mar. 11, 2016
About the Author
http://www.researchgate.net/profile/Sang_Gu_Lee
https://scholar.google.com/citations?user=FjOjyHIAAAAJ&hl=en&cstart=0&pagesize=20
http://orcid.org/0000-0002-7408-9648
http://www.scopus.com/authid/detail.uri?authorId=35292447100
http://matrix.skku.ac.kr/sglee/vita/LeeSG.htm