SKKU-LA-CH-3-SGLee
Chapter 3
Matrix and Matrix Algebra
3.1 Matrix operations
3.2 Inverse matrix
3.3 Elementary matrix
3.4 Subspaces and linear independence
3.5 Solution set of a linear system and matrix
3.6 Special matrices
*3.7 LU-decomposition
Matrices are widely used as a tool for transmitting digital sound and images over the internet, as well as for solving linear systems. We define the addition and product of two matrices.
These operations are tools for solving various linear systems. The matrix product is also an excellent tool for dealing with function composition.
In the previous chapter, we found the solution set of a linear system using Gaussian elimination.
In this chapter, we define the addition and scalar multiplication of matrices and introduce the algebraic properties of matrix operations.
These will be used to describe the relation between a solution set and a matrix. Then, using Gaussian elimination, we show how to find the inverse matrix.
Furthermore, we investigate concepts such as linear independence and subspaces, which are necessary for understanding the structure of a linear system.
Finally, we introduce some interesting special matrices.
3.1 Matrix operations
Reference video: https://youtu.be/C56kVi-AZW8 (http://youtu.be/DmtMvQR7cwA)
Practice site: http://matrix.skku.ac.kr/knou-knowls/CLA-Week-3-Sec-3-1.html
This chapter introduces the definition of the addition and scalar multiplication of matrices and the algebraic properties of the matrix operations. Although many of the properties are identical to those of the operations on real numbers, some properties are different. Matrix operations generalize the operations on real numbers.
Definition [Equality of Matrices]
Two matrices are equal if they have the same size and all of their corresponding entries are equal.
To define equality, the two matrices must have the same size.
For what values of the unknown entries are the two given matrices equal?
For the two matrices to be equal, each pair of corresponding entries must be equal; solving the resulting equations gives the values of the unknowns. ■
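The equality test above can be sketched in plain Python (a hedged illustration; the helper name and the sample entries are assumptions, not from the text):

```python
# A minimal sketch: matrix equality checked entrywise.
def mat_equal(A, B):
    # Matrices are equal only if sizes match and every corresponding entry agrees.
    if len(A) != len(B) or any(len(ra) != len(rb) for ra, rb in zip(A, B)):
        return False
    return all(a == b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

print(mat_equal([[1, 2], [3, 4]], [[1, 2], [3, 4]]))  # True
print(mat_equal([[1, 2], [3, 4]], [[1, 2]]))          # False: different sizes
```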
Definition [Addition and scalar multiplication of matrices]
Given two matrices of the same size, their sum is the matrix obtained by adding corresponding entries, and a scalar multiple is the matrix obtained by multiplying every entry by the scalar.
To define addition, the two matrices must have the same size.
For the given matrices, compute the sum, the difference, and a scalar multiple; each is obtained entrywise. □
● http://matrix.skku.ac.kr/RPG_English/3-MA-operation.html
● http://matrix.skku.ac.kr/RPG_English/3-MA-operation-1.html
http://sage.skku.edu or http://mathlab.knou.ac.kr:8080
[ 1  3  0]   [ 2  4 -8]   [-1 -1]
[-3  4  4]   [-4  2  6]   [-2 -2] ■
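The definition above can be sketched in plain Python (a hedged illustration; the helper names and the 2×3 sample matrices are assumptions):

```python
# Entrywise addition and scalar multiplication, as in the definition above.
def mat_add(A, B):
    # Addition requires equal sizes; the sum is taken entry by entry.
    assert len(A) == len(B) and all(len(ra) == len(rb) for ra, rb in zip(A, B))
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mul(k, A):
    # Every entry is multiplied by the scalar k.
    return [[k * a for a in row] for row in A]

A = [[1, 3, 0], [-3, 4, 4]]
B = [[2, 4, -8], [-4, 2, 6]]
print(mat_add(A, B))      # [[3, 7, -8], [-7, 6, 10]]
print(scalar_mul(2, A))   # [[2, 6, 0], [-6, 8, 8]]
```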
Definition [Matrix product]
Given an m×n matrix A and an n×p matrix B, the product AB is the m×p matrix whose (i, j)-entry is the inner product of the i-th row of A with the j-th column of B.
For two matrices A and B to be compatible for multiplication, the number of columns of A must equal the number of rows of B. The resulting matrix AB has size (number of rows of A) × (number of columns of B).
[Remark] Meaning of matrix product
The (i, j)-entry of AB is the inner product of the i-th row of A with the j-th column of B.
[King Sejong's 'ㄱ' rule] The row-then-column pattern of the product traces the shape of the Korean letter 'ㄱ': move across a row of A, then down a column of B.
● http://matrix.skku.ac.kr/RPG_English/3-MA-operation-1-multiply.html
Using the matrix product, one can express a linear system concisely. Consider a linear system and let A, x, and b be the coefficient matrix, the unknown vector, and the constant vector, respectively.
Then we can express the linear system as the matrix equation Ax = b.
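The matrix-vector product behind Ax = b can be sketched as follows (a hedged illustration; the 2×2 system is a made-up example):

```python
# A linear system Ax = b as a matrix-vector product:
# (Ax)_i is the inner product of the i-th row of A with x.
def mat_vec(A, x):
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

A = [[1, 2], [3, 4]]   # coefficient matrix (assumed example)
x = [1, 2]             # a candidate solution vector
print(mat_vec(A, x))   # [5, 11], so x solves Ax = b with b = [5, 11]
```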
Theorem 3.1.1
Let A, B, C be matrices and a, b scalars for which the indicated operations are defined. Then:
(1) A + B = B + A
(2) (A + B) + C = A + (B + C)
(3) A(BC) = (AB)C
(4) A(B + C) = AB + AC
(5) (A + B)C = AC + BC
(6) a(B + C) = aB + aC
(7) (a + b)C = aC + bC
(8) (ab)C = a(bC)
(9) a(BC) = (aB)C = B(aC)
The proofs of the above facts are easy, and readers are encouraged to write them out.
Check the associative law of the matrix product: for given matrices A, B, C, compute (AB)C and A(BC) and confirm that they agree. ■
The properties of matrix operations are similar to the well-known properties of operations on real numbers, with one notable exception: for matrices A and B, we do not have AB = BA in general.
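The failure of commutativity is easy to demonstrate (a sketch with small made-up matrices):

```python
# Demonstrating AB != BA.
def mat_mul(A, B):
    # (AB)_{ij} = inner product of row i of A with column j of B.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(mat_mul(A, B))  # [[2, 1], [4, 3]]
print(mat_mul(B, A))  # [[3, 4], [1, 2]]  -> AB != BA
```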
Suppose that we are given the following matrices A and B. Then AB ≠ BA.
[Remark] Computer simulation
[matrix product] (The commutative law does not hold.) http://www.geogebratube.org/student/m12831
Definition [Zero matrix]
A zero matrix is a matrix all of whose entries are 0; it is denoted by O (or O_{m×n} when the size is emphasized).
Theorem 3.1.2
For any matrix A and zero matrices O of appropriate sizes:
(1) A + O = O + A = A
(2) A − A = O
(3) O − A = −A
(4) AO = O and OA = O
Note: Although AB = O, it is possible to have A ≠ O and B ≠ O. Similarly, although AB = AC and A ≠ O, it is possible to have B ≠ C.
For example, there exist nonzero matrices A and B with AB = O, and matrices with AB = AC even though B ≠ C.
We should first define scalar matrices.
Definition [Identity matrix]
A scalar matrix of order n whose main-diagonal entries are all 1 is called the identity matrix of order n and is denoted by Iₙ.
Let A be an m×n matrix and let I be the identity matrix of the appropriate order. It is easy to see that AIₙ = A = IₘA.
For the matrix A below, AI = A and IA = A.
http://sage.skku.edu or http://mathlab.knou.ac.kr:8080
[ 4 -2 3]
[ 5 0 2]
[ 4 -2 3]
[ 5 0 2]
[0 0]
[0 0] ■
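The note above, that a product of nonzero matrices can be the zero matrix, can be checked directly (a sketch with made-up matrices):

```python
# Nonzero matrices whose product is the zero matrix.
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 1], [1, 1]]
B = [[1, -1], [-1, 1]]
print(mat_mul(A, B))  # [[0, 0], [0, 0]] although A != O and B != O
```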
Definition [Power of a matrix]
For a square matrix A, define A⁰ = I and Aⁿ = AAⁿ⁻¹ for n ≥ 1, so that Aⁿ is the product of n copies of A.
Theorem 3.1.3
If A is a square matrix and r, s are nonnegative integers, then AʳAˢ = Aʳ⁺ˢ and (Aʳ)ˢ = Aʳˢ.
For the matrix A below, find A², A³, and A⁰, and confirm that A²A³ = A⁵.
http://sage.skku.edu or http://mathlab.knou.ac.kr:8080
[ 6 -8]
[ 20 -10]
[-16 -12]
[ 30 -40]
[1 0]
[0 1]
True ■
In the set of real numbers, we have (a + b)² = a² + 2ab + b².
However, the commutative law fails under the matrix product, and thus we only have
(A + B)² = A² + AB + BA + B².
When AB = BA, we have (A + B)² = A² + 2AB + B².
Definition [Transpose matrix]
For an m×n matrix A, the transpose Aᵀ is the n×m matrix whose (i, j)-entry is the (j, i)-entry of A.
The transpose Aᵀ of A is obtained by interchanging the rows and columns of A.
Find the transposes of the following matrices. □
[ 1  4]   [ 5 -3  2]   [3]
[-2  5]   [ 4  2  1]   [0]
[ 3  0]                [1] ■
Theorem 3.1.4
Let A and B be matrices of compatible sizes and k a scalar. Then:
(1) (Aᵀ)ᵀ = A
(2) (A + B)ᵀ = Aᵀ + Bᵀ
(3) (AB)ᵀ = BᵀAᵀ
(4) (kA)ᵀ = kAᵀ
Let A and B be given. Show that (3) of Theorem 3.1.4 is true.
Compute AB and then (AB)ᵀ; also compute BᵀAᵀ. The two results agree, so (AB)ᵀ = BᵀAᵀ. ■
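The transpose rule (AB)ᵀ = BᵀAᵀ can be checked numerically (a sketch; the 2×2 matrices are made-up examples):

```python
# Checking (AB)^T = B^T A^T.
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
lhs = transpose(mat_mul(A, B))
rhs = mat_mul(transpose(B), transpose(A))
print(lhs == rhs)  # True
```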
Definition [Trace]
The trace of a square matrix A, denoted tr(A), is the sum of the entries on the main diagonal of A.
|
Theorem 3.1.5
If A and B are square matrices of the same order and k is a scalar, then:
(1) tr(A + B) = tr(A) + tr(B)
(2) tr(A − B) = tr(A) − tr(B)
(3) tr(kA) = k tr(A)
(4) tr(Aᵀ) = tr(A)
(5) tr(AB) = tr(BA)
We prove item (5) only and leave the rest as an exercise: the (i, i)-entry of AB is Σⱼ aᵢⱼbⱼᵢ, so tr(AB) = Σᵢ Σⱼ aᵢⱼbⱼᵢ = Σⱼ Σᵢ bⱼᵢaᵢⱼ = tr(BA). ■
Let A and B be given. Show that (5) of Theorem 3.1.5 is true.
37
37 ■
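The identity tr(AB) = tr(BA) can be checked on any small example (a sketch; the matrices below are assumptions, not the ones from the text):

```python
# Checking tr(AB) = tr(BA).
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def trace(M):
    # Sum of the main-diagonal entries.
    return sum(M[i][i] for i in range(len(M)))

A = [[1, 2], [3, 4]]
B = [[0, 5], [6, 7]]
print(trace(mat_mul(A, B)), trace(mat_mul(B, A)))  # 55 55
```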
3.2 Inverse matrix
Reference video: https://youtu.be/naFiYy4RTxA (http://youtu.be/GCKM2VlU7bw)
Practice site: http://matrix.skku.ac.kr/knou-knowls/CLA-Week-3-Sec-3-2.html
In this section, we introduce the inverse of a square matrix, which plays the role of the multiplicative inverse of a real number.
We investigate the properties of inverse matrices.
You will see that although most properties of the real-number inverse carry over, some do not hold for the matrix inverse.
Definition
A square matrix A is said to be invertible (or nonsingular) if there exists a matrix B such that AB = BA = I.
This matrix B is called the inverse of A and is denoted by A⁻¹. A square matrix that is not invertible is called singular.
|
Let A be the matrix below. Note that the third row of A consists entirely of zeros. Thus, for any matrix B, the third row of AB is the zero row. Therefore there does not exist a matrix B such that AB = I; that is, A is singular.
False ■
Theorem 3.2.1
If a matrix is invertible, its inverse is unique.
Proof. Suppose that B and C are inverses of A. Then, since AB = BA = I and AC = CA = I, we have
B = BI = B(AC) = (BA)C = IC = C.
Thus the inverse of A is unique. ■
A necessary and sufficient condition for the 2×2 matrix A = [a b; c d] to be invertible is that ad − bc ≠ 0. Hence one has
A⁻¹ = (1/(ad − bc)) [d −b; −c a].
It is straightforward to check that AA⁻¹ = A⁻¹A = I. ■
Theorem 3.2.2
If A and B are invertible matrices of the same order and k is a nonzero scalar, then:
(1) (A⁻¹)⁻¹ = A
(2) (AB)⁻¹ = B⁻¹A⁻¹
(3) (kA)⁻¹ = (1/k)A⁻¹
(4) (Aᵀ)⁻¹ = (A⁻¹)ᵀ
Proof. (2) (AB)(B⁻¹A⁻¹) = A(BB⁻¹)A⁻¹ = AIA⁻¹ = AA⁻¹ = I, and similarly (B⁻¹A⁻¹)(AB) = I.
(3)~(4) Just check that the corresponding products of matrices are the identity matrix. ■
Theorem 3.2.3
If A is invertible, then Aⁿ is invertible for every positive integer n, and (Aⁿ)⁻¹ = (A⁻¹)ⁿ.
Proof. Aⁿ(A⁻¹)ⁿ = A⋯A(A⁻¹⋯A⁻¹) = I, since the innermost factors cancel in turn. ■
Let A and B be given invertible matrices. Check that (AB)⁻¹ = B⁻¹A⁻¹.
Compute A⁻¹ and B⁻¹, then compute (AB)⁻¹ and B⁻¹A⁻¹; the two results agree. ■
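Theorem 3.2.2(2) can be checked with exact rational arithmetic (a sketch; the 2×2 matrices and the helper names are assumptions):

```python
from fractions import Fraction

# 2x2 inverse from the ad - bc formula, then a check of (AB)^-1 = B^-1 A^-1.
def inv2(M):
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    assert det != 0, "matrix is singular"
    return [[d / det, -b / det], [-c / det, a / det]]

def mat_mul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)] for row in A]

A = [[2, 1], [1, 1]]
B = [[1, 2], [0, 1]]
lhs = inv2(mat_mul(A, B))
rhs = mat_mul(inv2(B), inv2(A))
print(lhs == rhs)  # True
```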
3.3 Elementary matrices
Reference video: https://youtu.be/pcnFDa8K8ZY (http://youtu.be/GCKM2VlU7bw)
Practice site: http://matrix.skku.ac.kr/knou-knowls/CLA-Week-3-Sec-3-3.html
In the previous section, we defined the inverse of a square matrix.
In this section, we discuss how to find the inverse of a square matrix by using elementary row operations and elementary matrices.
Definition
An n×n matrix obtained by performing a single elementary row operation on the identity matrix Iₙ is called an elementary matrix.
|
Listed below are the three types of elementary matrices (Type 1, 2, 3) and the operations that produce them.
: Interchange the 2nd and the 3rd rows.
: Multiply the 2nd row by 3.
: Add 2 times the 1st row to the 2nd row.
[1 0 0 0] [ 1 0 0 0] [1 0 0 7]
[0 0 1 0] [ 0 1 0 0] [0 1 0 0]
[0 1 0 0] [ 0 0 -3 0] [0 0 1 0]
[0 0 0 1] [ 0 0 0 1] [0 0 0 1] ■
[Property of elementary matrices] Multiplying any matrix A on the left by an elementary matrix produces the matrix that results when the corresponding row operation is performed on A.
[Type 1]
[Type 2]
[Type 3]
For example, left-multiplying the matrix A below by elementary matrices of Types 1, 2, and 3 performs the corresponding row operations:
[1 2 3] [1 2 3] [1 2 3]
[0 1 3] [3 5 7] [3 3 3]
[1 1 1] [0 1 3] [0 1 3] ■
[Remark] The inverse of an elementary matrix is elementary.
The inverse of each elementary matrix undoes its row operation:
[Type 1] The inverse of a row-interchange matrix is the matrix itself.
[Type 2] The inverse of a row-scaling matrix scales by the reciprocal.
[Type 3] The inverse of a row-addition matrix adds the negative of the multiple.
[1 0 0] [ 1 0 0] [ 1 0 0]
[0 0 1] [ 0 1/3 0] [ 0 1 0]
[0 1 0] [ 0 0 1] [ 0 -4 1]
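The left-multiplication property can be verified directly (a sketch; the 3×3 matrix and the swap chosen here are made-up examples):

```python
# Left-multiplying by an elementary matrix performs the row operation;
# E below interchanges rows 2 and 3 (indices 1 and 2) of I_3.
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

E = identity(3)
E[1], E[2] = E[2], E[1]          # elementary matrix of Type 1

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
swapped = [A[0], A[2], A[1]]     # the row operation applied directly to A
print(mat_mul(E, A) == swapped)  # True
```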
Finding the inverse of an invertible matrix
We now investigate how to find the inverse of an invertible matrix using elementary matrices.
First, consider equivalent statements for an invertible matrix (the proof will be treated in Chapter 7).
Theorem 3.3.1 [Equivalent statements]
For any n×n matrix A, the following are equivalent.
(1) A is invertible.
(2) Ax = 0 has only the trivial solution.
(3) The RREF of A is Iₙ.
(4) A is a product of elementary matrices.
Theorem 3.3.2 [Computation of an inverse]
If A is invertible, the sequence of elementary row operations that reduces A to I transforms I into A⁻¹.
[Remark]
Finding an inverse using the Gauss-Jordan elimination.
[Step 1] For a given n×n matrix A, form the augmented matrix [A | I].
[Step 2] Compute the RREF [R | B] of [A | I].
[Step 3] Then the following hold.
(ⅰ) If R = I, then A is invertible and A⁻¹ = B.
(ⅱ) If R ≠ I, then R has a zero row and A is not invertible.
|
Find the inverse of the matrix A below.
Form the augmented matrix [A | I]; its RREF is given as follows. Since the left block is the identity,
∴ the right block is A⁻¹. □
Find the inverse of
Since |
|
|
|
Find the inverse of
● http://matrix.skku.ac.kr/RPG_English/3-MA-Inverse_by_RREF.html
[ 1 0 0 | 8/15 -19/15 2/15]
[ 0 1 0 | 1/15 -23/15 4/15]
[ 0 0 1 | 4/15 -2/15 1/15]
We can extract the inverse of A by slicing the above augmented matrix.
Aug[:, 3:6]
[ 8/15 -19/15 2/15]
[ 1/15 -23/15 4/15]
[ 4/15 -2/15 1/15]
Thus A⁻¹ is the matrix above. ■
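The Gauss-Jordan procedure of the Remark above can be sketched in plain Python with exact rationals (a hedged illustration; the function name and the 2×2 test matrix are assumptions):

```python
from fractions import Fraction

# Gauss-Jordan inversion: reduce [A | I] to [I | A^-1].
def inverse(A):
    n = len(A)
    # Step 1: form the augmented matrix [A | I] over the rationals.
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    # Step 2: reduce the left block to RREF.
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            return None          # zero pivot column: A is not invertible
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]           # make the pivot 1
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    # Step 3: the right block is A^-1.
    return [row[n:] for row in M]

A = [[2, 1], [1, 1]]
print(inverse(A))  # [[1, -1], [-1, 2]] (as Fractions)
```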
3.4 Subspaces and Linear Independence
Reference video: https://youtu.be/bFh4MM9sJek, (http://youtu.be/HFq_-8B47xM)
Practice site: http://matrix.skku.ac.kr/knou-knowls/CLA-Week-4-Sec-3-4.html
In this section, we define linear combinations, spanning sets, linear (in)dependence, and subspaces of ℝⁿ.
We will also learn how to solve a system of linear equations by using the fact
that the solutions of a homogeneous linear system form a subspace of ℝⁿ.
* Note that ℝⁿ with the standard addition and scalar multiplication is also called a vector space over ℝ, and its elements are called vectors.
Definition [Subspace]
Let W be a nonempty subset of ℝⁿ. W is called a subspace of ℝⁿ if it is closed under addition and scalar multiplication, that is,
(1) u, v ∈ W ⇒ u + v ∈ W
(2) u ∈ W, k ∈ ℝ ⇒ ku ∈ W
|
All subspaces of ℝⁿ contain the zero vector.
All subspaces of ℝ²:
1. the zero subspace {0}
2. lines through the origin
3. ℝ² itself
All subspaces of ℝ³:
1. the zero subspace {0}
2. lines through the origin
3. planes through the origin
4. ℝ³ itself
Show that a given subset W of ℝⁿ is a subspace by verifying that the following hold.
(ⅰ) W is closed under addition.
(ⅱ) W is closed under scalar multiplication.
Therefore, W is a subspace of ℝⁿ.
Let M_{m×n} denote the set of all m×n matrices over ℝ.
For is a subspace of
we can obtain that
This implies Therefore, |
|
|
|
Definition [Linear combination]
If v = c₁v₁ + c₂v₂ + ⋯ + c_kv_k for some scalars c₁, c₂, …, c_k, then v is called a linear combination of v₁, v₂, …, v_k.
|
Let v₁ and v₂ be vectors of ℝ³. Can a given vector v be a linear combination of v₁ and v₂?
The answer depends on whether there exist scalars c₁, c₂ in ℝ such that v = c₁v₁ + c₂v₂.
From this observation, we obtain a system of linear equations in c₁ and c₂. □
One can easily show that the above system has no solution.
[1 0 0]
[0 1 0]
[0 0 1]
Since this system of linear equations has no solution, no such scalars exist. Consequently, v is not a linear combination of v₁ and v₂. ■
Show that the set W of all linear combinations of given vectors is a subspace of ℝⁿ.
Sums and scalar multiples of linear combinations are again linear combinations, so W is closed under both operations. Hence W is a subspace. ■
In the example above, we saw that for a subset S of ℝⁿ, the set W of all linear combinations of the vectors in S is a subspace of ℝⁿ.
We say W is the subspace of ℝⁿ spanned by S. In this case, we also say S spans W and S is a spanning set of W. We write W = span S or W = ⟨S⟩.
In particular, if every vector in ℝⁿ can be expressed as a linear combination of the vectors in S, then S spans ℝⁿ.
Definition [Column space and row space]
Let A be an m×n matrix. The column space of A is the subspace of ℝᵐ spanned by the column vectors of A.
Similarly, the row space of A is the subspace of ℝⁿ spanned by the row vectors of A.
|
For the given vectors, determine whether they span ℝ³ or not.
This is the question of whether, for every vector b in ℝ³, there exist scalars c₁, c₂, c₃ such that b is written as c₁v₁ + c₂v₂ + c₃v₃.
(Using column vectors) □
[1 0 1]
[0 1 1]
[0 0 0]
The zero row means that the system is inconsistent for some right-hand-side vectors. Therefore the given vectors do not span ℝ³. ■
Definition [Linearly independent and linearly dependent]
If the only scalars satisfying c₁v₁ + c₂v₂ + ⋯ + c_kv_k = 0 are c₁ = c₂ = ⋯ = c_k = 0, then the set {v₁, v₂, …, v_k} is called linearly independent.
If there is a nontrivial choice of scalars, the set is called linearly dependent.
|
If {v₁, …, v_k} is linearly dependent, there exists at least one nonzero scalar cᵢ in ℝ such that c₁v₁ + ⋯ + c_kv_k = 0.
The unit vectors e₁, …, eₙ of ℝⁿ are linearly independent. This is because c₁e₁ + ⋯ + cₙeₙ = (c₁, …, cₙ) = 0 forces every cᵢ = 0.
Show that for
Thus |
|
Show that if are also linearly independent.
Since Therefore |
|
|
|
For the three given vectors in ℝ³, show that the set is linearly dependent.
For scalars c₁, c₂, c₃, set c₁v₁ + c₂v₂ + c₃v₃ = 0; this gives a homogeneous linear system in c₁, c₂, c₃. □
[ 1 0 -1]
[ 0 1 1]
[ 0 0 0]
This means that the above equations reduce to two equations in three variables. Since there are more variables than equations, nontrivial solutions exist. From the RREF, one of them is c₁ = 1, c₂ = −1, c₃ = 1. Therefore nonzero scalars exist, and the set is linearly dependent. ■
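Linear (in)dependence can be tested mechanically by row reduction: the set is independent exactly when the rank equals the number of vectors. A sketch with made-up vectors (the helper name `rank` is an assumption):

```python
from fractions import Fraction

# Rank via Gaussian elimination over the rationals.
def rank(rows):
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

vs = [[1, 2, 3], [2, 1, 0], [3, 3, 3]]   # v3 = v1 + v2, so dependent
print(rank(vs) < len(vs))                 # True -> linearly dependent
```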
Theorem 3.4.1
For a set S = {v₁, …, v_k} of vectors in ℝⁿ:
(1) A set S with k ≥ 2 is linearly dependent if and only if at least one vector in S can be expressed as a linear combination of the other vectors in S.
(2) If S contains the zero vector, then S is linearly dependent.
(3) If a subset of S is linearly dependent, then S is linearly dependent. If S is linearly independent, then so is every subset of S.
(1) (⇒) If S is linearly dependent, then there exist scalars c₁, …, c_k such that c₁v₁ + ⋯ + c_kv_k = 0,
where at least one cᵢ is nonzero.
Without loss of generality, if c₁ ≠ 0, then v₁ = −(c₂/c₁)v₂ − ⋯ − (c_k/c₁)v_k,
so v₁ can be expressed as a linear combination of the other vectors in S.
(⇐) Without loss of generality, we can write v₁ = c₂v₂ + ⋯ + c_kv_k,
so that (−1)v₁ + c₂v₂ + ⋯ + c_kv_k = 0.
Hence S is linearly dependent, since −1 ≠ 0.
Proofs of the rest are left as an exercise. ■
In other words, a set S is linearly independent exactly when no vector in S can be written as a linear combination of the other vectors in S.
In ℝⁿ, a linearly independent set contains at most n vectors.
Theorem 3.4.2 (For a proof, see Theorem 7.1.2)
In ℝⁿ, any set of more than n vectors is linearly dependent.
[Remark] Lines and planes (from the viewpoint of subspaces)
(1) Note that the span of a nonzero vector v is the line through the origin in the direction of v.
(2) In general, if u and v are linearly independent, then span{u, v} is the plane through the origin containing u and v.
|
3.5 Solution set and matrices
Reference video: https://youtu.be/E9HHrchqXus (http://youtu.be/daIxHJBHL_g )
Practice site: http://matrix.skku.ac.kr/knou-knowls/CLA-Week-4-Sec-3-5.html
In this section, we first state the relationship between invertibility of matrices and solutions to systems of linear equations, and then consider homogeneous systems.
Theorem 3.5.1 [Relation between an invertible matrix and its solution]
If an n×n matrix A is invertible, then for every b in ℝⁿ the system Ax = b has the unique solution x = A⁻¹b.
The following system can be written as Ax = b, where A is the coefficient matrix. It is easy to show that
A is invertible, so the solution of the above system is given by
x = A⁻¹b.
That is, x = (−1, 1, 0). □
x= (-1, 1, 0), x= (-1, 1, 0) ■
[Remark]
The homogeneous linear system
A homogeneous linear system can be written as Ax = 0.
The vector x = 0 is always a solution, called the trivial solution, so exactly one of the following holds.
(1) It has only the trivial solution.
(2) It has infinitely many solutions (i.e., it has nontrivial solutions as well).
|
Theorem 3.5.2 [Nontrivial solution of a homogeneous system]
A homogeneous system with fewer equations than variables (i.e., the number of variables is greater than that of equations) has nontrivial solutions.
Since the existence of multiple solutions (provided that there is any solution at all) depends only on
the coefficient matrix and since a homogeneous system always has at least one solution (namely the trivial one),
multiple solutions for a linear system are possible only if the corresponding homogeneous system has multiple solutions.
But the homogeneous system has multiple solutions if and only if it has a non-trivial solution.
The homogeneous linear system
has the following augmented matrix and its RREF.
A=
[1 1 1 1 0]
[1 0 0 1 0]
[1 2 1 0 0]
RREF(A)=
[ 1 0 0 1 0]
[ 0 1 0 -1 0]
[ 0 0 1 1 0]
The corresponding system of equations is
x₁ + x₄ = 0, x₂ − x₄ = 0, x₃ + x₄ = 0.
Let x₄ = t (t a real number). Then the solution to the system is x = (−t, t, −t, t).
The solution is trivial if t = 0, and nontrivial if t ≠ 0. ■
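The parametrized solution above can be verified against the coefficient matrix from the worked example:

```python
# Verifying that x = (-t, t, -t, t) solves the homogeneous system
# with the coefficient matrix A from the example, for several values of t.
A = [[1, 1, 1, 1], [1, 0, 0, 1], [1, 2, 1, 0]]

def mat_vec(A, x):
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

for t in (0, 1, -3):
    x = [-t, t, -t, t]
    print(mat_vec(A, x))  # [0, 0, 0] each time
```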
Definition [The associated homogeneous system of linear equations]
Given a linear system Ax = b, the homogeneous system Ax = 0 with the same coefficient matrix is called its associated homogeneous system.
|
Consider a system of linear equations.
The associated homogeneous linear system is as follows:
Since the system is large, let us use Sage.
The RREF of the augmented matrix of the above system is as follows :
[ 1 0 0 4 2 0 0]
[ 0 1 0 0 0 0 0]
[ 0 0 1 2 0 0 0]
[ 0 0 0 0 0 1 1/3]
Thus the above system reduces to
x₁ + 4x₄ + 2x₅ = 0, x₂ = 0, x₃ + 2x₄ = 0, x₆ = 1/3.
Note that x₄ and x₅ are free variables.
Let x₄ = s, x₅ = t. Then we have
x = (−4s − 2t, 0, −2s, s, t, 1/3), s, t ∈ ℝ.
Consider the augmented matrix of RREF of its associated homogeneous linear system.
[1 0 0 4 2 0 0]
[0 1 0 0 0 0 0]
[0 0 1 2 0 0 0]
[0 0 0 0 0 1 0]
It is easy to see that the solution to this system is given by
x = (−4s − 2t, 0, −2s, s, t, 0), s, t ∈ ℝ. ■
When the solutions to a system are compared geometrically with those of its associated homogeneous system,
the solution set of the original system is the solution set of the homogeneous system
translated by the vector below.
We call this vector a particular solution; it can be obtained by setting the free variables to zero (s = t = 0), giving x₀ = (0, 0, 0, 0, 0, 1/3).
[Remark]
Relation between the solution set of the linear system and that of the associated homogeneous linear system.
If x₀ is a particular solution of Ax = b and W is the solution set of Ax = 0, then the solution set of Ax = b is the translate
x₀ + W = { x₀ + w : w ∈ W }.
Geometrically, the solution set x₀ + W of Ax = b is the translate obtained when the particular solution x₀ is added to the solution set W of Ax = 0.
Since x₀ + W does not contain the zero vector (when b ≠ 0), it is not a subspace of ℝⁿ.
.
Theorem |
3.5.3 [Invertible Matrix Theorem] |
For an (1) RREF (2) (3) (4) (5) (6) The columns of (7) The rows of |
[Remark]
The vectors of the solution space of Ax = 0
Let us think of the homogeneous system Ax = 0. Each equation states that the inner product of a row vector of A with x equals zero.
Thus every solution vector x of Ax = 0 is orthogonal to every row vector of A.
|
Consider the system of linear equations below.
It is easy to check that the given vector x is a nontrivial solution of this system.
Let us verify that x is orthogonal to the row vectors of the coefficient matrix A of the above system.
0
0
0
Thus x is orthogonal to the row vectors of the coefficient matrix A.
[Remark] Line, Plane, Hyperplane
(1) A line in ℝ² is the solution set of a single linear equation in two variables.
(2) A plane in ℝ³ is the solution set of a single linear equation in three variables.
Note: The solution set of a single linear equation in n variables is a hyperplane in ℝⁿ.
3.6 Special matrices
Reference video: https://youtu.be/FNRT0d_c9Pg (http://youtu.be/daIxHJBHL_g)
Practice site: http://matrix.skku.ac.kr/knou-knowls/CLA-Week-4-Sec-3-6.html
We saw various properties of matrix operations. In this section, we introduce special matrices and consider some of their crucial properties.
Diagonal matrix: a square matrix whose entries off the main diagonal are all 0.
A diagonal matrix with main-diagonal entries d₁, d₂, …, dₙ can be written as
diag(d₁, d₂, …, dₙ).
Identity matrix: the diagonal matrix whose main-diagonal entries are all 1's, denoted by Iₙ.
Scalar matrix: kIₙ, a diagonal matrix whose diagonal entries are all equal.
The following are all diagonal matrices. Among them, those whose diagonal entries are all equal are scalar matrices, and the identity matrices are written as I₂ and I₃.
[ 2 0] [-3 0 0]
[ 0 -1] [ 0 -2 0]
[ 0 0 1] ■
Consider a diagonal matrix D.
For a general matrix A, DA is obtained by multiplying each row of A by the corresponding diagonal entry of D,
and AD is obtained by multiplying each column of A by the corresponding diagonal entry of D.
Furthermore, the following hold:
Dᵏ = diag(d₁ᵏ, …, dₙᵏ), and, when every dᵢ ≠ 0, D⁻¹ = diag(1/d₁, …, 1/dₙ).
In other words, a power of a diagonal matrix is the diagonal matrix of the powers of the entries of the main diagonal. □
http://sage.skku.edu or http://mathlab.knou.ac.kr:8080
D^(-1)=
[ 1 0 0]
[ 0 -1/3 0]
[ 0 0 1/2]
D^5=
[ 1 0 0]
[ 0 -243 0]
[ 0 0 32] ■
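The entrywise rule for diagonal matrices matches the Sage output above, using D = diag(1, −3, 2):

```python
from fractions import Fraction

# Powers and inverses of diag(1, -3, 2) act entrywise on the diagonal.
d = [1, -3, 2]
print([Fraction(1, di) for di in d])  # diagonal of D^-1: 1, -1/3, 1/2
print([di ** 5 for di in d])          # diagonal of D^5: 1, -243, 32
```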
Definition
If a square matrix A satisfies Aᵀ = A, it is called a symmetric matrix; if Aᵀ = −A, it is called a skew-symmetric matrix.
|
In the following matrices, A and B are symmetric matrices and C is a skew-symmetric matrix.
● http://matrix.skku.ac.kr/RPG_English/3-SO-Symmetric-M.html
|
If A is a square matrix, then:
(1) A + Aᵀ is a symmetric matrix.
(2) A − Aᵀ is a skew-symmetric matrix.
Proof. (1) (A + Aᵀ)ᵀ = Aᵀ + (Aᵀ)ᵀ = Aᵀ + A = A + Aᵀ, a symmetric matrix.
(2) Since (A − Aᵀ)ᵀ = Aᵀ − A = −(A − Aᵀ), it is a skew-symmetric matrix. ■
[Remark]
A given square matrix can be written uniquely as a sum of a symmetric matrix and a skew-symmetric matrix.
|
For any given square matrix A,
A = ½(A + Aᵀ) + ½(A − Aᵀ),
where ½(A + Aᵀ) is a symmetric matrix and ½(A − Aᵀ) is a skew-symmetric matrix. ■
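The decomposition above can be checked numerically (a sketch with a made-up 2×2 matrix; the helper names are assumptions):

```python
from fractions import Fraction

# Splitting A into symmetric and skew-symmetric parts:
# A = (A + A^T)/2 + (A - A^T)/2.
def transpose(A):
    return [list(col) for col in zip(*A)]

def half_combine(A, B, sign):
    # entrywise (A + sign*B) / 2
    return [[Fraction(a + sign * b, 2) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2], [3, 4]]
S = half_combine(A, transpose(A), 1)    # symmetric part
K = half_combine(A, transpose(A), -1)   # skew-symmetric part
print(S == transpose(S))                                               # True
print(K == [[-x for x in row] for row in transpose(K)])                # True
print([[s + k for s, k in zip(rs, rk)] for rs, rk in zip(S, K)] == A)  # True
```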
Upper triangular matrix: a square matrix whose entries below the main diagonal are all zeros.
Lower triangular matrix: a square matrix whose entries above the main diagonal are all zeros.
In general, triangular matrices have the following form.
Theorem 3.6.1 [Properties of triangular matrices]
Let A and B be triangular matrices. Then:
(1) The transpose of a lower triangular matrix is upper triangular, and vice versa.
(2) If A and B are both lower (upper) triangular of the same order, then AB is lower (upper) triangular.
(3) A triangular matrix is invertible if and only if all of its main-diagonal entries are nonzero.
● http://matrix.skku.ac.kr/LA-Lab/index.htm
● http://matrix.skku.ac.kr/knou-knowls/cla-sage-reference.htm
[Solution] Section 3-1 http://youtu.be/LaAAruKbGyc
Section 3-2 http://youtu.be/-MPszmMNvLE
Section 3-3 http://youtu.be/ceI80eXp6xU
Section 3-4 http://youtu.be/s7jxVvVAel4
Section 3-5 http://youtu.be/IygHFdWacds
Section 3-6 http://youtu.be/rYBsPkeVhQ0
Indicate whether the statement is true (T) or false (F). Justify your answer.
(a) If three nonzero vectors form a linearly independent set, then each vector in the set can be expressed as a linear combination of the other two.
False
(b) The set of all linear combinations of two vectors and
in
is a plane.
False
(c) If u cannot be expressed as a linear combination of and
, then the three vectors are linearly independent.
False
(d) A set of vectors in that contains is linearly dependent.
True
(e) If {,
,
} is a linearly independent set, then so is the set {
,
,
} for every nonzero scalar
.
True
Note :
(a) If three vectors are linearly independent, none of them can be written as a linear combination of the other two.
(b) If u and v are linearly dependent, the set of their linear combinations is a line (or {0}), not a plane.
(c) Let u = (1, 0), v = (0, 1), w = (0, 2). We cannot make u by a linear combination of v and w; however, v and w are linearly dependent, so the three vectors are not linearly independent.
(e) If the only solution of c₁v₁ + c₂v₂ + c₃v₃ = 0 is the trivial one, then for a nonzero scalar k the only solution of c₁(kv₁) + c₂(kv₂) + c₃(kv₃) = 0 is also trivial. So {kv₁, kv₂, kv₃} is linearly independent.
When
, confirm the following.
.
i)
ii)
■
Sage) http://math1.skku.ac.kr/home/math2013/297/
■
When A = [[-2, 3], [2, -3]], B = [[-1, 3], [2, 0]], C = [[-4, -3], [0, -4]], confirm that AB = AC and that B ≠ C.
AB = AC = [[8, -6], [-8, 6]], but B ≠ C. ■
Sage) http://math3.skku.ac.kr/home/pub/20
A=matrix(2, 2, [-2, 3, 2, -3])
B=matrix(2, 2, [-1, 3, 2, 0])
C=matrix(2, 2, [-4, -3, 0, -4])
print("AB=")
print(A*B)
print("AC=")
print(A*C)
AB=
[ 8 -6]
[-8 6]
AC=
[ 8 -6]
[-8 6] ■
When
, compute the following.
∴ Answer is ■
Sage)
■
Show that B is the inverse of A, and confirm it by computation.
Compute AB and BA; both products equal the identity matrix I.
∴ B = A⁻¹. ■
If
, show that
.
∴ If , then
. ■
Solved by 주영은, 김원경, Refinalized by 서승완, 이나을, Final OK by SGLee
Find a
elementary matrix corresponding to each elementary operation.
(1)
(2)
(3)
(Elementary matrix)
(1) : Interchange the 2nd and the 3rd rows on
(2) : Multiply the 2nd row by 2.
(3) : Add -2 times the 1st row to the 3rd row. ■
Double checked by Sage) http://math3.skku.ac.kr/home/pub/55 by 주영은
#elementary_matrix=matrix([[1,0,0], [0,1,0],[0,0,1]])
E1=elementary_matrix(3, row1=1, row2=2)
E2=elementary_matrix(3, row1=1, scale=2)
E3=elementary_matrix(3, row1=2, row2=0, scale=-2)
print("E1 =")
print(E1)
print("E2 =")
print(E2)
print("E3 =")
print(E3)
E1 =
[1 0 0]
[0 0 1]
[0 1 0]
E2 =
[1 0 0]
[0 2 0]
[0 0 1]
E3 =
[ 1 0 0]
[ 0 1 0]
[-2 0 1]
Note) In Sage, indices start at 0. ■
Using elementary operations, find the inverse of the following matrix.
(1) (2)
(1) =
→
→
=
.
=
(2) =
→ →
= .
. ■
Let
and
be any
matrix.
(1) What is and confirm how
affects on
.
(2) What is and confirm how
affects on
.
Let
=
,
(1) =
=
affects on the 3rd row of
.
(2) =
=
affects on the 1st column of
. ■
Determine if
is a subspace of
.
Check whether 1) W is closed under addition, and 2) W is closed under scalar multiplication.
1)
2)
Therefore, is not a subspace of
. ■
Determine if
is a subspace of
.
Check whether 1) W is closed under addition, and 2) W is closed under scalar multiplication.
1)
2)
Therefore, is a subspace of
. ■
Find a vector equation and a parameterized equation of the subspace spanned by the following vectors.
(a) ,
(b) ,
(a)
,
,
where
,
in ℝ.
(b) ,
,
,
,
. ■
Give a solution by finding the inverse of the coefficient matrix of the system.
Set the coefficient matrix A.
Use elementary row operations to get A⁻¹.
Ans) x = A⁻¹b ■
Sage ) Find Inverse
[ 2/3 -5/3 4/3] [ 0 -1 1] [ 1 -5 4] |
Sage ) Find the solution set
x= (5/3, 1, 4) x= (5/3, 1, 4)
Determine if the homogeneous system has a nontrivial solution.
Let =
: Augmented matrix
=
: RREF(
)
(3, 0, -2, 1) is one of the solutions of the given homogeneous system of equations.
Therefore the system has a nontrivial solution. ■
Check if the following matrix is invertible. If so,
find its inverse by using a property of special matrices.
The matrix D is a diagonal matrix with nonzero diagonal entries, so it is invertible.
The inverse of a diagonal matrix is the diagonal matrix of the reciprocals of its diagonal entries.
∴ The inverse matrix of D is diag(1/2, -1/5, 1/3). ■
Sage)
[ 1/2 0 0] [ 0 -1/5 0] [ 0 0 1/3]
Find the product by using a property of special matrices.
,
: diagonal matrices
1) AB : each row of B is multiplied by the corresponding diagonal entry of A (A was multiplied on the left).
2) (AB)C : each column of AB is multiplied by the corresponding diagonal entry of C (C was multiplied on the right).
∴ The answer is the matrix ABC below. ■
Double checked by Sage)
http://math3.skku.ac.kr/home/pub/56 by. 주영은
A=matrix([[2,0,0], [0,-1/2,0], [0,0,-5]])
B=matrix([[2,4], [-4,2], [3,2]])
C=matrix([[2,0], [0,-1/2]])
print A*B*C
[ 8 -4]
[ 4 1/2]
[-30 5] : OK
http://math3.skku.ac.kr/home/pub/58 by 김원경 -(Use Diagonal)
A=diagonal_matrix([2,-1/2,-5])
B=matrix([[2,4], [-4,2], [3,2]])
C=diagonal_matrix([2,-1/2])
print A*B*C
[ 8 -4]
[ 4 1/2]
[-30 5] : OK ■
Determine
so that
is skew-symmetric matrix.
The matrix
is a skew-symmetric matrix then
and
.
The answer is
. ■
If
satisfies
and
,
show that can be expressed as follows.
What is the value of ?
=>
■
Let
be a square matrix. Explain why the following hold.
(1) If contains a row or a column consisting of 0's,
is not invertible.
(2) If contains the same rows or columns,
is not invertible.
(3) If A contains a row or column which is a scalar multiple of another row or column of A, A is not invertible.
(1) If A contains a row or a column consisting of 0's, then det(A) = 0, so A is not invertible.
(2) If a matrix A has two equal rows (or columns) Rᵢ = Rⱼ, we can make a new matrix B by replacing Rᵢ with Rᵢ − Rⱼ. Because B has a row (or column) consisting of 0's and det(A) = det(B), we get det(A) = det(B) = 0. So A is not invertible.
(3) If a matrix A has rows (or columns) Rᵢ, Rⱼ with Rᵢ = kRⱼ (k a constant), we can make a new matrix B by replacing Rᵢ with Rᵢ − kRⱼ. Because B has a row (or column) consisting of 0's and det(A) = det(B), we get det(A) = det(B) = 0. So A is not invertible. ■
Let
be an
square matrix. Discuss what condition is need to have
.
If there is an inverse matrix ,
So there must be an inverse matrix of the matrix . ■
Note : If A is not invertible, the identity may fail to hold.
Find
matrices
,
and explain the relation with ERO.
■
Decide if the following 4 vectors are linearly independent.
,
,
,
Ans) The four vectors are linearly dependent. ■
Checked by Sage
http://math1.skku.ac.kr/home/pub/2491
A=matrix([[4,2,6,4],[-5,-2,-3,-1],[2,1,3,5],[6,3,9,6]])
print A.rref()
[ 1 0 -3 0]
[ 0 1 9 0]
[ 0 0 0 1]
[ 0 0 0 0] ■
If Ax = b₁ and Ax = b₂ each have a solution, prove that Ax = b₁ + b₂ has a solution.
Let x₁ and x₂ be solutions of Ax = b₁ and Ax = b₂, respectively.
=> Ax₁ = b₁ and Ax₂ = b₂
=> A(x₁ + x₂) = Ax₁ + Ax₂ = b₁ + b₂
=> x₁ + x₂ is a solution of Ax = b₁ + b₂.
Therefore, if both Ax = b₁ and Ax = b₂ have a solution, then Ax = b₁ + b₂ has a solution. ■
Suppose A is an invertible matrix of order n.
If x in ℝⁿ is orthogonal to every row of A, what is x?
Justify your answer.
Since x in ℝⁿ is orthogonal to every row of A, each inner product (row of A)·x equals 0; that is, Ax = 0, so x ∈ Null(A).
Because A is invertible, x = 0 is the only solution of Ax = 0.
Therefore x = 0. ■
Prove that a necessary and sufficient condition for a diagonal matrix to be invertible is
that there is no zero entry in the main diagonal.
det(D) = d₁d₂⋯dₙ ≠ 0 if and only if dᵢ ≠ 0 for all i (1 ≤ i ≤ n). ■
If A is invertible and symmetric, so is A⁻¹.
Since A is symmetric, Aᵀ = A, and since A is invertible, (Aᵀ)⁻¹ = (A⁻¹)ᵀ.
=> (A⁻¹)ᵀ = (Aᵀ)⁻¹ = A⁻¹
=> A⁻¹ is symmetric. ■
Version 2
Mar. 11, 2016
About the Author
http://www.researchgate.net/profile/Sang_Gu_Lee
https://scholar.google.com/citations?user=FjOjyHIAAAAJ&hl=en&cstart=0&pagesize=20
http://orcid.org/0000-0002-7408-9648
http://www.scopus.com/authid/detail.uri?authorId=35292447100
http://matrix.skku.ac.kr/sglee/vita/LeeSG.htm