2018 Academic Year, Fall Semester

Department of Semiconductor Engineering, Engineering Mathematics 2 (GEDB005) Lecture Notes (for the 2018 academic year)

Main text: Erwin Kreyszig, Advanced Engineering Mathematics, 10th Edition

Supplementary texts: Modern Engineering Mathematics I and II (Hanbit Publishers) and, in English, Multivariable Calculus (이상구, 김응기 et al.) (http://www.hanbit.co.kr/EM/sage/)

Class hours: BD1615 (Tue 16:30-17:45 / Thu 15:00-16:15), Semiconductor Building Room 400102. Instructor: Dr. 김응기

Week 5 (brief review)
Main text: 7.2 Matrix Multiplication / 7.3 Linear Systems of Equations / 7.4 Linear Independence. Rank of a Matrix. Vector Space / 7.5 Solutions of Linear Systems / 7.6 Second- and Third-Order Determinants / 7.7 Determinants / 7.8 Inverse of a Matrix
Supplement: 1.1 Properties and Operations of Matrices / 1.2 Systems of Linear Equations / 1.3 Linear Independence and Dependence, Rank / 1.4 Determinants and Cofactor Expansion / 1.5 Inverse Matrices and Cramer's Rule (web)

Week 5

(brief review)

Chapter 7. Linear Algebra

7.2 Matrix Multiplication

In this section we introduce the basic concepts and rules of matrix and vector algebra.

Matrix multiplication means multiplication of matrices by matrices.

Definition  Multiplication of a Matrix by a Matrix

The product C = AB (in this order) of an m×n matrix A = [a_jk] times an r×p matrix B = [b_jk] is defined if and only if r = n, that is,

number of columns of A = number of rows of B.

C is then the m×p matrix C = [c_jk] with entries

(1)    c_jk = a_j1 b_1k + a_j2 b_2k + ... + a_jn b_nk,    j = 1, ..., m;  k = 1, ..., p.

That is, the entry c_jk is obtained by multiplying the jth row of A into the kth column of B (multiply corresponding entries and add).
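As an illustration (not part of the lecture's Sage code), definition (1) translates directly into plain Python with a loop over the summation index; `matmul` is a hypothetical helper name, and the matrices are those of Example 1 below.

```python
def matmul(A, B):
    """Multiply an m x n matrix A by an n x p matrix B (lists of rows).

    Entry (j, k) of the product is the product of row j of A into
    column k of B, exactly as in definition (1).
    """
    m, n = len(A), len(A[0])
    if len(B) != n:
        raise ValueError("product undefined: columns of A != rows of B")
    p = len(B[0])
    return [[sum(A[j][l] * B[l][k] for l in range(n)) for k in range(p)]
            for j in range(m)]

# The matrices of Example 1:
A = [[3, 5, -1], [4, 0, 2], [-6, -3, 2]]
B = [[2, -2, 3, 1], [5, 0, 7, 8], [9, -4, 1, 1]]
print(matmul(A, B))  # [[22, -2, 43, 42], [26, -16, 14, 6], [-9, 4, -37, -28]]
```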

Example 1  Matrix Multiplication

Here A is 3×3 and B is 3×4, so the product C = AB is defined and is 3×4:

A = [[3, 5, -1], [4, 0, 2], [-6, -3, 2]],  B = [[2, -2, 3, 1], [5, 0, 7, 8], [9, -4, 1, 1]].

For instance, c_11 = 3·2 + 5·5 + (-1)·9 = 22.

Sage Coding

A = matrix(3, 3, [3, 5, -1, 4, 0, 2, -6, -3, 2])
B = matrix(3, 4, [2, -2, 3, 1, 5, 0, 7, 8, 9, -4, 1, 1])
(A*B).matrix_from_rows_and_columns([0], [0])

A = matrix(3, 3, [3, 5, -1, 4, 0, 2, -6, -3, 2])
B = matrix(3, 4, [2, -2, 3, 1, 5, 0, 7, 8, 9, -4, 1, 1])
(A*B).matrix_from_rows_and_columns([1], [2])

A = matrix(3, 3, [3, 5, -1, 4, 0, 2, -6, -3, 2])
B = matrix(3, 4, [2, -2, 3, 1, 5, 0, 7, 8, 9, -4, 1, 1])
A*B

Evaluate

[ 22  -2  43  42]

[ 26 -16  14   6]

[ -9   4 -37 -28]

The product BA is not defined, because the number of columns of B (four) does not equal the number of rows of A (three).

Example 2  Multiplication of a Matrix and a Vector

[[4, 2], [1, 8]] [3; 5] = [4·3 + 2·5; 1·3 + 8·5] = [22; 43], whereas [3; 5] [[4, 2], [1, 8]] is undefined, because the number of columns of the column vector (one) does not equal the number of rows of the matrix (two).

Sage Coding

A = matrix(2, 2, [4, 2, 1, 8])
B = matrix(2, 1, [3, 5])
A*B

Evaluate

[22]

[43]

Example 3  Products of Row and Column Vectors

[3 6 1] [1; 2; 4] = [3·1 + 6·2 + 1·4] = [19], whereas [1; 2; 4] [3 6 1] is the 3×3 matrix [[3, 6, 1], [6, 12, 2], [12, 24, 4]]. The two products do not even have the same size.

Sage Coding

A = matrix(1, 3, [3, 6, 1])
B = matrix(3, 1, [1, 2, 4])
A*B

A = matrix(1, 3, [3, 6, 1])
B = matrix(3, 1, [1, 2, 4])
B*A

A = matrix(1, 3, [3, 6, 1])
B = matrix(3, 1, [1, 2, 4])
bool(A*B == B*A)

Evaluate

False

Example 4  Matrix Multiplication Is Not Commutative in General

With A = [[1, 1], [100, 100]] and B = [[-1, 1], [1, -1]],

AB = [[0, 0], [0, 0]]   but   BA = [[99, 99], [-99, -99]].

Sage Coding

A = matrix(2, 2, [1, 1, 100, 100])
B = matrix(2, 2, [-1, 1, 1, -1])
show(A*B)
show(B*A)
bool(A*B == B*A)

Evaluate

[0 0]

[0 0]

[ 99  99]

[-99 -99]

AB = 0 does not necessarily imply A = 0 or B = 0 or BA = 0.

[Ex] In general, AB ≠ BA.

Matrix multiplication rules

(a) (kA)B = k(AB) = A(kB)

(b) Associative law: A(BC) = (AB)C

(c) Distributive law: (A + B)C = AC + BC

(d) Distributive law: C(A + B) = CA + CB.

AB = AC does not necessarily imply B = C (even when A ≠ 0). For instance, with

A = [[3, 0], [2, 0]],  B = [[1, 3], [5, 7]],  C = [[1, 3], [1, 4]],

we get AB = AC = [[3, 9], [2, 6]] although B ≠ C.

Sage Coding

A = matrix(2, 2, [3, 0, 2, 0])
B = matrix(2, 2, [1, 3, 5, 7])
C = matrix(2, 2, [1, 3, 1, 4])
print(bool(A*B == A*C))
print(bool(B == C))

Evaluate

True

False

Since matrix multiplication is a multiplication of rows into columns, we can write the defining formula (1) more compactly as

(3)    C = AB = [a_j · b_k],    j = 1, ..., m;  k = 1, ..., p,

where a_j is the jth row vector of A and b_k is the kth column vector of B.

Example 5  Product in Terms of Row and Column Vectors

If A = [a_jk] is of size 3×3 and B = [b_jk] is of size 3×4, then AB is the 3×4 matrix whose (j, k) entry is a_j · b_k, the product of the jth row of A into the kth column of B.

Parallel processing of products on the computer is facilitated by a variant of (3) for computing C = AB, which is used by standard algorithms (such as in LAPACK). In this method, A is used as given, B is taken in terms of its column vectors, and the product is computed columnwise; thus

(5)    AB = A[b_1  b_2  ...  b_p] = [Ab_1  Ab_2  ...  Ab_p].

Example 6  Computing Products Columnwise by (5)

Compute AB for A = [[4, 1], [-5, 2]] and B = [[3, 0, 7], [-1, 4, 6]].

Solution

Calculate the columns

Ab_1 = [11; -17],  Ab_2 = [4; 8],  Ab_3 = [34; -23]

of AB and then write them as a single matrix, AB = [[11, 4, 34], [-17, 8, -23]].

Sage Coding

A = matrix(2, 2, [4, 1, -5, 2])
B = matrix(2, 3, [3, 0, 7, -1, 4, 6])
print(A*(B.column(0)))
print(A*(B.column(1)))
print(A*(B.column(2)))

Evaluate

(11, -17)

(4, 8)

(34, -23)
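A plain-Python sketch of the columnwise rule (5), using the same matrices as Example 6; `mat_vec` and `matmul_columnwise` are hypothetical helper names, not part of the lecture's Sage code.

```python
def mat_vec(A, v):
    # A times a column vector v
    return [sum(A[j][l] * v[l] for l in range(len(v))) for j in range(len(A))]

def matmul_columnwise(A, B):
    """Compute AB column by column, as in (5): the kth column of AB
    is A times the kth column of B."""
    p = len(B[0])
    cols = [mat_vec(A, [row[k] for row in B]) for k in range(p)]
    # reassemble the columns into a matrix
    return [[cols[k][j] for k in range(p)] for j in range(len(A))]

A = [[4, 1], [-5, 2]]
B = [[3, 0, 7], [-1, 4, 6]]
print(matmul_columnwise(A, B))   # [[11, 4, 34], [-17, 8, -23]]
```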

Motivation of Multiplication by Linear Transformations

For n = 2 variables these transformations are of the form

(6*)    y_1 = a_11 x_1 + a_12 x_2
        y_2 = a_21 x_1 + a_22 x_2

and suffice to explain the idea. For instance, (6*) may relate an x_1 x_2-coordinate system to a y_1 y_2-coordinate system in the plane.

In vectorial form we can write (6*) as

(6)    y = Ax,    A = [[a_11, a_12], [a_21, a_22]].

Now suppose further that the x_1 x_2-system is related to a w_1 w_2-system by another linear transformation, say,

(7)    x = Bw,    B = [[b_11, b_12], [b_21, b_22]].

Then the y_1 y_2-system is related to the w_1 w_2-system indirectly via the x_1 x_2-system, and we wish to express this relation directly. Substitution will show that this direct relation is a linear transformation, too, say,

(8)    y = Cw,    C = [[c_11, c_12], [c_21, c_22]].

Substituting (7) into (6), we obtain

y_1 = a_11(b_11 w_1 + b_12 w_2) + a_12(b_21 w_1 + b_22 w_2) = (a_11 b_11 + a_12 b_21) w_1 + (a_11 b_12 + a_12 b_22) w_2

y_2 = a_21(b_11 w_1 + b_12 w_2) + a_22(b_21 w_1 + b_22 w_2) = (a_21 b_11 + a_22 b_21) w_1 + (a_21 b_12 + a_22 b_22) w_2.

Comparing this with (8), we see that

c_11 = a_11 b_11 + a_12 b_21,    c_12 = a_11 b_12 + a_12 b_22,

c_21 = a_21 b_11 + a_22 b_21,    c_22 = a_21 b_12 + a_22 b_22.

This proves that C = AB with the product defined as in (1).
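The substitution argument can be checked numerically: transforming w first by B and then by A gives the same result as transforming once by the product matrix C = AB. The 2×2 matrices and the vector w below are arbitrary illustrative choices.

```python
def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def matmul(A, B):
    return [[sum(A[j][l] * B[l][k] for l in range(len(B)))
             for k in range(len(B[0]))] for j in range(len(A))]

A = [[2, 1], [0, 3]]     # y = A x
B = [[1, -1], [4, 2]]    # x = B w
w = [5, 7]

# Transform step by step: w -> x -> y ...
x = mat_vec(B, w)
y_two_steps = mat_vec(A, x)
# ... and directly with the product matrix C = AB, as in (8)
C = matmul(A, B)
y_direct = mat_vec(C, w)
print(y_two_steps == y_direct)   # True
```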

Transposition

Definition  Transposition of Matrices and Vectors

The transpose of an m×n matrix A = [a_jk] is the n×m matrix A^T that has the first row of A as its first column, the second row of A as its second column, and so on. Thus A^T = [a_kj].

Transposition converts row vectors to column vectors and conversely.

Example 7  Transposition of Matrices and Vectors

If A = [[5, -8, 1], [4, 0, 0]], then A^T = [[5, 4], [-8, 0], [1, 0]].

Sage Coding

A = matrix(QQ, [[5, -8, 1], [4, 0, 0]])
C = matrix(QQ, [[3, 0], [8, -1]])
D = matrix(QQ, [[6, 2, 3]])
print(A.transpose())               # transpose of a matrix
print()
print(C.transpose())
print()
print(D.transpose())

Evaluate

[ 5  4]

[-8  0]

[ 1  0]

[ 3  8]

[ 0 -1]

[6]

[2]

[3]

Rules for transposition are

(a) (A^T)^T = A                    (b) (A + B)^T = A^T + B^T

(c) (cA)^T = cA^T                  (d) (AB)^T = B^T A^T.

Symmetric matrix

A symmetric matrix is a square matrix whose transpose equals the matrix itself: A^T = A, that is, a_kj = a_jk.

For example, A = [[20, 120, 200], [120, 10, 150], [200, 150, 30]] is symmetric.

Sage Coding

A = matrix(3, 3, [20, 120, 200, 120, 10, 150, 200, 150, 30])
print(A.transpose())
print(bool(A == A.transpose()))

Evaluate

[ 20 120 200]

[120  10 150]

[200 150  30]

True

Skew-symmetric matrix

A skew-symmetric matrix is a square matrix whose transpose equals minus the matrix: A^T = -A, that is, a_kj = -a_jk (hence the main diagonal entries are all zero).

For example, A = [[0, 1, -3], [-1, 0, -2], [3, 2, 0]] is skew-symmetric.

Sage Coding

A = matrix(3, 3, [0, 1, -3, -1, 0, -2, 3, 2, 0])
print(A.transpose())
print(bool(-A == A.transpose()))

Evaluate

[ 0 -1  3]

[ 1  0  2]

[-3 -2  0]

True

Show that if A is any square matrix, then

(a) R = A + A^T is a symmetric matrix,

(b) S = A - A^T is a skew-symmetric matrix.

Solution

(a) R^T = (A + A^T)^T = A^T + (A^T)^T = A^T + A = R, so R is symmetric.

(b) S^T = (A - A^T)^T = A^T - A = -(A - A^T) = -S, so S is skew-symmetric.

Consequently, any square matrix A can be written as a sum A = R + S, where R = (1/2)(A + A^T) is symmetric and S = (1/2)(A - A^T) is skew-symmetric.
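The decomposition can be verified in plain Python with exact rational arithmetic; the 3×3 matrix A below is an arbitrary illustrative choice, and the helper names are hypothetical.

```python
from fractions import Fraction

def transpose(A):
    return [list(col) for col in zip(*A)]

def add(A, B, sign=1):
    return [[a + sign * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(c, A):
    return [[c * a for a in row] for row in A]

# Any square matrix works; this one is just an illustration.
A = [[Fraction(x) for x in row] for row in [[1, 7, 3], [4, 5, 6], [2, 8, 9]]]

R = scale(Fraction(1, 2), add(A, transpose(A)))        # symmetric part
S = scale(Fraction(1, 2), add(A, transpose(A), -1))    # skew-symmetric part

print(transpose(R) == R)              # True: R is symmetric
print(transpose(S) == scale(-1, S))   # True: S is skew-symmetric
print(add(R, S) == A)                 # True: A = R + S
```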

Triangular Matrices

A triangular matrix is a square matrix in which all the entries either below or above the main diagonal are zero.

Upper triangular matrices

Upper triangular matrices are square matrices that can have nonzero entries only on and above the main diagonal; any entry below the diagonal must be zero.

For example, [[1, 4, 2], [0, 3, 2], [0, 0, 6]] is an upper triangular matrix.

Lower triangular matrices

Lower triangular matrices are square matrices that can have nonzero entries only on and below the main diagonal; any entry above the diagonal must be zero.

For example, [[2, 0, 0], [8, -1, 0], [7, 6, 8]] is a lower triangular matrix.

Diagonal matrices

These are square matrices that can have nonzero entries only on the main diagonal; any entry above or below the main diagonal must be zero.

For example, [[2, 0], [0, -1]] and [[-3, 0, 0], [0, -2, 0], [0, 0, 1]] are diagonal matrices.

Sage Coding

G = diagonal_matrix([2, -1])          # generate a diagonal matrix
H = diagonal_matrix([-3, -2, 1])      # diagonal_matrix([a1, a2, a3])
print(G)
print(H)

Evaluate

[ 2  0]

[ 0 -1]

[-3  0  0]

[ 0 -2  0]

[ 0  0  1]

Scalar Matrix

If all the diagonal entries of a diagonal matrix S are equal, say, c, we call S a scalar matrix, because multiplication of any square matrix A of the same size by S has the same effect as multiplication by the scalar c; that is,

AS = SA = cA.

For example, [[2, 0], [0, -1]] is a diagonal matrix and [[5, 0], [0, 5]] is a scalar matrix.

If A = [a_jk] is an n×n matrix, the trace of A, written tr A, is defined as the sum of all the entries on the main diagonal of A: tr A = a_11 + a_22 + ... + a_nn.

Rules for the trace are

(a) tr(A + B) = tr A + tr B

(b) tr(kA) = k tr A, where k is a real number

(c) tr(A^T) = tr A

(d) tr(AB) = tr(BA)
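Two of these rules can be spot-checked in plain Python (a numerical check on one example, not a proof); the 2×2 matrices are arbitrary illustrative choices.

```python
def trace(A):
    # sum of the main-diagonal entries
    return sum(A[i][i] for i in range(len(A)))

def matmul(A, B):
    return [[sum(A[j][l] * B[l][k] for l in range(len(B)))
             for k in range(len(B[0]))] for j in range(len(A))]

A = [[6, 1], [3, 2]]
B = [[4, 3], [1, 2]]

print(trace(matmul(A, B)) == trace(matmul(B, A)))    # True: tr(AB) = tr(BA)
print(trace([[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)])
      == trace(A) + trace(B))                        # True: tr(A+B) = tr A + tr B
```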

Identity matrix (unit matrix)

A scalar matrix whose entries on the main diagonal are all 1 is called an identity matrix (unit matrix) and is denoted by I_n, or simply by I. For it,

AI = IA = A.

For example, I_2 = [[1, 0], [0, 1]] and I_3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]] are unit matrices.

Sage Coding

print(identity_matrix(2))
print()
print(identity_matrix(3))

Evaluate

[1 0]

[0 1]

[1 0 0]

[0 1 0]

[0 0 1]

7.3 Linear Systems of Equations. Gauss Elimination

We will learn how to develop the calculus of linear systems.

Linear Systems, Coefficient Matrix, Augmented Matrix

A linear system of m equations in n unknowns x_1, x_2, ..., x_n is a set of equations of the form

(1)    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
       a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
       ...
       a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m.

The system is linear because each variable x_j appears in the first power only, just as in the equation of a straight line. The a_jk are given numbers, called the coefficients of the system. The b_1, ..., b_m on the right are also given numbers.

If all the b_j are zero, then (1) is a homogeneous system. If at least one b_j is not zero, then (1) is a nonhomogeneous system.

A solution of (1) is a set of numbers x_1, x_2, ..., x_n that satisfies all m equations. A solution vector of (1) is a vector x whose components form a solution of (1). If the system (1) is homogeneous, it has at least the trivial solution x_1 = 0, x_2 = 0, ..., x_n = 0.

Matrix Form of the Linear System (1)

From the definition of matrix multiplication we see that the m equations of (1) may be written as a single vector equation

(2)    Ax = b,

where the coefficient matrix A = [a_jk] is the m×n matrix of the coefficients, and

x = [x_1; x_2; ...; x_n]  and  b = [b_1; b_2; ...; b_m]

are column vectors. A is the coefficient matrix, x is the unknown (column) vector, and b is the constant (column) vector. We assume that the coefficients a_jk are not all zero, so that A is not a zero matrix. Note that x has n components, whereas b has m components.

The matrix

Ã = [A : b],

obtained by appending b to A as its last column, is called the augmented matrix of the system (1). The last column of Ã does not belong to A.

Example 1  Geometric Interpretation. Existence and Uniqueness of Solutions

If m = n = 2, we have two equations in two unknowns x_1, x_2. Interpreting each equation as a straight line in the plane, there are three cases:

(a) Precisely one solution if the lines intersect in a single point.

(b) Infinitely many solutions if the lines coincide.

(c) No solution if the lines are parallel and distinct.

Gauss Elimination and Back Substitution

Gauss elimination reduces the system to triangular form by elementary row operations. Then back substitution begins: solve the last equation for its single variable, and work backward up the system, substituting the values already found into each preceding equation. Throughout, it is convenient to perform the operations on the augmented matrix

Ã = [A : b].

Elementary Row Operations. Row-Equivalent Systems

Elementary row operations for matrices:

Interchange of two rows.

Addition of a constant multiple of one row to another row.

Multiplication of a row by a nonzero constant c.

Elementary operations for equations:

Interchange of two equations.

Addition of a constant multiple of one equation to another equation.

Multiplication of an equation by a nonzero constant c.

Theorem 1  Row-Equivalent Systems

Row-equivalent linear systems have the same set of solutions.

A linear system (1) is called overdetermined if it has more equations than unknowns (m > n), determined if m = n, and underdetermined if it has fewer equations than unknowns (m < n).

A linear system is consistent if it has at least one solution, and inconsistent if it has no solution at all.
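As a plain-Python sketch (the lecture itself uses Sage), Gauss elimination with back substitution for a square system with a unique solution might look as follows; `gauss_solve` is a hypothetical helper name, and exact fractions avoid rounding.

```python
from fractions import Fraction

def gauss_solve(A, b):
    """Solve A x = b for a square system with a unique solution by
    Gauss elimination (with row interchanges) and back substitution."""
    n = len(A)
    # work on the augmented matrix [A : b] in exact arithmetic
    M = [[Fraction(x) for x in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        # bring a row with a nonzero entry in this column into pivot position
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            raise ValueError("no unique solution")
        M[col], M[pivot] = M[pivot], M[col]
        # eliminate the entries below the pivot
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [mr - factor * mc for mr, mc in zip(M[r], M[col])]
    # back substitution, from the last equation upward
    x = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        s = M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = s / M[r][r]
    return x

# a small 2x2 system: 4x + 3y = 12, 2x + 5y = -8
print(gauss_solve([[4, 3], [2, 5]], [12, -8]))   # x = 6, y = -4
```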

Example 3  Gauss Elimination if Infinitely Many Solutions Exist

Solve the system with the augmented matrix [A : b] shown in the Sage code below, using Gauss-Jordan elimination.

Solution

Back substitution on the reduced row echelon form gives infinitely many solutions. Setting x_2 = r, x_4 = s, x_5 = t arbitrarily, the solution is

x_1 = -3r - 4s - 2t,  x_2 = r,  x_3 = -2s,  x_4 = s,  x_5 = t,  x_6 = 1/3.

Sage Coding

A = matrix([[1, 3, -2, 0, 2, 0], [2, 6, -5, -2, 4, -3], [0, 0, 5, 10, 0, 15], [2, 6, 0, 8, 4, 18]])
b = vector([0, -1, 5, 6])
print("[A: b] =")
print(A.augment(b))
print()
print("RREF([A: b]) =")
print(A.augment(b).rref())   # RREF of the augmented matrix of A and b

Evaluate

[A: b] =

[ 1  3 -2  0  2  0  0]

[ 2  6 -5 -2  4 -3 -1]

[ 0  0  5 10  0 15  5]

[ 2  6  0  8  4 18  6]

RREF([A: b]) =

[  1   3   0   4   2   0   0]

[  0   0   1   2   0   0   0]

[  0   0   0   0   0   1 1/3]

[  0   0   0   0   0   0   0]

Example 4  Gauss Elimination if No Solution Exists

The last row of the reduced augmented matrix corresponds to the false statement 0 = 1, which shows that the system has no solution.

Sage Coding

A = matrix([[3, 2, 1], [2, 1, 1], [6, 2, 4]])
b = vector([3, 0, 6])
print("[A: b] =")
print(A.augment(b))
print()
print("RREF([A: b]) =")
print(A.augment(b).rref())   # RREF of the augmented matrix of A and b

Evaluate

[A: b] =

[3 2 1 3]

[2 1 1 0]

[6 2 4 6]

RREF([A: b]) =

[ 1  0  1  0]

[ 0  1 -1  0]

[ 0  0  0  1]

Row Echelon Form and Information From It

At the end of the Gauss elimination the form of the coefficient matrix, the augmented matrix, and the system itself is called the row echelon form. In it, rows of zeros, if present, are the last rows, and in each nonzero row the leftmost nonzero entry is farther to the right than in the previous row.

Note that we do not require that the leftmost nonzero entries be 1, since this would have no theoretic or numeric advantage.

(9) At the end of the Gauss elimination (before the back substitution) the row echelon form of the augmented matrix has r nonzero rows in its left part (the part belonging to A), with r ≤ m; the remaining rows have zeros throughout the left part and right-hand entries b'_(r+1), ..., b'_m, which may or may not be zero. From this we see that with respect to solutions of the system with augmented matrix (9) (and thus with respect to the originally given system) there are three possible cases:

(a) Exactly one solution if r = n and b'_(r+1), ..., b'_m, if present, are zero. To get the solution, solve the nth equation of the reduced system for x_n, then the (n-1)st equation for x_(n-1), and so on up the line.

(b) Infinitely many solutions if r < n and b'_(r+1), ..., b'_m, if present, are zero. To obtain any of these solutions, choose values of x_(r+1), ..., x_n arbitrarily. Then solve the rth equation for x_r, then the (r-1)st equation for x_(r-1), and so on up the line. See Example 3.

(c) No solution if r < m and one of the entries b'_(r+1), ..., b'_m is not zero. See Example 4.

7.4 Linear Independence. Rank of a Matrix. Vector Space

Linear Independence and Dependence of Vectors

Given any set of m vectors a_(1), ..., a_(m) (with the same number of components), a linear combination of these vectors is an expression of the form

c_1 a_(1) + c_2 a_(2) + ... + c_m a_(m),

where c_1, c_2, ..., c_m are any scalars. Now consider the equation

(1)    c_1 a_(1) + c_2 a_(2) + ... + c_m a_(m) = 0.

If the only scalars satisfying (1) are c_1 = c_2 = ... = c_m = 0, then a_(1), ..., a_(m) form a linearly independent set (briefly, they are linearly independent). Otherwise, if (1) also holds with scalars not all zero, the vectors are linearly dependent.

For instance, if (1) holds with c_1 ≠ 0, we can solve for a_(1):

a_(1) = -(c_2/c_1) a_(2) - ... - (c_m/c_1) a_(m),

so a_(1) is a linear combination of the other vectors. (Some coefficients may be zero, or even all of them, namely if a_(1) = 0.)

Example 1  Linear Independence and Dependence

The three vectors

a_(1) = [3, 0, 2, 2],  a_(2) = [-6, 42, 24, 54],  a_(3) = [21, -21, 0, -15]

are linearly dependent because

6 a_(1) - (1/2) a_(2) - a_(3) = 0.

The vectors a_(1) and a_(2) are linearly independent because neither is a scalar multiple of the other.

Sage Coding

a1 = vector([3, 0, 2, 2])
a2 = vector([-6, 42, 24, 54])
a3 = vector([21, -21, 0, -15])
A = matrix([a1, a2, a3])
A.rref()

Evaluate

[    1     0   2/3   2/3]

[    0     1   2/3 29/21]

[    0     0     0     0]

Rank of a Matrix

Definition

The rank of a matrix A is the maximum number of linearly independent row vectors of A. It is denoted by rank A.

Example 2  Rank

The matrix

(2)    A = [[3, 0, 2, 2], [-6, 42, 24, 54], [21, -21, 0, -15]]

has rank 2, because Example 1 shows that the first two row vectors are linearly independent, whereas all three row vectors are linearly dependent.

We call a matrix A_1 row-equivalent to a matrix A_2 if A_1 can be obtained from A_2 by (finitely many!) elementary row operations.

Theorem 1  Row-Equivalent Matrices

Row-equivalent matrices have the same rank.

Example 3  Determination of Rank

Sage Coding

a1 = vector([3, 0, 2, 2])
a2 = vector([-6, 42, 24, 54])
a3 = vector([21, -21, 0, -15])
A = matrix([a1, a2, a3])
A.rank()

Evaluate

2

Theorem 2  Linear Independence and Dependence of Vectors

Consider p vectors that each have n components. Then these vectors are linearly independent if the matrix formed with these vectors as row vectors has rank p. However, these vectors are linearly dependent if that matrix has rank less than p.
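The rank criterion of Theorem 2 can be sketched in plain Python by row reduction in exact arithmetic; `rank` is a hypothetical helper name, applied here to the vectors of Example 1.

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (given as a list of row vectors), computed by
    row reduction in exact rational arithmetic."""
    M = [[Fraction(x) for x in row] for row in rows]
    r, ncols = 0, len(M[0])
    for col in range(ncols):
        # find a row at or below position r with a nonzero entry in this column
        pivot = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, len(M)):
            factor = M[i][col] / M[r][col]
            M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# The three vectors of Example 1:
a1 = [3, 0, 2, 2]
a2 = [-6, 42, 24, 54]
a3 = [21, -21, 0, -15]
print(rank([a1, a2, a3]))   # 2 < 3, so the three vectors are linearly dependent
print(rank([a1, a2]))       # 2, so a1 and a2 are linearly independent
```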

Theorem 3  Rank in Terms of Column Vectors

The rank of a matrix A equals the maximum number of linearly independent column vectors of A.

Hence A and its transpose A^T have the same rank.

Example 4  Illustration of Theorem 3

The matrix in (2) has rank 2. From Example 1 we see that the first two row vectors are linearly independent, and by "working backward" we can verify that a_(3) = 6 a_(1) - (1/2) a_(2). Similarly, the first two columns are linearly independent, and by reducing the last matrix in Example 3 by columns we find that

b_(3) = (2/3) b_(1) + (2/3) b_(2)   and   b_(4) = (2/3) b_(1) + (29/21) b_(2),

where b_(k) denotes the kth column vector.

Sage Coding

a1 = vector([3, 0, 2, 2])
a2 = vector([-6, 42, 24, 54])
a3 = vector([21, -21, 0, -15])
print(a3 == 6*a1 - 1/2*a2)
A = matrix([a1, a2, a3])
b1 = A.column(0)
b2 = A.column(1)
b3 = A.column(2)
b4 = A.column(3)
print(b3 == 2/3*b1 + 2/3*b2)
print(b4 == 2/3*b1 + 29/21*b2)

Evaluate

True

True

True

Theorem 4  Linear Dependence of Vectors

Consider p vectors each having n components. If n < p, then these vectors are linearly dependent.

Proof

The matrix A with those p vectors as row vectors has p rows and n columns. By Theorem 3 it has rank A ≤ n < p, which implies linear dependence by Theorem 2.

Vector space

A vector space V is a (nonempty) set of vectors closed under addition and scalar multiplication. The dimension of V, written dim V, is the maximum number of linearly independent vectors in V.

A basis for V is a linearly independent set in V consisting of a maximum possible number of vectors in V. Thus the number of vectors of a basis for V equals dim V.

The set of all linear combinations of given vectors a_(1), ..., a_(p) with the same number of components is called the span of these vectors. A span is a vector space.

By a subspace of a vector space V we mean a nonempty subset of V (including V itself) that itself forms a vector space with respect to the two algebraic operations (addition and scalar multiplication) defined for the vectors of V.

Example 5  Vector Space, Dimension, Basis

The span of the three vectors in Example 1 is a vector space of dimension 2, and a basis for it is {a_(1), a_(2)}, for instance, or {a_(1), a_(3)}, etc.

Sage Coding

a1 = vector([3, 0, 2, 2])
a2 = vector([-6, 42, 24, 54])
a3 = vector([21, -21, 0, -15])
V = span([a1, a2, a3], QQ)
print(V.dimension())                            # dimension
print(span([a1, a2, a3], QQ) == span([a1, a2], QQ))
print(span([a1, a2, a3], QQ) == span([a1, a3], QQ))
print(V.basis())       # the internally computed basis

Evaluate

2

True

True

[

(1, 0, 2/3, 2/3),

(0, 1, 2/3, 29/21)

]

Theorem 5  Vector Space R^n

The vector space R^n consisting of all vectors with n components (n real numbers) has dimension n.

Proof

A basis of n vectors is e_(1) = [1, 0, ..., 0], e_(2) = [0, 1, 0, ..., 0], ..., e_(n) = [0, ..., 0, 1].

In the case of a matrix A we call the span of the row vectors the row space of A and the span of the column vectors the column space of A.

Theorem 6  Row Space and Column Space

The row space and the column space of a matrix A have the same dimension, equal to rank A.

Finally, for a given matrix A the solution set of the homogeneous system Ax = 0 is a vector space, called the null space of A, and its dimension is called the nullity of A. These notions are related by

(6)    rank A + nullity A = number of columns of A.

[Theorem] The row rank and the column rank of a matrix are equal.

7.5 Solutions of Linear Systems: Existence, Uniqueness

Theorem 1  Fundamental Theorem for Linear Systems

(a) Existence. A linear system of m equations in n unknowns x_1, x_2, ..., x_n

(1)    a_11 x_1 + ... + a_1n x_n = b_1,  ...,  a_m1 x_1 + ... + a_mn x_n = b_m

is consistent, that is, has solutions, if and only if the coefficient matrix A and the augmented matrix Ã = [A : b] have the same rank.

(b) Uniqueness. The system (1) has precisely one solution if and only if this common rank r of A and Ã equals n.

(c) Infinitely many solutions. If this common rank r is less than n, the system (1) has infinitely many solutions. All of these solutions are obtained by determining r suitable unknowns (whose submatrix of coefficients must have rank r) in terms of the remaining n - r unknowns, to which arbitrary values can be assigned.

(d) Gauss elimination. If solutions exist, they can all be obtained by the Gauss elimination. (This method will automatically reveal whether or not solutions exist.)

Homogeneous Linear System

Theorem 2  Homogeneous Linear System

A homogeneous linear system

(4)    Ax = 0

always has the trivial solution x_1 = 0, x_2 = 0, ..., x_n = 0. Nontrivial solutions exist if and only if rank A < n. If rank A = r < n, these solutions, together with x = 0, form a vector space of dimension n - r, called the solution space of (4).

In particular, if x_(1) and x_(2) are solution vectors of (4), then x = c_1 x_(1) + c_2 x_(2) with any scalars c_1 and c_2 is a solution vector of (4). (This does not hold for nonhomogeneous systems. Also, the term solution space is used for homogeneous systems only.)

The solution space of (4) is also called the null space of A, because Ax = 0 for every x in the solution space of (4). Its dimension is called the nullity of A. Hence Theorem 2 states that

(5)    rank A + nullity A = n,

where n is the number of unknowns (number of columns of A).

Furthermore, by the definition of rank we have rank A ≤ m in (4). Hence if m < n, then rank A < n. By Theorem 2 this gives the practically important Theorem 3.

Theorem 3  Homogeneous Linear System with Fewer Equations Than Unknowns

A homogeneous linear system with fewer equations than unknowns always has nontrivial solutions.

Nonhomogeneous Linear System

Theorem 4  Nonhomogeneous Linear System

If a nonhomogeneous linear system Ax = b is consistent, then all of its solutions are obtained as

(6)    x = x_0 + x_h,

where x_0 is any (fixed) solution of Ax = b and x_h runs through all the solutions of the corresponding homogeneous system Ax = 0.

7.6 For Reference: Second-Order and Third-Order Determinants

Second-Order Determinants

A determinant of second order is denoted and defined by

(1)    D = det A = |a_11  a_12; a_21  a_22| = a_11 a_22 - a_12 a_21.

Cramer's rule for solving the linear system of two equations

(2)    a_11 x_1 + a_12 x_2 = b_1
       a_21 x_1 + a_22 x_2 = b_2

is

(3)    x_1 = D_1/D = |b_1  a_12; b_2  a_22| / D,    x_2 = D_2/D = |a_11  b_1; a_21  b_2| / D,

with D as in (1), provided D ≠ 0. The value D = 0 appears for homogeneous systems (2) with nontrivial solutions.

Example 1  Cramer's Rule for Two Equations

For the system 4x + 3y = 12, 2x + 5y = -8 we get D = 14, D_1 = 84, D_2 = -56.

Then x = D_1/D = 84/14 = 6 and y = D_2/D = -56/14 = -4.

Sage Coding

A = matrix(2, 2, [4, 3, 2, 5])
A1 = matrix(2, 2, [12, 3, -8, 5])
A2 = matrix(2, 2, [4, 12, 2, -8])
print(A.det())
print(A1.det())
print(A2.det())
print("x =", A1.det()/A.det())
print("y =", A2.det()/A.det())

Evaluate

14

84

-56

x = 6

y = -4
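A plain-Python sketch of Cramer's rule for two equations, applied to the system of Example 1; `cramer_2x2` is a hypothetical helper name.

```python
from fractions import Fraction

def cramer_2x2(a11, a12, a21, a22, b1, b2):
    """Solve a11*x + a12*y = b1, a21*x + a22*y = b2 by Cramer's rule,
    provided the coefficient determinant D is nonzero."""
    D = a11 * a22 - a12 * a21
    if D == 0:
        raise ValueError("D = 0: Cramer's rule does not apply")
    D1 = b1 * a22 - a12 * b2     # b replaces the first column
    D2 = a11 * b2 - b1 * a21     # b replaces the second column
    return Fraction(D1, D), Fraction(D2, D)

# the system of Example 1: 4x + 3y = 12, 2x + 5y = -8
print(cramer_2x2(4, 3, 2, 5, 12, -8))   # x = 6, y = -4
```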

Third-Order Determinants

A determinant of third order can be defined by expansion along the first column:

(4)    D = a_11 |a_22  a_23; a_32  a_33| - a_21 |a_12  a_13; a_32  a_33| + a_31 |a_12  a_13; a_22  a_23|.

Cramer's rule for solving a linear system of three equations in three unknowns is

(5)    x_1 = D_1/D,  x_2 = D_2/D,  x_3 = D_3/D    (D ≠ 0),

with the determinant D of the system given by (4), and D_1, D_2, D_3 obtained from D by replacing the first, second, and third column, respectively, by the column with the entries b_1, b_2, b_3.

7.7 Determinants. Cramer's Rule

A determinant of order n is a scalar associated with an n×n matrix A = [a_jk]; it is written

(1)    D = det A.

Minors and Cofactors

Consider an n×n matrix A = [a_jk]. Let M_jk denote the (n-1)×(n-1) submatrix of A obtained by deleting its jth row and kth column. The determinant det M_jk is called the minor of the entry a_jk of A, and the cofactor of a_jk is defined by C_jk = (-1)^(j+k) det M_jk.

In terms of cofactors, det A can be computed by expansion along any row or column; for instance, along the jth row,

det A = a_j1 C_j1 + a_j2 C_j2 + ... + a_jn C_jn.
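Cofactor expansion along the first row translates directly into a recursive plain-Python determinant (practical only for small n, since the cost grows like n!); `det` is a hypothetical helper name.

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for k in range(n):
        # minor: delete row 0 and column k
        minor = [row[:k] + row[k + 1:] for row in A[1:]]
        cofactor = (-1) ** k * det(minor)   # sign (-1)^(1+k) in 1-based indices
        total += A[0][k] * cofactor
    return total

print(det([[1, 3, 0], [2, 6, 4], [-1, 0, 2]]))    # -12
print(det([[-3, 0, 0], [6, 4, 0], [-1, 2, 5]]))   # -60
```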

Example 1  Minors and Cofactors of a Third-Order Determinant

For a third-order determinant, the minor of a_jk is the second-order determinant obtained by deleting the jth row and kth column, and the cofactor is C_jk = (-1)^(j+k) times that minor. The signs (-1)^(j+k) form the checkerboard pattern

[+ - +]
[- + -]
[+ - +].

Example 2  Expansions of a Third-Order Determinant

Expanding D = det [[1, 3, 0], [2, 6, 4], [-1, 0, 2]] along the first row gives D = 1·(12 - 0) - 3·(4 + 4) + 0 = -12.

Sage Coding

D = matrix(3, 3, [1, 3, 0, 2, 6, 4, -1, 0, 2])
D.det()

Evaluate

-12

Example 3  Determinant of a Triangular Matrix

The determinant of a triangular matrix is the product of its diagonal entries; for example, det [[-3, 0, 0], [6, 4, 0], [-1, 2, 5]] = (-3)·4·5 = -60.

Sage Coding

D = matrix(3, 3, [-3, 0, 0, 6, 4, 0, -1, 2, 5])
D.det()

Evaluate

-60

General Properties of Determinants

Theorem 1  Behavior of an nth-Order Determinant under Elementary Row Operations

(a) Interchange of two rows multiplies the value of the determinant by -1.

(b) Addition of a multiple of a row to another row does not alter the value of the determinant.

(c) Multiplication of a row by a nonzero constant c multiplies the value of the determinant by c. (This holds also when c = 0, but it then no longer gives an elementary row operation.)

Example 4  Evaluation of a Determinant by Reduction to Triangular Form

Find det A for A = [[2, 0, -4, 6], [4, 5, 1, 0], [0, 2, 6, -1], [-3, 8, 9, 1]].

Solution

Reduce A to upper triangular form by row operations of type (b), which leave the determinant unchanged, and multiply the resulting diagonal entries; this gives det A = 1134.

Sage Coding

A = matrix(4, 4, [2, 0, -4, 6, 4, 5, 1, 0, 0, 2, 6, -1, -3, 8, 9, 1])
A.det()

Evaluate

1134

Theorem 2  Further Properties of nth-Order Determinants

Properties (a)-(c) in Theorem 1 hold also for columns:

(a) Interchange of two columns multiplies the value of the determinant by -1.

(b) Addition of a multiple of a column to another column does not alter the value of the determinant.

(c) Multiplication of a column by a nonzero constant c multiplies the value of the determinant by c.

(d) Transposition leaves the value of a determinant unaltered.

(e) A zero row or column renders the value of a determinant zero.

(f) Proportional rows or columns render the value of a determinant zero. In particular, a determinant with two identical rows or columns has the value zero.

Theorem 3  Rank in Terms of Determinants

Consider an m×n matrix A:

(1) A has rank r ≥ 1 if and only if A has an r×r submatrix with a nonzero determinant.

(2) The determinant of any square submatrix with more than r rows, contained in A, has a value equal to zero.

Furthermore, if m = n, we have:

(3) An n×n square matrix A has rank n if and only if det A ≠ 0.

Cramer's Rule

(a) If a linear system of n equations in the same number of unknowns x_1, x_2, ..., x_n has a nonzero coefficient determinant D = det A, the system has precisely one solution. This solution is given by the formulas

x_1 = D_1/D,  x_2 = D_2/D,  ...,  x_n = D_n/D    (Cramer's rule),

where D_k is the determinant obtained from D by replacing in D the kth column by the column with the entries b_1, b_2, ..., b_n.

(b) Hence if the system is homogeneous and D ≠ 0, it has only the trivial solution x_1 = 0, x_2 = 0, ..., x_n = 0. If D = 0, the homogeneous system also has nontrivial solutions.

7.8 Inverse of a Matrix. Gauss-Jordan Elimination

The inverse of an n×n matrix A is denoted by A^(-1) and is an n×n matrix such that

AA^(-1) = A^(-1)A = I,

where I is the n×n unit matrix.

If A has an inverse, then A is a nonsingular matrix. If A has no inverse, then A is a singular matrix.

If A has an inverse, the inverse is unique: if both B and C are inverses of A, then AB = I and CA = I, so B = IB = (CA)B = C(AB) = CI = C.

Theorem 1  Existence of the Inverse

The inverse A^(-1) of an n×n matrix A exists if and only if rank A = n, thus if and only if det A ≠ 0. Hence A is nonsingular if rank A = n, and singular if rank A < n.

Determination of the Inverse by the Gauss-Jordan Elimination

Example 1  Inverse of a Matrix. Gauss-Jordan Elimination

Determine the inverse of A = [[-1, 1, 2], [3, -1, 1], [-1, 3, 4]].

Solution

Apply the Gauss-Jordan elimination to the augmented matrix [A : I]. When the left half has been reduced to I, the last three columns constitute A^(-1).

As a check, AA^(-1) = I; similarly, A^(-1)A = I.

Sage Coding

A = matrix([[-1, 1, 2], [3, -1, 1], [-1, 3, 4]])
I = identity_matrix(3)
Aug = A.augment(I).rref() # RREF of the augmented matrix [A : I]
print(Aug)
print()
print(Aug.submatrix(0, 3, 3, 3))  # the inverse matrix
print()
print(A.inverse()) # built-in command (inverse matrix)

Evaluate

[     1      0      0  -7/10    1/5   3/10]

[     0      1      0 -13/10   -1/5   7/10]

[     0      0      1    4/5    1/5   -1/5]

[ -7/10    1/5   3/10]

[-13/10   -1/5   7/10]

[   4/5    1/5   -1/5]

[ -7/10    1/5   3/10]

[-13/10   -1/5   7/10]

[   4/5    1/5   -1/5]
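The Gauss-Jordan procedure of Example 1 can be sketched in plain Python with exact fractions; `inverse` is a hypothetical helper name, not Sage's built-in.

```python
from fractions import Fraction

def inverse(A):
    """Invert a square matrix by Gauss-Jordan elimination on [A : I].
    Raises ValueError if A is singular (rank A < n)."""
    n = len(A)
    # augmented matrix [A : I] in exact arithmetic
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        M[col] = [x / M[col][col] for x in M[col]]      # scale pivot row to 1
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]   # the right half is A^(-1)

A = [[-1, 1, 2], [3, -1, 1], [-1, 3, 4]]
for row in inverse(A):
    print(row)   # same entries as the Sage result above
```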

Useful Formulas for Inverses

Theorem 2  Inverse of a Matrix by Determinants

The inverse of a nonsingular n×n matrix A = [a_jk] is given by

A^(-1) = (1/det A) [C_jk]^T,

where C_jk is the cofactor of a_jk in det A. (Note that in A^(-1) the cofactor C_jk occupies the same place as a_kj, not a_jk, does in A.)

In particular, the inverse of A = [[a_11, a_12], [a_21, a_22]] is

A^(-1) = (1/det A) [[a_22, -a_12], [-a_21, a_11]].

Example 2  Inverse of a 2×2 Matrix

For A = [[3, 1], [2, 4]], det A = 10 and A^(-1) = (1/10)[[4, -1], [-2, 3]].

Sage Coding

A = matrix(2, 2, [3, 1, 2, 4])
print("A^(-1)=")
print(A.inverse())

Evaluate

A^(-1)=

[  2/5 -1/10]

[ -1/5  3/10]

Example 3  Inverse of a 3×3 Matrix by Determinants

Find the inverse of A = [[-1, 1, 2], [3, -1, 1], [-1, 3, 4]].

Solution

Here det A = 10, so A^(-1) = (1/10)[C_jk]^T; the cofactor computation reproduces the inverse found in Example 1 of this section.

Sage Coding

A = matrix(3, 3, [-1, 1, 2, 3, -1, 1, -1, 3, 4])
detA = A.det()
print(1/detA*A.adjoint())

Evaluate

[ -7/10    1/5   3/10]

[-13/10   -1/5   7/10]

[   4/5    1/5   -1/5]

Diagonal matrices A = [a_jj], with a_jk = 0 for j ≠ k, have an inverse if and only if all a_jj ≠ 0. Then A^(-1) is diagonal, too, with entries 1/a_11, 1/a_22, ..., 1/a_nn.

Example 4  Inverse of a Diagonal Matrix

Let A be the diagonal matrix with nonzero diagonal entries a_11, a_22, a_33.

Solution

Then the inverse A^(-1) is the diagonal matrix with diagonal entries 1/a_11, 1/a_22, 1/a_33.

Products can be inverted by taking the inverse of each factor and multiplying these inverses in reverse order:

(AB)^(-1) = B^(-1) A^(-1).

Hence for more than two factors,

(AC...PQ)^(-1) = Q^(-1) P^(-1) ... C^(-1) A^(-1).

Unusual Properties of Matrix Multiplication. Cancellation Laws

[1] Matrix multiplication is not commutative; that is, in general

AB ≠ BA.

[2] AB = 0 does not generally imply A = 0 or B = 0. For example,

[[1, 1], [2, 2]] [[-1, 1], [1, -1]] = [[0, 0], [0, 0]].

[3] AC = AD does not generally imply C = D (even when A ≠ 0).

Theorem 3  Cancellation Laws

Let A, B, C be n×n matrices. Then:

(a) If rank A = n and AB = AC, then B = C.

(b) If rank A = n, then AB = 0 implies B = 0. Hence if AB = 0, but A ≠ 0 as well as B ≠ 0, then rank A < n and rank B < n.

(c) If A is singular, so are BA and AB.

Determinants of Matrix Products

Theorem 4  Determinant of a Product of Matrices

For any n×n matrices A and B,

det(AB) = det(BA) = det A det B.

In contrast, for a sum of matrices, in general

det(A + B) ≠ det A + det B.

For example, let A = [[6, 1], [3, 2]] and B = [[4, 3], [1, 2]].

Solution

det A = 9,  det B = 5.

(1) det(AB) = det A det B = 9·5 = 45.

(2) det A + det B = 14 ≠ det A det B, so det(AB) ≠ det A + det B here.

Sage Coding

A = matrix(2, 2, [6, 1, 3, 2])
B = matrix(2, 2, [4, 3, 1, 2])
print(A.det())
print(B.det())
print((A.det())*(B.det()))
print((A.det()) + (B.det()))
print(bool(((A*B).det()) == (A.det())*(B.det())))
print(bool((A.det())*(B.det()) == (A.det()) + (B.det())))

Evaluate

9

5

45

14

True

False

[Hanbit Academy] Engineering Mathematics with Sage:

[Authors] 이상구, 김영록, 박준현, 김응기, 이재화

Contents

A. Engineering Mathematics 1 – Linear Algebra, Ordinary Differential Equations + Lab

Chapter 01 Vectors and Linear Algebra http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-1.html

Chapter 02 Understanding Differential Equations http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-2.html

Chapter 03 First-Order Ordinary Differential Equations http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-3.html

Chapter 04 Second-Order Ordinary Differential Equations http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-4.html

Chapter 05 Higher-Order Ordinary Differential Equations http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-5.html

Chapter 06 Systems of Differential Equations; Nonlinear Differential Equations http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-6.html

Chapter 07 Series Solutions of Ordinary Differential Equations; Special Functions http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-7.html

Chapter 08 Laplace Transforms http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-8.html

B. Engineering Mathematics 2 – Vector Calculus, Complex Analysis + Lab

Chapter 09 Vector Differentiation: Gradient, Divergence, Curl http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-9.html

Chapter 10 Vector Integration; Integral Theorems http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-10.html

Chapter 11 Fourier Series, Integrals, and Transforms http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-11.html

Chapter 12 Partial Differential Equations http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-12.html

Chapter 13 Complex Numbers and Functions; Complex Differentiation http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-13.html

Chapter 14 Complex Integration http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-14.html

Chapter 15 Series, Residues http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-15.html

Chapter 16 Conformal Mapping http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-16.html