2020 Academic Year, Spring Semester

                    Department of Semiconductor Engineering, Engineering Mathematics 1


   Main textbook: Erwin Kreyszig, Advanced Engineering Mathematics, 10th Edition

   Supplementary textbook: Sang-Gu Lee et al., 최신공학수학 I (Engineering Mathematics I), 1st Edition

   Lab pages: http://www.hanbit.co.kr/EM/sage/    http://matrix.skku.ac.kr/LA/

   Class hours: Engineering Mathematics 1, (Mon 15:00-16:15)   (Wed 16:30-17:45)

   Instructor: Dr. Eung-Ki Kim


 



Week 12



7.3: Linear Systems of Equations

  1.2 Linear Systems of Equations

  http://matrix.skku.ac.kr/LA/Ch-2/


7.4: Linear Independence, Rank of Matrix, Vector Space

  1.3 Linear Independence and Dependence, Rank

  http://matrix.skku.ac.kr/LA/Ch-3/ 

  http://matrix.skku.ac.kr/LA/Ch-7/


7.5: Solutions of Linear Systems

  http://www.hanbit.co.kr/EM/sage/1_chap1.html 

  http://matrix.skku.ac.kr/2018-EM/EM-2-W5-lab.html





7.3 Linear Systems of Equations. Gauss Elimination


We will learn how to:

Solve linear systems of equations by Gauss elimination and understand the structure of their solutions.


Linear Systems, Coefficient Matrix, Augmented Matrix.


A linear system of $m$ equations in $n$ unknowns $x_1, x_2, \dots, x_n$ is a set of equations of the form

(1)    $a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = b_1$
       $a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n = b_2$
       $\cdots$
       $a_{m1}x_1 + a_{m2}x_2 + \dots + a_{mn}x_n = b_m$        (linear system)

The system is called linear because each variable $x_j$ appears in the first power only, just as in the equation of a straight line. $a_{11}, \dots, a_{mn}$ are given numbers, called the coefficients of the system. $b_1, \dots, b_m$ on the right are also given numbers.

If all $b_j$ are zero, then (1) is a homogeneous system.

If at least one $b_j$ is not zero, then (1) is a nonhomogeneous system.


A solution of (1) is a set of numbers $x_1, x_2, \dots, x_n$ that satisfies all $m$ equations.

A solution vector of (1) is a vector $\mathbf{x}$ whose components form a solution of (1). If the system (1) is homogeneous, it has at least the trivial solution $x_1 = 0$, $x_2 = 0$, $\dots$, $x_n = 0$.



Matrix Form of the Linear System (1).

We see that the $m$ equations of (1) may be written as a single vector equation

(2)    $A\mathbf{x} = \mathbf{b}$        (definition of matrix multiplication)

where the coefficient matrix $A = [a_{jk}]$ is the $m \times n$ matrix

    $A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$   and   $\mathbf{x} = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}$   and   $\mathbf{b} = \begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix}$

are column vectors.

     $A$ is the coefficient matrix, an $m \times n$ matrix.

     $\mathbf{x}$ is the column vector of unknowns.

     $\mathbf{b}$ is the column vector of constants.


We assume that the coefficients $a_{jk}$ are not all zero, so that $A$ is not a zero matrix.

$\mathbf{x}$ has $n$ components, whereas $\mathbf{b}$ has $m$ components.

The matrix

    $\tilde{A} = \begin{bmatrix} a_{11} & \cdots & a_{1n} & b_1 \\ \vdots & & \vdots & \vdots \\ a_{m1} & \cdots & a_{mn} & b_m \end{bmatrix}$

is the augmented matrix of the system (1).

The last column of $\tilde{A}$ does not belong to $A$.
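In code, the matrix form (2) is just a matrix-vector product. A minimal NumPy sketch (the 2x2 system below is illustrative, not from the text):

```python
import numpy as np

# illustrative 2x2 system: 2*x1 + x2 = 3,  x1 + 3*x2 = 4
A = np.array([[2., 1.],
              [1., 3.]])               # coefficient matrix
x = np.array([1., 1.])                 # a candidate solution vector
b = A @ x                              # matrix form (2): A x = b
Ab = np.hstack([A, b.reshape(-1, 1)])  # augmented matrix [A | b]
```

Here `A @ x` reproduces the left-hand sides of the equations, and stacking `b` as an extra column gives the augmented matrix used by Gauss elimination.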

              



Example 1  Geometric Interpretation. Existence and Uniqueness of Solutions

If $m = n = 2$, we have two equations in two unknowns $x_1, x_2$:

    $a_{11}x_1 + a_{12}x_2 = b_1$

    $a_{21}x_1 + a_{22}x_2 = b_2$

If we interpret $x_1, x_2$ as coordinates in the plane, each equation represents a straight line, and a solution is a point lying on both lines. Hence there are three possible cases:

(a) Precisely one solution if the lines intersect.

(b) Infinitely many solutions if the lines coincide.

(c) No solution if the lines are parallel.



Gauss Elimination and Back Substitution

Gauss elimination reduces the system to triangular form.

Back substitution then solves the last equation for its variable,

and works backward, substituting each known value into the preceding equation.

The same elementary row operations are applied to the augmented matrix $\tilde{A} = [\,A \mid \mathbf{b}\,]$.
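Since the original worked example was not preserved in these notes, here is a minimal sketch of Gauss elimination with back substitution in Python/NumPy (the function name and test data are ours; a square nonsingular system is assumed):

```python
import numpy as np

def gauss_solve(A, b):
    """Solve A x = b by Gauss elimination with partial pivoting,
    followed by back substitution (square, nonsingular A assumed)."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    M = np.hstack([A, b.reshape(-1, 1)])      # augmented matrix [A | b]
    # forward elimination: reduce to triangular form
    for k in range(n - 1):
        p = k + np.argmax(np.abs(M[k:, k]))   # pivot row (partial pivoting)
        M[[k, p]] = M[[p, k]]
        for i in range(k + 1, n):
            m = M[i, k] / M[k, k]
            M[i, k:] -= m * M[k, k:]
    # back substitution: solve the last equation first, then work upward
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (M[i, n] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
    return x
```

For instance, `gauss_solve([[2, 1], [1, 3]], [3, 4])` returns the solution of the 2x2 system above.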



Elementary Row Operations. Row-Equivalent Systems


Elementary Row Operations for Matrices

Interchange of two rows.

Addition of a constant multiple of one row to another row.

Multiplication of a row by a nonzero constant $c$.


Elementary Operations for Equations

Interchange of two equations.

Addition of a constant multiple of one equation to another equation.

Multiplication of an equation by a nonzero constant $c$.


Theorem 1  Row-Equivalent Systems

Row-equivalent linear systems have the same set of solutions.


A linear system (1) is called:

overdetermined if it has more equations than unknowns ($m > n$),

determined if $m = n$,

underdetermined if it has fewer equations than unknowns ($m < n$).


A linear system is consistent if it has at least one solution.

A linear system is inconsistent if it has no solution at all.


Example 3  Gauss Elimination if Infinitely Many Solutions Exist

Solve the following system using Gauss elimination.

Solution

Reduce the augmented matrix to row echelon form; a row of zeros appears, so the system has infinitely many solutions.

Back substitution: set the free variables equal to arbitrary parameters, then solve for the leading variables in terms of those parameters. This expresses every solution of the system.


Sage Coding

http://math3.skku.ac.kr/  http://sage.skku.edu/  http://mathlab.knou.ac.kr:8080/
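The original Sage cell was not preserved, so here is a sketch of how a system with infinitely many solutions can be examined in SymPy (the system below is illustrative, not the example's original data):

```python
from sympy import Matrix, symbols, linsolve

x1, x2, x3 = symbols('x1 x2 x3')
# illustrative underdetermined system:
#   x1 + x2 +   x3 = 3
#   x1 - x2 + 2*x3 = 1
A = Matrix([[1, 1, 1],
            [1, -1, 2]])
b = Matrix([3, 1])
sol = linsolve((A, b), x1, x2, x3)
print(sol)   # a one-parameter family of solutions; x3 is free
```

`linsolve` returns the solution set symbolically, with the free variable left as a parameter, which matches the back-substitution description above.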



Example 4  Gauss Elimination if No Solution Exists

Applying Gauss elimination to the system produces a row whose coefficient part is zero but whose right-hand side is not, that is, a false statement of the form $0 = c$ with $c \neq 0$.

This false statement shows that the system has no solution.


Sage Coding

http://math3.skku.ac.kr/  http://sage.skku.edu/  http://mathlab.knou.ac.kr:8080/



Row Echelon Form and Information From It

At the end of the Gauss elimination the forms of the coefficient matrix, the augmented matrix, and the system itself are called the row echelon form. In it, rows of zeros, if present, are the last rows, and in each nonzero row the leftmost nonzero entry is farther to the right than in the previous row.

Note that we do not require that the leftmost nonzero entries be 1, since this would have no theoretical or numerical advantage.


At the end of the Gauss elimination (before the back substitution) the row echelon form of the augmented matrix will be

(9)    $\begin{bmatrix} r_{11} & r_{12} & \cdots & \cdots & r_{1n} & f_1 \\ & r_{22} & \cdots & \cdots & r_{2n} & f_2 \\ & & \ddots & & \vdots & \vdots \\ & & & r_{rr} \cdots & r_{rn} & f_r \\ & & & & & f_{r+1} \\ & & & & & \vdots \\ & & & & & f_m \end{bmatrix}$


Here $r \leq m$ and $r_{11} \neq 0$, and all the entries in the triangle below the staircase of pivots as well as in the rectangle at the lower left are zero. From this we see that with respect to solutions of the system with augmented matrix (9) (and thus with respect to the originally given system) there are three possible cases:

(a) Exactly one solution

If $r = n$ and $f_{r+1}, \dots, f_m$, if present, are zero. To get the solution, solve the $n$th equation corresponding to (9) (which is $r_{nn}x_n = f_n$) for $x_n$, then the $(n-1)$st equation for $x_{n-1}$, and so on up the line.

(b) Infinitely many solutions

If $r < n$ and $f_{r+1}, \dots, f_m$, if present, are zero. To obtain any of these solutions, choose values of $x_{r+1}, \dots, x_n$ arbitrarily. Then solve the $r$th equation for $x_r$, then the $(r-1)$st equation for $x_{r-1}$, and so on up the line. See Example 3.

(c) No solution

If $r < m$ and one of the entries $f_{r+1}, \dots, f_m$ is not zero. See Example 4.
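These three cases can be detected programmatically by comparing ranks (this anticipates the Fundamental Theorem of Sec. 7.5). A minimal sketch, assuming NumPy; the helper name is ours:

```python
import numpy as np

def classify(A, b):
    """Classify the linear system A x = b into the three cases above."""
    A = np.asarray(A, dtype=float)
    Ab = np.hstack([A, np.asarray(b, dtype=float).reshape(-1, 1)])
    r = np.linalg.matrix_rank(A)
    n = A.shape[1]
    if r < np.linalg.matrix_rank(Ab):
        return "no solution"                 # case (c): some f_{r+1},...,f_m nonzero
    if r == n:
        return "exactly one solution"        # case (a)
    return "infinitely many solutions"       # case (b)
```

For instance, `classify([[1, 1], [1, 1]], [1, 2])` reports the inconsistent case, since the ranks of the coefficient and augmented matrices differ.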



7.4 Linear Independence. Rank of a Matrix. Vector Space


Linear Independence and Dependence of Vectors

Given any set of $m$ vectors $\mathbf{a}_{(1)}, \dots, \mathbf{a}_{(m)}$ (with the same number of components), a linear combination of these vectors is an expression of the form

    $c_1\mathbf{a}_{(1)} + c_2\mathbf{a}_{(2)} + \dots + c_m\mathbf{a}_{(m)}$

where $c_1, c_2, \dots, c_m$ are any scalars. Now consider the equation

(1)    $c_1\mathbf{a}_{(1)} + c_2\mathbf{a}_{(2)} + \dots + c_m\mathbf{a}_{(m)} = \mathbf{0}$.

If the only scalars satisfying (1) are $c_1 = c_2 = \dots = c_m = 0$, then $\mathbf{a}_{(1)}, \dots, \mathbf{a}_{(m)}$ form a linearly independent set (or, briefly, are linearly independent).

Otherwise, if (1) also holds with scalars not all zero, $\mathbf{a}_{(1)}, \dots, \mathbf{a}_{(m)}$ are linearly dependent.

For instance, if (1) holds with $c_1 \neq 0$, we can solve (1) for $\mathbf{a}_{(1)}$:

    $\mathbf{a}_{(1)} = k_2\mathbf{a}_{(2)} + \dots + k_m\mathbf{a}_{(m)}$,   where   $k_j = -c_j/c_1$.

    (Some $k_j$ may be zero. Or even all of them, namely, if $\mathbf{a}_{(1)} = \mathbf{0}$.)



Example 1  Linear Independence and Dependence

The three vectors

    $\mathbf{a}_{(1)} = [\,3 \ \ \ 0 \ \ \ 2 \ \ \ 2\,]$

    $\mathbf{a}_{(2)} = [\,-6 \ \ 42 \ \ 24 \ \ 54\,]$

    $\mathbf{a}_{(3)} = [\,21 \ -21 \ \ \ 0 \ -15\,]$

are linearly dependent because

    $6\mathbf{a}_{(1)} - \tfrac{1}{2}\mathbf{a}_{(2)} - \mathbf{a}_{(3)} = \mathbf{0}$.

The vectors $\mathbf{a}_{(1)}$ and $\mathbf{a}_{(2)}$ are linearly independent because neither is a scalar multiple of the other.

Sage Coding

http://math3.skku.ac.kr/  http://sage.skku.edu/  http://mathlab.knou.ac.kr:8080/



Rank of a Matrix


Definition

The rank of a matrix $A$ is the maximum number of linearly independent row vectors of $A$. It is denoted by rank $A$.


Example 2  Rank

The matrix

(2)    $A = \begin{bmatrix} 3 & 0 & 2 & 2 \\ -6 & 42 & 24 & 54 \\ 21 & -21 & 0 & -15 \end{bmatrix}$

has rank 2, because Example 1 shows that the first two row vectors are linearly independent, whereas all three row vectors are linearly dependent.

We call a matrix $A_1$ row-equivalent to a matrix $A_2$ if $A_1$ can be obtained from $A_2$ by

(finitely many!) elementary row operations.



Theorem 1  Row-Equivalent Matrices

Row-equivalent matrices have the same rank.
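Theorem 1 can be checked numerically: elementary row operations (as performed by row reduction) do not change the rank. A small SymPy sketch with an illustrative matrix:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],    # = 2 * (row 1), so the rows are dependent
            [1, 0, 1]])
R, pivots = A.rref()      # R is row-equivalent to A (reduced row echelon form)
# row-equivalent matrices have the same rank (Theorem 1)
assert A.rank() == R.rank() == len(pivots)
```

The rank also equals the number of pivot columns of the reduced row echelon form, which is how `rref` reports it.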


Example 3  Determination of Rank

Reduce the matrix $A$ of Example 2 to row echelon form by elementary row operations. The echelon form has two nonzero rows, hence rank $A = 2$, in agreement with Example 2.


Sage Coding

http://math3.skku.ac.kr/  http://sage.skku.edu/  http://mathlab.knou.ac.kr:8080/


Theorem 2  Linear Independence and Dependence of Vectors

Consider $p$ vectors that each have $n$ components. These vectors are linearly independent if the matrix formed with these vectors as row vectors has rank $p$. However, these vectors are linearly dependent if that matrix has rank less than $p$.



Theorem 3  Rank in Terms of Column Vectors

The rank of a matrix $A$ equals the maximum number of linearly independent column vectors of $A$.

Hence $A$ and its transpose $A^{\mathsf{T}}$ have the same rank.



Example 4  Illustration of Theorem 3

The matrix $A$ in (2) has rank 2. From Example 1 we see that the first two row vectors are linearly independent, and by "working backward" we can verify that the third row is a linear combination of the first two. Similarly, the first two columns are linearly independent, and by reducing $A$ by column operations we find that the remaining columns are linear combinations of the first two.


Sage Coding

http://math3.skku.ac.kr/  http://sage.skku.edu/  http://mathlab.knou.ac.kr:8080/


Theorem 4  Linear Dependence of Vectors

Consider $p$ vectors each having $n$ components. If $n < p$, then these vectors are linearly dependent.

Proof

The matrix $A$ with those $p$ vectors as row vectors has $p$ rows and $n$ columns.

By Theorem 3 it has rank $A \leq n < p$, which implies linear dependence by Theorem 2.



Vector Space

Dimension of $V$:

The maximum number of linearly independent vectors in $V$, denoted dim $V$.


Basis for $V$:

A linearly independent set in $V$ consisting of a maximum possible number of vectors in $V$.


The number of vectors of a basis for $V$ equals dim $V$.


The set of all linear combinations of given vectors $\mathbf{a}_{(1)}, \dots, \mathbf{a}_{(p)}$ with the same number of components is called the span of these vectors.

A span is a vector space.


By a subspace of a vector space $V$ we mean a nonempty subset of $V$ (including $V$ itself) that forms itself a vector space with respect to the two algebraic operations defined for the vectors of $V$.



Example 5  Vector Space, Dimension, Basis

The span of the three vectors in Example 1 is a vector space of dimension 2, and a basis is $\{\mathbf{a}_{(1)}, \mathbf{a}_{(2)}\}$, for instance, or $\{\mathbf{a}_{(1)}, \mathbf{a}_{(3)}\}$, etc.


Sage Coding

http://math3.skku.ac.kr/  http://sage.skku.edu/  http://mathlab.knou.ac.kr:8080/



Theorem 5  Vector Space $\mathbb{R}^n$

The vector space $\mathbb{R}^n$ consisting of all vectors with $n$ components ($n$ real numbers) has dimension $n$.

Proof

A basis of $n$ vectors is

    $\mathbf{a}_{(1)} = [\,1 \ \ 0 \ \cdots \ 0\,]$,

    $\mathbf{a}_{(2)} = [\,0 \ \ 1 \ \cdots \ 0\,]$,

    $\cdots$

    $\mathbf{a}_{(n)} = [\,0 \ \ 0 \ \cdots \ 1\,]$.

In the case of a matrix $A$, we call the span of the row vectors the row space of $A$ and the span of the column vectors the column space of $A$.



Theorem 6  Row Space and Column Space

The row space and the column space of a matrix $A$ have the same dimension, equal to rank $A$.



Finally, for a given matrix $A$, the solution set of the homogeneous system $A\mathbf{x} = \mathbf{0}$ is a vector space, called the null space of $A$, and its dimension is called the nullity of $A$.

(6)          rank $A$ + nullity $A$ = number of columns of $A$.
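The rank-nullity relation (6) can be verified computationally. A SymPy sketch with an illustrative matrix:

```python
from sympy import Matrix

A = Matrix([[1, 2, 1],
            [2, 4, 2]])        # illustrative 2x3 matrix; second row = 2 * first
null_basis = A.nullspace()     # basis vectors of the null space of A
# equation (6): rank A + nullity A = number of columns of A
assert A.rank() + len(null_basis) == A.cols
```

Here rank $A = 1$ and the null space is two-dimensional, so $1 + 2 = 3$, the number of columns.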



Theorem

The row rank and the column rank of a matrix $A$ are equal.



7.5 Solutions of Linear Systems


Theorem 1  Fundamental Theorem for Linear Systems

(a) Existence.

A linear system of $m$ equations in $n$ unknowns $x_1, x_2, \dots, x_n$

(1)    $a_{11}x_1 + \dots + a_{1n}x_n = b_1$
       $a_{21}x_1 + \dots + a_{2n}x_n = b_2$
       $\cdots$
       $a_{m1}x_1 + \dots + a_{mn}x_n = b_m$

is consistent, that is, has solutions, if and only if the coefficient matrix $A$ and the augmented matrix $\tilde{A}$ have the same rank. Here

    $A = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix}$   and   $\tilde{A} = [\,A \mid \mathbf{b}\,]$.

(b) Uniqueness.

The system (1) has precisely one solution if and only if this common rank $r$ of $A$ and $\tilde{A}$ equals $n$.

(c) Infinitely many solutions.

If this common rank $r$ is less than $n$, the system (1) has infinitely many solutions. All of these solutions are obtained by determining $r$ suitable unknowns (whose submatrix of coefficients must have rank $r$) in terms of the remaining $n - r$ unknowns, to which arbitrary values can be assigned.

(d) Gauss elimination.

If solutions exist, they can all be obtained by the Gauss elimination. (This method will automatically reveal whether or not solutions exist.)



Example 1

Solve the given system of three equations in three unknowns by Gauss elimination.

Solution

Reduce the augmented matrix $[\,A \mid \mathbf{b}\,]$ to row echelon form by elementary row operations. The corresponding triangular system is then solved by back substitution. Here rank $A$ = rank $\tilde{A}$ = $n$ = 3, so by Theorem 1(b) the system has a unique solution.


Sage Coding

http://math3.skku.ac.kr/  http://sage.skku.edu/  http://mathlab.knou.ac.kr:8080/



Example 2

Solve the given system using Gauss-Jordan elimination.

Solution

We will use Sage to solve this.


Sage Coding

http://math3.skku.ac.kr/  http://sage.skku.edu/  http://mathlab.knou.ac.kr:8080/


Here the common rank of $A$ and $\tilde{A}$ is less than $n$, so by Theorem 1(c) some unknowns can be chosen freely. Letting the free unknowns be arbitrary real parameters and solving for the leading unknowns gives the general solution; the solution set is the infinite family obtained as those parameters range over the real numbers.

Thus this system has infinitely many solutions.



Homogeneous Linear System


Theorem 2  Homogeneous Linear System

A homogeneous linear system

(4)    $a_{11}x_1 + \dots + a_{1n}x_n = 0$
       $a_{21}x_1 + \dots + a_{2n}x_n = 0$
       $\cdots$
       $a_{m1}x_1 + \dots + a_{mn}x_n = 0$

always has the trivial solution $x_1 = 0$, $x_2 = 0$, $\dots$, $x_n = 0$.

Nontrivial solutions exist if and only if rank $A < n$.

If rank $A < n$, these solutions, together with $\mathbf{x} = \mathbf{0}$, form a vector space of dimension $n -$ rank $A$, called the solution space of (4).

In particular, if $\mathbf{x}_{(1)}$ and $\mathbf{x}_{(2)}$ are solution vectors of (4), then $\mathbf{x} = c_1\mathbf{x}_{(1)} + c_2\mathbf{x}_{(2)}$ with any scalars $c_1$ and $c_2$ is a solution vector of (4). (This does not hold for nonhomogeneous systems. Also, the term solution space is used for homogeneous systems only.)


The solution space of (4) is also called the null space of $A$, because $A\mathbf{x} = \mathbf{0}$ for every $\mathbf{x}$ in the solution space of (4). Its dimension is called the nullity of $A$. Hence Theorem 2 states that

(5)        rank $A$ + nullity $A$ = $n$

where $n$ is the number of unknowns (number of columns of $A$).

Furthermore, by the definition of rank we have rank $A \leq m$ in (4). Hence if $m < n$, then rank $A < n$. By Theorem 2 this gives the practically important Theorem 3.



Theorem 3  Homogeneous Linear System with Fewer Equations Than Unknowns

A homogeneous linear system with fewer equations than unknowns always has nontrivial solutions.
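Theorem 3 can be illustrated with SymPy's null-space computation (the 2x3 coefficient matrix below is ours):

```python
from sympy import Matrix, zeros

# 2 equations in 3 unknowns (m < n): nontrivial solutions must exist
A = Matrix([[1, 1, 1],
            [1, 2, 3]])        # illustrative coefficients
basis = A.nullspace()          # basis of nontrivial solutions of A x = 0
assert len(basis) == A.cols - A.rank()   # nullity = n - rank A >= 1 here
assert A * basis[0] == zeros(2, 1)       # a genuine nontrivial solution
```

Since rank $A \leq m = 2 < 3 = n$, the nullity $n -$ rank $A$ is at least 1, so `nullspace()` always returns at least one basis vector.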



Example 3

Using Gauss-Jordan elimination, express the solutions of the given homogeneous system in vector form.

Solution

Reduce the augmented matrix to reduced row echelon form (RREF). The leading 1's of the RREF correspond to the leading variables, and the remaining variables are free variables.

Now set each free variable equal to an arbitrary parameter and solve the equations of the RREF for the leading variables in terms of those parameters.

Writing the result as a linear combination of fixed vectors, with the parameters as coefficients, expresses every solution of the given homogeneous system in vector form. The solution set is the span of those vectors, namely the null space of the coefficient matrix.



Nonhomogeneous Linear System


Theorem 4  Nonhomogeneous Linear System

If a nonhomogeneous linear system $A\mathbf{x} = \mathbf{b}$ is consistent, then all of its solutions are obtained as

(6)        $\mathbf{x} = \mathbf{x}_0 + \mathbf{x}_h$

where $\mathbf{x}_0$ is any (fixed) solution of $A\mathbf{x} = \mathbf{b}$ and $\mathbf{x}_h$ runs through all the solutions of the corresponding homogeneous system $A\mathbf{x} = \mathbf{0}$.
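The decomposition (6) can be demonstrated numerically: take any particular solution and add any null-space vector. A NumPy sketch with an illustrative consistent system:

```python
import numpy as np

# illustrative consistent system with infinitely many solutions
A = np.array([[1., 1., 1.],
              [1., -1., 2.]])
b = np.array([3., 1.])
x0 = np.linalg.lstsq(A, b, rcond=None)[0]   # one particular solution x0
_, _, Vt = np.linalg.svd(A)
xh = Vt[-1]              # spans the null space of A here (rank 2, n = 3)
assert np.allclose(A @ xh, 0)
# by (6), x0 + c * xh solves A x = b for every scalar c
assert np.allclose(A @ (x0 + 5.0 * xh), b)
```

Since the system is consistent, `lstsq` returns an exact solution, and the last right singular vector spans the one-dimensional null space; their sum remains a solution for any scalar multiple.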



Example 4

From the nonhomogeneous linear system in Example 2, its corresponding homogeneous linear system in Example 3, and (6), we can express a general solution of the nonhomogeneous linear system in vector form: the sum of one particular solution and an arbitrary element of the solution space of the homogeneous system, for any values of the parameters.



http://matrix.skku.ac.kr/sglee/ 


[Hanbit Academy] Engineering Mathematics with Sage:

[Authors] 이상구, 김영록, 박준현, 김응기, 이재화


Contents

 A. Engineering Mathematics 1 – Linear Algebra, Ordinary Differential Equations + Lab

Chapter 01 Vectors and Linear Algebra http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-1.html


Chapter 02 Understanding Differential Equations http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-2.html

Chapter 03 First-Order Ordinary Differential Equations http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-3.html

Chapter 04 Second-Order Ordinary Differential Equations http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-4.html

Chapter 05 Higher-Order Ordinary Differential Equations http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-5.html

Chapter 06 Systems of Differential Equations, Nonlinear Differential Equations http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-6.html

Chapter 07 Series Solutions of Ordinary Differential Equations, Special Functions http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-7.html

Chapter 08 Laplace Transforms http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-8.html



B. Engineering Mathematics 2 – Vector Calculus, Complex Analysis + Lab

Chapter 09 Vector Differentiation; Gradient, Divergence, Curl http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-9.html

Chapter 10 Vector Integration, Integral Theorems http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-10.html

Chapter 11 Fourier Series, Integrals, and Transforms http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-11.html

Chapter 12 Partial Differential Equations http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-12.html

Chapter 13 Complex Numbers and Functions, Complex Differentiation http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-13.html

Chapter 14 Complex Integration http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-14.html

Chapter 15 Series, Residues http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-15.html

Chapter 16 Conformal Mapping http://matrix.skku.ac.kr/EM-Sage/E-Math-Chapter-16.html



Made by Prof. Sang-Gu LEE  sglee at skku.edu

http://matrix.skku.ac.kr/sglee/   with Dr. Jae Hwa LEE


Copyright © 2020 SKKU Matrix Lab. All rights reserved.
Made by Manager: Prof. Sang-Gu Lee and Dr. Jae Hwa Lee
*This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2017R1D1A1B03035865).