LA Chapter 8 by SGLee


















Chapter 8




 8.1 Matrix Representation of Linear Transformation

 8.2 Similarity and Diagonalization

 8.3 Diagonalization with orthogonal matrix, *Function of matrix

 8.4 Quadratic forms

*8.5 Applications of Quadratic forms

 8.6 SVD and Pseudo-Inverse

 8.7 Complex eigenvalues and eigenvectors

 8.8 Hermitian, Unitary, Normal Matrices

*8.9 Linear system of differential equations

 8.10 Exercises



 (English Textbook)   

 (e-book : Korean)  






Matrix Calculator

Graph Theory



In Chapter 6, we have studied how to represent a linear transformation from  into 

       as a matrix using its corresponding standard matrix.

We were able to compute the standard matrix of the linear transformation based on the fact that

   every vector in  or  can be expressed as a linear combination of the standard basis vectors.


In this chapter, we study how to represent a linear transformation from  to  with respect to arbitrary ordered bases for  and .

In addition, we study the relationship between different matrix representations of a linear transformation from  to itself using transition matrices.

We also study matrix diagonalization.


Further, we study spectral properties of symmetric matrices and show that every symmetric matrix is orthogonally diagonalizable.


A quadratic form is a quadratic expression that we come across in mathematics, physics, economics, statistics, image processing, and other fields.

Symmetric matrices play a significant role in the study of quadratic forms.

We will learn how orthogonal diagonalization of symmetric matrices is used in the study of quadratic forms.


We introduce one of the most important concepts in matrix theory, called the singular value decomposition (SVD),

                        which has many applications in science and engineering.


We will generalize matrix diagonalization of  matrices and study least squares solutions and a pseudoinverse.


Further, we deal with complex matrices having complex eigenvalues and eigenvectors.

We also introduce Hermitian matrices and unitary matrices, which are the complex counterparts of symmetric matrices and orthogonal matrices respectively.

Lastly, we study diagonalization of complex matrices.




8.1 Matrix Representation


 Lecture Movie :

 Lab : 


In Chapter 6, we have studied how to represent a linear transformation from  into  

            as a matrix using the standard bases for  and 

    In this section, we find a matrix representation of a linear transformation from  into  

                    with respect to arbitrary ordered bases for  and .


Matrix Representation Relative to the Standard Bases




Matrix Representation Using Arbitrary Ordered Bases for  and 





Let  be a linear transformation, and let




be ordered bases for  and  respectively. Let . Then




where the matrix representation  of  with respect to the ordered bases  and  is



Note that the matrix  is called the matrix associated with the linear transformation  with respect to the bases  and .

 Recall that any vector  can be uniquely represented as a linear combination of vectors in , say




       Then the coordinate vector for  relative to the basis  is



       By the linearity of , we have

       . Since  is a vector in , the coordinate vector of  relative to  satisfies




Thus the matrix  is the matrix whose -th column is the coordinate vector  of  with respect to the basis .
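As a sketch of this construction, the matrix representation can be computed numerically. The matrices below are hypothetical (not from the text): the columns of B are the vectors of the ordered basis β, the columns of C are the vectors of γ, and A is the standard matrix of T.

```python
import numpy as np

# Hypothetical illustration: T(x) = A x, beta = columns of B, gamma = columns of C.
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, -1.0]])          # T maps a 2-dimensional space into a 3-dimensional one
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])           # ordered basis beta (columns)
C = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])      # ordered basis gamma (columns)

# Column j of the representation M is the gamma-coordinate vector of T(v_j),
# i.e. the solution y of C y = A v_j for each basis vector v_j in beta.
M = np.linalg.solve(C, A @ B)

# Check the defining property: [T(x)]_gamma = M [x]_beta for any x.
x = np.array([2.0, -1.0])
x_beta = np.linalg.solve(B, x)       # coordinates of x relative to beta
Tx_gamma = np.linalg.solve(C, A @ x) # coordinates of T(x) relative to gamma
assert np.allclose(M @ x_beta, Tx_gamma)
```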








(1) By Theorem 8.1.1 we can compute   by a matrix-vector multiplication, that is,




(2) The matrix  varies according to the ordered bases .

       For example, if we change the order of the vectors in the ordered basis , then the columns of  change as well.

(3) The matrices  and  are distinct, but they have the following relationship:



(4) If  and , then  is denoted by  and

        is called the matrix representation of  relative to the ordered basis .







Define a linear transformation  via  and let


be ordered bases for  and  respectively. Compute .


Since , we first compute :



We now find the coordinate vectors of the above vectors relative to the ordered basis . Since






we need to solve the corresponding linear systems with the same coefficient matrix .

  The augmented matrix for all three of the linear systems is . By converting this into its RREF, we get




Therefore,  and hence


   (Note that .)                                            


Let  be defined via  and


be ordered bases for  and  respectively. Find .


Since , we have




We can get its RREF as follows:  




Therefore, we get  as follows:



① Write .

[ 1 -1  0  2  3]

[ 0  2  1 -2 -1]

[-1  1  1 -1 -3]



② Converting into its RREF

[   1    0    0  1/2  5/2]

[   0    1    0 -3/2 -1/2]

[   0    0    1    1    0]

③ Finding 

[ 1/2  5/2]

[-3/2 -1/2]

[   1    0]    


We can also perform this calculation using the built-in function. The code is as follows.

[ 1/2  5/2]

[-3/2 -1/2]

[   1    0]                                                



Let  be a linear transformation defined as  and

       consider the ordered bases  for  and  respectively.


  Answer the following questions:


(1) Find .

(2) Compute  using   in (1).

(3) Using the definition of , find the standard matrix  and  ,

      where  and  are the standard bases of  and  respectively.


(1) Since  and 

     where  and

       . Hence  .


(2) Since , we have



   (Note that 



(3) .            






Composition of Linear Transformations


Let  be a linear transformation from a vector space  with an ordered basis  into a vector space  with an ordered basis ,

      and  be a linear transformation from a vector space  with an ordered basis  into a vector space  with an ordered basis .

   Suppose these linear transformations have their corresponding matrix representations  and  respectively.

     We can consider the composition . Then its matrix representation is




That is, it is the product of the matrix representations of  and .


















Transition Matrix





As we have discussed earlier, in , the matrix  is called the transition matrix from the ordered basis  to the ordered basis .

   We can regard the transition matrix as the matrix of a linear transformation .








Let  be linear transformations defined as


respectively. Consider the ordered bases  for .

  Find the matrix representation of the composition  with respect to the ordered bases  and .

Matrix of T=

[ 1 -1]

[ 2  1]

Matrix of S=

[ 2 -3]

[ 3  2]


[-4 -5]

[ 7 -1]

Matrix of S*T=

[-4 -5]

[ 7 -1]                                                      
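The product computed in this example can be checked numerically, using the matrices of T and S printed above:

```python
import numpy as np

# Matrices of T and S from the example above (with respect to the given ordered bases).
T = np.array([[1, -1],
              [2,  1]])
S = np.array([[2, -3],
              [3,  2]])

# The matrix of the composition S∘T is the product of the two representations.
ST = S @ T
print(ST)
# [[-4 -5]
#  [ 7 -1]]
```

This agrees with the "Matrix of S*T" output above.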



Consider  the linear transformation  defined by


respectively. Find the composition  and the matrix associated with the composition.

   Hence show that the matrix of the composition is the product of the matrices associated with  and .

[ 3  1  4 -1]

[ 8  5 13 -6]

[ 7  2  9 -3]

[-1  0 -1  1]





8.2 Similarity and Diagonalization


 Lecture Movie :

 Lab : 


In this section, we present various matrix representations of a linear transformation  from  to itself

   in terms of transition matrices.   We also study when a matrix representation of  becomes a diagonal matrix.



Relationship between matrix representations  and 
















Let  be a linear transformation and  and  be ordered bases for .


      If , then we have




where   is the transition matrix from  to .







Let  be a linear transformation defined by .

      If  is the standard basis  for  and  is a basis for , find  using the transition matrix .


Let  be the standard matrix relative to the standard basis  for linear transformation .

        Then we can find . If  , then




Therefore,  and by Theorem 8.2.1 we get  as follows:




Transition Matrix=

[ 0 -1]

[ 1  1]


[ 2 -1]

[ 1  3]


[ 2 -1]

[ 1  3]

Matrix of A wrt beta=

[ 2 -1]

[ 1  3]                                                   
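This can be checked numerically, assuming the printed matrices above are the transition matrix P and the standard matrix A of the transformation (an assumption read off from the output, not stated symbolically in the text):

```python
import numpy as np

# From the output above: P is the transition matrix, A the standard matrix.
P = np.array([[0, -1],
              [1,  1]])
A = np.array([[2, -1],
              [1,  3]])

# Matrix of the transformation with respect to beta: P^{-1} A P.
A_beta = np.linalg.inv(P) @ A @ P
assert np.allclose(A_beta, [[2, -1], [1, 3]])   # matches "Matrix of A wrt beta"
```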










For square matrices  and  of the same order, if there exists an invertible matrix  such that


then we say that  is similar to . We denote it as .








For , it can be shown that  . Hence  is similar to .                     






For square matrices  of the same order, the following hold:




Therefore, the similarity relation is an equivalence relation.






For square matrices  of the same order, if  are similar to each other, then we have the following:

 (1) .

 (2) .


 Since , there exists an invertible matrix  such that .


       (1) By the multiplicative property of determinant,






       (2)    ( )



 Since similar matrices have the same determinants, it follows that they have the same characteristic equations and hence the same eigenvalues.

    In order to solve problems involving the determinant and/or eigenvalues of a square matrix, we can use a similar matrix that makes the problem simpler.



Diagonalizable Matrices




 [Diagonalizable Matrices]





Suppose a square matrix  is similar to a diagonal matrix; that is, there exists an invertible matrix  such that  is a diagonal matrix. Then  is called diagonalizable, and the invertible matrix  is called a diagonalizing matrix for .







 If , then  and hence we have

   ( multiplications of )



 This implies that if a matrix is diagonalizable, then its powers can be very easily computed.
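A minimal numerical sketch of this shortcut, using a hypothetical diagonalizable matrix assembled from a chosen P and D (so A = P D P⁻¹ by construction):

```python
import numpy as np

# Hypothetical invertible P and diagonal D; then A = P D P^{-1} is diagonalizable.
P = np.array([[-2.0, -1.0, 0.0],
              [ 1.0,  0.0, 1.0],
              [ 1.0,  1.0, 0.0]])
D = np.diag([1.0, 2.0, 2.0])
A = P @ D @ np.linalg.inv(P)

# A^k = P D^k P^{-1}: only the diagonal entries of D are raised to the k-th power.
k = 5
A_k = P @ np.diag(np.diag(D) ** k) @ np.linalg.inv(P)
assert np.allclose(A_k, np.linalg.matrix_power(A, k))
```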



For invertible matrix  and matrix , we have


Hence  is diagonalizable.                                 








Since every diagonal matrix  satisfies , its powers are easy to compute:




Note that not every matrix is diagonalizable.



Show that  is not diagonalizable. 


Suppose to the contrary that  is diagonalizable, that is, there exists an invertible matrix  and a diagonal matrix  with




such that . Since , we have




which gives . Hence .


If , then  and (). Hence . Similarly, we can show that .

   The conditions  and  give a contradiction to  . Therefore,  is not diagonalizable.                
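A standard sketch of non-diagonalizability; the Jordan block below is a hypothetical example illustrating the same phenomenon, not necessarily the matrix of this problem:

```python
from sympy import Matrix

# Hypothetical example: a 2x2 Jordan block with eigenvalue 1.
J = Matrix([[1, 1],
            [0, 1]])

# The eigenvalue 1 has algebraic multiplicity 2 ...
assert J.eigenvals() == {1: 2}
# ... but only one linearly independent eigenvector, so J cannot supply
# 2 independent eigenvectors and hence is not diagonalizable.
assert len(J.eigenvects()[0][2]) == 1
assert not J.is_diagonalizable()
```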



Equivalent Conditions for Diagonalizability





 8.2.4 [Equivalent Condition]

Let  be a square matrix of order . Then  is diagonalizable if and only if has  linearly independent eigenvectors. 

  Furthermore, if  is diagonalizable, then  is similar to a diagonal matrix whose main diagonal entries are equal to

    the eigenvalues  of , and the -th column of a diagonalizing matrix is an eigenvector of  corresponding to eigenvalue .



  If  is diagonalizable, then there exists an invertible matrix




       such that  where .

    Since , we get .


    Hence  are eigenvalues of  and . Note that  are eigenvectors of  

             corresponding to eigenvalues   respectively.


      Since  is invertible, it follows that its columns  are linearly independent.


        Suppose  has eigenvalues  and their corresponding eigenvectors  that are linearly independent. We define  as







       Since the columns of  are linearly independent, the matrix  is invertible, giving .

    Therefore  is diagonalizable. 


 Procedure for diagonalizing a matrix 





● Step 1: Find  linearly independent eigenvectors  of .

● Step 2: Construct a matrix  whose columns are  in this order.

● Step 3: The matrix  diagonalizes , and  is a diagonal matrix

                 whose main diagonal entries are the eigenvalues of 










It can be shown that the matrix  has eigenvalues  and

    their corresponding eigenvectors are  respectively.

     Since these eigenvectors are linearly independent, by Theorem 8.2.4,  is diagonalizable.


 If , then we have





Show that  is diagonalizable and find the diagonalizing matrix  of .



[1, 2, 2]

 has eigenvalues . We now compute linearly independent eigenvectors of .


For ,  we solve    (that is,   ) for .

[ 1  0  2]

[ 0  1 -1]

[ 0  0  0]


Since  , we get ;


    For , we solve  (that is, ) for .

[1 0 1]

[0 0 0]

[0 0 0]


This gives    and hence


[-2 -1  0]

[ 1  0  1]

[ 1  1  0]


Since the above computation shows that the determinant of  is not zero,  is invertible.

    Hence its columns  are linearly independent.


   Therefore, by Theorem 8.2.4,   is diagonalizable.     

[1 0 0]

[0 2 0]

[0 0 2]                                                           ■






If  are eigenvectors of  corresponding to distinct eigenvalues , then the set  is linearly independent.


 (Exercise) Hint: This can be proved by mathematical induction on .






If a square matrix  of order  has  distinct eigenvalues, then  is diagonalizable.



 Let  be eigenvectors of  corresponding to distinct eigenvalues.

      Then, by Theorem 8.2.5, the eigenvectors are linearly independent. Therefore, Theorem 8.2.4 implies that  is diagonalizable.                                                      


The matrix  in Example 6 has two distinct eigenvalues. Thus, by Theorem 8.2.6,  is diagonalizable.                     



 Note that a diagonal matrix  can have a repeated eigenvalue. Therefore, the converse of Theorem 8.2.6 is not necessarily true.

Algebraic Multiplicity and Geometric Multiplicity of an Eigenvalue




 [Algebraic and Geometric Multiplicity]





Let  be distinct eigenvalues of .

  Then the characteristic polynomial of  can be written as


In the above expression the sum of the exponents  is equal to .

  The positive integer  is called the algebraic multiplicity of  and 

 the number of linearly independent eigenvectors corresponding to the eigenvalue  is called

       the geometric multiplicity of .










 8.2.7 [Equivalent Condition for Diagonalizability]

Let  be a square matrix of order . Then  is diagonalizable if and only if the sum of the geometric multiplicities of eigenvalues of  is equal to .



 By Theorem 8.2.4, an equivalent condition for a square matrix  of order  to be diagonalizable is to have  linearly independent eigenvectors.

      Since the sum of the geometric multiplicities of eigenvalues of  is equal to the number of linearly independent eigenvectors of  and

       it is equal to , the result follows.                                                   
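As a sketch of this criterion (with a hypothetical matrix), SymPy reports both multiplicities through `eigenvects`, whose entries are (eigenvalue, algebraic multiplicity, list of eigenvectors):

```python
from sympy import Matrix

# Hypothetical matrix whose algebraic and geometric multiplicities agree,
# so it is diagonalizable by the criterion above.
A = Matrix([[2, 0, 0],
            [0, 2, 0],
            [0, 0, 3]])

total_geometric = 0
for eigval, alg_mult, eigvecs in A.eigenvects():
    geo_mult = len(eigvecs)          # number of independent eigenvectors
    assert geo_mult == alg_mult
    total_geometric += geo_mult

assert total_geometric == 3          # sum of geometric multiplicities equals n
assert A.is_diagonalizable()
```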






Let  be a square matrix and  be an eigenvalue of . Then the algebraic multiplicity of  is greater than or equal to the geometric multiplicity of .



 Let  be the geometric multiplicity of an eigenvalue  of , and let  be the  matrix

        whose columns are the linearly independent eigenvectors of  corresponding to .

     We can construct an invertible matrix  by adding  linearly independent columns to .

      Let  be the resulting invertible matrix and let  be the inverse of .

      Then . Note that  and  have the same characteristic polynomials.

        Since the first  columns of  have  on the diagonal,

      the characteristic polynomial of  has a factor of at least .

        Hence, the algebraic multiplicity of  is greater than or equal to the geometric multiplicity of .        



 8.2.9 [Equivalent Condition for Diagonalizability]

Let  be a square matrix of order .

    Then  is diagonalizable if and only if each eigenvalue  of  has the same algebraic and geometric multiplicity.



If  is diagonalizable, then there exists an invertible matrix  and a diagonal matrix  

          such that , or equivalently .

      This implies that  times column  of  is equal to a scalar multiple of the column  of .

        Hence, all the  columns of  are eigenvectors of ,

      which implies that each eigenvalue of  has the same algebraic and geometric multiplicity.


     The converse follows from Theorem 8.2.7.        



For , the characteristic equation is   .

    Hence the eigenvalues of  are  and  has algebraic multiplicity 2. 

     The following two vectors are linearly independent eigenvectors of 




corresponding to  and  respectively.

However, matrix  does not have three linearly independent eigenvectors.

  Hence Theorem 8.2.4 implies that  is not diagonalizable.         


 Note that the eigenvalue  with algebraic multiplicity 2 has

                  only one linearly independent eigenvector of .


It can be shown that  has eigenvalues 3 and  with algebraic multiplicity 1 and 2 respectively.

    We can further show that the geometric multiplicities of these eigenvalues are 1 and 2 respectively.

    Hence  is diagonalizable since  has 3 linearly independent eigenvectors.

       It can be verified that  diagonalizes  and , where .

      Let us further compute .






8.3 Diagonalization with orthogonal matrix,


*Function of matrix

 Lecture Movie :

 Lab : 


Symmetric matrices appear in many applications.

     In this section, we study useful properties of symmetric matrices and show that every symmetric matrix is orthogonally diagonalizable.

    *Furthermore, we study matrix functions using matrix diagonalization.

Orthogonal Matrix




 [Orthogonal Matrix]





For a real square matrix , if  is invertible and , then  is called an orthogonal matrix.










If  is an orthogonal matrix of order , then the following hold:


(1) The rows of  are unit vectors and they are perpendicular to each other.

(2) The columns of  are unit vectors and they are perpendicular to each other.

(3)  is invertible.

(4)  for any  vector     (Norm Preserving Property).



Similar to the proof of Theorem 6.2.3.



For , we have . Since

    is an orthogonal matrix.     
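The properties in Theorem above can be checked numerically; the 2×2 rotation matrix below is a hypothetical example of an orthogonal matrix:

```python
import numpy as np

# Hypothetical example: a 2x2 rotation matrix is orthogonal.
theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Q^T Q = I, i.e. Q^{-1} = Q^T and the columns are orthonormal.
assert np.allclose(Q.T @ Q, np.eye(2))
# Norm-preserving property: ||Q x|| = ||x|| for every x.
x = np.array([1.0, -2.0])
assert np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))
```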






 [Orthogonal Similarity]





Let  and  be square matrices of the same order.

   If there exists an orthogonal matrix  such that , then  is said to be orthogonally similar to .









 [Orthogonally Diagonalizable]





For a square matrix , if there exists an orthogonal matrix diagonalizing ,

   then  is called orthogonally diagonalizable and  is called a matrix orthogonally diagonalizing .








 Next we try to answer the following question: Which matrices are orthogonally diagonalizable? 



Every eigenvalue of a real symmetric matrix is a real number.


 Let  be an eigenvalue of  and , an eigenvector corresponding to .

     Then . Premultiplying both sides of the first equation by , we have . 

      Taking the complex conjugate of both sides, we have  which implies



         Now, we get  . Note that . Hence  which means  is real.                                                  

Note: Compare this proof with the one in Theorem 8.7.1

The symmetric matrix  has characteristic equation



     Hence its eigenvalues are   which are all real numbers.             






If a square matrix  is symmetric, then eigenvectors of  corresponding to distinct eigenvalues are perpendicular to each other.


 Let  be eigenvectors corresponding to distinct eigenvalue  and  () respectively.

      We have  .

           Since , the above equation implies .                                         






Let  be a square matrix. Then  is orthogonally diagonalizable if and only if the matrix  is symmetric.


 () Suppose  is orthogonally diagonalizable.

           Then there exists an orthogonal matrix  and a diagonal matrix  such that .

         Since , we have








       Therefore,  is symmetric. 


       () : Read :    




If  is a symmetric matrix of order , then  has  eigenvectors forming an orthonormal set.


 Since  is symmetric, by Theorem 8.3.4,  is orthogonally diagonalizable,

       that is, there exists an orthogonal matrix  and a diagonal matrix  such that .

        Hence the main diagonal entries of  are the eigenvalues of  and the columns of  are  eigenvectors of .

        Since the columns of the orthogonal matrix  form an orthonormal set, the  eigenvectors of  are orthonormal.      
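A numerical sketch of this fact: `numpy.linalg.eigh` returns an orthonormal set of eigenvectors for a symmetric matrix, so its output orthogonally diagonalizes A (the matrix below is hypothetical):

```python
import numpy as np

# Hypothetical symmetric matrix; eigh returns eigenvalues and orthonormal
# eigenvectors (the columns of Q).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, Q = np.linalg.eigh(A)

assert np.allclose(Q.T @ Q, np.eye(2))             # Q is orthogonal
assert np.allclose(Q.T @ A @ Q, np.diag(eigvals))  # Q^T A Q is diagonal
```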






For a square matrix  of order the following are equivalent:


(1)  is orthogonally diagonalizable.

(2)  has  eigenvectors that are orthonormal.

(3)  is symmetric.


 How to find an orthogonal matrix  diagonalizing a given symmetric matrix ?



For symmetric matrix , find an orthogonal matrix  diagonalizing .


 Since the characteristic equation of  is ,

         the eigenvalues of  are ,.

      Note that all the eigenvalues are distinct. Hence there exist eigenvectors of  that are orthogonal.

      The corresponding eigenvectors of  are




By normalizing , we get an (real) orthogonal matrix  diagonalizing :





It can be shown that the matrix  has eigenvalues (algebraic multiplicity 2) and .

    Hence we need to check if  has two linearly independent eigenvectors.

    After eigenvector computation, we get




that are linearly independent eigenvectors corresponding to eigenvalue -3.

  Using the Gram-Schmidt Orthonormalization, we get


   ,    ,



We can find an eigenvector  corresponding to the eigenvalue  and

     normalization gives


Therefore, the orthogonal matrix   diagonalizing  is given by  






* Function of Matrices





● There are several techniques for extending a real function to a square matrix function such that interesting properties are maintained.

        You can find more details on the following website:
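One common technique, applicable to the symmetric matrices studied in this chapter, defines f(A) through the orthogonal diagonalization A = QDQᵀ by applying f to the eigenvalues. A sketch with a hypothetical matrix:

```python
import numpy as np

def matrix_function(f, A):
    """f(A) for a symmetric matrix A via its orthogonal diagonalization A = Q D Q^T."""
    w, Q = np.linalg.eigh(A)
    return Q @ np.diag(f(w)) @ Q.T

# Hypothetical symmetric matrix with positive eigenvalues (1 and 9),
# so the square-root function is well defined on its spectrum.
A = np.array([[5.0, 4.0],
              [4.0, 5.0]])
sqrt_A = matrix_function(np.sqrt, A)
assert np.allclose(sqrt_A @ sqrt_A, A)   # the matrix square root squares back to A
```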










8.4 Quadratic forms


 Lecture Movie :

 Lab : 


A quadratic form is a polynomial each of whose terms is quadratic.

Quadratic forms appear in many scientific areas including mathematics, physics, economics, statistics, and image processing.

Symmetric matrices play an important role in analyzing quadratic forms.

In this section, we study how diagonalization of symmetric matrices can be applied to analyze quadratic forms.










An implicit equation in variables  for a quadratic curve can be expressed as

           .                      (1)


This can be rewritten in a matrix-vector form as follows:

           .                      (2)









Graph for a quadratic curve (conic section)





The following are the types of conic sections:

① Non-degenerate conic sections: Circle, Ellipse, Parabola, Hyperbola. See Figure 1 and Example 1.

② Imaginary conic section: There are no points  satisfying (1).

③ Degenerate conic section: The graph of the equation (1) is

        one point, one line, a pair of lines, or no points at all. See Example 2.









                                             Figure 1




Conic Sections in the Standard Position





●     (Ellipse)                                     (3)



●        or  (Hyperbola)               (4)



●        or  (Parabola, )                  (5)









(Non-degenerate conic section)

Since the equation  can be written as ,

    the graph of this equation is an ellipse.

      The equation  has the standard form  and

    hence its graph is a hyperbola.

    Since the equation  can be put into , its graph is a parabola. 





(Degenerate conic section)

The graph of the equation  is the -axis. The graph of  consists of the two horizontal lines .

  The graph of  consists of the two lines  and .

   The graph of  consists of one point . The graph of  has no points.       


 The graph of a quadratic equation with both  and  terms or

             both  and  terms is a translation of a conic section in the standard position.



    Let us plot the graph of .

    By completing squares in , we get


                  .                       (6)


Hence by using the substitutions , we get


in the -coordinate plane. This equation gives a hyperbola of the standard position in the -coordinate plane.

    Hence the graph of the equation (6) is obtained by translating the hyperbola

           in the standard position 3 units along the -axis and 1 unit along the -axis. 




Quadratic Form





 [Quadratic Form]







is called the quadratic form of the quadratic equation (1).







The quadratic equations  are quadratic forms,

      but the quadratic equation  has a linear term  and

          constant term 1 and hence it is not a quadratic form.   



 A quadratic form can be written in the form of . For example,

 or .


This means that the matrix  above is not unique.


 We will use a symmetric matrix  to write quadratic forms:




   We use symmetric matrices to represent quadratic forms because

               symmetric matrices are orthogonally diagonalizable. 
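A quick numerical check of this symmetrization: replacing A by its symmetric part (A + Aᵀ)/2 leaves the quadratic form xᵀAx unchanged (the non-symmetric A below is hypothetical):

```python
import numpy as np

# Hypothetical non-symmetric representation of a quadratic form.
A = np.array([[1.0, 4.0],
              [0.0, 2.0]])
A_sym = (A + A.T) / 2      # the symmetric representation of the same form

x = np.array([3.0, -1.0])
assert np.isclose(x @ A @ x, x @ A_sym @ x)   # same value x^T A x
```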










Let  be a symmetric matrix of order  and  for  real values .

  Then  is called a quadratic form in .








 For a quadratic form in  and , the -term is called a cross-product term.

 Using orthogonal diagonalization, we can eliminate the cross-product term.


 For a quadratic form



  since the matrix  is symmetric, we can find orthonormal eigenvectors  corresponding to the eigenvalues  of .

    The matrix  is orthogonal and  orthogonally diagonalizes , that is, .

   Since we can switch the roles of  and  by switching the roles of  and ,

     without loss of generality, we can assume .


  Therefore, we can consider  as the rotation matrix  in .

        Let  for some . Then





  and hence  is a quadratic form without any cross-product term in the -coordinate system.

       Therefore, we get the following theorem.




 8.4.1 [Diagonalization of a Quadratic Form] 

Suppose a symmetric matrix  has  as its eigenvalues. Then, by rotating the coordinate axes, the quadratic form   can be written as follows in the -coordinate system


                     .                        (8)


If the determinant of  is 1 and  diagonalizes , then the rotation can be obtained by  or .




Using diagonalization of a quadratic form, determine which conic section the following quadratic equation describes.


               .                         (9)


The quadratic equation  can be written as




Since the characteristic equation of the symmetric matrix  is

    , the eigenvalues of  are  .

   By Theorem 8.4.1, .


   Hence, in the new coordinate system, the quadratic equation becomes





  Since eigenvectors corresponding to  are



respectively, the orthogonal matrix  diagonalizing  is



           and   .


Therefore, the -coordinate axes are obtained by rotating the -axis 45° clockwise () and

  the equation (9) is an ellipse in the standard position relative to the -coordinate system.     
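The classification used in this example can be sketched in code; the coefficient matrix below is hypothetical, and the sketch assumes A is invertible (zero eigenvalues, which give parabolic or degenerate cases, are not handled):

```python
import numpy as np

# Hypothetical symmetric matrix of a quadratic form x^T A x = 1.
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
eigvals = np.linalg.eigvalsh(A)

# In the rotated coordinates the equation becomes l1*x'^2 + l2*y'^2 = 1,
# so the signs of the eigenvalues determine the conic.
if np.all(eigvals > 0):
    kind = "ellipse"
elif np.all(eigvals < 0):
    kind = "no real points"
else:
    kind = "hyperbola"
print(kind)   # eigenvalues 2 and 4 are both positive, so this prints "ellipse"
```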


Simulation for quadratic forms














Sketch the graph of the following quadratic equation


         .              (10)


Letting  , we can rewrite the equation (10) as follows:


         .                                (11)


Using rotation we first eliminate the cross-product terms. Since the characteristic equation of  is




the eigenvalues of  are  and their corresponding orthonormal eigenvectors are   respectively.

    Hence we can take .


Using axis rotation , we get  and  and hence from (11), we obtain


              .                   (12)


We now use horizontal translation to remove -term in (12). By completing the squares in (12) we get



That is,  (Note: ).


   Therefore, the equation (12) represents an ellipse in the -coordinate system                    




where the -coordinate system is obtained by 

    horizontally translating the -coordinate system 1 unit along the -axis.


Surface in 3-dimensional space




                       .                               (13)


Then, after diagonalization, we get 




in the rotated -coordinate system. This enables us to identify the graph of the equation (13) in .


In equation (14), if both  and  are positive,

    then the graph of equation (14) is a paraboloid opening upward (see figure (a) below). If both  and  are negative,

then the graph is a paraboloid opening downward (see figure (b) below).


Since the horizontal cross-section of each paraboloid is an ellipse,

     the above graphs are called elliptic paraboloids.



                                Elliptic Paraboloid


 In (14), if  and  are both nonzero but have different signs, then the graph looks like a saddle as in (a) and

      is called a hyperbolic paraboloid. If one of  and  is zero, then the graph is a parabolic cylinder as in (b).



Show that the graph of the following equation is an elliptic paraboloid and sketch its cross-section at .




The quadratic form in (15) can be written as .

    We first find an orthogonal matrix  diagonalizing the symmetric matrix .

      It can be shown that , and hence using , we can transform (15) into the following:


                   .                       (16)


The equation (16) represents an elliptic paraboloid in the -coordinate system.

   Note that the -coordinate system is obtained by rotating the -coordinate system by the angle  counterclockwise. Hence, in ,  is given by




and . Now we sketch the cross-section of equation (15) at .

     By substituting  into (16), we get  and hence the graph looks like the following:



 Let's use Sage to graph equation (15)


① Computing eigenvalues of 

[50, 25]

② Computing eigenvectors of 

[(50, [(1, -4/3)], 1),

(25, [(1, 3/4)], 1)]


③ Computing  diagonalizing 

[ 4/5  3/5]

[ 3/5 –4/5]

④ Sketching two ellipses simultaneously





8.5 *Applications of Quadratic forms


 Lecture Movie : 

 Lab : 


By the principal axis theorem (Theorem 8.4.1), the graph of a quadratic equation can be identified as a circle, an ellipse, a parabola, or a hyperbola in the plane.

 The specific shape is uniquely determined by signs of eigenvalues of the corresponding quadratic form.

    In this section, we define the sign of the quadratic form to identify the type of graph of given quadratic forms,

    and learn how to obtain the extrema of multivariable functions using them.


 Given a system of springs and masses, there will be one quadratic form that represents the kinetic energy of the system,

    and another which represents the potential energy of the system in position variables.

      We omit further details in this section; more can be found on the following websites:

● Application of Quadratic Forms and Sage:



8.6 SVD and Pseudo-inverse


 Lecture Movie :

 Lab : 


 We have learned that symmetric matrices are diagonalizable.

     We now extend the concept of diagonalization to  matrices (not necessarily square or symmetric), resulting in a matrix decomposition, and

     study the pseudo-inverse and least squares solutions using this decomposition.




 8.6.1 [Singular Value Decomposition]

Let  be an  real matrix. Then there exist orthogonal matrices  of order  and  of order ,

    and an  matrix  such that 


                           ,                 (1)


where the main diagonal entries of  are positive and listed in monotonically decreasing order,

    and  is a zero matrix. That is,



       where .










Equation (1) is called the singular value decomposition (SVD) of .

  The main diagonal entries of the matrix  are called the singular values of .

  In addition, the columns of  are called the left singular vectors of  and the columns of  are called the right singular vectors of .






 The following theorem shows that the matrices $V$ and $U$ are orthogonal matrices diagonalizing $A^T A$ and $A A^T$ respectively.






Let the decomposition $A = U \Sigma V^T$ be the singular value decomposition (SVD) of an $m \times n$ matrix $A$, where $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > 0$ are the positive diagonal entries of $\Sigma$. Then


 (1) $A^T A = V (\Sigma^T \Sigma) V^T$.

 (2) $A A^T = U (\Sigma \Sigma^T) U^T$.



 Since $A = U \Sigma V^T$, we get

$A^T A = (U \Sigma V^T)^T (U \Sigma V^T) = V \Sigma^T U^T U \Sigma V^T = V (\Sigma^T \Sigma) V^T.$

   Hence, by considering $A A^T = U \Sigma V^T V \Sigma^T U^T$ and $V^T V = I_n$, we similarly obtain $A A^T = U (\Sigma \Sigma^T) U^T$.







Find the SVD of $A = \begin{bmatrix} \sqrt{3} & 2 \\ 0 & \sqrt{3} \end{bmatrix}$.


 The eigenvalues of $A^T A = \begin{bmatrix} 3 & 2\sqrt{3} \\ 2\sqrt{3} & 7 \end{bmatrix}$ are $\lambda_1 = 9$ and $\lambda_2 = 1$, and

     hence the singular values of $A$ are $\sigma_1 = 3$ and $\sigma_2 = 1$.

A unit eigenvector of $A^T A$ corresponding to $\lambda_1 = 9$ is $\mathbf{v}_1 = \left(\tfrac{1}{2}, \tfrac{\sqrt{3}}{2}\right)$, and a unit eigenvector of $A^T A$ corresponding to $\lambda_2 = 1$

                is $\mathbf{v}_2 = \left(\tfrac{\sqrt{3}}{2}, -\tfrac{1}{2}\right)$. We can also find unit eigenvectors of $A A^T$:

$\mathbf{u}_1 = \left(\tfrac{\sqrt{3}}{2}, \tfrac{1}{2}\right), \qquad \mathbf{u}_2 = \left(\tfrac{1}{2}, -\tfrac{\sqrt{3}}{2}\right).$

Hence we get

$U = \begin{bmatrix} \tfrac{\sqrt{3}}{2} & \tfrac{1}{2} \\ \tfrac{1}{2} & -\tfrac{\sqrt{3}}{2} \end{bmatrix}, \qquad \Sigma = \begin{bmatrix} 3 & 0 \\ 0 & 1 \end{bmatrix}, \qquad V = \begin{bmatrix} \tfrac{1}{2} & \tfrac{\sqrt{3}}{2} \\ \tfrac{\sqrt{3}}{2} & -\tfrac{1}{2} \end{bmatrix}.$

Therefore, the SVD of $A$ is $A = U \Sigma V^T$.


① Computing the singular values of $A$ and eigenvectors of $A^T A$

[(9, [(1, sqrt(3))], 1), (1, [(1, -1/3*sqrt(3))], 1)]

② Computing $V$

[        1/2 1/2*sqrt(3)]

[1/2*sqrt(3)        -1/2]

③ Computing eigenvectors of $A A^T$

[(9, [(1, 1/3*sqrt(3))], 1), (1, [(1, -sqrt(3))], 1)]

④ Computing 

[ 1/2*sqrt(3)          1/2]

[         1/2 -1/2*sqrt(3)]

⑤ Computing diagonal matrix $\Sigma$

[3 0]

[0 1]

⑥ Verifying $U \Sigma V^T = A$

[sqrt(3)       2]

[      0 sqrt(3)]                                             (SVD)          ■




An equivalent condition for invertibility in terms of the SVD






Let $A$ be an $n \times n$ matrix. Then $A$ is a nonsingular matrix if and only if every singular value of $A$ is nonzero.


 Since $\det(A^T A) = (\det A)^2$, matrix $A$ is nonsingular if and only if $A^T A$ is nonsingular.

     Hence, if $A$ is nonsingular, then all the eigenvalues of $A^T A$ are nonzero.

      By Theorem 8.6.2, the singular values of $A$ are the square roots of the positive eigenvalues of $A^T A$.

       Hence the singular values of $A$ are nonzero. Conversely, if every singular value is nonzero, then all the eigenvalues of $A^T A$ are nonzero, so $A^T A$, and hence $A$, is nonsingular.





Suppose $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > 0$ are the singular values of an $m \times n$ matrix $A$. Then the matrix $A$ can be expressed as follows:

                           $A = \sigma_1 \mathbf{u}_1 \mathbf{v}_1^T + \sigma_2 \mathbf{u}_2 \mathbf{v}_2^T + \cdots + \sigma_r \mathbf{u}_r \mathbf{v}_r^T$.                         (R)

The equation (R) is called a rank-one decomposition of $A$.
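Equation (R) can be verified numerically. A hedged sketch (illustrative matrix, NumPy assumed): summing the $r$ rank-one terms $\sigma_i \mathbf{u}_i \mathbf{v}_i^T$ rebuilds $A$.

```python
import numpy as np

# Illustrative matrix of rank 2, not from the text.
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
U, s, Vt = np.linalg.svd(A)
r = np.linalg.matrix_rank(A)

# Sum of the r rank-one terms sigma_i * u_i * v_i^T rebuilds A exactly.
rank_one_sum = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(r))
```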



 Note that the pseudo-inverse of a matrix is important in the study of the least squares solutions for optimization problems.


 We can express an $n \times n$ nonsingular matrix $A$ using the SVD as


                               $A = U \Sigma V^T$.                                 (2)


  Note that all of $U$, $\Sigma$, $V$ are $n \times n$ nonsingular matrices and $U$, $V$ are orthogonal matrices.

   Hence the inverse of $A$ can be expressed as


                               $A^{-1} = V \Sigma^{-1} U^T$.                            (3)

 If $A$ is not a square matrix or $A$ is singular, then (3) does not give an inverse of $A$.

    However, we can construct a pseudo-inverse $A^{\dagger}$ of $A$ by putting $\Sigma$ in (2) into the form $\Sigma = \begin{bmatrix} D & O \\ O & O \end{bmatrix}$ (where $D$ is nonsingular).











For an $m \times n$ matrix $A$, the $n \times m$ matrix $A^{\dagger} = V \Sigma^{\dagger} U^T$ is called a pseudo-inverse of $A$,

    where $U, V$ are the orthogonal matrices in the SVD of $A = U \Sigma V^T$ and $\Sigma^{\dagger}$ is


                    $\Sigma^{\dagger} = \begin{bmatrix} D^{-1} & O \\ O & O \end{bmatrix}$ (where $D$ is nonsingular).







 We read $A^{\dagger}$ as '$A$ dagger.' If $A = O$, then we define $A^{\dagger} = O$.
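The construction $A^{\dagger} = V \Sigma^{\dagger} U^T$ can be carried out directly. A hedged numerical sketch (illustrative matrix, NumPy assumed) that builds $\Sigma^{\dagger}$ by transposing the shape of $\Sigma$ and inverting the nonzero singular values:

```python
import numpy as np

# Illustrative 3x2 matrix, not from the text.
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [0.0, 0.0]])
U, s, Vt = np.linalg.svd(A)

# Sigma^dagger has the transposed shape of Sigma, with each nonzero
# singular value replaced by its reciprocal.
Sigma_dag = np.zeros((A.shape[1], A.shape[0]))
for i, sv in enumerate(s):
    if sv > 1e-12:
        Sigma_dag[i, i] = 1.0 / sv

A_dag = Vt.T @ Sigma_dag @ U.T   # A^dagger = V Sigma^dagger U^T
```

The result agrees with NumPy's built-in `np.linalg.pinv`.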






What is a truncated SVD?


We learned that any matrix $A$ has a singular value decomposition, so that $A = U \Sigma V^T$.

     Let's take a closer look at the matrix $\Sigma$. Remember $\Sigma$ is a diagonal matrix whose diagonal entries

                                    $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > 0$ are the singular values of the matrix $A$.

     A full rank decomposition of $A$ is usually denoted by $A = U_r \Sigma_r V_r^T$, where $U_r$ and $V_r$ are the matrices obtained by taking the first $r$ columns of $U$ and $V$ respectively.

    We can find a rank-$k$ approximation (or truncated SVD) to $A$ by taking only the $k$ largest singular values and the first $k$ columns of $U$ and $V$.
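A short numerical sketch of the truncated SVD (the diagonal matrix below is an illustrative choice, NumPy assumed); by the Eckart–Young theorem the spectral-norm error of the best rank-$k$ approximation equals $\sigma_{k+1}$:

```python
import numpy as np

# Illustrative matrix with one small singular value, not from the text.
A = np.diag([4.0, 3.0, 0.1])
U, s, Vt = np.linalg.svd(A)

k = 2  # keep only the k largest singular values
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Spectral-norm error of the rank-k approximation is sigma_{k+1}.
err = np.linalg.norm(A - A_k, 2)
```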



Find a pseudo-inverse of .


We first compute the (truncated) SVD1) of : 









[ 0.333333333333 -0.333333333333  0.666666666667]

[ 0.333333333333  0.666666666667 -0.333333333333]


 If an $m \times n$ matrix $A$ has $\mathrm{rank}(A) = n$, then $A$ is said to have full column rank.

   If $A$ has full column rank, then $A^T A$ is nonsingular. If $A$ is a nonsingular square matrix, then the pseudo-inverse of $A$ is equal to $A^{-1}$.
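In the full column rank case, the pseudo-inverse reduces to the closed form $(A^T A)^{-1} A^T$ (Theorem 8.6.5 below). A hedged check with an illustrative matrix, NumPy assumed:

```python
import numpy as np

# Illustrative 3x2 matrix with rank 2, i.e. full column rank.
A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

# Closed form for the pseudo-inverse when A has full column rank.
A_dag = np.linalg.inv(A.T @ A) @ A.T
```

It agrees with `np.linalg.pinv(A)`, and $A^{\dagger} A = I_n$ as a left inverse.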






If an $m \times n$ matrix $A$ has full column rank, then the pseudo-inverse of $A$ is

$A^{\dagger} = (A^T A)^{-1} A^T.$

 Let $A = U \Sigma V^T$ be the SVD of $A$. Then $\Sigma = \begin{bmatrix} D \\ O \end{bmatrix}$, where $D$ is nonsingular. Then

$A^T A = V \Sigma^T U^T U \Sigma V^T = V (\Sigma^T \Sigma) V^T = V D^2 V^T.$

      Since $A$ has full column rank, $A^T A$ is nonsingular and the matrix $V$ is an $n \times n$ orthogonal matrix. Hence $(A^T A)^{-1} = V D^{-2} V^T$ and

$(A^T A)^{-1} A^T = V D^{-2} V^T V \Sigma^T U^T = V D^{-2} \begin{bmatrix} D & O \end{bmatrix} U^T = V \begin{bmatrix} D^{-1} & O \end{bmatrix} U^T = V \Sigma^{\dagger} U^T = A^{\dagger}.$



Find the pseudo-inverse of  using Theorem 8.6.5.




Since  has full column rank,


    is nonsingular and







If $A^{\dagger}$ is a pseudo-inverse of $A$, then the following hold:


 (1) $A A^{\dagger} A = A$.

 (2) $A^{\dagger} A A^{\dagger} = A^{\dagger}$.

 (3) $(A A^{\dagger})^T = A A^{\dagger}$.

 (4) $(A^{\dagger} A)^T = A^{\dagger} A$.

 (5) $(A^{\dagger})^{\dagger} = A$.

 (6) $(A^T)^{\dagger} = (A^{\dagger})^T$.









A pseudo-inverse provides a tool for solving least squares problems.

     It is known that the least squares solution to the linear system $A\mathbf{x} = \mathbf{b}$ is the solution to the normal equation $A^T A \mathbf{x} = A^T \mathbf{b}$.

   If $A$ has full column rank, then the matrix $A^T A$ is nonsingular and hence $\mathbf{x} = (A^T A)^{-1} A^T \mathbf{b} = A^{\dagger} \mathbf{b}$.

This means that if $A$ has full column rank, the least squares solution to $A\mathbf{x} = \mathbf{b}$ is the pseudo-inverse $A^{\dagger}$ times the vector $\mathbf{b}$.
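A hedged numerical sketch of this fact (the overdetermined system below is illustrative data, NumPy assumed): $A^{\dagger}\mathbf{b}$ agrees with a reference least squares solver and satisfies the normal equation.

```python
import numpy as np

# Illustrative overdetermined system, not from the text.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

x = np.linalg.pinv(A) @ b                      # x = A^dagger b
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)   # reference least squares solver
```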













Let $A$ be an $m \times n$ matrix and $\mathbf{b}$ be a vector in $\mathbb{R}^m$. Then $\mathbf{x} = A^{\dagger} \mathbf{b}$ is a least squares solution to $A\mathbf{x} = \mathbf{b}$.



 Let $A = U \Sigma V^T$ be the SVD of $A$ with $\Sigma = \begin{bmatrix} D & O \\ O & O \end{bmatrix}$ ($D$ is nonsingular).

     Then $A^{\dagger} = V \Sigma^{\dagger} U^T$ and hence $\mathbf{x} = A^{\dagger}\mathbf{b} = V \Sigma^{\dagger} U^T \mathbf{b}$. Since

$A^T A \mathbf{x} = (V \Sigma^T \Sigma V^T)(V \Sigma^{\dagger} U^T \mathbf{b}) = V \Sigma^T (\Sigma \Sigma^{\dagger}) U^T \mathbf{b} = V \Sigma^T U^T \mathbf{b} = A^T \mathbf{b},$

       it follows that $\mathbf{x} = A^{\dagger}\mathbf{b}$ satisfies the normal equation $A^T A \mathbf{x} = A^T \mathbf{b}$.



Find the least squares line passing through the four points  .



  Let  be an equation for the line that fits the points .

      Then, by letting , the given condition can be written as the linear system  for which  and .

  Since  has full column rank, we get  which is


   . Hence . Therefore, the least squares line is given by .                         




var('x, y')

p1 = plot(x + 3/2, (x, -1, 3), color='blue')

p2 = text("$x + 3/2$", (2, 2), fontsize=20, color='blue')

show(p1 + p2, ymax=4, ymin=-1)






8.7 Complex eigenvalues and eigenvectors


 Lecture Movie :

 Lab :


We have so far focused on real eigenvalues and real eigenvectors.

    However, real square matrices can have complex eigenvalues and eigenvectors.

   In this section, we introduce complex vector spaces, complex matrices, complex eigenvalues and complex eigenvectors.


Complex vector spaces




 [Complex Vector Space]





The set of vectors with $n$ complex components is denoted by

$\mathbb{C}^n = \{ (z_1, z_2, \dots, z_n) : z_i \in \mathbb{C} \}.$

If we define the vector addition and the scalar multiple of a vector in $\mathbb{C}^n$ similarly to those for $\mathbb{R}^n$, then $\mathbb{C}^n$ is a vector space over $\mathbb{C}$ and its dimension is equal to $n$.










  If $\mathbf{e}_1, \dots, \mathbf{e}_n$ denote the standard unit vectors, then a vector $\mathbf{v}$ in $\mathbb{C}^n$ can be expressed as $\mathbf{v} = z_1 \mathbf{e}_1 + \cdots + z_n \mathbf{e}_n$, where the $z_i$'s are complex numbers,

     and the set $\{\mathbf{e}_1, \dots, \mathbf{e}_n\}$ is a basis for $\mathbb{C}^n$.  This basis is called the standard basis for $\mathbb{C}^n$.



 For a complex number $z = a + bi$, the number $\bar{z} = a - bi$ is called the conjugate of $z$ and $|z| = \sqrt{a^2 + b^2}$ is called the modulus of $z$.

    Furthermore, if we denote a complex number $z$ as $a + bi$, then $z\bar{z} = a^2 + b^2 = |z|^2$ and $|\bar{z}| = |z|$.

    For a complex vector $\mathbf{v} = (v_1, \dots, v_n)$, we define its conjugate as $\bar{\mathbf{v}} = (\bar{v}_1, \dots, \bar{v}_n)$.


● [Example]


Inner product




 [Inner Product]





Let $\mathbf{u} = (u_1, \dots, u_n)$ and $\mathbf{v} = (v_1, \dots, v_n)$ be vectors in $\mathbb{C}^n$. Then

$\mathbf{u} \cdot \mathbf{v} = u_1 \overline{v_1} + u_2 \overline{v_2} + \cdots + u_n \overline{v_n}$

satisfies the following properties:

(1) $\mathbf{u} \cdot \mathbf{v} = \overline{\mathbf{v} \cdot \mathbf{u}}$

(2) $(\mathbf{u} + \mathbf{v}) \cdot \mathbf{w} = \mathbf{u} \cdot \mathbf{w} + \mathbf{v} \cdot \mathbf{w}$

(3) $(k\mathbf{u}) \cdot \mathbf{v} = k(\mathbf{u} \cdot \mathbf{v})$

(4) $\mathbf{v} \cdot \mathbf{v} \ge 0$; in particular, $\mathbf{v} \cdot \mathbf{v} = 0 \iff \mathbf{v} = \mathbf{0}$.


The inner product $\mathbf{u} \cdot \mathbf{v}$ is called the Euclidean inner product for the vector space $\mathbb{C}^n$; the real dot product arises as a special case.
















Let $\mathbf{u} = (u_1, \dots, u_n)$, $\mathbf{v} = (v_1, \dots, v_n)$ be vectors in $\mathbb{C}^n$.

      Then, using the Euclidean inner product $\mathbf{u} \cdot \mathbf{v}$, we can define the Euclidean norm $\|\mathbf{v}\|$ of $\mathbf{v}$ and

     the Euclidean distance $d(\mathbf{u}, \mathbf{v})$ between $\mathbf{u}$ and $\mathbf{v}$ as the following:


(1)     $\|\mathbf{v}\| = \sqrt{\mathbf{v} \cdot \mathbf{v}} = \sqrt{|v_1|^2 + |v_2|^2 + \cdots + |v_n|^2}$.

(2)    $d(\mathbf{u}, \mathbf{v}) = \|\mathbf{u} - \mathbf{v}\|$.






 If $\mathbf{u} \cdot \mathbf{v} = 0$, then we say that $\mathbf{u}$ and $\mathbf{v}$ are orthogonal to each other.
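These definitions can be exercised numerically. A hedged sketch (the vectors are illustrative, NumPy assumed); note that `np.vdot` conjugates its first argument, so under the convention $\mathbf{u} \cdot \mathbf{v} = \sum_i u_i \overline{v_i}$ we pass the arguments in the order `(v, u)`:

```python
import numpy as np

# Illustrative vectors in C^2, not from the text.
u = np.array([1 + 1j, 2 - 1j])
v = np.array([1j, 1.0 + 0j])

inner = np.vdot(v, u)                     # <u, v> = sum u_i * conj(v_i)
norm_u = np.sqrt(np.vdot(u, u).real)      # ||u|| = sqrt(<u, u>)
orthogonal = abs(inner) < 1e-12           # orthogonality test <u, v> = 0
```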





For vectors , compute the Euclidean inner product and their Euclidean distance. 








4*I + 8

sqrt(13)                                                          ■




Complex Eigenvalues and Eigenvectors of Real Matrices



Let $A$ be a real square matrix of order $n$.

  Since the eigenvalues of $A$ are the zeros of its characteristic polynomial, an eigenvalue can be a complex number.

    If we find an eigenvector corresponding to a complex eigenvalue, it will be a complex vector.





If $\lambda$ is a complex eigenvalue of an $n \times n$ real matrix $A$ and $\mathbf{x}$ is its corresponding eigenvector,

     then the complex conjugate $\bar{\lambda}$ of $\lambda$ is also an eigenvalue of $A$ and $\bar{\mathbf{x}}$ is an eigenvector corresponding to $\bar{\lambda}$.


 Since an eigenvector is a nonzero vector, $\mathbf{x} \ne \mathbf{0}$ and $\bar{\mathbf{x}} \ne \mathbf{0}$. Since $A\mathbf{x} = \lambda\mathbf{x}$ and $A$ is real (i.e., $\bar{A} = A$),

$A\bar{\mathbf{x}} = \bar{A}\bar{\mathbf{x}} = \overline{A\mathbf{x}} = \overline{\lambda\mathbf{x}} = \bar{\lambda}\bar{\mathbf{x}},$

                      so it follows that $\bar{\lambda}$ is an eigenvalue of $A$ with corresponding eigenvector $\bar{\mathbf{x}}$.
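The theorem can be observed numerically. A hedged sketch (the rotation matrix below is an illustrative choice, NumPy assumed): the eigenvalues of a real matrix with no real eigenvalues come in a conjugate pair, and conjugating an eigenvector gives an eigenvector for the conjugate eigenvalue.

```python
import numpy as np

# A real matrix with complex eigenvalues: rotation by 90 degrees.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
eigvals, eigvecs = np.linalg.eig(A)

# conj(x) is an eigenvector of A for conj(lambda), since A is real.
v = eigvecs[:, 0]
pair_ok = np.allclose(A @ v.conj(), eigvals[0].conj() * v.conj())
```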

Eigenvalues of Complex Matrices





If $A \in M_n(\mathbb{C})$ satisfies $A = \bar{A}^T \ (= A^*)$, then all the eigenvalues of $A$ are real numbers.


 Let $\lambda$ be an eigenvalue of $A$; that is, there exists a nonzero vector $\mathbf{x}$ such that $A\mathbf{x} = \lambda\mathbf{x}$.

      By multiplying both sides by $\mathbf{x}^*$ on the left-hand side, we get $\mathbf{x}^* A \mathbf{x} = \lambda \mathbf{x}^* \mathbf{x}$.

        Hence $\lambda = \dfrac{\mathbf{x}^* A \mathbf{x}}{\mathbf{x}^* \mathbf{x}}$. Since $\mathbf{x}^* \mathbf{x}$ is a nonzero real number, we just need to show that $\mathbf{x}^* A \mathbf{x}$ is a real number. Note that

$\overline{\mathbf{x}^* A \mathbf{x}} = (\mathbf{x}^* A \mathbf{x})^* = \mathbf{x}^* A^* \mathbf{x} = \mathbf{x}^* A \mathbf{x}$

(because $\bar{A}^T = A^* = A$). Therefore, $\mathbf{x}^* A \mathbf{x}$, and hence $\lambda$, is a real number.
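A quick numerical confirmation (the Hermitian matrix below is an illustrative example, NumPy assumed): its eigenvalues have zero imaginary part.

```python
import numpy as np

# Illustrative Hermitian matrix: A equals its conjugate transpose.
A = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])
is_hermitian = np.allclose(A, A.conj().T)
eigvals = np.linalg.eigvals(A)
```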



Show that the eigenvalues of $A = \begin{bmatrix} a & -b \\ b & a \end{bmatrix}$ are $\lambda = a \pm bi$. In addition, show that if $r = \sqrt{a^2 + b^2} \ne 0$, then $A$ can be decomposed into

$A = r \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix},$

where $\theta$ is the angle between the $x$-axis and the line passing through the origin and the point $(a, b)$.


Since the characteristic equation of $A$ is $(\lambda - a)^2 + b^2 = 0$, the eigenvalues of $A$ are $\lambda = a \pm bi$.

       If $r = \sqrt{a^2 + b^2} \ne 0$, then $\cos\theta = \dfrac{a}{r}$ and $\sin\theta = \dfrac{b}{r}$. Therefore,

$A = r \begin{bmatrix} a/r & -b/r \\ b/r & a/r \end{bmatrix} = r \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}.$





8.8 Hermitian, Unitary, Normal Matrices


 Lecture Movie : , 

 Lab : 


We used $M_{m \times n}(\mathbb{R})$ to denote the set of all $m \times n$ real matrices.

      In this section, we introduce $M_{m \times n}(\mathbb{C})$ to denote the set of all $m \times n$ complex matrices.

    Symmetric matrices and orthogonal matrices in $M_n(\mathbb{R})$ generalize

                 to Hermitian matrices and unitary matrices in $M_n(\mathbb{C})$.

    We shall further study the diagonalization of Hermitian and unitary matrices.

Conjugate Transpose




 [Conjugate Transpose]





For a matrix $A = [a_{ij}] \in M_{m \times n}(\mathbb{C})$, the conjugate $\bar{A}$ is defined by

$\bar{A} = [\overline{a_{ij}}].$

The transpose $\bar{A}^T$ of the complex conjugate of $A$ is called the conjugate transpose of $A$ and

         is denoted by $A^*$, that is, $A^* = \bar{A}^T$.















● The Euclidean inner product in $\mathbb{C}^n$ can be written as $\mathbf{u} \cdot \mathbf{v} = \mathbf{v}^* \mathbf{u}$.

● If a matrix $A$ is real, then $A^* = A^T$.










For matrices , their conjugate transposes are




Theorem 8.8.1

  [Properties of Conjugate Transpose]

For complex matrices $A, B$ and a complex number $k$, the following hold:


 (1) $(A^*)^* = A$.

 (2) $(A + B)^* = A^* + B^*$.

 (3) $(kA)^* = \bar{k} A^*$.

 (4) $(AB)^* = B^* A^*$.

 The proof of the above theorem is easy and is left as an exercise.

Hermitian Matrix



 [Hermitian Matrix]





If a complex square matrix $A$ satisfies $A^* = A$, then $A$ is called a Hermitian matrix.





In , and hence is not Hermitian. However, since , is Hermitian.                                         


 8.8.2 [Properties of Hermitian Matrix]

Suppose $A \in M_n(\mathbb{C})$ is Hermitian. Then the following hold:


(1) For any vector $\mathbf{x} \in \mathbb{C}^n$, the product $\mathbf{x}^* A \mathbf{x}$ is a real number.

(2) Every eigenvalue of $A$ is a real number.

(3) Eigenvectors of $A$ corresponding to distinct eigenvalues are

   orthogonal to each other.

  (See Theorem 8.7.2.)

Let . Since , is Hermitian.

    The characteristic equation of is , and hence the eigenvalues of are 1, ,

      which confirms that all the eigenvalues of a Hermitian matrix are real numbers.

     Furthermore, it can be shown that the eigenvectors , , and , where

        , ,

corresponding to , , and respectively, are orthogonal to each other.                                   

Skew-Hermitian Matrices



 [Skew-Hermitian Matrix]





If a complex square matrix $A$ satisfies $A^* = -A$, then $A$ is called a skew-Hermitian matrix.





It can be verified that both matrices and   below are skew-Hermitian:




 Every square matrix $A$ can be expressed as $A = H + S$, where $H$ is Hermitian and $S$ is skew-Hermitian.

     In particular, since $\frac{1}{2}(A + A^*)$ is Hermitian and $\frac{1}{2}(A - A^*)$ is skew-Hermitian, every complex square matrix can be rewritten as

$A = \frac{1}{2}(A + A^*) + \frac{1}{2}(A - A^*).$

Unitary Matrices


 [Unitary Matrix]





If a matrix $U \in M_n(\mathbb{C})$ satisfies $U^* U = I_n$, then $U$ is called a unitary matrix. If $U$ is unitary, then $U^{-1} = U^*$. In addition, if the $j$th column vector of $U$ is denoted by $\mathbf{u}_j$, then

the $(i, j)$ entry of $U^* U$ is $\mathbf{u}_i^* \mathbf{u}_j$.

Therefore, $U$ is a unitary matrix if and only if the columns of $U$ form an orthonormal set in $\mathbb{C}^n$.
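A hedged numerical sketch of this characterization (the matrix below is an illustrative unitary matrix, NumPy assumed), which also previews the isometry property of Theorem 8.8.3:

```python
import numpy as np

# Illustrative unitary matrix with orthonormal complex columns.
U = (1 / np.sqrt(2)) * np.array([[1.0, 1j],
                                 [1j, 1.0]])
gram = U.conj().T @ U   # U* U should be the identity

# A unitary matrix preserves the Euclidean norm (isometry).
x = np.array([1 + 2j, 3.0 + 0j])
norm_preserved = np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x))
```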





Show that the following matrix is unitary:


 Since , the product

   . Hence

 is a unitary matrix. We can also show that


For example, .          



 8.8.3 [Properties of a Unitary Matrix]

Suppose $\mathbb{C}^n$ has the Euclidean inner product and $U \in M_n(\mathbb{C})$ is a unitary matrix. Then the following hold:


(1) For $\mathbf{x}, \mathbf{y} \in \mathbb{C}^n$, $(U\mathbf{x}) \cdot (U\mathbf{y}) = \mathbf{x} \cdot \mathbf{y}$, which implies $\|U\mathbf{x}\| = \|\mathbf{x}\|$.

(2) If $\lambda$ is an eigenvalue of $U$, then $|\lambda| = 1$.

(3) Eigenvectors of $U$ corresponding to distinct eigenvalues are orthogonal to each other.

 Property (1) shows that a unitary matrix is an isometry, preserving the norm.

Unitary Similarity and Unitarily Diagonalizable



 [Unitary Similarity and Unitary Diagonalization]





For matrices $A, B \in M_n(\mathbb{C})$, if there exists a unitary matrix $U$ such that $B = U^* A U$,

      then we say that $A$ and $B$ are unitarily similar to each other.

     Furthermore, if $A$ is unitarily similar to a diagonal matrix, then $A$ is called unitarily diagonalizable.





Let and . Then it can be checked that is a unitary matrix and .

         Therefore, is unitarily diagonalizable.            


 If $A$ is unitarily diagonalizable, then there exists a unitary matrix $U$ such that $U^* A U = D$ (a diagonal matrix) and

                            hence $AU = UD$. Letting $U = [\mathbf{u}_1 \ \mathbf{u}_2 \ \cdots \ \mathbf{u}_n]$ and $D = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$, we get $A\mathbf{u}_j = \lambda_j \mathbf{u}_j$.

 This implies that the $j$th column of the unitary matrix $U$ is a unit eigenvector of $A$ corresponding to the eigenvalue $\lambda_j$.

Find a unitary matrix diagonalizing matrix .

The eigenvalues of are and their corresponding eigenvectors are


Letting  ,

 and , it follows that


where is a unitary matrix.                            

Schur’s Theorem


 Transforming a complex square matrix into an upper triangular matrix




 8.8.4 [Schur’s Theorem]

A square matrix  is unitarily similar to an upper triangular matrix whose main diagonal entries are the eigenvalues of .

     That is, there exists a unitary matrix  and an upper triangular matrix  such that




where ’s are eigenvalues of .



   Let $\lambda_1, \lambda_2, \dots, \lambda_n$ be the eigenvalues of $A$. We prove this by mathematical induction on $n$.

        First, if $n = 1$, then the statement holds because $A = [\lambda_1]$ is already upper triangular.

   We now assume that the statement is true for any square matrix of order less than or equal to $n - 1$.


 ① Let $\mathbf{x}_1$ be a unit eigenvector corresponding to the eigenvalue $\lambda_1$.

② By the Gram–Schmidt orthonormalization process, there exists an orthonormal basis for $\mathbb{C}^n$ including $\mathbf{x}_1$, say $\{\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_n\}$.

③ Since $\{\mathbf{x}_1, \dots, \mathbf{x}_n\}$ is orthonormal, the matrix $U_1 = [\mathbf{x}_1 \ \mathbf{x}_2 \ \cdots \ \mathbf{x}_n]$ is a unitary matrix.

   In addition, since $A\mathbf{x}_1 = \lambda_1 \mathbf{x}_1$, the first column of $U_1^* A U_1$ is $(\lambda_1, 0, \dots, 0)^T$. Hence $U_1^* A U_1$ is of the following form:




  where . Since , the eigenvalues of  are  .

④ By the induction hypothesis, there exists a unitary matrix  such that



⑤ Letting      ,


we get



  Since  is a unitary matrix, the result follows.               

 [Lecture on this proof]


 Not every square matrix is unitarily diagonalizable. (see Chapter 10)


Normal matrix




 [Normal Matrix]





If a matrix $A \in M_n(\mathbb{C})$ satisfies

$A A^* = A^* A,$

then $A$ is called a normal matrix.








It can be shown that the following matrices  and  are normal:




A Hermitian matrix $A$ satisfies $A^* = A$ and hence $A A^* = A^* A$. This implies that any Hermitian matrix is normal.

     In addition, since a unitary matrix $U$ satisfies $U U^* = U^* U = I$, it is a normal matrix.    
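These observations can be checked directly from the definition. A hedged sketch (the matrices and the helper `is_normal` are illustrative, NumPy assumed): a Hermitian and a unitary matrix pass the normality test, while a nonzero nilpotent upper triangular matrix fails it.

```python
import numpy as np

def is_normal(M, tol=1e-12):
    """Check the defining condition A A* = A* A."""
    return np.allclose(M @ M.conj().T, M.conj().T @ M, atol=tol)

# Illustrative examples, not from the text.
H = np.array([[1.0, 2 + 1j],
              [2 - 1j, 0.0]])          # Hermitian, hence normal
Q = np.array([[0.0, -1.0],
              [1.0,  0.0]])            # real orthogonal, hence unitary and normal
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])             # nilpotent, not normal
```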

Equivalent Conditions for a Matrix to be Normal





For a matrix $A \in M_n(\mathbb{C})$, the following are equivalent:


(1) $A$ is unitarily diagonalizable.

(2) $A$ is a normal matrix.

(3) $A$ has $n$ orthonormal eigenvectors.




Let    and .

Show that  is a normal matrix and the columns of  are orthonormal eigenvectors of .

Since , it follows that  is a normal matrix.

Letting , we get




Thus  and  are eigenvectors of . In addition, since  and ,  and  are orthonormal eigenvectors of .    

Show that  is unitarily diagonalizable.


Note that matrix  is Hermitian and its eigenvalues are .

An eigenvector corresponding to  is .

      By normalizing it, we get .


     Similarly, we can get a unit eigenvector  corresponding to .

      Taking , we get .                                        









Although not every matrix $A$ is diagonalizable, using Schur's Theorem,

      we can obtain an upper triangular matrix (close to a diagonal matrix) similar to $A$.

     This almost-diagonal upper triangular matrix $J$ is called the Jordan canonical form of $A$.

   The Jordan canonical form will be discussed in Chapter 10.








8.9 *Linear system of differential equations


 Lecture Movie :  

 Lab : 


Many problems in science and engineering can be written as the mathematical problem of solving a linear system of differential equations.

   In this section, we learn how to solve linear systems of differential equations by using matrix diagonalization.



● Details can be found on the following websites:


Ch. 8 Exercises





 8.2 Similarity and Diagonalization

 8.3 Orthogonal Diagonalization, *Matrix Functions

 8.4 Quadratic Forms

*8.5 Applications of Quadratic Forms

*8.6 SVD and Generalized Inverse

 8.7 Complex Eigenvalues and Eigenvectors

 8.8 Hermitian, Unitary, Normal Matrices

*8.9 Linear System of Differential Equations


 About the Author  

                         Made by SGLee