Chapter 9
Vector Space
9.1 Axioms of a Vector Space
9.2 Inner product; *Fourier series
9.3 Isomorphism
9.4 Exercises
The operations of vector addition and scalar multiplication are not limited to abstract theory; they can model many situations outside mathematics. For example, regard the objects around you as vectors, form a set of such vectors, and define two suitable operations (vector addition and scalar multiplication) from the relations between the objects. If these two operations satisfy the two basic laws and the eight operation properties, the set becomes a mathematical vector space (or linear space). We can then use all the properties of a vector space, analyze the set with the general theory, and apply the results to real problems.
In this chapter, we give an abstract definition of a vector space and deal with general theory of a vector space.
9.1 Axioms of a Vector Space
Ref site : https://youtu.be/yi9z_2e6y8w http://youtu.be/m9ruF7EvNg
Lab site: http://matrix.skku.ac.kr/knouknowls/claweek14sec91.html
In this section, the concept of a vector is extended from arrows in 2-dimensional or 3-dimensional space to $n$-tuples. In Chapter 1, we defined addition and scalar multiplication in the $n$-dimensional space $\mathbb{R}^n$. In this section, we extend the concept of $\mathbb{R}^n$ to a general (abstract) vector space.
Vector Spaces
Definition 
[Vector space] 
If a set $V$ has two well-defined binary operations, vector addition (A) '$+$' and scalar multiplication (SM) '$\cdot$', and for any $\mathbf{u}, \mathbf{v} \in V$ and any scalar $k \in \mathbb{R}$, two basic laws

A. $\mathbf{u} + \mathbf{v} \in V$. SM. $k\mathbf{u} \in V$.

and the following eight laws hold, then we say that the set $V$ forms a vector space over $\mathbb{R}$ with the given two operations, and we denote it by $(V, +, \cdot)$ (simply $V$ if there is no confusion). Elements of $V$ are called vectors.

A1. $\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}$. A2. $(\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w})$. A3. For any $\mathbf{u} \in V$, there exists a unique element $\mathbf{0}$ in $V$ such that $\mathbf{u} + \mathbf{0} = \mathbf{u}$. A4. For each element $\mathbf{u}$ of $V$, there exists a unique $-\mathbf{u} \in V$ such that $\mathbf{u} + (-\mathbf{u}) = \mathbf{0}$.

SM1. $k(\mathbf{u} + \mathbf{v}) = k\mathbf{u} + k\mathbf{v}$. SM2. $(k + l)\mathbf{u} = k\mathbf{u} + l\mathbf{u}$. SM3. $(kl)\mathbf{u} = k(l\mathbf{u})$. SM4. $1\mathbf{u} = \mathbf{u}$.

The vector $\mathbf{0}$ satisfying A3 is called a zero vector, and the vector $-\mathbf{u}$ satisfying A4 is called a negative vector of $\mathbf{u}$.
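As a quick sanity check, these axioms can be verified numerically for $\mathbb{R}^3$ with the usual operations. The following Python sketch (our own illustration, not part of the text) spot-checks all eight laws on randomly chosen vectors; sampled checks illustrate the axioms but of course do not prove them.

```python
import numpy as np

# Spot-check the vector space axioms A1-A4, SM1-SM4 on R^3,
# using randomly chosen vectors and scalars (illustration, not a proof).
rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 3))
k, l = 2.0, -3.0
zero = np.zeros(3)

assert np.allclose(u + v, v + u)                # A1: commutativity
assert np.allclose((u + v) + w, u + (v + w))    # A2: associativity
assert np.allclose(u + zero, u)                 # A3: zero vector
assert np.allclose(u + (-u), zero)              # A4: negative vector
assert np.allclose(k * (u + v), k * u + k * v)  # SM1: distributes over vector addition
assert np.allclose((k + l) * u, k * u + l * u)  # SM2: distributes over scalar addition
assert np.allclose((k * l) * u, k * (l * u))    # SM3: associativity of scalars
assert np.allclose(1 * u, u)                    # SM4: identity scalar
```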
In general, the two operations defining a vector space are important. Therefore, it is better to write $(V, +, \cdot)$ instead of just $V$.
For vectors $\mathbf{x} = (x_1, x_2, \dots, x_n)$, $\mathbf{y} = (y_1, y_2, \dots, y_n)$ in $\mathbb{R}^n$ and a scalar $k$, the vector sum and the scalar multiple of $\mathbf{x}$ by $k$ are defined as
(1) $\mathbf{x} + \mathbf{y} = (x_1 + y_1, x_2 + y_2, \dots, x_n + y_n)$.
(2) $k\mathbf{x} = (kx_1, kx_2, \dots, kx_n)$.
The set $\mathbb{R}^n$ together with the above operations forms a vector space over the set of real numbers. ■
For vectors in
,
and a scalar , the sum of two vectors and the scalar multiple of by is defined by
(1) and (2) .
The set forms a vector space together with the above two operations. ■
Theorem 
9.1.1 
Let $V$ be a vector space. Let $\mathbf{u} \in V$ and let $k$ be a scalar. Then the following hold. (1) $0\mathbf{u} = \mathbf{0}$. (2) $k\mathbf{0} = \mathbf{0}$. (3) $(-1)\mathbf{u} = -\mathbf{u}$. (4) If $k\mathbf{u} = \mathbf{0}$, then $k = 0$ or $\mathbf{u} = \mathbf{0}$.
Zero Vector Space
Definition 
Let $V = \{\mathbf{0}\}$. For a scalar $k$, if the addition and scalar multiple are defined as $\mathbf{0} + \mathbf{0} = \mathbf{0}$ and $k\mathbf{0} = \mathbf{0}$, then
$V = \{\mathbf{0}\}$ forms a vector space. This vector space is called a zero vector space.
Let $M_{m \times n}$ be the set of all $m \times n$ matrices with real entries. That is,
$M_{m \times n} = \{ A = [a_{ij}]_{m \times n} \mid a_{ij} \in \mathbb{R} \}$.
When $m = n$, we denote $M_{n \times n}$ by $M_n$.
If $M_{m \times n}$ is equipped with the (usual) matrix addition and the scalar multiplication, then $M_{m \times n}$ forms a vector space over $\mathbb{R}$.
The zero vector is the zero matrix $O$, and for each $A \in M_{m \times n}$, the negative vector is $-A$. Note that each vector in $M_{m \times n}$ means an $m \times n$ matrix with real entries. ■
Let $C[a, b]$ be the set of all continuous functions from $[a, b]$ to $\mathbb{R}$. That is,
$C[a, b] = \{ f \mid f : [a, b] \to \mathbb{R}$ is continuous$\}$.
For $f, g \in C[a, b]$ and a scalar $k$, define the addition and the scalar multiple as
$(f + g)(x) = f(x) + g(x)$, $(kf)(x) = kf(x)$.
Then $C[a, b]$ forms a vector space over $\mathbb{R}$.
The zero vector is the zero function $f(x) = 0$, and for each $f$, $-f$ is defined as $(-f)(x) = -f(x)$.
Vectors in $C[a, b]$ mean continuous functions from $[a, b]$ to $\mathbb{R}$. ■
Let $P_n$ be the set of all polynomials of degree at most $n$ with real coefficients. In other words,
$P_n = \{ a_0 + a_1 x + \cdots + a_n x^n \mid a_0, a_1, \dots, a_n \in \mathbb{R} \}$.
Let $p(x) = a_0 + a_1 x + \cdots + a_n x^n$, $q(x) = b_0 + b_1 x + \cdots + b_n x^n$ and let $k$ be a scalar. The addition and the scalar multiplication are defined as
$(p + q)(x) = (a_0 + b_0) + (a_1 + b_1)x + \cdots + (a_n + b_n)x^n$, $(kp)(x) = ka_0 + ka_1 x + \cdots + ka_n x^n$.
Then $P_n$ forms a vector space over $\mathbb{R}$. The zero vector is the zero polynomial and each $p$ has the negative vector $-p$ defined as
$(-p)(x) = -a_0 - a_1 x - \cdots - a_n x^n$.
Vectors in $P_n$ mean polynomials of degree at most $n$ with real coefficients. ■
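Identifying a polynomial in $P_n$ with its coefficient vector makes these operations concrete. A small Python illustration in $P_2$ (the polynomials here are our own choices):

```python
import numpy as np

# A polynomial a0 + a1*x + ... + an*x^n in P_n can be stored as its
# coefficient vector (a0, a1, ..., an); the operations of P_n then
# become componentwise addition and scaling (illustration in P_2).
p = np.array([1.0, 2.0, 3.0])   # p(x) = 1 + 2x + 3x^2
q = np.array([4.0, 0.0, -1.0])  # q(x) = 4 - x^2
k = 2.0

s = p + q   # (p + q)(x) = 5 + 2x + 2x^2
m = k * p   # (2p)(x)    = 2 + 4x + 6x^2

# Evaluating at a point agrees with operating on coefficients:
x0 = 1.5
powers = x0 ** np.arange(3)
assert np.isclose(s @ powers, p @ powers + q @ powers)
assert np.isclose(m @ powers, k * (p @ powers))
```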
Subspaces
Definition 
Let $V$ be a vector space and let $W$ be a nonempty subset of $V$. If $W$ forms a vector space with the operations defined in $V$, then $W$ is called a subspace of $V$.
If $V$ is a vector space, $\{\mathbf{0}\}$ and $V$ itself are subspaces of $V$, called the trivial subspaces. ■
In fact, the only subspaces of $\mathbb{R}^2$ are $\{\mathbf{0}\}$, $\mathbb{R}^2$, and lines passing through the origin (see Section 3.4).
In $\mathbb{R}^3$, the only subspaces are (i) $\{\mathbf{0}\}$ (the zero subspace), (ii) $\mathbb{R}^3$, (iii) lines passing through the origin, and (iv) planes passing through the origin.
How to determine a subspace?
Theorem 
9.1.2 [the 2-step subspace test]
Let $V$ be a vector space and let $W$ be a nonempty subset of $V$. A necessary and sufficient condition for $W$ to be a subspace of $V$ is
(1) $\mathbf{u}, \mathbf{v} \in W \Rightarrow \mathbf{u} + \mathbf{v} \in W$ (closed under vector addition). (2) $\mathbf{u} \in W,\ k \in \mathbb{R} \Rightarrow k\mathbf{u} \in W$ (closed under scalar multiplication).
Show that the set $W$ of symmetric matrices of order $n$ is a subspace of the vector space $M_n$.
Note that $M_n$ is a vector space under the matrix addition and the scalar multiplication. Let
$W = \{ A \in M_n \mid A^T = A \}$.
The following two conditions are satisfied for $A, B \in W$ and $k \in \mathbb{R}$:
(1) $(A + B)^T = A^T + B^T = A + B$, so $A + B \in W$.
(2) $(kA)^T = kA^T = kA$, so $kA \in W$.
Hence by Theorem 9.1.2, $W$ is a subspace of $M_n$. ■
The set of invertible matrices of order $n$ is not a subspace of the vector space $M_n$.
One can obtain a noninvertible matrix by adding two invertible matrices. For example,
$I_n + (-I_n) = O$. ■
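This failure of closure is easy to exhibit numerically; the matrices below are our own illustrative choice:

```python
import numpy as np

# Adding two invertible matrices can give a noninvertible one,
# so the invertible n x n matrices are not closed under addition.
I = np.eye(2)
A, B = I, -I                # both invertible: det(I) = 1, det(-I) = 1 in 2x2
S = A + B                   # the zero matrix

assert np.linalg.det(A) != 0 and np.linalg.det(B) != 0
assert np.allclose(S, np.zeros((2, 2)))   # det(S) = 0: not invertible
```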
Let $V$ be a vector space and let $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k \in V$. Show that the set
$W = \{ c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k \mid c_1, \dots, c_k \in \mathbb{R} \}$
is a subspace of $V$. Note that $W$ is the linear span of the set $\{\mathbf{v}_1, \dots, \mathbf{v}_k\}$.
Suppose that $\mathbf{u} = c_1\mathbf{v}_1 + \cdots + c_k\mathbf{v}_k$, $\mathbf{w} = d_1\mathbf{v}_1 + \cdots + d_k\mathbf{v}_k \in W$. Then for $t \in \mathbb{R}$,
$\mathbf{u} + \mathbf{w} = (c_1 + d_1)\mathbf{v}_1 + \cdots + (c_k + d_k)\mathbf{v}_k \in W$,
$t\mathbf{u} = (tc_1)\mathbf{v}_1 + \cdots + (tc_k)\mathbf{v}_k \in W$.
Therefore, by Theorem 9.1.2, $W$ is a subspace of $V$. ■
Linear independence and linear dependence
Definition 
[Linear independence and linear dependence] 
If a subset $S = \{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k\}$ of a vector space $V$ satisfies the following condition, it is called linearly independent:
$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k = \mathbf{0} \ \Rightarrow\ c_1 = c_2 = \cdots = c_k = 0$.
If the set is not linearly independent, it is called linearly dependent. Hence being linearly dependent means that there exist some scalars $c_1, \dots, c_k$, not all zero, such that $c_1\mathbf{v}_1 + \cdots + c_k\mathbf{v}_k = \mathbf{0}$.
Remark 
Linear combination in 2-dimensional space and linear dependence (computer simulation)
http://www.geogebratube.org/student/m57551
Let , , , . Since
is a linearly independent set of . ■
Let , , . Since , is a linearly dependent set of . ■
The subset $\{1, x, x^2, \dots, x^n\}$ of $P_n$ is linearly independent. ■
Let $\{\sin^2 x, \cos^2 x, 1\}$ be a subset of $C(-\infty, \infty)$. Then since
$\sin^2 x + \cos^2 x - 1 = 0$,
the set is linearly dependent. ■
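For vectors in $\mathbb{R}^n$, linear independence can be tested by computing the rank of the matrix whose columns are the given vectors. A sketch with illustrative vectors of our own choosing:

```python
import numpy as np

# Vectors in R^n are linearly independent exactly when the matrix having
# them as columns has rank equal to the number of vectors.
def is_independent(*vectors):
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

v1, v2, v3 = np.array([1, 0, 0]), np.array([1, 1, 0]), np.array([1, 1, 1])
assert is_independent(v1, v2, v3)        # an independent set

w3 = v1 + 2 * v2                         # a linear combination of v1, v2
assert not is_independent(v1, v2, w3)    # a dependent set
```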
Basis
Definition 
[basis and dimension] 
If a subset $S = \{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n\}$ ($n \geq 1$) of a vector space $V$ satisfies the following conditions, $S$ is a basis of $V$.
(1) $V = \operatorname{span}(S)$. (2) $S$ is linearly independent.
In this case, the number of elements of the basis $S$, namely $n$, is called the dimension of $V$, denoted by $\dim V$.
The set in $M_{2 \times 2}$ consisting of $E_{11} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$, $E_{12} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$, $E_{21} = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}$, $E_{22} = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$ is a basis of $M_{2 \times 2}$. Thus $\dim M_{2 \times 2} = 4$.
On the other hand, the set $\{1, x, x^2, \dots, x^n\}$ in $P_n$ is a basis of $P_n$. Thus $\dim P_n = n + 1$.
These bases play a role similar to the standard basis $\{\mathbf{e}_1, \mathbf{e}_2, \dots, \mathbf{e}_n\}$ of $\mathbb{R}^n$, hence
$\{E_{11}, E_{12}, E_{21}, E_{22}\}$ and $\{1, x, x^2, \dots, x^n\}$ are called standard bases for $M_{2 \times 2}$ and $P_n$ respectively. ■
Show that is a basis of .
Since , is linearly independent.
Next, given , the existence of scalars such that
is guaranteed since the coefficient matrix of the linear system,
that is,
is invertible. Thus spans . Hence is a basis of . ■
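The invertibility criterion used above can be checked numerically; the basis candidates below are our own illustrative choice:

```python
import numpy as np

# A set of n vectors is a basis of R^n exactly when the matrix with those
# vectors as columns is invertible (nonzero determinant).
v1, v2, v3 = np.array([1, 1, 0]), np.array([0, 1, 1]), np.array([1, 0, 1])
A = np.column_stack([v1, v2, v3])

d = np.linalg.det(A)
assert not np.isclose(d, 0.0)   # invertible, so {v1, v2, v3} is a basis of R^3

# Any b in R^3 then has unique coordinates c with A c = b:
b = np.array([2.0, 3.0, 4.0])
c = np.linalg.solve(A, b)
assert np.allclose(A @ c, b)
```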
Linear independence of continuous function:
Wronskian
Theorem 
9.1.3 [Wronski's Test] 
If $f_1, f_2, \dots, f_n$ are $(n-1)$ times differentiable on the interval $(-\infty, \infty)$ and there exists $x_0$ such that the Wronskian defined below is not zero at $x_0$, then these functions are linearly independent:
$W(x) = \det \begin{bmatrix} f_1(x) & f_2(x) & \cdots & f_n(x) \\ f_1'(x) & f_2'(x) & \cdots & f_n'(x) \\ \vdots & \vdots & & \vdots \\ f_1^{(n-1)}(x) & f_2^{(n-1)}(x) & \cdots & f_n^{(n-1)}(x) \end{bmatrix}$.
Conversely, if $f_1, f_2, \dots, f_n$ are linearly dependent, then $W(x) = 0$ for every $x$ in $(-\infty, \infty)$.
Show by Theorem 9.1.3 that $f_1(x) = 1$, $f_2(x) = e^x$, $f_3(x) = e^{2x}$ are linearly independent.
For some (in fact, any) $x$, $W(x) = \det \begin{bmatrix} 1 & e^x & e^{2x} \\ 0 & e^x & 2e^{2x} \\ 0 & e^x & 4e^{2x} \end{bmatrix} = 2e^{3x} \neq 0$. Thus these functions are linearly independent. □
var('x')
W = wronskian(1, e^x, e^(2*x))  # wronskian(f1(x), f2(x), f3(x))
print(W)
2*e^(3*x) ■
Let , . Show that these functions are linearly independent.
Since for some ,
these functions are linearly independent. ■
Show that , are linearly dependent.
Since for any ,
,
these functions are linearly dependent. ■
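The Wronskian computation for $1$, $e^x$, $e^{2x}$ can also be reproduced with SymPy (an alternative to the Sage session shown earlier):

```python
import sympy as sp

# Wronskian test for 1, e^x, e^(2x), reproducing the computation above.
x = sp.symbols('x')
W = sp.wronskian([1, sp.exp(x), sp.exp(2*x)], x)

assert sp.simplify(W - 2*sp.exp(3*x)) == 0   # W(x) = 2e^(3x), never zero
assert W.subs(x, 0) == 2                     # W(0) = 2 != 0 -> independent
```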
9.2 Inner product; *Fourier series
Ref movie: https://youtu.be/SAgZ_iNsZjc http://youtu.be/m9ruF7EvNg
demo site: http://matrix.skku.ac.kr/knouknowls/claweek14sec92.html
In this section, we generalize the Euclidean inner product (the dot product) on $\mathbb{R}^n$ to introduce the concepts of length, distance, and orthogonality in a general vector space.
Inner product and inner product space
Definition 
[Inner product and inner product space] 
An inner product on a real vector space $V$ is a function assigning to each pair of vectors $\mathbf{u}, \mathbf{v} \in V$ a scalar $\langle \mathbf{u}, \mathbf{v} \rangle$ satisfying the following conditions.
(1) $\langle \mathbf{u}, \mathbf{v} \rangle = \langle \mathbf{v}, \mathbf{u} \rangle$ for every $\mathbf{u}, \mathbf{v}$ in $V$. (2) $\langle \mathbf{u} + \mathbf{v}, \mathbf{w} \rangle = \langle \mathbf{u}, \mathbf{w} \rangle + \langle \mathbf{v}, \mathbf{w} \rangle$ for every $\mathbf{u}, \mathbf{v}, \mathbf{w}$ in $V$. (3) $\langle k\mathbf{u}, \mathbf{v} \rangle = k\langle \mathbf{u}, \mathbf{v} \rangle$ for every $\mathbf{u}, \mathbf{v}$ in $V$ and $k$ in $\mathbb{R}$. (4) $\langle \mathbf{v}, \mathbf{v} \rangle \geq 0$ for every $\mathbf{v}$ in $V$, and $\langle \mathbf{v}, \mathbf{v} \rangle = 0$ if and only if $\mathbf{v} = \mathbf{0}$.
An inner product space is a vector space $V$ with an inner product defined on $V$.
The Euclidean inner product, that is, the dot product, is an example of an inner product on $\mathbb{R}^n$.
Let us ask what other inner products on $\mathbb{R}^n$ are possible. For this, consider an $n \times n$ matrix $A$.
Let $\mathbf{u}$ and $\mathbf{v}$ be column vectors in $\mathbb{R}^n$. Define
$f : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ (or $\langle \cdot, \cdot \rangle$) by $f(\mathbf{u}, \mathbf{v}) = \langle \mathbf{u}, \mathbf{v} \rangle = \mathbf{u}^T A \mathbf{v}$.
Then let us find the condition on $A$ so that this function becomes an inner product.
In order for $\langle \mathbf{u}, \mathbf{v} \rangle = \mathbf{u}^T A \mathbf{v}$ to be an inner product, the four conditions (1)~(4) should be satisfied. First consider conditions (2) and (3):
$\langle \mathbf{u} + \mathbf{v}, \mathbf{w} \rangle = (\mathbf{u} + \mathbf{v})^T A \mathbf{w} = \mathbf{u}^T A \mathbf{w} + \mathbf{v}^T A \mathbf{w} = \langle \mathbf{u}, \mathbf{w} \rangle + \langle \mathbf{v}, \mathbf{w} \rangle$,
$\langle k\mathbf{u}, \mathbf{v} \rangle = (k\mathbf{u})^T A \mathbf{v} = k(\mathbf{u}^T A \mathbf{v}) = k\langle \mathbf{u}, \mathbf{v} \rangle$.
Let us check when condition (1) holds. Since $\mathbf{u}^T A \mathbf{v}$ is a $1 \times 1$ matrix (hence a real number), we have
$\mathbf{u}^T A \mathbf{v} = (\mathbf{u}^T A \mathbf{v})^T = \mathbf{v}^T A^T \mathbf{u}$.
That is, to satisfy
$\langle \mathbf{u}, \mathbf{v} \rangle = \langle \mathbf{v}, \mathbf{u} \rangle$, i.e. $\mathbf{v}^T A^T \mathbf{u} = \mathbf{v}^T A \mathbf{u}$,
we must have $A^T = A$; in other words, $A$ is a symmetric matrix.
Thus the function satisfies condition (1) if $A$ is a symmetric matrix.
Finally, check condition (4). An $n \times n$ symmetric matrix $A$ should satisfy $\mathbf{x}^T A \mathbf{x} > 0$ for any nonzero vector $\mathbf{x}$.
This condition means that $A$ is positive definite. In other words, if $A$ is positive definite, $\langle \mathbf{u}, \mathbf{v} \rangle = \mathbf{u}^T A \mathbf{v}$ satisfies condition (4).
Therefore, to wrap up, if $A$ is an $n \times n$ symmetric and positive definite matrix,
then $\langle \mathbf{u}, \mathbf{v} \rangle = \mathbf{u}^T A \mathbf{v}$ defines an inner product on $\mathbb{R}^n$.
The well-known Euclidean inner product can be obtained as a special case
when $A = I_n$ (symmetric and positive definite). ■
For a symmetric matrix $A$: if the eigenvalues of $A$ are all positive, then $\mathbf{x}^T A \mathbf{x} > 0$ for any nonzero vector $\mathbf{x}$ (the converse also holds).
Let $A$ be a symmetric matrix and let $\mathbf{u}, \mathbf{v}$ be in $\mathbb{R}^2$. Then $\langle \mathbf{u}, \mathbf{v} \rangle = \mathbf{u}^T A \mathbf{v}$
satisfies conditions (1), (2), (3) of an inner product on $\mathbb{R}^2$.
Now let us show that $A$ is positive definite. Let $\mathbf{x} \neq \mathbf{0}$. Then, writing $\mathbf{x}^T A \mathbf{x}$ as a sum of squares shows
$\mathbf{x}^T A \mathbf{x} > 0$.
Hence the symmetric matrix $A$ is positive definite and
$\langle \mathbf{u}, \mathbf{v} \rangle = \mathbf{u}^T A \mathbf{v}$ defines an inner product on $\mathbb{R}^2$.
For particular vectors $\mathbf{u}$ and $\mathbf{v}$, the value $\mathbf{u}^T A \mathbf{v}$ generally differs from the dot product $\mathbf{u} \cdot \mathbf{v}$.
Hence this inner product on $\mathbb{R}^2$ is different from the Euclidean inner product. ■
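A small numerical illustration of such an inner product, using a symmetric positive definite matrix of our own choosing:

```python
import numpy as np

# An inner product <u, v> = u^T A v from a symmetric positive definite
# matrix A (this particular A is an illustrative choice).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
assert np.allclose(A, A.T)                  # symmetric
assert np.all(np.linalg.eigvalsh(A) > 0)    # positive eigenvalues -> positive definite

def ip(u, v):
    return u @ A @ v

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
assert np.isclose(ip(u, v), ip(v, u))       # symmetry
assert ip(u - v, u - v) > 0                 # positivity for a nonzero vector
assert not np.isclose(ip(u, v), u @ v)      # differs from the Euclidean inner product
```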
Norm and angle
Definition 
[norm and angle] 
Let $V$ be a vector space with an inner product $\langle \cdot, \cdot \rangle$. The norm (or length) of a vector $\mathbf{v}$ with respect to the inner product is defined by $\|\mathbf{v}\| = \sqrt{\langle \mathbf{v}, \mathbf{v} \rangle}$.
The angle $\theta$ between two nonzero vectors $\mathbf{u}$ and $\mathbf{v}$ is defined by
$\cos\theta = \dfrac{\langle \mathbf{u}, \mathbf{v} \rangle}{\|\mathbf{u}\| \|\mathbf{v}\|}$ ($0 \leq \theta \leq \pi$).
In particular, if two vectors $\mathbf{u}$ and $\mathbf{v}$ satisfy $\langle \mathbf{u}, \mathbf{v} \rangle = 0$, then they are said to be orthogonal.
For example, the norm of a vector with respect to an inner product $\langle \mathbf{u}, \mathbf{v} \rangle = \mathbf{u}^T A \mathbf{v}$ as above is in general different from its norm with respect to the Euclidean inner product.
For any inner product space, the triangle inequality holds.
Using the Gram-Schmidt orthogonalization process,
we can turn any basis of an inner product space into an orthonormal basis.
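A minimal Gram-Schmidt sketch for the Euclidean inner product on $\mathbb{R}^n$ (the starting basis is our own illustrative choice):

```python
import numpy as np

# Gram-Schmidt: turn a basis (the columns of B) into an orthonormal basis.
def gram_schmidt(B):
    Q = []
    for v in B.T:
        w = v.astype(float)
        for q in Q:
            w = w - (w @ q) * q      # subtract the projection onto q
        Q.append(w / np.linalg.norm(w))
    return np.column_stack(Q)

B = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Q = gram_schmidt(B)
assert np.allclose(Q.T @ Q, np.eye(3))   # columns are orthonormal
```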
Inner product on complex vector space
Definition 
Let $V$ be a complex vector space. Let $\mathbf{u}, \mathbf{v}, \mathbf{w}$ be any vectors in $V$ and let $k$ be any scalar. A function $\langle \cdot, \cdot \rangle$ from $V \times V$ to $\mathbb{C}$ is called an inner product (or Hermitian inner product) if the following hold. (1) $\langle \mathbf{u}, \mathbf{v} \rangle = \overline{\langle \mathbf{v}, \mathbf{u} \rangle}$. (2) $\langle \mathbf{u} + \mathbf{v}, \mathbf{w} \rangle = \langle \mathbf{u}, \mathbf{w} \rangle + \langle \mathbf{v}, \mathbf{w} \rangle$. (3) $\langle k\mathbf{u}, \mathbf{v} \rangle = k\langle \mathbf{u}, \mathbf{v} \rangle$. (4) $\langle \mathbf{v}, \mathbf{v} \rangle \geq 0$, and $\langle \mathbf{v}, \mathbf{v} \rangle = 0$ if and only if $\mathbf{v} = \mathbf{0}$.
A complex vector space with an inner product is called a complex inner product space or a unitary space.
If $\langle \mathbf{u}, \mathbf{v} \rangle = 0$ for two nonzero vectors $\mathbf{u}, \mathbf{v}$, then we say that $\mathbf{u}$ and $\mathbf{v}$ are orthogonal.
Let $V$ be a complex inner product space. By the definition of an inner product on $V$, we obtain the following properties.
(1) $\langle \mathbf{v}, \mathbf{0} \rangle = \langle \mathbf{0}, \mathbf{v} \rangle = 0$.
(2) $\langle \mathbf{u}, \mathbf{v} + \mathbf{w} \rangle = \langle \mathbf{u}, \mathbf{v} \rangle + \langle \mathbf{u}, \mathbf{w} \rangle$.
(3) $\langle \mathbf{u}, k\mathbf{v} \rangle = \overline{k}\,\langle \mathbf{u}, \mathbf{v} \rangle$.
Let $\mathbf{u} = (u_1, u_2, \dots, u_n)$ and $\mathbf{v} = (v_1, v_2, \dots, v_n)$ be vectors in $\mathbb{C}^n$.
The Euclidean inner product $\langle \mathbf{u}, \mathbf{v} \rangle = u_1\overline{v_1} + u_2\overline{v_2} + \cdots + u_n\overline{v_n}$ satisfies the conditions (1)~(4) for the inner product. ■
Let $V$ be the set of continuous functions from the interval $[a, b]$ to the complex numbers $\mathbb{C}$.
Let $f, g \in V$ and let $k \in \mathbb{C}$. If the addition and scalar multiple of these functions are defined below,
then $V$ is a complex vector space with respect to these operations:
$(f + g)(x) = f(x) + g(x)$, $(kf)(x) = kf(x)$.
In this case, a vector in $V$ is of the form $f(x) = f_1(x) + i f_2(x)$, where $f_1$ and $f_2$ are continuous functions from $[a, b]$ to $\mathbb{R}$.
For $f, g \in V$, define the following inner product:
$\langle f, g \rangle = \int_a^b f(x)\overline{g(x)}\,dx$.
Then $V$ is a complex inner product space.
We leave readers to check conditions (1)~(3) for an inner product, and show condition (4) here. Note
$f(x)\overline{f(x)} = |f(x)|^2$ and $|f(x)|^2 = f_1(x)^2 + f_2(x)^2 \geq 0$, hence $\langle f, f \rangle = \int_a^b |f(x)|^2\,dx \geq 0$. In particular, since $|f(x)|^2$ is continuous and nonnegative, $\langle f, f \rangle = 0$ implies $|f(x)|^2 = 0$ for all $x$ in $[a, b]$.
That is, $f = 0$; conversely, if $f$ is the zero function, then it is easy to see that $\langle f, f \rangle = 0$. ■
Complex inner product space, norm, distance
Definition 
[Norm and distance]
Let $V$ be a complex inner product space. The norm of $\mathbf{v}$ and the distance between $\mathbf{u}$ and $\mathbf{v}$ are defined as follows: $\|\mathbf{v}\| = \sqrt{\langle \mathbf{v}, \mathbf{v} \rangle}$, $d(\mathbf{u}, \mathbf{v}) = \|\mathbf{u} - \mathbf{v}\|$.
Find the Euclidean inner product and the distance of the given vectors in $\mathbb{C}^n$; both follow directly from the definitions $\langle \mathbf{u}, \mathbf{v} \rangle = \sum_i u_i\overline{v_i}$ and $d(\mathbf{u}, \mathbf{v}) = \|\mathbf{u} - \mathbf{v}\|$. ■
For the complex inner product space of continuous functions above, with given $f$ and interval $[a, b]$, the norm is found from $\|f\| = \sqrt{\int_a^b |f(x)|^2\,dx}$. ■
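These definitions are easy to evaluate numerically; the vectors below are our own illustrative choice (note that NumPy's `vdot` conjugates its first argument):

```python
import numpy as np

# Norm and distance in C^2 under the Euclidean (Hermitian) inner product
# <u, v> = sum u_i * conj(v_i).
u = np.array([1 + 1j, 2 - 1j])
v = np.array([0 + 2j, 1 + 0j])

norm_u = np.sqrt(np.vdot(u, u).real)          # ||u|| = sqrt(<u, u>)
dist   = np.sqrt(np.vdot(u - v, u - v).real)  # d(u, v) = ||u - v||

assert np.isclose(norm_u, np.sqrt(7))  # |1+i|^2 + |2-i|^2 = 2 + 5 = 7
assert np.isclose(dist, 2.0)           # |1-i|^2 + |1-i|^2 = 2 + 2 = 4, sqrt = 2
```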
Cauchy-Schwarz inequality and the triangle inequality
Theorem 
9.2.1 
Let $V$ be a complex inner product space. For any $\mathbf{u}, \mathbf{v}$ in $V$, the following hold.
(1) $|\langle \mathbf{u}, \mathbf{v} \rangle| \leq \|\mathbf{u}\| \|\mathbf{v}\|$. (Cauchy-Schwarz inequality) (2) $\|\mathbf{u} + \mathbf{v}\| \leq \|\mathbf{u}\| + \|\mathbf{v}\|$. (triangle inequality)
We prove (1) only and leave the proof of (2) as an exercise.
If $\mathbf{v} = \mathbf{0}$, then $|\langle \mathbf{u}, \mathbf{v} \rangle| = 0 = \|\mathbf{u}\| \|\mathbf{v}\|$. Hence (1) holds. Let $\mathbf{v} \neq \mathbf{0}$ and put $a = \|\mathbf{v}\|^2$, $b = \langle \mathbf{u}, \mathbf{v} \rangle$.
Then $a > 0$, and for every scalar $t$ we have the following:
$0 \leq \|\mathbf{u} - t\mathbf{v}\|^2 = \|\mathbf{u}\|^2 - \overline{t}\,b - t\,\overline{b} + |t|^2 a$.
Thus, taking $t = b/a$ gives $0 \leq \|\mathbf{u}\|^2 - \dfrac{|b|^2}{a}$, that is, $|\langle \mathbf{u}, \mathbf{v} \rangle|^2 \leq \|\mathbf{u}\|^2 \|\mathbf{v}\|^2$, so (1) holds. ■
Let $\mathbf{u}, \mathbf{v}$ be given vectors in $\mathbb{C}^n$. Answer the following.
(1) Compute the Euclidean inner product $\langle \mathbf{u}, \mathbf{v} \rangle$.
(2) Show that $\mathbf{u}$ and $\mathbf{v}$ are linearly independent.
(1) The inner product $\langle \mathbf{u}, \mathbf{v} \rangle = u_1\overline{v_1} + \cdots + u_n\overline{v_n}$ is computed directly from the definition.
(2) If $c_1\mathbf{u} + c_2\mathbf{v} = \mathbf{0}$ for scalars $c_1, c_2$, then comparing components gives
$c_1 = c_2 = 0$. Thus $\mathbf{u}$ and $\mathbf{v}$ are linearly independent. ■
Let $\mathbf{u}, \mathbf{v}$ be given vectors in $\mathbb{C}^n$.
Check that the Cauchy-Schwarz inequality and the triangle inequality hold.
Computing both sides directly, $|\langle \mathbf{u}, \mathbf{v} \rangle| \leq \|\mathbf{u}\| \|\mathbf{v}\|$, so the Cauchy-Schwarz inequality holds.
Also $\|\mathbf{u} + \mathbf{v}\| \leq \|\mathbf{u}\| + \|\mathbf{v}\|$, so the triangle inequality holds. ■
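A numerical check of both inequalities in $\mathbb{C}^2$, with vectors of our own choosing:

```python
import numpy as np

# Checking the Cauchy-Schwarz and triangle inequalities in C^2
# under the Euclidean inner product <u, v> = sum u_i * conj(v_i).
u = np.array([1 + 2j, 3 - 1j])
v = np.array([2 - 1j, 1 + 1j])

ip = np.vdot(v, u)   # <u, v>; vdot conjugates its first argument
norm = lambda w: np.sqrt(np.vdot(w, w).real)

assert abs(ip) <= norm(u) * norm(v) + 1e-12       # Cauchy-Schwarz
assert norm(u + v) <= norm(u) + norm(v) + 1e-12   # triangle inequality
```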
[Cauchy-Schwarz inequality in $\mathbb{C}^n$ and $C[a, b]$]
(1) Let $\mathbb{C}^n$ be the complex inner product space with the Euclidean inner product.
Let $\mathbf{u} = (u_1, \dots, u_n)$, $\mathbf{v} = (v_1, \dots, v_n)$ be in $\mathbb{C}^n$. Then the Cauchy-Schwarz inequality is given by
$\left| \sum_{i=1}^n u_i \overline{v_i} \right|^2 \leq \left( \sum_{i=1}^n |u_i|^2 \right) \left( \sum_{i=1}^n |v_i|^2 \right)$. ■
(2) Let $f, g \in C[a, b]$. As in the example above, with the inner product $\langle f, g \rangle = \int_a^b f(x)\overline{g(x)}\,dx$, the Cauchy-Schwarz inequality is given by
$\left| \int_a^b f(x)\overline{g(x)}\,dx \right|^2 \leq \left( \int_a^b |f(x)|^2\,dx \right) \left( \int_a^b |g(x)|^2\,dx \right)$. ■
[Triangle inequality] Consider the inner products given in (1) and (2) above.
(1) Let $\mathbf{u}, \mathbf{v} \in \mathbb{C}^n$. Then the triangle inequality holds. That is,
$\left( \sum_{i=1}^n |u_i + v_i|^2 \right)^{1/2} \leq \left( \sum_{i=1}^n |u_i|^2 \right)^{1/2} + \left( \sum_{i=1}^n |v_i|^2 \right)^{1/2}$. ■
(2) Let $f, g \in C[a, b]$. Then the triangle inequality holds. That is,
$\left( \int_a^b |f(x) + g(x)|^2\,dx \right)^{1/2} \leq \left( \int_a^b |f(x)|^2\,dx \right)^{1/2} + \left( \int_a^b |g(x)|^2\,dx \right)^{1/2}$. ■
9.3 Isomorphism
Reference site: https://youtu.be/SAzm6t_sb8o http://youtu.be/frOcceYb2fc
Lab site: http://matrix.skku.ac.kr/knouknowls/claweek14sec93.html
We generalize the definition of a linear transformation on $\mathbb{R}^n$ to a general vector space $V$.
Special attention will be given to linear transformations that are both injective and surjective.
Definition 
Let $V$ and $W$ be vector spaces over $\mathbb{R}$, and let $T$ be a map from the vector space $V$ to the vector space $W$. If $T$ satisfies the following conditions, it is called a linear transformation.
(1) $T(k\mathbf{u}) = kT(\mathbf{u})$ for every $\mathbf{u}$ in $V$ and $k$ in $\mathbb{R}$. (2) $T(\mathbf{u} + \mathbf{v}) = T(\mathbf{u}) + T(\mathbf{v})$ for every $\mathbf{u}, \mathbf{v}$ in $V$.
If $V = W$, then the linear transformation $T : V \to V$ is called a linear operator.
Theorem 
9.3.1 
If $T : V \to W$ is a linear transformation, then we have the following:
(1) $T(\mathbf{0}) = \mathbf{0}$. (2) $T(-\mathbf{u}) = -T(\mathbf{u})$. (3) $T(\mathbf{u} - \mathbf{v}) = T(\mathbf{u}) - T(\mathbf{v})$.
If $T : V \to W$ satisfies $T(\mathbf{v}) = \mathbf{0}$ for any $\mathbf{v} \in V$, then it is a linear transformation,
called the zero transformation. Also, if $T : V \to V$ satisfies $T(\mathbf{v}) = \mathbf{v}$ for any $\mathbf{v} \in V$,
then it is a linear transformation, called the identity operator. ■
Define $T : V \to V$ by $T(\mathbf{v}) = k\mathbf{v}$ ($k$ a fixed scalar). Then $T$ is a linear transformation. The following two properties hold.
(1) $T(\mathbf{u} + \mathbf{v}) = k(\mathbf{u} + \mathbf{v}) = k\mathbf{u} + k\mathbf{v} = T(\mathbf{u}) + T(\mathbf{v})$
(2) $T(c\mathbf{u}) = k(c\mathbf{u}) = c(k\mathbf{u}) = cT(\mathbf{u})$
If $0 < k < 1$, then $T$ is called a contraction, and if $k > 1$, then it is called a dilation. ■
Let $C(-\infty, \infty)$ be the vector space of all continuous functions from $\mathbb{R}$ to $\mathbb{R}$, and let $D$ be the subspace of $C(-\infty, \infty)$
consisting of the differentiable functions.
Define $T : D \to C(-\infty, \infty)$ by $T(f) = f'$. Then $T$ is a linear transformation, called the derivative operator. ■
Let $D$ be the subspace of $C(-\infty, \infty)$ consisting of the differentiable functions.
Define $T$ on $D$ by . Then $T$ is a linear transformation. ■
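The linearity of the derivative operator can be checked symbolically; the sample functions below are our own choices:

```python
import sympy as sp

# The derivative operator T(f) = f' is linear: checked symbolically on
# two sample functions and arbitrary constant coefficients.
x = sp.symbols('x')
a, b = sp.symbols('a b')
f = sp.sin(x)
g = sp.exp(x) * x

lhs = sp.diff(a * f + b * g, x)              # T(a f + b g)
rhs = a * sp.diff(f, x) + b * sp.diff(g, x)  # a T(f) + b T(g)
assert sp.simplify(lhs - rhs) == 0
```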
Kernel and Range
Definition 
[Kernel and Range] 
Let $T : V \to W$ be a linear transformation. Define
$\ker T = \{ \mathbf{v} \in V \mid T(\mathbf{v}) = \mathbf{0} \}$, $\operatorname{Im} T = \{ T(\mathbf{v}) \mid \mathbf{v} \in V \}$.
$\ker T$ is called the kernel of $T$ and $\operatorname{Im} T$ the range of $T$.
If $T : V \to W$ is the zero transformation, then $\ker T = V$ and $\operatorname{Im} T = \{\mathbf{0}\}$. ■
If $T : V \to V$ is the identity operator, then $\ker T = \{\mathbf{0}\}$ and $\operatorname{Im} T = V$. ■
Let $T$ be the derivative operator defined by $T(f) = f'$ as in the example above.
$\ker T$ is "the set of all constant functions defined on $\mathbb{R}$" and
$\operatorname{Im} T$ is "the set of all continuous functions, that is, $C(-\infty, \infty)$". ■
Basic properties of kernel and range
Theorem 
9.3.2 
If $T : V \to W$ is a linear transformation, then $\ker T$ and $\operatorname{Im} T$ are subspaces of $V$ and $W$, respectively.
Theorem 
9.3.3 
If $T : V \to W$ is a linear transformation, the following statements are equivalent.
(1) $T$ is an injective (or one-to-one) function. (2) $\ker T = \{\mathbf{0}\}$.
Isomorphism
Definition 
[Isomorphism]
If a linear transformation $T : V \to W$ is one-to-one and onto, then it is called an isomorphism. In this case, we say that $V$ is isomorphic to $W$, denoted by $V \cong W$.
Any $n$-dimensional real vector space (defined over the real field $\mathbb{R}$) is isomorphic to $\mathbb{R}^n$, and
any $n$-dimensional complex vector space (defined over the complex field $\mathbb{C}$) is isomorphic to $\mathbb{C}^n$.
Theorem 
9.3.4 
Any $n$-dimensional real vector space $V$ is isomorphic to $\mathbb{R}^n$.
We immediately obtain the following result from the above theorem:
$M_{m \times n} \cong \mathbb{R}^{mn}$, $P_n \cong \mathbb{R}^{n+1}$. ■
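For instance, the coordinate map underlying $P_2 \cong \mathbb{R}^3$ can be illustrated in Python (our own sketch):

```python
import numpy as np

# The coordinate map sending a0 + a1*x + a2*x^2 in P_2 to (a0, a1, a2)
# in R^3 is one-to-one, onto, and linear -- an isomorphism P_2 ≅ R^3.
def coords(poly_coeffs):
    return np.asarray(poly_coeffs, dtype=float)

p = [1, 2, 3]    # 1 + 2x + 3x^2
q = [0, -1, 4]   # -x + 4x^2
k = 5.0

# Linearity: coordinates of p + q and k*p match vector operations in R^3.
assert np.allclose(coords([a + b for a, b in zip(p, q)]), coords(p) + coords(q))
assert np.allclose(coords([k * a for a in p]), k * coords(p))
```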