# 2021 Winter Math4AI Final PBL Report

## E-mail: minjikang*@gmail.com, yss*@g.skku.edu, ngzhiwei**@hotmail.com

Korean (HWP): http://matrix.skku.ac.kr/PBL/PBL-Report-Form-Korean.hwp

English (MS Word): http://matrix.skku.ac.kr/PBL/PBL-Report-Form-English.docx

We have learned Basic Mathematics (matrices, derivatives, and statistics) over the 14 weeks (days) of this semester, and can now understand and discuss the following concepts.

1. SVD (Singular Value Decomposition)

2. Data and Covariance Matrix

3. PCA (Principal Component Analysis)

4. Rank Reduction and the Role of SVD in PCA

5. The BP (Back-Propagation) Algorithm in ML (Machine Learning) and ANNs (Artificial Neural Networks)

- Sample Student HW/Reports [examples: question/answer/activity records of students from previous semesters]

Math4AI-Summary : http://matrix.skku.ac.kr/Math4AI-Summary/

2021 Fall PBL report by two Freshmen (English) : http://matrix.skku.ac.kr/2021-Math4AI-Fall-PBL/

2021 Summer PBL report (English) : http://matrix.skku.ac.kr/2021-Final-PBL-E/

2021 Summer PBL report (Korean): http://matrix.skku.ac.kr/2021-Final-PBL/

2020 Fall PBL report (Korean): http://matrix.skku.ac.kr/2020-Math4AI-PBL/ (Basic Math for AI)

Sample 1 (challenge-semester week-7 final report): http://matrix.skku.ac.kr/2020-Math4AI-Final-pbl2/

Sample 2 (challenge-semester week-7 final report): http://matrix.skku.ac.kr/2020-math4ai-final-pbl/

Sample 3 (challenge-semester week-4 midterm report): http://matrix.skku.ac.kr/2020-Mid-PBL-2/

Sample 4 (challenge-semester week-4 midterm report): http://matrix.skku.ac.kr/2020-Mid-PBL-1/

Linear Algebra (English): http://matrix.skku.ac.kr/2018-album/LA-PBL.htm

Linear Algebra (Korean, PDF): http://matrix.skku.ac.kr/2015-album/2015-F-LA-Sep-Record.pdf

Calculus 1 PBL report: http://matrix.skku.ac.kr/Cal-Book1/Calculus-1/

Calculus 2 PBL report: http://matrix.skku.ac.kr/Cal-Book1/Calculus-2/

We practiced our code at http://matrix.skku.ac.kr/KOFAC/ (the Math Lab practice site, which also includes a high-school math review).

## ◆ General Academic Knowledge  (10 points)

(1) State more than 10 math definitions and concepts that you learned in the first 14 weeks (days).

http://matrix.skku.ac.kr/intro-math4ai/

1.    Polynomial functions: A function f(x) that is a polynomial in x is called a polynomial function. The best-known polynomial functions are linear functions, quadratic functions, and, in general, nth-degree polynomials.

2.    Rational functions: A rational function f(x) is a function of x of the form P(x)/Q(x), where P(x) and Q(x) are polynomials.

3.    Vector and scalar: A quantity which does not depend on direction is called a scalar quantity. Vector quantities have two characteristics, a magnitude and a direction. Scalar quantities have only a magnitude.

4.    Rules for Matrix Operations: A + B = B + A, 1A = A, (a + b)C = aC + bC, (ab)C = a(bC), …
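These rules can be checked numerically. Below is a small Python/NumPy sketch (in class we used Sage at http://matrix.skku.ac.kr/KOFAC/, so this is just an equivalent illustration with matrices we made up):

```python
import numpy as np

# Small concrete matrices and scalars to check the algebraic rules numerically.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
C = np.array([[1.0, 0.0], [2.0, 1.0]])
a, b = 2.0, 3.0

commutes = np.allclose(A + B, B + A)                      # A + B = B + A
identity_scalar = np.allclose(1 * A, A)                   # 1A = A
scalar_distrib = np.allclose((a + b) * C, a * C + b * C)  # (a+b)C = aC + bC
scalar_assoc = np.allclose((a * b) * C, a * (b * C))      # (ab)C = a(bC)
```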

5.    Classification: Classification is the problem of identifying to which category a new data point belongs, based on the characteristics of the given data.

6.    Gauss-Jordan elimination: Gauss-Jordan elimination is an algorithm for solving systems of linear equations. When we solve a system with a unique solution this way, the final form of the augmented matrix has the identity matrix on the left side.
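As an illustration, here is a minimal pure-Python Gauss-Jordan elimination (a sketch written for this report, not the Sage code from the QnA), applied to a small 2x2 system:

```python
def gauss_jordan(aug):
    """Reduce an augmented matrix [A | b] to reduced row echelon form and
    return the solution column (assumes A is square and invertible)."""
    n = len(aug)
    for col in range(n):
        # Partial pivoting: bring the row with the largest pivot up.
        pivot_row = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot_row] = aug[pivot_row], aug[col]
        # Scale the pivot row so the pivot becomes 1.
        pivot = aug[col][col]
        aug[col] = [x / pivot for x in aug[col]]
        # Eliminate the pivot column from every other row.
        for r in range(n):
            if r != col and aug[r][col] != 0:
                factor = aug[r][col]
                aug[r] = [x - factor * y for x, y in zip(aug[r], aug[col])]
    return [row[-1] for row in aug]

# 2x + y = 5 and x + 3y = 10  ->  x = 1, y = 3
solution = gauss_jordan([[2.0, 1.0, 5.0], [1.0, 3.0, 10.0]])
```

After the loop finishes, the left block of the augmented matrix is the identity, exactly as in the definition above.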

7.    Least Squares Problem: Least-squares problems fall into two categories, linear (ordinary) and nonlinear least squares, depending on whether or not the residuals are linear in all unknowns. The linear least-squares problem occurs in statistical regression analysis and has a closed-form solution.
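A small NumPy sketch of the linear least-squares closed form (the data points here are made up for illustration; `lstsq` solves the normal equations A^T A w = A^T y for us):

```python
import numpy as np

# Fit y = m*x + c to data points by linear least squares.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])   # roughly y = 2x + 1

# Design matrix: a column of x values and a column of ones.
A = np.column_stack([x, np.ones_like(x)])

# Closed-form least-squares solution.
m, c = np.linalg.lstsq(A, y, rcond=None)[0]
residual = np.linalg.norm(A @ np.array([m, c]) - y)
```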

8.    SVD: The SVD (singular value decomposition) exists for every matrix, square or rectangular, and this generality is one of its key features. The singular value decomposition of an m x n matrix A is the matrix factorization A = U Σ V^T, where U is an m x m orthogonal matrix, V is an n x n orthogonal matrix, and Σ is an m x n rectangular (generalized) diagonal matrix with non-negative real numbers on the main diagonal.
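The definition can be verified numerically with NumPy's SVD (a sketch with an arbitrary 3x2 matrix of our choosing; in class we computed SVDs in Sage):

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 3.0], [0.0, 2.0]])   # a 3x2 rectangular matrix

U, s, Vt = np.linalg.svd(A)          # full SVD: U is 3x3, Vt is 2x2
Sigma = np.zeros_like(A)             # rebuild the 3x2 "diagonal" Sigma
Sigma[:len(s), :len(s)] = np.diag(s)

reconstructed = U @ Sigma @ Vt       # should reproduce A
orthogonal_U = np.allclose(U.T @ U, np.eye(3))
orthogonal_V = np.allclose(Vt @ Vt.T, np.eye(2))
```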

9.    Optimal solutions: An optimal solution is a point at which a function defined on a set attains its maximum or minimum value. The problem of finding an optimal solution involves generalized concepts and operations of derivatives. Techniques for finding approximate solutions help when we look for an optimal solution.

10.  Local Maximum and Minimum: The derivative tells us whether a function increases or decreases from the sign of its slope at a point (or on a given interval). A point where the slope changes sign (e.g., from decreasing to increasing or vice versa) is called a critical point, and the derivative of the function is zero there. The second derivative can be used to determine whether the function has a local maximum or minimum at a critical point, and to check whether a function attains its absolute maximum or minimum on a given interval.

11.  Fermat's Theorem for Extrema: Fermat's theorem essentially says that every local extremum (i.e. local maximum or minimum) of the function that occurs at a point within the interval where the function is differentiable (i.e. the function has a derivative at that point) must be a stationary point.

12.  Gradient descent method: Gradient descent is an iterative optimization algorithm for finding a local minimum of a function. To find a local minimum using gradient descent, we take steps proportional to the negative of the gradient of the function at the current point (i.e., we move in the direction opposite to the gradient).
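A minimal gradient-descent sketch in plain Python, assuming the simple test function f(x) = (x - 3)^2 (our own choice for illustration), whose minimum is at x = 3:

```python
# Gradient descent on f(x) = (x - 3)^2, with derivative f'(x) = 2(x - 3).
def f_prime(x):
    return 2.0 * (x - 3.0)

x = 0.0          # starting point x1
rate = 0.1       # learning rate (step size)
for _ in range(200):
    step = rate * f_prime(x)
    x = x - step                  # move against the gradient
    if abs(step) < 1e-10:         # stop when the update is negligible
        break
```

The iterate converges to the minimizer x = 3; with a learning rate that is too large the iteration can overshoot and diverge instead.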

13.  Conditional probability: Conditional probability, an essential concept in data analysis, is the probability that an event B occurs given that an event A has occurred.

14.  Bayes' theorem: Bayes' theorem describes the probability of an event based on prior knowledge of conditions related to the event.

15.  PCA: PCA converts a data set from a high-dimensional space into a lower-dimensional, easier-to-handle space while preserving the distribution of the original data as much as possible.

16.  Artificial Neural Network: Computing systems inspired by the biological neural networks that constitute animal brains. An ANN is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain.

17.  Backpropagation (BP): An algorithm for supervised learning of artificial neural networks using gradient descent. Given an artificial neural network and an error function, the method calculates the gradient of the error function with respect to the neural network's weights.

- A vector is a geometric object that has magnitude (or length) and direction. It can be represented as an ordered set of numbers arranged in a column.

- A matrix is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns, which is used to represent a mathematical object or a property of such an object.

- Distance similarity measures how alike two data points are by the distance between their vectors. It is often used for data sets where similar items have similar values.

- Cosine similarity measures data similarity by the angle between two vectors. It is often used for data sets where the trend (direction) matters more than the magnitude.
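The two similarity measures can be sketched in a few lines of Python (the vectors are made-up examples; note that two vectors pointing in the same direction have cosine similarity exactly 1 even when they are far apart in distance):

```python
from math import sqrt

def cosine_similarity(u, v):
    """cos(theta) = (u . v) / (|u| |v|)"""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def euclidean_distance(u, v):
    return sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Two vectors pointing the same way but with different magnitudes:
u, w = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
same_direction = cosine_similarity(u, w)   # exactly 1.0 (same direction)
far_apart = euclidean_distance(u, w)       # sqrt(1 + 4 + 9) = sqrt(14)
```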

- The least squares problem is to minimize the sum of the squares of the residuals, i.e., the differences between the observed data and the values predicted by each individual equation. It is often used in data fitting.

- QR decomposition is a decomposition of a matrix A into a product A = QR of an orthogonal matrix Q and an upper triangular matrix R.

- LU decomposition factors a matrix as the product of a lower triangular matrix and an upper triangular matrix.

- SVD is a factorization of a real or complex matrix. It generalizes the diagonalization of a square normal matrix with an orthonormal eigenbasis to any rectangular matrix.

- Limit is the value that a function (or sequence) approaches as the input (or index) approaches some value.

- Derivative measures the sensitivity to change of the function value (output value) with respect to a change in its argument (input value).

- GDM (the gradient descent method) is a first-order iterative optimization algorithm for finding a local minimum of a differentiable function.

- Expectation of a random variable X, often denoted E(X), is a generalization of the weighted average, and is intuitively the arithmetic mean of a large number of independent realizations of X.

- Variance is the expectation of the squared deviation of a random variable from its population mean or sample mean. Variance is a measure of dispersion, meaning it is a measure of how far a set of numbers is spread out from their average value.

- Standard deviation is a measure of the amount of variation or dispersion of a set of values. A low standard deviation indicates that the values tend to be close to the mean of the set, while a high standard deviation indicates that the values are spread out over a wider range.

- Bayes' theorem describes the probability of an event, based on prior knowledge of conditions that might be related to the event.

- Covariance is a measure of the joint variability of two random variables. If the greater values of one variable mainly correspond with the greater values of the other variable, the covariance is positive; when the greater values of one variable mainly correspond to the lesser values of the other, the covariance is negative. The sign of the covariance therefore shows the tendency in the linear relationship between the variables.

- A covariance matrix is a square matrix giving the covariance between each pair of elements of a given random vector.

- PCA is the process of computing the principal components and using them to perform a change of basis on the data, sometimes using only the first few principal components and ignoring the rest. It is commonly used for dimensionality reduction by projecting each data point onto only the first few principal components to obtain lower-dimensional data while preserving as much of the data's variation as possible.
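A compact NumPy sketch of PCA via the SVD of the centered data matrix (the toy data set is ours, chosen to lie near the line y = x, so one principal component captures almost all the variance):

```python
import numpy as np

# Toy data: points lying close to the line y = x in the plane.
X = np.array([[1.0, 1.0], [2.0, 2.1], [3.0, 2.9], [4.0, 4.2]])

Xc = X - X.mean(axis=0)                 # center the data (zero mean per column)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Rows of Vt are the principal directions; singular values give their spread.
pc1 = Vt[0]
projected = Xc @ pc1                    # 1-D coordinates along the first PC
explained = s[0] ** 2 / (s ** 2).sum()  # fraction of total variance kept
```

Keeping only `projected` is exactly the "change of basis plus dropping the remaining components" described above.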

- Linear regression is a linear approach for modelling the relationship between a scalar response and one or more explanatory variables. More simply, it can be understood as fitting an appropriate line to the data.

- ANNs are computing systems inspired by the biological neural networks that constitute animal brains. An ANN consists of an input layer, one or more hidden layers, and an output layer; signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times.

- Back-propagation is a widely used algorithm for training feedforward neural networks. In fitting a neural network, backpropagation computes the gradient of the loss function with respect to the weights of the network for a single input-output example, and does so efficiently, unlike a naive direct computation of the gradient with respect to each weight individually.
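As a minimal illustration of backpropagation, here is a single sigmoid neuron trained on one made-up example in plain Python (our sketch; real networks apply the same chain rule layer by layer):

```python
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

# One input-output example and initial weight, bias, and learning rate.
x, target = 1.5, 0.0
w, b, rate = 0.8, 0.2, 0.5

losses = []
for _ in range(100):
    # Forward pass: compute the neuron's output and the squared error.
    z = w * x + b
    y = sigmoid(z)
    losses.append((y - target) ** 2)
    # Backward pass (chain rule): dL/dw = 2(y - t) * y(1 - y) * x.
    delta = 2.0 * (y - target) * y * (1.0 - y)
    w -= rate * delta * x
    b -= rate * delta
```

Each iteration is one forward pass followed by one gradient step, so the loss shrinks toward zero.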

- MNIST is a large database of handwritten digits that is commonly used for training various image processing systems. The database is also widely used for training and testing in the field of machine learning.

(2) State more than 10 things that you know/can do/found after you studied the first 14 weeks (days).


1. Sketch graphs of polynomial functions

2. Make a composite function

3. Find solutions of equations

4. Perform matrix operations by applying operations on vectors and matrices

5. Check the inverse and transpose of a matrix

6. Find the inner product of, and the angle between, two vectors

7. Solve systems of linear equations

8. Find the SVD of a matrix

9. Approximate the derivative of a differentiable function using a while loop

10. Find the local maximum, local minimum, absolute maximum, and absolute minimum of a function

11. Find the minimum value of a function with the GDM code

12. Draw graphs of various differentiable functions, determine a small interval containing the local minimum identified from the figure, and apply the GDM code to each interval with a reasonable starting point x1

13. Explain how SVD is used in PCA

14. Take a data matrix and apply PCA to it

15. Carry out dimension reduction using the principal components of the covariance matrix

16. Give a simple description of ANNs and the backpropagation algorithm
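For item 9, here is a sketch of what "find the derivative using a while loop" can look like in plain Python (our reconstruction, not the exact QnA code): the loop halves h until successive central-difference estimates agree.

```python
# Numerical derivative of f at a point: shrink h with a while loop until
# two successive central-difference estimates agree within a tolerance.
def numeric_derivative(f, x, tol=1e-8):
    h = 0.1
    prev = (f(x + h) - f(x - h)) / (2 * h)
    h /= 2
    current = (f(x + h) - f(x - h)) / (2 * h)
    while abs(current - prev) > tol:
        prev = current
        h /= 2
        current = (f(x + h) - f(x - h)) / (2 * h)
    return current

# Derivative of t^3 at t = 2 (exact value: 3 * 2^2 = 12).
slope = numeric_derivative(lambda t: t ** 3, 2.0)
```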

## ◆ Participation Part (20 points)

(A) Briefly describe your contributions through Q&A, for yourself and fellow students, in our "Basic Math4AI" class!

(A1) Quantity:

- Check your participation numbers in QnA for each week (Saturday to Friday):

12/13: 0    12/14: 3    12/15: 3    12/16: 3

12/17: 2    12/18: 2    12/19: 0    12/20: 0

12/21: 5    12/22: 5    12/23: 1    12/24: 0

12/25: 1    12/26: 1    12/27: 0    12/28: 1

12/29: 0    12/30: 5

- Total number of sessions (Q: 2, A: 26, Others: 4)

- Number of online attendances: (34 / 34)

- Off-line attendance and number of absences: (3 / 3) (0 absences)

- Check your participation numbers in QnA for each week (Saturday to Friday):

Week 1: 0    Week 2: 3     Week 3: 0     Week 4: 0

Week 5: 10   Week 6: 1     Week 7: 3     Week 8: 0

Week 9: 1    Week 10: 0    Week 11: 0    Week 12: 2

Week 13: 0   Week 14: 3    Week 15: 0

- Total number of sessions (Q: 1, A: 13, Others: 9)

- Number of online attendances: (35 / 35)

- Off-line attendance and number of absences: (3 / 3) (0 absences)

· "Others" includes announcements and other course-related posts.

(A2) What did you contribute through this course so far (Q&A and/or in class)?

· I actively participated in the Q&A open problems and frequently shared my solutions with the class.

· I shared study material that I found while studying the concepts. This may have helped some colleagues understand the concepts better.

· I actively commented on other colleagues' solutions and contributed to finalizing them.

Ng Zhi Wei Self Introduction / Motivation

[Question] Open problem 11 by 박우현

Problem 1 by Ng Zhi Wei

Problem 2 by Ng Zhi Wei

Problem 3 by Ng Zhi Wei

Problem 14 by 안성준

Problem 15 by 안성준

Problem 5 by random_matrix() by Ng Zhi Wei

Problem 6 by Ng Zhi Wei

Problem 1 by 남상현

Day 7 Open Problem 1 by Ng Zhi Wei 응즈웨이 (using while loop) [Comment by: 우건주, 고재윤, 남상현, 윤상수]

[News] The Future of Jobs in the Era of AI (인공지능 시대의 진로와 직업)

Explained SVD in class (2nd WebEx meeting)

[Question] What exactly does 'Linear Regression' mean?

A brief summary of the Gradient Descent Method, and what I learned from the question post and comments by 윤상수 about the starting point of GDM

Finalised "Code a Neural Network with Backpropagation in Python" by 이우흠

Using SVD to perform PCA

(A3) Number of problems in the QnA with a Final OK by SGLee (and/or completed discussions/questions) in which your name is included:

22 problems

Final OK by SGLee [Finalized by 이예진] Dec. 14th, Summary of the 1st WebEx meeting - how this class will proceed and who will get an A in this class. By 윤상수, 우건주, 김명규, 박우현, 고재윤, 안성준, 김지훈, 도경근, 이규리, 황세진, 응즈웨이, 이예진, 정운섭, 이우흠, 인유진, 신유정, 남궁보민, 강민지, 남상현.

[Final OK by SGLee] Open problem 4 solution by 정운섭, 황세진, 응즈웨이 (finding solutions to various equations using Sage) and question

Open Problem 4 by Ng Zhi Wei (cont. from [Final OK by SGLee] Open problem 4 solution by 정운섭, 황세진 (finding solutions to various equations using Sage) and question) [Comment by: 이예진]

ALL Must read This!! [Sample, Re-Finalized OK by SGLee] Open Problem 16 [Re-Finalized by Ng Zhi Wei(응즈웨이), 이규식, 남상현, 이예진, 고윤진, 고재윤, 도경은, 황세윤, 신유정, 박우현, 이예진, 남상현, 도경근, 이규식] (Solutions and Comments) Find SVD of a rectangular 4 by 5 matrix which is bigger than 3 by 3.

OK: the 2nd WebEx meeting (The student who wrote the original post should edit the title and body to include the names of the classmates who commented on it; that will make it easier for others to search.) 고재윤 / Electronic and Electrical Engineering, 우건주 / Advanced Materials Science and Engineering (Woo Gun Joo), Library and Information Science 2018 이규리 (Lee Kyuri), Systems Management Engineering 황세진 (Hwang Sejin), Electronic and Electrical Engineering 김지훈 (Kim Jihoon), Mechanical Engineering 박우현 (Park Woohyeon), 2019 응즈웨이, 2020 인유진, 2021 강민지, 2018 정운섭, 2020 윤상수, 2018 남상현, 2015 김명규, 이우흠-2018 (Yi), 안성준 2016, 2018 이규식, 2018 도경근

Open Problem 16 by 이우흠 [Re-Finalized OK by SGLee] Open Problem 16 [Re-Finalized by Ng Zhi Wei (응즈웨이), 이규식, 남상현, 이예진, 고윤진, 고재윤, 도경은, 황세윤, 신유정, 박우현, 도경근] (Solutions and Comments) Find the SVD of a rectangular 4 by 5 matrix, which is bigger than 3 by 3.

[Final OK by SGLee] Summary for the 3rd WebEx meeting (12/28), Dear 응즈웨이 (Ng Zhi Wei, Software, 3rd year) and 이우흠 / LI Yuxin

(B1) What do you especially remember from doing parts 1, 2, 3 (4, 5-Final)?

· I especially remember the SVD. When I first listened to the lecture, there were actually many parts I didn't understand, but I came to understand them through my classmates' explanations and discussions. In addition, the professor's extra explanation in the last WebEx meeting helped complete the concept.

· I also remember Principal Component Analysis and dimension reduction. It was very interesting to learn how PCA works and why it is used to analyze data, and it was good to learn that dimension reduction is the key feature of PCA.


(B2) What did you learn or feel while learning Basic Math4AI (Action Learning/PBL) with your classmates?

I received great help in understanding the lectures by exchanging opinions on problem solutions in discussions with my classmates. Problems that were difficult to understand alone became easier to understand, and it was also valuable to get my classmates' opinions on the solutions I wrote.

I learned that visualizing an example or a process helps intuitive understanding of a problem or concept. I saw that some of my colleagues were very good at visualizing what they were explaining.

It was a good experience to study mathematics with students from such varied backgrounds; I could learn from my colleagues' background knowledge in diverse fields.

I think sharing our answers in the QnA, discussing them, and asking questions is a good way to learn: we gain a better understanding of the concepts and also see how other people approach the questions.

​(B3) Write names of YOUR PBL Team members and Team Leader. And Now I understand the concepts below

SVD decomposition, LU decomposition, QR decomposition, Euclidean distance, norms, vector-plane projections, determinants, limits, differentiation, least-squares solutions, and matrix and vector actions.

## ◆ Self Evaluation (20 points)

Self-Evaluation 2

Subject: Basic Math4AI / Major: Software / Name/ID: Ng Zhi Wei 2019313851

Evaluation items (scale: strongly disagree to strongly agree):

1. I participated actively in both online and offline classes. ✓
2. I participated actively in Q&A activity. ✓
3. My questions and replies made on Q&A are relevant. ✓
4. Information provided by my activity was useful for other students in the class. ✓
5. I enthusiastically took into consideration other students' opinions or points of view. ✓
6. I contributed to the class by participating in Q&A discussions. ✓
7. I am enthusiastic about taking another class with the same students as in this class. ✓

[Opinion]

► Satisfaction according to the self-evaluation: I am satisfied that I attended many of the lectures and participated a lot in Q&A, and I am satisfied with my contribution to this class. I am very happy with the way this class is conducted; it is rather new to me, but I think it is effective learning.

► Sorrow according to the self-evaluation: There is still a lack of understanding, and although I am participating actively in the class, I want to participate and contribute even more. The QnA board is a little messy; rather than the QnA board, I think using a discussion board split into the different days would be better.

Self-Evaluation 3

Subject: Basic Math4AI / Colleague's name: 고재윤 / Evaluators: 윤상수, 강민지

Evaluation items (scale: strongly disagree to strongly agree):

1. Participated actively in both online and offline classes. o
2. Participated actively in Q&A activity. o
3. Questions and replies made on Q&A are relevant. o
4. Information provided by his activity was useful for other students in the class. o
5. Enthusiastically took into consideration other students' opinions or points of view. o
6. Contributed to the class by participating in Q&A discussions. o
7. Enthusiastic about taking another class with the same students. o

[Opinion]

► Satisfaction according to the evaluation: The solutions written by 고재윤 helped me a great deal in understanding how to solve the problems. He visualized his thoughts well, and it helped other colleagues.

► Sorrow according to the evaluation: I want to work harder so that I can write as many solutions as 고재윤 does. There was no sorrow.

Self-Evaluation 3

Subject: Basic Math4AI / Major: Software / Name/ID: Ng Zhi Wei 2019313851

Evaluation items (scale: strongly disagree to strongly agree):

1. I participated actively in both online and offline classes. ✓
2. I participated actively in Q&A activity. ✓
3. My questions and replies made on Q&A are relevant. ✓
4. Information provided by my activity was useful for other students in the class. ✓
5. I enthusiastically took into consideration other students' opinions or points of view. ✓
6. I contributed to the class by participating in Q&A discussions. ✓
7. I am enthusiastic about taking another class, such as Discrete Mathematics, with the same students. ✓

[Opinion]

► Satisfaction according to the self-evaluation: I am very happy with the way this class is conducted. It is rather new to me, but I think it is effective learning.

► Sorrow according to the self-evaluation: The QnA board is a little messy. Rather than the QnA board, I think using a discussion board split into the different days would be better.

Self-Evaluation 4

Self-introduction / Motivation

Hello, I am 강민지, engineering track, class of 2021.

I have not yet entered a major, and while exploring which major would suit me I became interested in artificial intelligence. Although I have only taken calculus so far, I enrolled in this course because I wanted to explore, on that foundation, the mathematical content underlying artificial intelligence.

I will participate diligently in the course activities and learn the material. Thank you.

#Open Problem 1 (solved by 강민지)

Using http://matrix.skku.ac.kr/KOFAC/, I modified the code from the week-1 textbook http://matrix.skku.ac.kr/intro-math4ai/w1/ and share the graphs I solved/drew. <Graphs of functions>

1. Use the plot function.

2. For a function f(x) with domain a < x < b, enter it in the form plot(f(x), (x, a, b)).

3. The output is blue by default; a color can be specified by adding color='' at the end of the function call.

4. If the function diverges, the output range can be limited by adding ymin and ymax at the end of the function call.

Q (강민지): Is it right that detect_poles='show' means the asymptotes are displayed?

A (Prof. 이상구): Yes. detect_poles='show' means that the asymptotes are displayed.

Comment : I wrote this post at the very start of the course, while still adapting to the unfamiliar course format, so at first I was not sure whether I had written it correctly. But the replies saying it had helped several classmates gave me confidence, and it became the occasion for leaving many more posts in the QnA.

#Open Problem 2 (solved by 이예진)

(I summarized part of the professor's lecture to help with understanding the class.)

The learning objective of day 2 is to understand tuples, vectors, matrices, and tensors, and, further, to understand the meaning and use of the various matrix operations.

Linear algebra, the branch of mathematics that deals with matrix operations, is one of the subjects that anyone who wants to learn Artificial Intelligence must take.

1. Tuple

A tuple is a way of expressing data as a fixed ordered set; a tuple with n entries is called an n-tuple.

Ex) kim's data for height, weight, age, and sex = (160, 80, 19, 1) (a 4-tuple)

2. Vector operations

Vector operations consist mainly of addition of vectors and multiplication by scalar values. The following are properties of vector operations.

◩ Open Problem 2

Sketch the graph for the function .

I plotted a function expressed as the product of a cosine function and a polynomial. This time, unlike HW1 where we specified the range of x, I tried plotting with a range of y values rather than x. However, the graph came out exceeding the specified ymin and ymax, so I adjusted the y range as follows. I could confirm that the graph oscillates between about -0.3 and 0.3 in x and between -3 and 3 in y, converging to 0 on the left and diverging on the right. The overall shape of the graph can be seen from the next figure. Also, from the graph drawn with the following code after adjusting the range of x, we can see that the function has very many roots near the origin.

Thank you.

Comment : Narrowing the range of x made the shape of the function much clearer, which was helpful. When solving examples about graphs of functions I used to simply run the code, but after reading this post I began varying the range of x to examine the shape of a function more efficiently.

#Open Problem 3 (solved by 고재윤)

◩ Open Problem 3

Make a composite function from the functions that you learned and draw a graph of it. [Hint: plot(sin(e^(1/3)^x), (x, -1.2, 10))]

I connected to http://matrix.skku.ac.kr/KOFAC/ and, to display sin(e^(1/3)^x) from -1.2 to 10, coded it as follows: plot( sin(e^(1/3)^x), (x, -1.2, 10))

As you can see, it is coded simply in one line. The general form is plot( function, (variable, starting interval, finishing interval)).

As a result, the graph below was produced. To understand the function, I also plotted the graph of e^(1/3)^x with plot(e^(1/3)^x) and examined the output. Looking at the x > 0 part of the graph of e^(1/3)^x, we can see that it decreases and converges toward y = 1.

From this, I could infer why the x > 0 part of sin(e^(1/3)^x) comes out in a converging form.

Conversely, looking at the x < 0 part, the y value increases exponentially.

Since sin is a periodic function, I could infer why the x < 0 part of sin(e^(1/3)^x) was drawn in an oscillating form.

Comment : I used to approach composite-function problems by simply plotting and analyzing the graph of the composite itself, but I learned that separating out and analyzing each component function of the composition also helps understanding a great deal.

#Open Problem 4 (solved by 안성준)

I chose the equation sin(x) + sin(x^2) = exp(x).

When I entered the code, the equation was too complicated for the intersection points to be found directly, so I entered the following to infer the roots from the shape of the function.

I set the interval to (-pi, pi), one period of the sine function. The result was an interesting graph. To pin the root down more precisely, I narrowed the range; after narrowing it, the root could be seen clearly, and it is close to -2.624.

Finally, here is the result of finding the root with find_root. That is all. Thank you!

Comment : I knew that when a root is hard to obtain with the solve-equation function it can be estimated from the graph, but solving the problem directly through code made that idea much clearer to me. I also came to think that the final goal of drawing the graph is not the graph itself, but to find the roots and then analyze the max/min values.
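The same root can be confirmed without Sage's find_root; here is a plain-Python bisection sketch (the bracket [-2.7, -2.5] comes from reading the graph, as described above):

```python
from math import sin, exp

# f(x) = 0 exactly where sin(x) + sin(x^2) = e^x.
def f(x):
    return sin(x) + sin(x * x) - exp(x)

# The plot suggested a root near -2.6; f changes sign on [-2.7, -2.5].
lo, hi = -2.7, -2.5
for _ in range(60):                  # bisection halves the bracket each pass
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid                     # sign change in [lo, mid]
    else:
        lo = mid                     # sign change in [mid, hi]
root = (lo + hi) / 2
```

The bisection converges to approximately -2.624, matching the value read off the graph.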

#Open Problem 5 (solved by 강민지)

<sol-equation>

Use the solve function, entering an equation in x in the form solve(equation, x).

The output is the root of the equation, printed in the form [x == ...].

When the values are large or too complicated to handle, it is more efficient to draw a graph and find an approximate value.

The graph can be drawn with the plot function, as in the previous lecture.

<vector-operations>

When data can be expressed as an ordered pair or a tuple, two- or three-dimensional data (or n-dimensional data) can be represented as a point on a coordinate plane or in a coordinate space (or in n-dimensional space).

To handle such points, we use vectors. With vectors, points can be visualized easily, and various computations can be done using the properties of vectors.

Use the vector function, entered in the form vector(coordinates). The result can be visualized and printed with the vector and print functions, or with the var and plot3d functions.


1) For f(x) = 5/1-x^2, plot the graph as a red line from 0.3 to 3^(1/2)+0.05, and shade the region between f(x) and the x-axis from 0.5 to 3^(1/2). Here, sqrt computes the square root.

2) Integrate f(x) over the shaded region.

3) Print the approximate value to 5 digits.

1) With the var function, set up three-dimensional coordinates with the variables 'x, y, z'.

2) Plot the graph of x + y == 5 over the region (x,-5,5), (y,-5,5), (z,-5,5) using the plot3d function.

Q (강민지): In the first problem, what does the simplify.full() function mean?

A (Prof. 이상구): The simplify.full() command expresses the complicated number above as a better-looking / easier-to-understand / easier-to-use / simpler number.

Comment : In the first problem, displaying the integral as a region on the graph visualized the computation and made it easier to understand. In the second problem, I could understand through the code how a vector operation designates a single point and how such points together produce a region. There was also some unfamiliar code along the way, and I resolved my questions through the Q&A with the professor.

#Open Problem 6 (solved by 강민지)

with a 6x6 matrix

Using a 6x6 matrix, I confirmed that the transpose exists while the inverse does not.

- The transposed matrix exists.

- The inverse matrix does not exist.

Comment : I built a 6x6 matrix and checked whether its transpose and inverse exist. Since I have only taken calculus so far, matrices were a new concept to me and understanding the transpose and the inverse was a little difficult, but through the lectures and internet searches I came to understand them and could solve the problem.
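A NumPy sketch of the same check (the 6x6 matrix here is a made-up example with a deliberately repeated row, so that the inverse cannot exist):

```python
import numpy as np

# A 6x6 matrix with two identical rows: its rows are linearly dependent,
# so the matrix is singular. The transpose always exists; the inverse does not.
M = np.random.default_rng(0).integers(0, 9, size=(6, 6)).astype(float)
M[5] = M[0]                             # duplicate a row -> det(M) = 0

transpose_ok = M.T.shape == (6, 6)      # the transpose always exists
rank = np.linalg.matrix_rank(M)         # < 6, so M has no inverse
determinant = np.linalg.det(M)          # 0 for a singular matrix
```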

#Open Problem 7 (solved by 고재윤)

◩ Open Problem 7

What kinds of data can you apply the similarity measures we just discussed to?

"What kinds of data can you apply the similarity measures we just discussed (distance similarity) to?"

Using distance similarity, we can determine color similarity: define 3-dimensional vectors as RGB triples, then calculate the Euclidean distance between them.

One way to compare two or more pieces of data is to compute the distance between them. The formula above is the Euclidean distance formula, which gives the distance between A(a1, a2) and B(b1, b2) in the coordinate plane.

Based on this formula, we can compute the relatedness, or similarity, between data. As a typical example, using an RGB table we can easily find the similarity between two or more colors of light.

The figure shows an RGB table. The colors we see on electronic devices are combinations of the three primary colors of light, red (R), green (G), and blue (B); with these three, the many colors in the figure can be expressed.

That is, each color datum is expressed as a 3-dimensional vector, and we can judge the similarity between data by the distance between them (distance similarity).

Suppose we have three data points: color1 = [1, 2, 3], color2 = [11, 22, 33], and color3 = [12, 23, 34]. The components of each vector are [R, G, B], and the magnitude of a component is its intensity.

To find whether color1 or color2 is more similar to color3 (that is, to compute the distance similarity), I coded as below. Here the bool() command prints True if the condition inside the parentheses is true and False if it is false.

Running the code produced the output below: color2 is the data more similar to color3, with a higher similarity than color1.

Comment : I could understand the process of computing similarity by applying vector operations to RGB color similarity. I had found it hard to answer how vector operations apply to real life, but this post helped me understand, if not perfectly, then to a good degree, what kinds of data a similarity measure can be applied to.
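A plain-Python sketch of the computation described above, using the same three color vectors:

```python
from math import sqrt

# RGB colors as 3-dimensional vectors; Euclidean distance as a similarity
# score (smaller distance = more similar color).
def distance(u, v):
    return sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

color1 = [1, 2, 3]
color2 = [11, 22, 33]
color3 = [12, 23, 34]

d13 = distance(color1, color3)    # sqrt(11^2 + 21^2 + 31^2), about 39.03
d23 = distance(color2, color3)    # sqrt(1 + 1 + 1), about 1.73
color2_closer = bool(d23 < d13)   # True: color2 is more similar to color3
```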

#Open Problem 8 (solved by 문태의)

[Open Problem 8]

What kind of data can you not use this distance-based similarity measure on? Are there any other measures that you can think of? (Hint: vectors/data in the same direction)

"Can you think of data to which the similarity measure we just discussed (distance similarity) cannot be applied?"

Sentences are data for which measuring similarity by the Euclidean distance does not work well.

For example, consider the following sentences:

1. 인공지능 수학 어려워 ("AI math is hard")
2. 인공지능 과학 어려워 ("AI science is hard")
3. 인공지능 수학 어려워 인공지능 수학 어려워 인공지능 수학 어려워 (the first sentence repeated three times)

At a glance, the most similar sentences are 1 and 3.

Let us try to measure the similarity of the sentences with the Euclidean distance. First, we count how many times each sentence contains each word:

Sentence 1: 인공지능 1, 수학 1, 과학 0, 어려워 1
Sentence 2: 인공지능 1, 수학 0, 과학 1, 어려워 1
Sentence 3: 인공지능 3, 수학 3, 과학 0, 어려워 3
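Using the word-count vectors from the three sentences above (vocabulary: 인공지능, 수학, 과학, 어려워), a plain-Python sketch shows why distance misleads here, while cosine similarity, the measure the hint points to, correctly pairs sentences 1 and 3:

```python
from math import sqrt

# Word-count vectors over the vocabulary (인공지능, 수학, 과학, 어려워):
s1 = [1, 1, 0, 1]     # sentence 1
s2 = [1, 0, 1, 1]     # sentence 2
s3 = [3, 3, 0, 3]     # sentence 3 (sentence 1 repeated three times)

def distance(u, v):
    return sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

# Euclidean distance wrongly says sentence 2 is closer to sentence 1 ...
distance_prefers_s2 = distance(s1, s2) < distance(s1, s3)
# ... while cosine similarity pairs sentence 1 with sentence 3, which
# points in exactly the same direction (cosine similarity 1.0).
cosine_prefers_s3 = cosine(s1, s3) > cosine(s1, s2)
```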