The central concern of calculus is analyzing how a function's output changes as its input changes. For a univariate function, the ordinary derivative was enough. For the multivariate functions we constantly meet in practice, let's start from the directional derivative and work up to the gradient and the Hessian matrix. (Throughout, a scalar-valued function is assumed.)
[Goal] How does the output f(x) of the objective function change when the multivariate input x changes? [Conclusion] The gradient tells us.
[Reference] Classification of functions
[X Univariate, Y ScalarValued] = INPUT:1, OUTPUT:1
[X Univariate, Y VectorValued] = INPUT:1, OUTPUT:M
[X Multivariate, Y ScalarValued] = INPUT:N, OUTPUT:1
[X Multivariate, Y VectorValued] = INPUT:N, OUTPUT:M
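To make the four cases concrete, here is a hypothetical Python sketch of the input/output shapes (the function bodies are arbitrary placeholders, not from the original text); this post focuses on the third case, multivariate input with scalar output.

```python
import numpy as np

# Hypothetical sketches of the four cases above (bodies are arbitrary placeholders)
def univariate_scalar(x: float) -> float:             # INPUT:1, OUTPUT:1
    return x ** 2

def univariate_vector(x: float) -> np.ndarray:         # INPUT:1, OUTPUT:M
    return np.array([np.sin(x), np.cos(x)])

def multivariate_scalar(x: np.ndarray) -> float:       # INPUT:N, OUTPUT:1 (this post's case)
    return float(np.sum(x ** 2))

def multivariate_vector(x: np.ndarray) -> np.ndarray:  # INPUT:N, OUTPUT:M
    return np.array([np.sum(x), np.prod(x)])
```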
📕 Differentiation of Multivariate Functions
🎢 Directional Derivative
1. Definition: the rate of change (a scalar) of f(x) that occurs when the vector x moves in the direction of the vector v
The change produced by moving in the direction v equals the sum of the changes produced by moving by v1 and v2. Here the components v1, v2 of v are the amounts moved, while the components ∂f(x)/∂x1, ∂f(x)/∂x2 of the gradient ∇f are the rates of change when moving in the x1 and x2 directions. Therefore, taking the dot product of the gradient with the unit vector v gives the rate of change when moving in the direction v.
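As a quick numerical check of this decomposition, the sketch below compares the rate of change along a unit vector v measured directly by finite differences against the dot product ∇f(x)·v. The toy function f(x1, x2) = x1² + 3·x1·x2 is my own choice for illustration, not from the original text.

```python
import numpy as np

# Toy function assumed for illustration (not from the original text)
def f(x):
    return x[0] ** 2 + 3 * x[0] * x[1]

def numerical_gradient(f, x, eps=1e-6):
    """Central-difference estimate of the gradient (vector of partial derivatives)."""
    grad = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        grad[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return grad

x = np.array([1.0, 2.0])
v = np.array([3.0, 4.0])
v = v / np.linalg.norm(v)              # unit direction vector

# Directional derivative by definition: rate of change of f along v
eps = 1e-6
dd_definition = (f(x + eps * v) - f(x - eps * v)) / (2 * eps)

# Directional derivative via the gradient: ∇f(x) · v
dd_gradient = numerical_gradient(f, x) @ v

print(dd_definition, dd_gradient)      # both ≈ 7.2 for this toy function
```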
∇ GRADIENT VECTOR(Scalar-Valued)
1. Definition: the vector whose components are the directional derivatives along the standard basis vectors (i.e., the partial derivatives)
$$\nabla f \equiv \frac{\partial f(x)}{\partial x} = \begin{bmatrix} \frac{\partial}{\partial x_1} & \frac{\partial}{\partial x_2} \end{bmatrix}^\top f = \begin{bmatrix} \frac{\partial f}{\partial x_1} & \frac{\partial f}{\partial x_2} \end{bmatrix}^\top$$
Gradient is Collection of Partial Derivatives
[Caution] The gradient is not a slope; it is a vector living in the same space (same dimension) as the input vector.
Therefore it is natural to add this vector to the input and move in that direction.
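As a minimal illustration of this point, the sketch below (with an arbitrary toy function, not from the original text) repeatedly adds a small multiple of the gradient to the input vector; because the gradient lives in the input space, this is a well-defined move and drives f uphill.

```python
import numpy as np

# Toy function assumed for illustration: f has its maximum at (1, -2)
def f(x):
    return -(x[0] - 1) ** 2 - (x[1] + 2) ** 2

def grad_f(x):
    # The gradient has the same dimension as the input vector x
    return np.array([-2 * (x[0] - 1), -2 * (x[1] + 2)])

x = np.array([3.0, 0.0])
for _ in range(50):
    x = x + 0.1 * grad_f(x)    # adding a small multiple of the gradient moves x uphill

print(x)                        # ≈ (1, -2), the maximizer of f
```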
2. Properties
① The vector such that the operation ∇f(a,b)·v yields the directional derivative ∇_v f(a,b)
② ∇f Direction of Steepest Ascent & ||∇f||₂ Rate of Change
When v points in the same direction as ∇f(a,b), ∇_v f(a,b) = ∇f(a,b)·v is maximized;
the normalized gradient ∇f(a,b)/||∇f(a,b)|| is the unit vector with the maximum rate of change, and
the magnitude of the gradient ||∇f(a,b)||₂ is that maximum rate of change!
[Interpretation] Let's interpret this dot product as the projection of v onto ∇f(a,b).
Geometric interpretation
Since only the direction of ∇f(a,b) matters here, we may assume without loss of generality that it is a unit vector. Then ∇f(a,b)·v is the (signed) length of the projection of v onto ∇f(a,b), and this length is maximized when v points in the same direction as ∇f(a,b).
The reason for considering a unit vector is that the gradient is a vector that points in the direction of maximum increase of the function, and has magnitude equal to the maximum rate of change of the function. So, if we take the dot product of the gradient and a unit vector, we obtain the rate of change of the function in the direction of the unit vector.
Moreover, by considering a unit vector, the magnitude of the gradient does not affect the result, and we can compare the rate of change of the function in different directions easily, without having to normalize the gradient.
③ Perpendicular to Contour lines
Consider several vectors at a point alongside the gradient vector. As stated above, the gradient vector is the one that reaches the next contour line fastest.
If the gradient is the direction of steepest ascent, it must be perpendicular to the contour line! (A numerical check of ② and ③ follows below.)
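Here is a minimal numerical check of properties ② and ③, assuming the toy function f(x, y) = x² + y², whose contour lines are circles (the function choice is mine, not the original author's): scanning unit vectors shows the directional derivative peaks in the direction of the normalized gradient with peak value ||∇f||, and the gradient is orthogonal to the contour tangent.

```python
import numpy as np

# Toy function assumed for illustration: f(x, y) = x^2 + y^2, contour lines are circles
def grad_f(p):
    return np.array([2 * p[0], 2 * p[1]])

p = np.array([1.0, 2.0])
g = grad_f(p)

# ② Scan unit vectors: the directional derivative g·v peaks at v = g / ||g||
angles = np.linspace(0, 2 * np.pi, 3600)
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # unit vectors v
dd = dirs @ g                                               # directional derivatives ∇f·v
print(dirs[np.argmax(dd)], g / np.linalg.norm(g))           # nearly identical directions
print(dd.max(), np.linalg.norm(g))                          # max rate of change ≈ ||∇f||₂

# ③ The contour through p is the circle x^2 + y^2 = 5; its tangent at p is (-y, x)
tangent = np.array([-p[1], p[0]])
print(np.dot(g, tangent))                                   # ≈ 0, so ∇f ⟂ contour line
```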
H HESSIAN MATRIX (Scalar-Valued)
1. Definition: the matrix of second-order partial derivatives of f, i.e., H_ij = ∂²f/∂x_i∂x_j
Algebraically, it represents the rate of change of the gradient of the function, which provides information about the curvature of the function.
Geometrically, the Hessian matrix has a significant meaning in terms of the shape of a surface defined by the function.
The eigenvalues of the Hessian matrix indicate the curvature of the surface along different directions in the domain. If all eigenvalues are positive, the surface is locally convex, meaning it curves upward like a bowl. If all eigenvalues are negative, the surface is locally concave, meaning it curves downward like an inverted bowl. If the eigenvalues have mixed signs, the surface is saddle-shaped, and if some eigenvalues are zero, the surface is locally flat along the corresponding directions.
In two dimensions, the sign of the determinant of the Hessian is also informative: a positive determinant means the two eigenvalues share a sign (locally convex if ∂²f/∂x² > 0, locally concave if ∂²f/∂x² < 0), while a negative determinant indicates a saddle point.
In summary, the Hessian matrix provides information about the local behavior of a function and its shape. This information is useful in various applications, such as optimization, where the Hessian matrix is used to determine the type of stationary points (minimum, maximum, or saddle point) and to compute the search direction for optimization algorithms.
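As a hedged sketch of that use in optimization (the toy function and the finite-difference Hessian routine are my own assumptions for illustration), the code below estimates the Hessian of a function at a stationary point and reads off the eigenvalue signs to classify it.

```python
import numpy as np

# Toy function assumed for illustration: f(x, y) = x^2 - y^2 has a stationary point at (0, 0)
def f(p):
    x, y = p
    return x ** 2 - y ** 2

def numerical_hessian(f, p, eps=1e-4):
    """Central-difference estimate of the matrix of second partial derivatives."""
    n = len(p)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.zeros(n), np.zeros(n)
            ei[i], ej[j] = eps, eps
            H[i, j] = (f(p + ei + ej) - f(p + ei - ej)
                       - f(p - ei + ej) + f(p - ei - ej)) / (4 * eps ** 2)
    return H

H = numerical_hessian(f, np.array([0.0, 0.0]))
print(H)                        # ≈ [[2, 0], [0, -2]]
print(np.linalg.eigvalsh(H))    # mixed signs -> saddle point (all positive would mean a minimum)
```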
2. Properties
① $[dx_1, \cdots, dx_n]\,H\,[dx_1, \cdots, dx_n]^\top = d^2 f$ : the quadratic form with the Hessian gives the second-order change of f (a numerical check follows after this list).
② $$\begin{bmatrix} \frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x \partial y} \\ \frac{\partial^2 f}{\partial y \partial x} & \frac{\partial^2 f}{\partial y^2} \end{bmatrix} \begin{bmatrix} \frac{\partial g}{\partial x} \\ \frac{\partial g}{\partial y} \end{bmatrix} = \begin{bmatrix} \frac{\partial h}{\partial x} \\ \frac{\partial h}{\partial y} \end{bmatrix}$$
When we apply the Hessian matrix to another function, what we are actually doing is transforming the second function in a way that is informed by the behavior of the first function. In other words, the Hessian matrix of the first function provides information about the local curvature of the function, and this information is used to transform the second function in a way that reflects that curvature.
Here is a simple example to illustrate the idea. Let's say that we have two functions, f(x, y) and g(x, y). We can think of the Hessian matrix of f(x, y) as a linear transformation, H, that operates on g(x, y) to produce a new function h(x, y). The operation can be written as h(x, y) = H(g(x, y)).
In matrix form, we can write the operation as $H_f \,\nabla g = \nabla h$, which is exactly the equation shown in ② above.
In this example, the Hessian matrix of f(x, y) acts on the gradient of g(x, y) to produce the gradient of the new function h(x, y). The new function h(x, y) will have properties that reflect the curvature of f(x, y), and these properties can be useful in various optimization and machine learning algorithms.
I should look for more examples of applying the Hessian like this.
~~Here's another example of using the Hessian matrix for a linear transformation:
Suppose we have a function h(x, y) = x^2 + 2y^2, and we want to find the second-order behavior of the function near the point (1, 2). The gradient of the function is given by:
$$\nabla h(x,y)=\begin{bmatrix} 2x \\ 4y \end{bmatrix}$$
Evaluating the Hessian at the point (1, 2), we get:
$$H_h(1,2)=\begin{bmatrix} 2 & 0 \\ 0 & 4 \end{bmatrix}$$
We can use this Hessian to linearly transform another function g(x, y) = x + y. The gradient of g is given by:
$$\nabla g(x,y)=\begin{bmatrix} \frac{\partial g}{\partial x} \\ \frac{\partial g}{\partial y} \end{bmatrix}=\begin{bmatrix} 1 \\ 1 \end{bmatrix}$$
And the Hessian of g is given by:
$$H_g(x,y)=H_h(x,y)\cdot\nabla g(x,y)=\begin{bmatrix} 2 & 0 \\ 0 & 4 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix}=\begin{bmatrix} 2 \\ 4 \end{bmatrix}$$
This means that the second-order behavior of the function g near the point (1, 2) is determined by the linear transformation described by the Hessian of h. The resulting Hessian of g provides information about the direction of maximum increase or decrease, and the steepness of the function g.
Why use the Hessian of h, which has nothing to do with g, to find the Hessian of g?
In general, the Hessian matrix of one function may be used to perform a linear transformation on another function because the Hessian matrix encodes information about the second-order behavior of the first function, such as the curvature. This information can then be used to derive information about the second-order behavior of the transformed function. For example, if we have a function g which is a composition of another function f and a linear transformation, then the Hessian of g can be related to the Hessian of f and the matrix representing the linear transformation. In this case, using the Hessian of f can provide valuable information about the second-order behavior of g.
Note that the Hessian of h, which has nothing to do with g, is only used in this context if h is used to model some aspect of the function g.~~
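Returning to property ①, here is a minimal numerical sketch (the cubic toy function is my own choice for illustration) checking that the quadratic form with the Hessian captures the second-order change of f: after removing the first-order gradient term, the remaining change of f for a small step dx is approximately ½·dxᵀ H dx.

```python
import numpy as np

# Toy function assumed for illustration: f(x, y) = x^3 + x*y + y^2
def f(p):
    x, y = p
    return x ** 3 + x * y + y ** 2

def grad_f(p):
    x, y = p
    return np.array([3 * x ** 2 + y, x + 2 * y])

def hess_f(p):
    x, y = p
    return np.array([[6 * x, 1.0],
                     [1.0,   2.0]])

p = np.array([1.0, 2.0])
dx = np.array([1e-3, -2e-3])

# Change of f left over after removing the first-order (gradient) term
remainder = f(p + dx) - f(p) - grad_f(p) @ dx

# Property ①: dxᵀ H dx is the second differential d²f, so the remainder above ≈ ½·dxᵀ H dx
quadratic_form = dx @ hess_f(p) @ dx
print(remainder, 0.5 * quadratic_form)   # the two values agree to leading order
```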
Proof 1 / Proof 2
Here's a proof of ∇vf(x)=∇f(x)⋅v, where ∇f(x) is the gradient of the function f at the point x, and v is a vector:
Let f(x+tv) be a scalar function that describes how the value of f changes as x moves in the direction of the vector v. The derivative of this function with respect to the scalar t, evaluated at t = 0, is the directional derivative:
$$\nabla_v f(x) = \frac{d}{dt} f(x+tv)\Big|_{t=0} = \frac{\partial f(x+tv)}{\partial t}\Big|_{t=0}$$
Using the chain rule of differentiation, we can expand the partial derivative as: