[CV] Linear Algebra and Probability

전서윤 · March 27, 2023

1. Linear Algebra in Computer Vision

1-1. Vector

  • Geometric object that has both a magnitude and direction
    x\in R^n,\quad x = \begin{pmatrix}x_1\\x_2\\\vdots\\x_n\end{pmatrix}=(x_1,x_2,\ldots,x_n)^\top
  • Magnitude of vector
    ||x|| = \sqrt{x_1^2+x_2^2+\ldots+x_n^2}
  • Dot Product
    x^{\top}y = \sum_{i=1}^{n} x_i y_i (a scalar)
  • Outer Product
    xy^{\top} (an n\times n matrix)
  • Norm
    p-norm : ||x||_p=(\sum_{i=1}^{n} |x_i|^p)^{\frac{1}{p}}
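
The vector operations above can be sketched with NumPy (the values here are illustrative):

```python
import numpy as np

x = np.array([3.0, 4.0])
y = np.array([1.0, 2.0])

# Magnitude (2-norm): sqrt(3^2 + 4^2) = 5
magnitude = np.linalg.norm(x)

# Dot product: sum of element-wise products, 3*1 + 4*2 = 11
dot = x @ y

# Outer product: a 2x2 matrix x y^T
outer = np.outer(x, y)

# General p-norm, e.g. p = 1 (sum of absolute values) = 7
one_norm = np.linalg.norm(x, ord=1)

print(magnitude, dot, one_norm)  # 5.0 11.0 7.0
```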

1-2. Basis

  • Linear Dependency
    Given a set of vectors X = \{x_1,x_2,\ldots,x_n\}
    x_i\in X is linearly dependent if it can be written as a linear combination of X-\{x_i\}
  • Basis
    A basis is a linearly independent set of vectors that spans the "whole space"
  • Standard Basis
    Orthogonal : e_i^{\top}e_j = 0\ (i\neq j), Normalized : e_i^{\top}e_i=1 → Orthonormal
  • Change of Basis
    B=\{b_1,b_2,\ldots,b_n\},\ b_i\in R^m : basis
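
Linear dependence can be checked numerically via matrix rank; a small sketch with made-up vectors:

```python
import numpy as np

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = v1 + 2 * v2            # deliberately a linear combination of v1 and v2

# Stack the vectors as columns; they are independent iff rank equals the count
X = np.column_stack([v1, v2, v3])
rank = np.linalg.matrix_rank(X)
print(rank)                  # 2 -> {v1, v2, v3} is linearly dependent

# The standard basis is orthonormal: e_i^T e_j = 0 for i != j, e_i^T e_i = 1
E = np.eye(3)
print(np.allclose(E.T @ E, np.eye(3)))  # True
```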

1-3. Matrix

  • Rectangular (2D) array of numbers
  • Rank of Matrix
    The number of linearly independent rows or columns in the matrix
  • Matrix Inversion
    To have an inverse, the matrix must be square and non-singular.
  • Determinant
    A scalar associated with a square matrix; the matrix is invertible iff its determinant is nonzero.
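
Rank, inversion, and determinant together, on small illustrative matrices:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

rank = np.linalg.matrix_rank(A)   # 2 -> full rank
det = np.linalg.det(A)            # 2*3 - 1*1 = 5 -> non-singular
A_inv = np.linalg.inv(A)          # exists because A is square and det != 0
print(np.allclose(A @ A_inv, np.eye(2)))  # True

# A singular matrix has no inverse: its rows are linearly dependent
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])
det_S = np.linalg.det(S)          # 0 (up to floating point)
print(det_S)
```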

1-4. Solving Linear Equations

Ax = b
A : m\times n,\ x : n\times 1,\ b : m\times 1
  • Finding the exact solution (m = n)
    x=A^{-1}b
  • Finding the least-squares solution (m > n)
    x=(A^{\top}A)^{-1}A^{\top}b
    If A has fewer than n linearly independent columns (rank(A) < n), A^{\top}A is singular and the least-squares solution is not unique.
  • Eigen Vector & Eigen Value
    Ax=\lambda x (square matrix A, eigenvalue \lambda, eigenvector x)
    Multiplying an eigenvector x by A preserves its direction and scales its magnitude by a factor of \lambda.
  • Eigen Decomposition
    A square, symmetric matrix A can be decomposed as A=VDV^{\top},
    where V is an orthogonal matrix whose columns are A's eigenvectors and D is a diagonal matrix of the associated eigenvalues.
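
Least squares and eigendecomposition, sketched with made-up data (`np.linalg.lstsq` solves the same problem as the normal equations, but more stably):

```python
import numpy as np

# Overdetermined system: m = 3 equations, n = 2 unknowns
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

# Normal-equations solution x = (A^T A)^{-1} A^T b
x = np.linalg.inv(A.T @ A) @ (A.T @ b)
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x, x_ref))        # True

# Eigendecomposition of a symmetric matrix: A = V D V^T
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, V = np.linalg.eigh(M)      # eigh is for symmetric matrices
D = np.diag(eigvals)
print(np.allclose(M, V @ D @ V.T))  # True

# Each eigenvector is only scaled by M: M v = lambda v
v = V[:, 0]
print(np.allclose(M @ v, eigvals[0] * v))  # True
```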

2. Probability in Computer Vision

2-1. Definitions

  • Sample Space (\Omega) : The set of all possible outcomes
  • Event Space (E) : A set whose elements are subsets of the sample space
  • Random Variable : A function that assigns a number to each point in the sample space

2-2. Conditional Probability

  • Conditional Probability of A given B
    P(A|B) = \frac{P(A,B)}{P(B)}
  • Independence
    A and B are independent if P(A,B) = P(A)P(B)
  • Conditional Independence
    A and B are conditionally independent given C if P(A,B|C) = P(A|C)P(B|C)
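
Conditional probability and independence can be checked on a toy joint distribution (the table values are made up):

```python
import numpy as np

# P[a, b] = P(A=a, B=b) over binary A and B
P = np.array([[0.1, 0.2],
              [0.3, 0.4]])

P_B1 = P[:, 1].sum()                 # P(B=1) = 0.2 + 0.4 = 0.6
P_A1_given_B1 = P[1, 1] / P_B1       # P(A=1 | B=1) = 0.4 / 0.6
print(P_A1_given_B1)

# Independence check: A and B are independent iff P(A,B) = P(A)P(B) everywhere
P_A = P.sum(axis=1)                  # marginal over B
P_B = P.sum(axis=0)                  # marginal over A
independent = np.allclose(P, np.outer(P_A, P_B))
print(independent)                   # False for this table
```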

2-3. Chain Rule

  • P(A_1,A_2,\ldots,A_n) = P(A_1)\prod_{i=2}^{n} P(A_i|A_1,\ldots,A_{i-1})

2-4. Bayes' Theorem

  • P(A|B) = \frac{P(B|A)P(A)}{P(B)}

  • Posterior probability : Given that event B occurred, the probability that it came from a particular model A, P(A|B)
  • Likelihood : The probability of observing B under model A, P(B|A)
  • Prior : The probability assigned to a model before any observation, P(A)
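
A worked Bayes' theorem example on the classic diagnostic-test setting (all numbers are made up for illustration):

```python
# Prior: P(disease) = 0.01
# Likelihood: P(positive | disease) = 0.99
# False-positive rate: P(positive | no disease) = 0.05
prior = 0.01
likelihood = 0.99
false_positive = 0.05

# Evidence P(positive) via the law of total probability
evidence = likelihood * prior + false_positive * (1 - prior)

# Posterior P(disease | positive) = P(positive | disease) P(disease) / P(positive)
posterior = likelihood * prior / evidence
print(round(posterior, 3))  # 0.167: a positive test is far from certain
```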

2-5. Gaussian Distribution
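
As a sketch, the univariate Gaussian density N(x;\mu,\sigma^2)=\frac{1}{\sqrt{2\pi\sigma^2}}\exp(-\frac{(x-\mu)^2}{2\sigma^2}) can be evaluated directly (the function name and parameters here are illustrative):

```python
import numpy as np

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    """Univariate Gaussian density N(x; mu, sigma^2)."""
    return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

# Peak of the standard normal is 1/sqrt(2 pi) ~ 0.3989
print(gaussian_pdf(0.0))

# The density integrates to 1 (crude numerical check on a wide grid)
xs = np.linspace(-8, 8, 10001)
dx = xs[1] - xs[0]
total = (gaussian_pdf(xs) * dx).sum()
print(total)  # ~1.0
```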

Summary

1. Linear Algebra

  • Vector : operations, norm
  • Basis, linear dependency
  • Matrix : rank, inversion, determinant
  • Linear Equations : Least-squares solution, Eigendecomposition

2. Probability

  • Conditional Probability, Independence
  • Chain Rule
  • Bayes' Theorem
  • Gaussian Distribution
