[DetnEst] 11. Statistical Decision Theory

KBC · December 10, 2024

Detection and Estimation series (18/23)

Difference between Detection and Estimation

  • Estimation : continuous set of hypotheses (the estimate is almost always wrong, so we minimize the error instead)
  • Detection : discrete set of hypotheses (the decision is either right or wrong)
  • Classical : hypotheses/parameters are fixed and non-random
  • Bayesian : hypotheses/parameters are treated as random variables with assumed priors

Overview

  • Theory of hypothesis testing
  • Simple hypothesis testing problem with completely known PDF
  • Complicated hypothesis testing problem with unknown PDF
    • Primary approaches :
      • Classical approach based on the Neyman-Pearson theorem
      • Bayesian approach based on minimization of the Bayes risk

Mathematical Detection Problem

  • Binary Hypothesis Test
    • Noise-only hypothesis vs. signal-present hypothesis (deterministic signals)
      \mathcal{H}_0: x[n] = w[n], \quad \text{null hypothesis} \\ \mathcal{H}_1: x[n] = s[n] + w[n], \quad \text{alternative hypothesis}
    • Example of the DC level in noise (A=1); the two likelihoods are evaluated numerically in the sketch after this list
      • s[n] = A = 1
      • w[n] : zero-mean Gaussian noise \sim \mathcal{N}(0, \sigma^2)
      • p(x[0];\mathcal{H}_0) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[-\frac{1}{2\sigma^2} x^2[0]\right]
      • p(x[0];\mathcal{H}_1) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[-\frac{1}{2\sigma^2} (x[0]-1)^2\right]
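To make this concrete, the two likelihoods can be evaluated numerically, as in the following minimal sketch (assuming NumPy/SciPy; the observed sample x0 is an arbitrary illustrative value):

```python
# Minimal sketch: evaluate the two hypothesis likelihoods for the
# DC-level example (A = 1, sigma^2 = 1). x0 is an arbitrary sample.
from scipy.stats import norm

A, sigma = 1.0, 1.0

def p_h0(x):
    # p(x[0]; H0): noise only, x[0] ~ N(0, sigma^2)
    return norm.pdf(x, loc=0.0, scale=sigma)

def p_h1(x):
    # p(x[0]; H1): signal present, x[0] ~ N(A, sigma^2)
    return norm.pdf(x, loc=A, scale=sigma)

x0 = 0.7
print(p_h0(x0), p_h1(x0))  # the likelihoods a detector must weigh
```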

Neyman-Pearson Theorem

  • Reasonable approach
    \mathcal{H}_1: x[0] > 1/2 \\ \mathcal{H}_0: \text{otherwise}
  • Type 1 error : decide \mathcal{H}_1 when \mathcal{H}_0 is true (false alarm)
    \rightarrow probability of false alarm, P_{FA} = P(\mathcal{H}_1;\mathcal{H}_0)
  • Type 2 error : decide \mathcal{H}_0 when \mathcal{H}_1 is true (miss)
    \rightarrow probability of miss, P_M = P(\mathcal{H}_0;\mathcal{H}_1)
    \rightarrow probability of detection,
    P_D = P(\mathcal{H}_1;\mathcal{H}_1) = 1 - P(\mathcal{H}_0;\mathcal{H}_1) = 1 - P_M
    It is not possible to reduce both error probabilities simultaneously: lowering the threshold reduces P_M but raises P_{FA}, and raising it does the opposite (a numeric check follows this list)
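As a worked check, this short sketch computes both error probabilities for the ad hoc rule x[0] > 1/2 in the A = 1, σ² = 1 example (norm.sf is SciPy's Q function):

```python
# Error probabilities of the ad hoc threshold x[0] > 1/2
# (A = 1, sigma^2 = 1 example); norm.sf(x) is the Gaussian tail Q(x).
from scipy.stats import norm

thr = 0.5
P_FA = norm.sf(thr)            # Type 1 error, Q(1/2) ~ 0.309
P_M = norm.cdf(thr, loc=1.0)   # Type 2 error, Pr(x[0] <= 1/2; H1) ~ 0.309
print(P_FA, P_M, 1.0 - P_M)    # P_D = 1 - P_M
```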

Neyman-Pearson Test

  • Maximize P_D = P(\mathcal{H}_1;\mathcal{H}_1) subject to the constraint P_{FA} = P(\mathcal{H}_1;\mathcal{H}_0) = \alpha
  • Example of the DC level in noise
    A = 1, \, \sigma^2 = 1 \quad \text{(standard normal noise)} \\[0.2cm] P_{FA} = P(\mathcal{H}_1; \mathcal{H}_0) = \Pr(x[0] > \gamma; \mathcal{H}_0) = \int_{\gamma}^\infty \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{1}{2}t^2\right) dt = Q(\gamma) \\[0.2cm] P_{FA} = 10^{-3} \rightarrow \gamma = Q^{-1}(10^{-3}) \approx 3 \\[0.2cm] P_D = P(\mathcal{H}_1; \mathcal{H}_1) = \Pr(x[0] > \gamma; \mathcal{H}_1) = \int_{\gamma}^\infty \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{1}{2}(t-1)^2\right) dt = Q(\gamma - 1) = Q(2) \approx 0.023 \quad \text{(probability of detection)}
  • Detector : decide \mathcal{H}_0 or \mathcal{H}_1 given \mathbf{x} = \{x[0], \cdots, x[N-1]\}
  • Decision regions
    R_1 = \{\mathbf{x} : \text{decide } \mathcal{H}_1 \text{ or reject } \mathcal{H}_0\} \\[0.2cm] R_0 = \{\mathbf{x} : \text{decide } \mathcal{H}_0 \text{ or reject } \mathcal{H}_1\}
    • R_0 \cup R_1 = R^N (the data space)
    • P_{FA} = \int_{R_1} p(\mathbf{x};\mathcal{H}_0)\, d\mathbf{x} = \alpha : significance level or size of the test
    • P_D = \int_{R_1} p(\mathbf{x};\mathcal{H}_1)\, d\mathbf{x} : power of the test
  • Neyman-Pearson Theorem
    To maximize P_D for a given P_{FA} = \alpha, decide \mathcal{H}_1 if
    L(\mathbf{x}) = \frac{p(\mathbf{x};\mathcal{H}_1)}{p(\mathbf{x};\mathcal{H}_0)} > \gamma
    where the threshold \gamma is found from
    P_{FA} = \int_{\{\mathbf{x} : L(\mathbf{x}) > \gamma\}} p(\mathbf{x};\mathcal{H}_0)\, d\mathbf{x} = \alpha
    • L(\mathbf{x}) is called the likelihood ratio, and the resulting test the likelihood ratio test (LRT); a sketch of this detector follows the list
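A sketch of the resulting single-sample NP detector for the DC-level example (A = 1, σ² = 1); note that the exact threshold is Q⁻¹(10⁻³) ≈ 3.09, which the lecture rounds to 3:

```python
# Sketch of the NP detector for the single-sample DC-level example.
# norm.isf is Q^{-1}, norm.sf is Q (assuming SciPy).
from scipy.stats import norm

alpha = 1e-3
gamma_p = norm.isf(alpha)      # ~3.09; the lecture rounds this to 3
P_D = norm.sf(gamma_p - 1.0)   # Q(gamma' - A) ~ 0.018 (Q(2) ~ 0.023 if gamma' = 3)

def np_detect(x0):
    # the LRT reduces to comparing the sample itself with gamma'
    return x0 > gamma_p        # True -> decide H1

print(gamma_p, P_D, np_detect(3.5))
```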

Neyman-Pearson Theorem - Proof

  • Using Lagrange multipliers,
    F = P_D + \lambda(P_{FA} - \alpha) = \int_{R_1} p(\mathbf{x};\mathcal{H}_1)\, d\mathbf{x} + \lambda\left(\int_{R_1} p(\mathbf{x};\mathcal{H}_0)\, d\mathbf{x} - \alpha\right) \\[0.2cm] = \int_{R_1} \left(p(\mathbf{x};\mathcal{H}_1) + \lambda p(\mathbf{x};\mathcal{H}_0)\right) d\mathbf{x} - \lambda\alpha
  • To maximize F, we should include \mathbf{x} in R_1 if the integrand is positive, i.e., if
    p(\mathbf{x};\mathcal{H}_1) + \lambda p(\mathbf{x};\mathcal{H}_0) > 0
    • decide \mathcal{H}_1 if \frac{p(\mathbf{x};\mathcal{H}_1)}{p(\mathbf{x};\mathcal{H}_0)} > -\lambda (\lambda must be negative for the threshold to be positive)
    • decide \mathcal{H}_1 if \frac{p(\mathbf{x};\mathcal{H}_1)}{p(\mathbf{x};\mathcal{H}_0)} > \gamma (\gamma = -\lambda is found from the constraint P_{FA} = \alpha)

  • DC level in noise (A=1) with P_{FA} = 10^{-3}; a Monte Carlo check follows this block
    L(\mathbf{x}) = \frac{p(x;\mathcal{H}_1)}{p(x;\mathcal{H}_0)} = \frac{\exp\left[-\frac{1}{2}(x[0]-1)^2\right]}{\exp\left[-\frac{1}{2}x^2[0]\right]} = \exp\left(x[0] - \frac{1}{2}\right) > \gamma \\[0.2cm] P_{FA} = \Pr\left\{\exp\left(x[0] - \frac{1}{2}\right) > \gamma; \mathcal{H}_0\right\} = 10^{-3}
  • Let \gamma' = \ln\gamma + 1/2, so the test becomes x[0] > \gamma'; then
    P_{FA} = \int_{\gamma'}^\infty \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{1}{2}t^2\right) dt = 10^{-3} \rightarrow \gamma' \approx 3 \\[0.2cm] P_D = \Pr\{x[0] > 3; \mathcal{H}_1\} = \int_{3}^\infty \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{1}{2}(t-1)^2\right) dt = Q(2) \approx 0.023
  • If P_{FA} = 0.5,
    P_{FA} = \int_{\gamma'}^\infty \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{1}{2}t^2\right) dt = 0.5 \implies \gamma' = 0 \\[0.2cm] P_D = \int_{0}^\infty \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{1}{2}(t-1)^2\right) dt = Q(-1) = 1 - Q(1) \approx 0.84
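These closed-form values can be sanity-checked by Monte Carlo, as in this sketch (assuming NumPy; the seed and trial count are arbitrary):

```python
# Monte Carlo check of P_FA and P_D for the rule x[0] > 3
# (A = 1, sigma^2 = 1); trial count is an arbitrary choice.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 2_000_000
gamma_p = 3.0

x_h0 = rng.normal(0.0, 1.0, n_trials)  # samples under H0
x_h1 = rng.normal(1.0, 1.0, n_trials)  # samples under H1
print((x_h0 > gamma_p).mean())  # ~Q(3) = 1.3e-3, near the 1e-3 design point
print((x_h1 > gamma_p).mean())  # ~Q(2) = 0.023
```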

  • Example of the DC level in WGN; a simulation sketch follows this block
    \mathcal{H}_0: x[n] = w[n], \quad n = 0, 1, \dots, N-1 \\[0.2cm] \mathcal{H}_1: x[n] = s[n] + w[n], \quad n = 0, 1, \dots, N-1 \\[0.2cm] w[n] \sim \mathcal{N}(0, \sigma^2), \quad s[n] = A \\[0.2cm] \mathcal{H}_0: \boldsymbol{\mu} = \mathbf{0} \\[0.2cm] \mathcal{H}_1: \boldsymbol{\mu} = A\mathbf{1}
  • Decide \mathcal{H}_1 if
    \frac{\exp\left[-\frac{1}{2\sigma^2} \sum_{n=0}^{N-1} (x[n] - A)^2\right]}{\exp\left[-\frac{1}{2\sigma^2} \sum_{n=0}^{N-1} x^2[n]\right]} > \gamma \\[0.2cm] \rightarrow -\frac{1}{2\sigma^2} \left(-2A \sum_{n=0}^{N-1} x[n] + N A^2\right) > \ln \gamma \\[0.2cm] \rightarrow \frac{A}{\sigma^2} \sum_{n=0}^{N-1} x[n] > \ln \gamma + \frac{N A^2}{2\sigma^2} \\[0.2cm] \rightarrow \frac{1}{N} \sum_{n=0}^{N-1} x[n] > \frac{\sigma^2}{N A} \ln \gamma + \frac{A}{2} = \gamma' \quad (A > 0) \\[0.2cm] T(\mathbf{x}) = \frac{1}{N} \sum_{n=0}^{N-1} x[n], \quad T(\mathbf{x}) \sim \begin{cases} \mathcal{N}\left(0, \frac{\sigma^2}{N}\right), & \text{under } \mathcal{H}_0 \\[0.2cm] \mathcal{N}\left(A, \frac{\sigma^2}{N}\right), & \text{under } \mathcal{H}_1 \end{cases} \\[0.2cm] P_{FA} = \Pr(T(\mathbf{x}) > \gamma'; \mathcal{H}_0) = Q\left(\frac{\gamma'}{\sqrt{\sigma^2/N}}\right) \rightarrow \gamma' = \sqrt{\frac{\sigma^2}{N}}\, Q^{-1}(P_{FA}) \\[0.2cm] P_D = \Pr(T(\mathbf{x}) > \gamma'; \mathcal{H}_1) = Q\left(\frac{\gamma' - A}{\sqrt{\sigma^2/N}}\right) \\[0.2cm] \rightarrow P_D = Q\left(Q^{-1}(P_{FA}) - \sqrt{\frac{N}{\sigma^2}}\, A\right) = Q\left(Q^{-1}(P_{FA}) - \sqrt{\frac{N A^2}{\sigma^2}}\right)
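A sketch of this sample-mean detector that compares the analytic P_D with an empirical estimate; the values of A, σ², N, and P_FA are illustrative, not from the lecture:

```python
# Sample-mean detector for a DC level in WGN: threshold, analytic P_D,
# and an empirical check under H1. Parameter values are illustrative.
import numpy as np
from scipy.stats import norm

A, sigma2, N, P_FA = 0.5, 1.0, 50, 1e-2

gamma_p = np.sqrt(sigma2 / N) * norm.isf(P_FA)               # threshold on the mean
P_D = norm.sf(norm.isf(P_FA) - np.sqrt(N * A**2 / sigma2))   # analytic P_D

rng = np.random.default_rng(1)
x = A + rng.normal(0.0, np.sqrt(sigma2), size=(100_000, N))  # H1 data
T = x.mean(axis=1)                                           # test statistic
print(P_D, (T > gamma_p).mean())                             # ~0.89 for both
```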

  • The deflection coefficient d is defined for a test statistic T as
    d^2 = \frac{\left(E(T;\mathcal{H}_1) - E(T;\mathcal{H}_0)\right)^2}{\text{var}(T;\mathcal{H}_0)}
    • Useful for characterizing the performance of a detector
    • Usually, the larger the deflection coefficient, the easier it is to distinguish the two hypotheses, and thus the better the detection performance
  • For the mean-shifted Gaussian problem (see the sketch after this list),
    T \sim \begin{cases} \mathcal{N}(\mu_0, \sigma^2), & \text{under } \mathcal{H}_0 \\[0.2cm] \mathcal{N}(\mu_1, \sigma^2), & \text{under } \mathcal{H}_1 \end{cases} \quad \rightarrow \quad d^2 = \frac{(\mu_1 - \mu_0)^2}{\sigma^2} \\[0.2cm] P_D = Q\left(Q^{-1}(P_{FA}) - \sqrt{d^2}\right)
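A small sketch computing d² and the resulting P_D at several false-alarm levels for the sample-mean statistic (illustrative numbers); the monotone link between d² and P_D is what makes d a useful summary:

```python
# Deflection coefficient of the sample-mean statistic and the resulting
# P_D at a few false-alarm levels. Parameter values are illustrative.
import numpy as np
from scipy.stats import norm

A, sigma2, N = 0.5, 1.0, 50
d2 = N * A**2 / sigma2          # (mu1 - mu0)^2 / var(T; H0) for T = sample mean

for P_FA in (1e-1, 1e-2, 1e-3):
    P_D = norm.sf(norm.isf(P_FA) - np.sqrt(d2))
    print(P_FA, P_D)            # larger d^2 -> larger P_D at any fixed P_FA
```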

  • Example of a change in variance

    \mathcal{H}_0 : x[n] \sim \mathcal{N}(0, \sigma_0^2), \quad n = 0, 1, \dots, N-1 \\[0.2cm] \mathcal{H}_1 : x[n] \sim \mathcal{N}(0, \sigma_1^2), \quad n = 0, 1, \dots, N-1 \\[0.2cm] \sigma_1^2 > \sigma_0^2

  • NP Test : Decide \mathcal{H}_1 if

    \frac{\frac{1}{(2\pi \sigma_1^2)^{N/2}} \exp\left[-\frac{1}{2\sigma_1^2} \sum_{n=0}^{N-1} x^2[n]\right]}{\frac{1}{(2\pi \sigma_0^2)^{N/2}} \exp\left[-\frac{1}{2\sigma_0^2} \sum_{n=0}^{N-1} x^2[n]\right]} > \gamma \\[0.2cm] \rightarrow -\frac{1}{2} \left(\frac{1}{\sigma_1^2} - \frac{1}{\sigma_0^2}\right) \sum_{n=0}^{N-1} x^2[n] > \ln \gamma + \frac{N}{2} \ln \frac{\sigma_1^2}{\sigma_0^2} \\[0.2cm] \rightarrow \frac{1}{N} \sum_{n=0}^{N-1} x^2[n] > \left(\frac{2}{N} \ln \gamma + \ln \frac{\sigma_1^2}{\sigma_0^2}\right) \bigg/ \left(\frac{1}{\sigma_0^2} - \frac{1}{\sigma_1^2}\right) = \gamma'
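A sketch of the resulting energy detector; σ₀², σ₁², N, and the threshold are illustrative choices (a calibrated γ' would come from the statistics of the average energy under H0, which is a scaled χ²_N variable):

```python
# Energy detector for the change-in-variance problem: decide H1 when
# the average energy exceeds gamma'. All values are illustrative.
import numpy as np

sigma0, sigma1, N = 1.0, 2.0, 100
gamma_p = 1.5 * sigma0**2       # ad hoc threshold between sigma0^2 and sigma1^2

def energy_detect(x):
    return np.mean(x**2) > gamma_p   # True -> decide H1

rng = np.random.default_rng(2)
print(energy_detect(rng.normal(0.0, sigma0, N)))  # H0 data: usually False
print(energy_detect(rng.normal(0.0, sigma1, N)))  # H1 data: usually True
```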

  • Test statistic and sufficient statistic
    • Assume that we observe \mathbf{x} = [x[0] \; \cdots \; x[N-1]]^T with a PDF p(\mathbf{x};\theta) parameterized by \theta
      \mathcal{H}_0: \theta = \theta_0 \\[0.2cm] \mathcal{H}_1: \theta = \theta_1
    • By the Neyman-Fisher factorization theorem,
      p(\mathbf{x};\theta) = g(T(\mathbf{x}),\theta)\, h(\mathbf{x}), \quad \text{where } T(\mathbf{x}) \text{ is a sufficient statistic for } \theta
    • The NP test becomes
      \frac{p(\mathbf{x};\theta_1)}{p(\mathbf{x};\theta_0)} > \gamma \rightarrow \frac{g(T(\mathbf{x}),\theta_1)}{g(T(\mathbf{x}),\theta_0)} > \gamma
      However, a single sufficient statistic does not always exist

Receiver Operating Characteristics (ROC)

  • The ROC curve traces P_D versus P_{FA} as the threshold \gamma is swept over all values; each threshold gives one operating point, and a better detector pushes the curve toward the upper-left corner (P_D = 1 at P_{FA} = 0)
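A sketch tracing the ROC for the single-sample DC-level example by sweeping the threshold (A = 1, σ² = 1 assumed; plotting is omitted):

```python
# ROC of the single-sample DC-level detector: sweep the threshold and
# record (P_FA, P_D) operating points. A = 1, sigma^2 = 1 assumed.
import numpy as np
from scipy.stats import norm

A = 1.0
thresholds = np.linspace(-4.0, 6.0, 200)
P_FA = norm.sf(thresholds)        # Q(gamma)
P_D = norm.sf(thresholds - A)     # Q(gamma - A)
# Plotting P_D against P_FA gives the ROC; it bows further toward the
# upper-left corner as A/sigma (and hence the deflection) grows.
print(P_FA[100], P_D[100])        # one operating point
```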

Bayes Risk

  • P(\mathcal{H}_i),\; i = 0, 1 : prior probability of each hypothesis
  • C_{ij} : cost of deciding \mathcal{H}_i when \mathcal{H}_j is true
  • Bayes risk
    R = E(C) = \sum_{i=0}^1 \sum_{j=0}^1 C_{ij} P(\mathcal{H}_i|\mathcal{H}_j) P(\mathcal{H}_j)
  • Usually C_{00} = C_{11} = 0
  • If C_{ij} = 1 - \delta_{ij}, then R = P_e (the minimum probability of error criterion)
    P_e = P(\mathcal{H}_0|\mathcal{H}_1) P(\mathcal{H}_1) + P(\mathcal{H}_1|\mathcal{H}_0) P(\mathcal{H}_0)

    i.e., P_e is the sum of the miss and false-alarm probabilities, each weighted by its prior

  • Bayes risk detector
    R = C_{00} P(\mathcal{H}_0) \int_{R_0} p(\mathbf{x}|\mathcal{H}_0)\, d\mathbf{x} + C_{01} P(\mathcal{H}_1) \int_{R_0} p(\mathbf{x}|\mathcal{H}_1)\, d\mathbf{x} \\[0.2cm] \quad + C_{10} P(\mathcal{H}_0) \int_{R_1} p(\mathbf{x}|\mathcal{H}_0)\, d\mathbf{x} + C_{11} P(\mathcal{H}_1) \int_{R_1} p(\mathbf{x}|\mathcal{H}_1)\, d\mathbf{x} \\[0.2cm] = C_{00} P(\mathcal{H}_0) + C_{01} P(\mathcal{H}_1) \\[0.2cm] \quad + \int_{R_1} \left[(C_{10} - C_{00}) P(\mathcal{H}_0)\, p(\mathbf{x}|\mathcal{H}_0) - (C_{01} - C_{11}) P(\mathcal{H}_1)\, p(\mathbf{x}|\mathcal{H}_1)\right] d\mathbf{x}
    (using \int_{R_0} = 1 - \int_{R_1} for each conditional PDF)
  • To minimize R, include \mathbf{x} in R_1 only if the integrand is negative
    • Decide \mathcal{H}_1 if
      (C_{10} - C_{00}) P(\mathcal{H}_0)\, p(\mathbf{x}|\mathcal{H}_0) < (C_{01} - C_{11}) P(\mathcal{H}_1)\, p(\mathbf{x}|\mathcal{H}_1)
    • Equivalently, decide \mathcal{H}_1 if (Bayesian LRT)
      \frac{p(\mathbf{x}|\mathcal{H}_1)}{p(\mathbf{x}|\mathcal{H}_0)} > \frac{(C_{10} - C_{00}) P(\mathcal{H}_0)}{(C_{01} - C_{11}) P(\mathcal{H}_1)} = \gamma
      • Here \gamma comes from the costs and priors; in the classical NP test, \gamma is instead found from the constraint P_{FA} = \alpha
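A tiny sketch showing how the Bayes threshold falls out of assumed costs and priors; the numbers are illustrative (here a miss costs ten times a false alarm):

```python
# Bayes LRT threshold from costs and priors; all values illustrative.
C00, C11 = 0.0, 0.0
C10, C01 = 1.0, 10.0        # false-alarm cost vs. (heavier) miss cost
P_H0, P_H1 = 0.8, 0.2       # assumed priors

gamma = (C10 - C00) * P_H0 / ((C01 - C11) * P_H1)
print(gamma)  # 0.4: decide H1 once p(x|H1)/p(x|H0) exceeds this
```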

  • Example of the DC level in WGN (minimum probability of error criterion)
    \mathcal{H}_0: x[n] = w[n], \; n = 0, 1, \cdots, N-1 \\[0.2cm] \mathcal{H}_1: x[n] = s[n] + w[n], \; n = 0, 1, \cdots, N-1 \\[0.2cm] w[n] \sim \mathcal{N}(0,\sigma^2) \text{ (WGN)}, \quad P(\mathcal{H}_0) = P(\mathcal{H}_1) = 1/2
  • Minimizing P_e : decide \mathcal{H}_1 if
    L(\mathbf{x}) = \frac{p(\mathbf{x}|\mathcal{H}_1)}{p(\mathbf{x}|\mathcal{H}_0)} > \frac{(C_{10} - C_{00}) P(\mathcal{H}_0)}{(C_{01} - C_{11}) P(\mathcal{H}_1)} = \frac{(1-0) \cdot 1/2}{(1-0) \cdot 1/2} = 1
  • Decide \mathcal{H}_1 if
    \bar{x} > \frac{A}{2} \text{ (the same form as the NP detector, only the threshold differs)} \\[0.2cm] \bar{x} \sim \begin{cases} \mathcal{N}(0, \sigma^2/N) & \text{under } \mathcal{H}_0 \\ \mathcal{N}(A, \sigma^2/N) & \text{under } \mathcal{H}_1 \end{cases} \\[0.2cm] P_e = \frac{1}{2}\left[P(\mathcal{H}_0|\mathcal{H}_1) + P(\mathcal{H}_1|\mathcal{H}_0)\right] \\[0.2cm] = \frac{1}{2}\left[\Pr\{\bar{x} < A/2 \mid \mathcal{H}_1\} + \Pr\{\bar{x} > A/2 \mid \mathcal{H}_0\}\right] \\[0.2cm] = \frac{1}{2}\left[\left(1 - Q\left(\frac{A/2 - A}{\sqrt{\sigma^2/N}}\right)\right) + Q\left(\frac{A/2}{\sqrt{\sigma^2/N}}\right)\right] \\[0.2cm] = Q\left(\frac{A/2}{\sqrt{\sigma^2/N}}\right) = Q\left(\sqrt{\frac{N A^2}{4\sigma^2}}\right)

    Large A, large N, and small \sigma^2 make P_e small, i.e., good performance
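A short sketch evaluating this P_e expression as N grows (A and σ² are illustrative values):

```python
# Minimum probability of error P_e = Q(sqrt(N A^2 / (4 sigma^2)))
# for the equal-prior DC-level problem; A, sigma2 are illustrative.
import numpy as np
from scipy.stats import norm

A, sigma2 = 1.0, 1.0
for N in (1, 10, 100):
    print(N, norm.sf(np.sqrt(N * A**2 / (4 * sigma2))))
# P_e falls as N or the SNR A^2/sigma^2 increases.
```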

Multiple Hypothesis Testing

  • M hypotheses instead of 2 (e.g., QPSK) : a.k.a. classification or discrimination
    • Bayes risk
      R = \sum_{i=0}^{M-1} \sum_{j=0}^{M-1} C_{ij} P(\mathcal{H}_i|\mathcal{H}_j) P(\mathcal{H}_j)
    • Decision rule : choose the hypothesis that minimizes
      C_i(\mathbf{x}) = \sum_{j=0}^{M-1} C_{ij} P(\mathcal{H}_j|\mathbf{x}) \text{ over } i = 0, 1, \cdots, M-1
    • Decision rule to minimize P_e (C_{ij} = 1 - \delta_{ij}) : minimize
      C_i(\mathbf{x}) = \sum_{j=0}^{M-1} P(\mathcal{H}_j|\mathbf{x}) - P(\mathcal{H}_i|\mathbf{x}) = 1 - P(\mathcal{H}_i|\mathbf{x})
      \rightarrow maximize P(\mathcal{H}_i|\mathbf{x}) : the maximum a posteriori probability (MAP) rule
      \max_i P(\mathcal{H}_i|\mathbf{x}) = \max_i p(\mathbf{x}|\mathcal{H}_i) P(\mathcal{H}_i) \rightarrow \text{the maximum likelihood (ML) rule if the } P(\mathcal{H}_i) \text{ are all equal}
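A sketch of the MAP rule for an M-ary problem, using M = 4 DC levels in WGN as a real-valued stand-in for the QPSK example (all values illustrative); with equal priors it coincides with the ML rule, which here just picks the nearest level:

```python
# MAP rule for M-ary detection: M = 4 DC levels in WGN (a real-valued
# stand-in for QPSK). With equal priors this is the ML rule.
import numpy as np
from scipy.stats import norm

levels = np.array([-3.0, -1.0, 1.0, 3.0])    # hypothesis means
priors = np.array([0.25, 0.25, 0.25, 0.25])  # equal -> MAP == ML
sigma = 1.0

def map_decide(x):
    # posterior is proportional to p(x|Hi) * P(Hi); pick the max over i
    return int(np.argmax(norm.pdf(x, loc=levels, scale=sigma) * priors))

print(map_decide(0.8))   # 2, i.e. the level 1.0 nearest to the sample
```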

All content is based on Prof. Eui-Seok Hwang's Detection and Estimation lectures at GIST.
