[DetnEst] Assignment 5

KBC · October 26, 2024
Detection and Estimation


P1

  • Let $(X_0, X_1, \dots, X_{N-1})$ be a random sample of a Bernoulli random variable $X$ with probability mass function
    $$f(x;p) = p^x(1-p)^{1-x}$$
    where $x \in \{0,1\}$ and $0 \le p \le 1$ is unknown.
  • Find the maximum-likelihood estimator (MLE) of $p$.

Solution

$$\bar{x} = [X_0, X_1, \dots, X_{N-1}]$$
$$\text{likelihood}(\bar{x};p) = \prod_{n=0}^{N-1} p^{X_n}(1-p)^{1-X_n}$$
$$\text{log-likelihood}(\bar{x};p) = \sum_{n=0}^{N-1}\bigl(X_n\ln p + (1-X_n)\ln(1-p)\bigr)$$
  • Let $\sum X_n = Y$, so the log-likelihood becomes
    $$Y\ln p + (N-Y)\ln(1-p)$$
    $$\frac{\partial \ln p(\bar{x};p)}{\partial p} = \frac{Y}{p} + \frac{Y-N}{1-p}$$
    $$\rightarrow \frac{Y}{\hat{p}} + \frac{Y-N}{1-\hat{p}} = 0$$
    $$\rightarrow \hat{p}(Y-N-Y) + Y = 0$$
    $$\rightarrow \hat{p} = \frac{Y}{N} = \frac{1}{N}\sum_{n=0}^{N-1} X_n$$
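As a quick sanity check, here is a minimal Python sketch showing that the sample mean recovers the true parameter from simulated Bernoulli data. The values of `p_true` and `N` are assumed for the demo, not part of the problem.

```python
import numpy as np

# Simulation check of the Bernoulli MLE p_hat = Y / N.
# p_true and N are assumed demo values.
rng = np.random.default_rng(0)
p_true, N = 0.3, 10_000

x = rng.binomial(1, p_true, size=N)  # N Bernoulli(p) samples
p_hat = x.mean()                     # MLE: the sample mean

print(f"true p = {p_true}, MLE p_hat = {p_hat:.4f}")
```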


P2

  • Let $(X_0, X_1, \dots, X_{N-1})$ be a random sample of a binomial random variable $X$ with parameters $(n, p)$, where $n$ is assumed to be known and $p$ is unknown.
  1. Determine the maximum-likelihood estimator (MLE) of $p$.
  2. Show that the MLE of $p$ is unbiased.

Solution 1

$$\text{likelihood} = \prod_{i=0}^{N-1}\binom{n}{X_i}p^{X_i}(1-p)^{n-X_i} = \prod_{i=0}^{N-1}\frac{n!}{X_i!\,(n-X_i)!}\,p^{X_i}(1-p)^{n-X_i}$$
$$\text{log-likelihood} = \sum_{i=0}^{N-1}\left(\ln\frac{n!}{X_i!\,(n-X_i)!} + X_i\ln p + (n-X_i)\ln(1-p)\right)$$
  • Let $Y = \sum_{i=0}^{N-1} X_i$:
    $$\frac{\partial \ln p(\bar{x};p)}{\partial p} = \sum_{i=0}^{N-1}\left(\frac{X_i}{p} + \frac{X_i - n}{1-p}\right) = \frac{Y}{p} + \frac{Y - Nn}{1-p}$$
    $$\rightarrow \frac{Y}{\hat{p}} + \frac{Y - Nn}{1-\hat{p}} = 0$$
    $$\rightarrow \hat{p} = \frac{Y}{Nn} = \frac{1}{Nn}\sum_{i=0}^{N-1} X_i$$
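The same result can be checked numerically; the sketch below uses assumed values for $n$, $p$, and $N$ to verify that $\hat{p} = Y/(Nn)$ lands near the true parameter.

```python
import numpy as np

# Simulation check of the binomial MLE p_hat = Y / (N*n).
# n, p_true, and N are assumed demo values.
rng = np.random.default_rng(1)
n, p_true, N = 20, 0.4, 5_000

x = rng.binomial(n, p_true, size=N)  # N Binomial(n, p) samples
p_hat = x.sum() / (N * n)            # MLE derived above

print(f"true p = {p_true}, MLE p_hat = {p_hat:.4f}")
```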

Solution 2

$$E[\hat{p}] = E\left[\frac{1}{Nn}\sum_{i=0}^{N-1} X_i\right] = \frac{1}{Nn}\sum_{i=0}^{N-1}E[X_i] = \frac{1}{Nn}\cdot N\cdot np = p$$
$$\therefore \hat{p}\ \text{is unbiased}$$
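Unbiasedness can also be illustrated by Monte Carlo: the average of $\hat{p}$ over many independent samples should be close to $p$. The parameter values below are assumed for the demo.

```python
import numpy as np

# Monte Carlo illustration of unbiasedness: average the MLE over
# many independent samples. Parameter values are assumed for the demo.
rng = np.random.default_rng(2)
n, p_true, N, trials = 10, 0.25, 50, 20_000

x = rng.binomial(n, p_true, size=(trials, N))
p_hats = x.sum(axis=1) / (N * n)  # one MLE per trial

print(f"mean of p_hat = {p_hats.mean():.4f} (true p = {p_true})")
```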


P3

  • We observe $N$ IID samples from the PDFs:
    1. Gaussian
       $$p(x;\mu) = \frac{1}{\sqrt{2\pi}}\exp\left[-\frac{1}{2}(x-\mu)^2\right]$$
    2. Exponential
       $$p(x;\lambda) = \begin{cases}\lambda\exp(-\lambda x) & x > 0 \\ 0 & x < 0\end{cases}$$
  • In each case, find the MLE of the unknown parameter and verify that it indeed maximizes the likelihood function.
  • Do the estimators make sense?

Solution

  1. $p(x;\mu) = \frac{1}{\sqrt{2\pi}}\exp\left[-\frac{1}{2}(x-\mu)^2\right]$
     • Log-likelihood function
       $$\ln p(\bar{x};\mu) = \frac{N}{2}\ln\frac{1}{2\pi} - \frac{1}{2}\sum_{i=0}^{N-1}(X_i - \mu)^2$$
     • First derivative of the log-likelihood function, with $Y = \sum_{i=0}^{N-1} X_i$:
       $$\frac{\partial \ln p(\bar{x};\mu)}{\partial\mu} = \sum_{i=0}^{N-1}(X_i - \mu) = Y - N\mu \;\rightarrow\; \hat{\mu} = \frac{Y}{N}$$
     • Second derivative of the log-likelihood function
       $$\frac{\partial^2 \ln p(\bar{x};\mu)}{\partial\mu^2} = -N$$
     • Conclusion: the second derivative is negative, so $\hat{\mu}$ is a maximum. The MLE is the sample mean, a sensible estimate of $\mu$.
  2. $p(x;\lambda) = \begin{cases}\lambda\exp(-\lambda x) & x > 0 \\ 0 & x < 0\end{cases}$
     • Log-likelihood function
       $$\text{likelihood} = \prod_{i=0}^{N-1}\lambda\exp(-\lambda X_i)$$
       $$\text{log-likelihood} = N\ln\lambda - \lambda\sum_{i=0}^{N-1} X_i$$
     • First derivative of the log-likelihood function
       $$\frac{\partial \ln p(\bar{x};\lambda)}{\partial\lambda} = \frac{N}{\lambda} - \sum_{i=0}^{N-1} X_i \;\rightarrow\; \hat{\lambda} = \frac{N}{\sum_{i=0}^{N-1} X_i}$$
     • Second derivative of the log-likelihood function
       $$\frac{\partial^2 \ln p(\bar{x};\lambda)}{\partial\lambda^2} = -\frac{N}{\lambda^2}$$
       This is negative, so $\hat{\lambda}$ is a maximum.
     • Conclusion: the mean of an exponential distribution is $1/\lambda$, and
       $$\frac{1}{\hat{\lambda}} = \frac{\sum_{i=0}^{N-1} X_i}{N}$$
       so the MLE is the reciprocal of the sample mean, which makes sense. A simulation check for both estimators follows this list.
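As referenced above, here is a minimal simulation check of both P3 estimators, with assumed true parameter values: $\hat{\mu}$ is the sample mean and $\hat{\lambda}$ is the reciprocal of the sample mean.

```python
import numpy as np

# Simulation check of the P3 MLEs. The true parameter values
# (mu = 2.0, lambda = 0.5) and N are assumed for the demo.
rng = np.random.default_rng(3)
N = 100_000

g = rng.normal(loc=2.0, scale=1.0, size=N)  # Gaussian, unit variance
mu_hat = g.mean()                           # MLE of mu: sample mean

e = rng.exponential(scale=1 / 0.5, size=N)  # Exponential; numpy takes scale = 1/lambda
lam_hat = N / e.sum()                       # MLE of lambda: 1 / sample mean

print(f"mu_hat = {mu_hat:.4f} (true 2.0)")
print(f"lambda_hat = {lam_hat:.4f} (true 0.5)")
```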