[DetnEst] Assignment 1

KBC · October 22, 2024

Detection and Estimation


P1

Let $X_0, X_1, X_2, \dots, X_{N-1}$ be a random sample of $X$ having unknown mean $\mu$ and variance $\sigma^2$. Show that the estimator of variance defined by

$$\hat{\sigma}^2 = \frac{1}{N-1} \sum_{i=0}^{N-1} (X_i - \bar{X})^2$$

is an unbiased estimator of $\sigma^2$, where $\bar{X}$ is the sample mean

$$\bar{X} = \frac{1}{N} \sum_{i=0}^{N-1} X_i$$

Solve

Expanding the sum, $\sum_{i=0}^{N-1}(X_i-\bar X)^2=\sum_{i=0}^{N-1}X_i^2-2\bar X\sum_{i=0}^{N-1}X_i+N\bar X^2=\sum_{i=0}^{N-1}X_i^2-N\bar X^2$, so

$$E\left[\hat\sigma^2\right]=E\left[\frac{1}{N-1}\sum_{i=0}^{N-1}(X_i-\bar{X})^2\right]=\frac{1}{N-1}\left(\sum_{i=0}^{N-1}E[X_i^2]-N\,E[\bar{X}^2]\right)$$

The two second moments follow from the variances:

$$\sigma^2=\text{Var}(X_i)=E[X_i^2]-(E[X_i])^2\;\Rightarrow\;E[X_i^2]=\sigma^2+\mu^2$$

$$\text{Var}(\bar{X})=\frac{\sigma^2}{N}\;\Rightarrow\;E[\bar{X}^2]=\text{Var}(\bar{X})+(E[\bar{X}])^2=\frac{\sigma^2}{N}+\mu^2$$

Substituting,

$$E[\hat\sigma^2]=\frac{1}{N-1}\left[N\left(\sigma^2+\mu^2\right)-N\left(\frac{\sigma^2}{N}+\mu^2\right)\right]=\frac{1}{N-1}(N-1)\sigma^2=\sigma^2$$
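As a quick numerical sanity check (not part of the assignment), the derivation above can be sketched as a Monte Carlo experiment; the values of $N$, $\mu$, and $\sigma$ below are arbitrary illustrative choices:

```python
import random
import statistics

# Monte Carlo check: average the 1/(N-1) sample variance over many
# trials and compare it against the true sigma^2 (values are arbitrary).
random.seed(0)
N, trials = 5, 20000
mu, sigma = 2.0, 3.0  # hypothetical true mean and standard deviation

unbiased, biased = [], []
for _ in range(trials):
    xs = [random.gauss(mu, sigma) for _ in range(N)]
    xbar = sum(xs) / N
    ss = sum((x - xbar) ** 2 for x in xs)
    unbiased.append(ss / (N - 1))  # the estimator from the problem
    biased.append(ss / N)          # 1/N normalization, for contrast

print(statistics.mean(unbiased))  # close to sigma^2 = 9
print(statistics.mean(biased))    # close to (N-1)/N * sigma^2 = 7.2
```

The $1/N$ version is included only to show the bias the $1/(N-1)$ factor removes.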


P2

Let $X_0, X_1, X_2, \ldots, X_{N-1}$ be a random sample of a Poisson random variable $X$ with unknown parameter $\lambda$.

1. Show that the estimators $\hat{\lambda}_1$ and $\hat{\lambda}_2$ of the parameter $\lambda$ defined by

$$\hat{\lambda}_1 = \frac{1}{N} \sum_{i=0}^{N-1} X_i \quad \text{and} \quad \hat{\lambda}_2 = \frac{X_0 + X_1}{2}$$

are both unbiased estimators of $\lambda$.

2. Which estimator is more efficient?


Solve 1

$\hat\lambda_1$ is an unbiased estimator. Since $E[X_i]=\lambda$ for a Poisson random variable,

$$E[\hat\lambda_1]=E\left[\frac{1}{N}\sum_{i=0}^{N-1}X_i\right]=\frac{1}{N}\sum_{i=0}^{N-1}E[X_i]=\frac{1}{N}\cdot N\cdot\lambda=\lambda$$

$\hat\lambda_2$ is also an unbiased estimator:

$$E[\hat\lambda_2]=E\left[\frac{X_0+X_1}{2}\right]=\frac{1}{2}\left(E[X_0]+E[X_1]\right)=\frac{1}{2}(\lambda+\lambda)=\lambda$$

Solve 2

Variance of $\hat\lambda_1$. Since $\text{Var}(X_i)=\lambda$ for a Poisson random variable,

$$\text{Var}(\hat\lambda_1)=\frac{1}{N^2}\sum_{i=0}^{N-1}\text{Var}(X_i)=\frac{1}{N^2}\cdot N\cdot\lambda=\frac{\lambda}{N}$$

Variance of $\hat\lambda_2$:

$$\text{Var}(\hat\lambda_2)=\frac{1}{4}\left(\text{Var}(X_0)+\text{Var}(X_1)\right)=\frac{1}{4}\cdot 2\lambda=\frac{\lambda}{2}$$

$\hat\lambda_1$ is the more efficient estimator, since for $N \ge 2$

$$\text{Var}(\hat\lambda_1)=\frac{\lambda}{N}\le\frac{\lambda}{2}=\text{Var}(\hat\lambda_2)$$
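The efficiency comparison can be sketched numerically. Knuth's multiply-uniforms Poisson sampler below and all parameter values are illustrative choices, not part of the problem:

```python
import math
import random
import statistics

random.seed(1)

def poisson(lam):
    # Knuth's algorithm: multiply uniforms until the running product
    # drops below exp(-lam); suitable for small lam.
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

lam, N, trials = 4.0, 10, 5000  # hypothetical parameter values
est1, est2 = [], []
for _ in range(trials):
    xs = [poisson(lam) for _ in range(N)]
    est1.append(sum(xs) / N)          # lambda_hat_1: sample mean of N draws
    est2.append((xs[0] + xs[1]) / 2)  # lambda_hat_2: first two draws only

print(statistics.variance(est1))  # near lambda/N = 0.4
print(statistics.variance(est2))  # near lambda/2 = 2.0
```

Both sample means land near $\lambda$, but the spread of $\hat\lambda_2$ is visibly larger, matching $\lambda/2 > \lambda/N$.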


P3

The data $\{x[0], x[1], \ldots, x[N-1]\}$ are observed, where the $x[n]$'s are independent and identically distributed (IID) as $\mathcal{N}(0, \sigma^2)$. We wish to estimate the variance $\sigma^2$ as

$$\hat{\sigma}^2 = \frac{1}{N} \sum_{n=0}^{N-1} x^2[n].$$

1. Is this an unbiased estimator?
2. Find the variance of $\hat{\sigma}^2$ and examine what happens as $N \to \infty$.

Solve 1

Unbiased estimator:

$$E[\hat\sigma^2]=E\left[\frac{1}{N}\sum_{n=0}^{N-1}x^2[n]\right]=\frac{1}{N}\sum_{n=0}^{N-1}E[x^2[n]]$$

Since $\text{Var}(x[n])=\sigma^2=E[x^2[n]]-(E[x[n]])^2$ and $E[x[n]]=0$,

$$E[x^2[n]]=\sigma^2+0^2=\sigma^2$$

$$E[\hat\sigma^2]=\frac{1}{N}\cdot N\cdot\sigma^2=\sigma^2$$

Solve 2

$$\text{Var}(\hat\sigma^2)=\text{Var}\left(\frac{1}{N}\sum_{n=0}^{N-1}x^2[n]\right)=\frac{1}{N^2}\sum_{n=0}^{N-1}\text{Var}(x^2[n])=\frac{\text{Var}(x^2[n])}{N}$$

$$\text{Var}(x^2[n])=E[x^4[n]]-\left(E[x^2[n]]\right)^2=E[x^4[n]]-\sigma^4$$

For a zero-mean Gaussian, the central moments are

$$E[(x-\mu)^p]=\begin{cases}0 & p\text{ odd}\\ \sigma^p(p-1)!!=\sigma^p(p-1)(p-3)\cdots 3\cdot 1 & p\text{ even}\end{cases}$$

$$E[x^4]=3\sigma^4$$

$$\text{Var}(\hat\sigma^2)=\frac{3\sigma^4-\sigma^4}{N}=\frac{2\sigma^4}{N}$$

  • As $N \rightarrow \infty$, $\text{Var}(\hat\sigma^2) \rightarrow 0$, so the estimate concentrates around $\sigma^2$.
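A short simulation (with arbitrary illustrative values of $N$ and $\sigma$) agrees with both moments derived above, $E[\hat\sigma^2]=\sigma^2$ and $\text{Var}(\hat\sigma^2)=2\sigma^4/N$:

```python
import random
import statistics

random.seed(2)
N, trials, sigma = 8, 20000, 1.5  # hypothetical values

# hat_sigma^2 = (1/N) * sum of squares of N zero-mean Gaussian draws
est = []
for _ in range(trials):
    est.append(sum(random.gauss(0, sigma) ** 2 for _ in range(N)) / N)

print(statistics.mean(est))      # theory: sigma^2 = 2.25
print(statistics.variance(est))  # theory: 2*sigma^4/N ~= 1.266
```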


P4

Prove that the PDF of $\hat{A}$ given in Problem 3 is $\mathcal{N}(A, \sigma^2/N)$.


Solve

  • $E[X_i]=A$
  • $\text{Var}(X_i) = \sigma^2$
  • Check $E[\hat A]$:
    $$\hat A=\frac{1}{N}\sum_{i=1}^N X_i$$
    $$E[\hat A]=E\left[\frac{1}{N}\sum_{i=1}^N X_i\right]=\frac{1}{N}\sum_{i=1}^N E[X_i]=\frac{1}{N}\cdot N\cdot A=A$$
  • Check $\text{Var}(\hat A)$:
    $$\text{Var}(\hat A)=\frac{1}{N^2}\sum_{i=1}^N\text{Var}(X_i)=\frac{1}{N^2}\cdot N\cdot\sigma^2=\frac{\sigma^2}{N}$$
  • Since $\hat A$ is a linear combination of independent Gaussian random variables, it is itself Gaussian, so the mean and variance determine its PDF:
    $$\therefore \hat A \sim \mathcal{N}\left(A,\frac{\sigma^2}{N}\right)$$
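The two moments, and the Gaussian shape itself, can be spot-checked by simulation (the parameter values below are arbitrary): roughly 68% of the draws of $\hat A$ should fall within one standard deviation $\sqrt{\sigma^2/N}$ of $A$:

```python
import math
import random
import statistics

random.seed(4)
A, sigma, N, trials = 1.0, 2.0, 16, 20000  # hypothetical values
std_theory = math.sqrt(sigma**2 / N)       # sigma/sqrt(N) = 0.5

a_hats = []
for _ in range(trials):
    a_hats.append(sum(random.gauss(A, sigma) for _ in range(N)) / N)

# Fraction of estimates within one theoretical standard deviation of A
within_1sd = sum(abs(a - A) <= std_theory for a in a_hats) / trials
print(statistics.mean(a_hats))   # theory: A = 1.0
print(statistics.stdev(a_hats))  # theory: 0.5
print(within_1sd)                # Gaussian: about 0.683
```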


P5

For the problem described in Problem 3, show that as $N \to \infty$, $\hat{A} \to A$ by using the results of Problem 4.

1. To do so, prove that

$$\lim_{N \to \infty} \Pr\left\{ |\hat{A} - A| > \epsilon \right\} = 0$$

for any $\epsilon > 0$. In this case, the estimator $\hat{A}$ is said to be consistent.

2. Investigate what happens if the alternative estimator

$$\hat{A} = \frac{1}{2N} \sum_{n=0}^{N-1} x[n]$$

is used instead.


Solve 1

Since $\hat A \sim \mathcal{N}(A, \sigma^2/N)$, standardizing gives

$$\Pr\left\{|\hat{A} - A| > \epsilon\right\} = \Pr\left\{\left|\frac{\hat{A} - A}{\sqrt{\sigma^2/N}}\right| > \frac{\epsilon}{\sqrt{\sigma^2/N}}\right\} = 2\,\Pr\left(Z > \frac{\epsilon}{\sqrt{\sigma^2/N}}\right)$$

where $Z \sim \mathcal{N}(0,1)$. As $N \rightarrow \infty$, the threshold $\epsilon/\sqrt{\sigma^2/N} \rightarrow \infty$, so the probability $\rightarrow 0$ and $\hat A$ is consistent.
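The shrinking tail probability can be observed directly; $A$, $\sigma$, $\epsilon$, and the sample sizes below are illustrative choices:

```python
import random

random.seed(5)
A, sigma, eps, trials = 1.0, 1.0, 0.3, 5000  # hypothetical values

def tail_prob(N):
    # Estimate Pr(|A_hat - A| > eps) for the sample mean of N draws.
    hits = 0
    for _ in range(trials):
        a_hat = sum(random.gauss(A, sigma) for _ in range(N)) / N
        if abs(a_hat - A) > eps:
            hits += 1
    return hits / trials

for N in (4, 16, 64):
    print(N, tail_prob(N))  # probability shrinks toward 0 as N grows
```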

Solve 2

$$\text{New }\hat A=\frac{1}{2N}\sum_{n=0}^{N-1}x[n],\qquad E[\hat A]=\frac{1}{2N}E\left[\sum_{n=0}^{N-1}x[n]\right]=\frac{1}{2N}\cdot N\cdot A=\frac{A}{2}\neq A$$

  • The new $\hat A$ is a biased estimator.
    $$\text{Var}(\hat A)=\frac{1}{(2N)^2}\sum_{n=0}^{N-1}\text{Var}(x[n])=\frac{1}{4N^2}\cdot N\cdot\sigma^2=\frac{\sigma^2}{4N}$$
  • As $N\to\infty$, the variance still goes to $0$, but the estimator converges to $A/2$ rather than $A$, so it is not consistent for $A$.
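Simulating the alternative estimator (with arbitrary illustrative values) shows it collapsing onto $A/2$ instead of $A$:

```python
import random
import statistics

random.seed(6)
A, sigma, trials = 2.0, 1.0, 5000  # hypothetical values

for N in (8, 128):
    est = []
    for _ in range(trials):
        xs = [random.gauss(A, sigma) for _ in range(N)]
        est.append(sum(xs) / (2 * N))  # the 1/(2N) estimator
    print(N, statistics.mean(est))  # near A/2 = 1.0, not A = 2.0
```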


P6

This problem illustrates what happens to an unbiased estimator when it undergoes a nonlinear transformation. In Problem 3, if we choose to estimate the unknown parameter $\theta = A^2$ by

$$\hat\theta=\left(\frac{1}{N}\sum_{n=0}^{N-1}x[n]\right)^2,$$

can we say that the estimator is unbiased? What happens as $N \rightarrow \infty$?


Solve

  • $E[\hat A]=A$
  • $\text{Var}(\hat A)=\frac{\sigma^2}{N}$
  • Check $E[\hat\theta]$. With $x[n]\sim\mathcal{N}(A,\sigma^2)$ and $\hat\theta=\hat A^2$,
    $$E[\hat\theta]=E[\hat A^2]=\text{Var}(\hat A)+(E[\hat A])^2=\frac{\sigma^2}{N}+A^2\neq A^2$$
    $$\therefore \hat\theta \text{ is a biased estimator}$$
  • Check $E[\hat\theta]$ as $N\rightarrow\infty$:
    $$E[\hat\theta]=\frac{\sigma^2}{N}+A^2\rightarrow A^2$$
    $$\therefore \hat\theta \text{ is asymptotically unbiased}$$
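A quick simulation (arbitrary values of $A$, $\sigma$) shows the bias $\sigma^2/N$ and its decay with $N$:

```python
import random
import statistics

random.seed(7)
A, sigma, trials = 2.0, 3.0, 20000  # hypothetical values

for N in (5, 50):
    theta_hats = []
    for _ in range(trials):
        a_hat = sum(random.gauss(A, sigma) for _ in range(N)) / N
        theta_hats.append(a_hat ** 2)  # theta_hat = A_hat^2
    bias = statistics.mean(theta_hats) - A**2
    print(N, bias)  # theory: sigma^2/N (1.8 for N=5, 0.18 for N=50)
```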


Bonus

Unbiased Estimator for DC Level in White Gaussian Noise

Consider the observation

$$x[n]=A+w[n],\quad n=0,1,\dots,N-1$$

where $A$ is the parameter to be estimated and $w[n]$ is WGN.

  • The parameter $A$ can take on any value in the interval $-\infty < A < \infty$.
  • Then a reasonable estimator for the average value of $x[n]$ is
    $$\hat A=\frac{1}{N}\sum_{n=0}^{N-1}x[n]$$
    or the sample mean.
  • Due to the linearity of the expectation operator,
    $$E[\hat A]=E\left[\frac{1}{N}\sum_{n=0}^{N-1}x[n]\right]=\frac{1}{N}\sum_{n=0}^{N-1}E[x[n]]=\frac{1}{N}\sum_{n=0}^{N-1}A=A$$
    for all $A$.
    $$\therefore \text{The sample mean is unbiased}$$
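A minimal sketch of this model (the DC levels and $\sigma$ below are arbitrary), checking that $E[\hat A]=A$ holds for every chosen $A$, as the "for all $A$" statement requires:

```python
import random
import statistics

random.seed(8)
N, trials, sigma = 16, 5000, 1.0  # hypothetical values

for A in (-5.0, 0.0, 7.5):  # unbiasedness should hold for all A
    a_hats = []
    for _ in range(trials):
        x = [A + random.gauss(0, sigma) for _ in range(N)]  # x[n] = A + w[n]
        a_hats.append(sum(x) / N)  # sample mean estimator
    print(A, statistics.mean(a_hats))  # mean of the estimates tracks A
```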

All content is based on the Detection and Estimation lectures of Prof. Eui-seok Hwang at GIST.

