P1
Let $X_0, X_1, X_2, \dots, X_{N-1}$ be a random sample of $X$ having unknown mean $\mu$ and variance $\sigma^2$.
Show that the estimator of variance defined by
$$\hat{\sigma}^2 = \frac{1}{N-1} \sum_{i=0}^{N-1} (X_i - \bar{X})^2$$
is an unbiased estimator of $\sigma^2$, where $\bar{X}$ is the sample mean defined as
$$\bar{X} = \frac{1}{N} \sum_{i=0}^{N-1} X_i$$
Solve
$$
\begin{aligned}
E\left[\hat\sigma^2\right] &= E\left[\frac{1}{N-1}\sum_{i=0}^{N-1}(X_i-\bar{X})^2\right]
= \frac{1}{N-1}\,E\left[\sum_{i=0}^{N-1}(X_i-\bar{X})^2\right]\\[0.2cm]
&= \frac{1}{N-1}\,E\left[\sum_{i=0}^{N-1}\left(X_i^2-2X_i\bar{X}+\bar{X}^2\right)\right]
= \frac{1}{N-1}\,E\left[\sum_{i=0}^{N-1}X_i^2 - N\bar{X}^2\right]\\[0.2cm]
&= \frac{1}{N-1}\left(\sum_{i=0}^{N-1}E[X_i^2] - N\,E[\bar{X}^2]\right)
\end{aligned}
$$
where the second line uses $\sum_{i=0}^{N-1} X_i\bar{X} = N\bar{X}^2$. The required moments follow from
$$
\sigma^2=\text{Var}(X_i)=E[X_i^2]-(E[X_i])^2
\;\Rightarrow\; E[X_i^2]=\sigma^2+\mu^2
$$
$$
\text{Var}(\bar{X})=\frac{\sigma^2}{N}
\;\Rightarrow\; E[\bar{X}^2]=\text{Var}(\bar{X})+(E[\bar{X}])^2=\frac{\sigma^2}{N}+\mu^2
$$
so that
$$
E[\hat\sigma^2]=\frac{1}{N-1}\left[N\left(\sigma^2+\mu^2\right)-N\left(\frac{\sigma^2}{N}+\mu^2\right)\right]
=\frac{1}{N-1}(N-1)\sigma^2=\sigma^2
$$
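A quick Monte Carlo sanity check of this result is sketched below (a minimal sketch assuming NumPy is available; the Gaussian population, the sample size $N$, and the trial count are illustrative choices). It compares the $1/(N-1)$ estimator against the $1/N$ version, which underestimates $\sigma^2$ by the factor $(N-1)/N$.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma2, N, trials = 3.0, 4.0, 10, 200_000  # illustrative values

# Draw `trials` independent samples of size N from a population with
# mean mu and variance sigma2 (Gaussian chosen only for convenience).
samples = rng.normal(mu, np.sqrt(sigma2), size=(trials, N))

# Unbiased estimator: divide by N-1 (ddof=1).
var_unbiased = samples.var(axis=1, ddof=1)
# Biased estimator: divide by N (ddof=0), shown for contrast.
var_biased = samples.var(axis=1, ddof=0)

print(var_unbiased.mean())  # close to sigma^2 = 4.0
print(var_biased.mean())    # close to (N-1)/N * sigma^2 = 3.6
```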
P2
Let $X_0, X_1, X_2, \ldots, X_{N-1}$ be a random sample of a Poisson random variable $X$ with unknown parameter $\lambda$.
1. Show that the estimators $\hat{\lambda}_1$ and $\hat{\lambda}_2$ of the parameter $\lambda$ defined by
$$\hat{\lambda}_1 = \frac{1}{N} \sum_{i=0}^{N-1} X_i \quad \text{and} \quad \hat{\lambda}_2 = \frac{X_0 + X_1}{2}$$
are both unbiased estimators of $\lambda$.
2. Which estimator is more efficient?
Solve 1
$\hat\lambda_1$ is an unbiased estimator:
$$
E[\hat\lambda_1]=E\left[\frac{1}{N}\sum_{i=0}^{N-1}X_i\right]=\frac{1}{N}E\left[\sum_{i=0}^{N-1}X_i\right]=\frac{1}{N}\times N \times \lambda = \lambda
$$
$\hat\lambda_2$ is also an unbiased estimator:
$$
E[\hat\lambda_2]=E\left[\frac{X_0+X_1}{2}\right]=\frac{1}{2}\left(E[X_0]+E[X_1]\right)=\frac{1}{2}(\lambda + \lambda) = \lambda
$$
Solve 2
Variance of $\hat\lambda_1$: since $\text{Var}(X_i)=\lambda$ for a Poisson random variable,
$$
\text{Var}(\hat\lambda_1)=\frac{1}{N^2}\sum_{i=0}^{N-1}\text{Var}(X_i)=\frac{1}{N^2}\cdot N \cdot \lambda=\frac{\lambda}{N}
$$
Variance of $\hat\lambda_2$:
$$
\text{Var}(\hat\lambda_2)=\frac{1}{4}\left(\text{Var}(X_0)+\text{Var}(X_1)\right)=\frac{1}{4}\cdot 2\lambda=\frac{\lambda}{2}
$$
$\hat\lambda_1$ is the more efficient estimator, since
$$
\text{Var}(\hat\lambda_1)=\frac{\lambda}{N}\leq\frac{\lambda}{2}=\text{Var}(\hat\lambda_2)\quad\text{for } N\geq 2
$$
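The efficiency comparison can also be checked numerically; the sketch below (assuming NumPy, with illustrative choices of $\lambda$, $N$, and the trial count) should show both estimators averaging to $\lambda$ while $\hat\lambda_1$ has the much smaller spread.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, N, trials = 5.0, 20, 200_000  # illustrative values

X = rng.poisson(lam, size=(trials, N))

lam1 = X.mean(axis=1)              # sample mean of all N observations
lam2 = (X[:, 0] + X[:, 1]) / 2.0   # average of the first two observations

# Both estimators should be centered on lambda (unbiased) ...
print(lam1.mean(), lam2.mean())    # both close to 5.0
# ... but lambda_1 has the smaller variance: lambda/N vs lambda/2.
print(lam1.var(), lam / N)         # both close to 0.25
print(lam2.var(), lam / 2)         # both close to 2.5
```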
P3
The data $\{x[0], x[1], \ldots, x[N-1]\}$ are observed, where the $x[n]$'s are independent and identically distributed (IID) as $\mathcal{N}(0, \sigma^2)$. We wish to estimate the variance $\sigma^2$ as
$$\hat{\sigma}^2 = \frac{1}{N} \sum_{n=0}^{N-1} x^2[n].$$
Is this an unbiased estimator? Find the variance of $\hat{\sigma}^2$ and examine what happens as $N \to \infty$.
Solve 1
Unbiased Estimator
$$
E[\hat\sigma^2]=E\left[\frac{1}{N}\sum_{n=0}^{N-1}x^2[n]\right]
=\frac{1}{N}E\left[\sum_{n=0}^{N-1}x^2[n]\right]
=\frac{1}{N}\sum_{n=0}^{N-1}E[x^2[n]]
$$
Since $\text{Var}(x[n])=\sigma^2=E[x^2[n]]-(E[x[n]])^2$,
$$
E[x^2[n]]=\sigma^2+(E[x[n]])^2=\sigma^2+0^2=\sigma^2
$$
and therefore
$$
E[\hat\sigma^2]=\frac{1}{N}\cdot N\cdot\sigma^2=\sigma^2
$$
so the estimator is unbiased.
Solve 2
$$
\text{Var}(\hat\sigma^2)=\text{Var}\left(\frac{1}{N}\sum_{n=0}^{N-1} x^2[n]\right)
=\frac{1}{N^2}\sum_{n=0}^{N-1}\text{Var}(x^2[n])
=\frac{1}{N^2}\cdot N \cdot \text{Var}(x^2[n])
=\frac{\text{Var}(x^2[n])}{N}
$$
where
$$
\text{Var}(x^2[n])=E[x^4[n]]-\left(E[x^2[n]]\right)^2=E[x^4[n]]-\sigma^4
$$
For a Gaussian random variable, the central moments are
$$
E[(x-\mu)^p]=
\begin{cases}
0 & p \text{ odd}\\
\sigma^p\,(p-1)(p-3)\cdots 3\cdot 1 & p \text{ even}
\end{cases}
$$
so with $\mu=0$, $E[x^4]=3\cdot 1\cdot\sigma^4=3\sigma^4$ and
$$
\text{Var}(\hat\sigma^2)=\frac{3\sigma^4-\sigma^4}{N}=\frac{2\sigma^4}{N}
$$
When $N \rightarrow \infty$, $\text{Var}(\hat\sigma^2)$ goes to $0$, so the estimate concentrates around the true value $\sigma^2$.
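A minimal simulation sketch of this behavior (assuming NumPy; $\sigma^2$, the sample sizes, and the trial count are illustrative) checks that the empirical mean of $\hat\sigma^2$ stays at $\sigma^2$ while its empirical variance tracks $2\sigma^4/N$:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma2, trials = 2.0, 20_000  # illustrative values

for N in (10, 100, 1000):
    x = rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))
    sigma2_hat = (x ** 2).mean(axis=1)  # (1/N) * sum of x^2[n]
    # Mean stays near sigma^2 = 2.0; variance shrinks like 2*sigma^4/N.
    print(N, sigma2_hat.mean(), sigma2_hat.var(), 2 * sigma2 ** 2 / N)
```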
P4
Prove that the PDF of $\hat{A}$ given in Problem 3 is $\mathcal{N}(A, \sigma^2/N)$.
Solve
$E[X_i]=A$

$\text{Var}(X_i) = \sigma^2$
Check $E[\hat A]$:
$$
\hat A=\frac{1}{N}\sum_{i=1}^N X_i,\qquad
E[\hat A]=E\left[\frac{1}{N}\sum_{i=1}^N X_i\right]=\frac{1}{N}\sum_{i=1}^N E[X_i]=\frac{1}{N}\cdot N\cdot A=A
$$
Check $\text{Var}(\hat A)$:
$$
\text{Var}(\hat A)=\frac{1}{N^2}\sum_{i=1}^N\text{Var}(X_i)=\frac{1}{N^2}\cdot N \cdot \sigma^2=\frac{\sigma^2}{N}
$$
Since $\hat A$ is a linear combination of independent Gaussian random variables, it is itself Gaussian, and a Gaussian PDF is fully specified by its mean and variance.
$$
\therefore \hat A \sim \mathcal{N}\left(A,\frac{\sigma^2}{N}\right)
$$
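A simulation sketch of this result (assuming NumPy; the values of $A$, $\sigma^2$, $N$, and the trial count are illustrative) checks the mean, variance, and Gaussian coverage of the sampling distribution of $\hat A$:

```python
import numpy as np

rng = np.random.default_rng(3)
A, sigma2, N, trials = 1.5, 4.0, 25, 200_000  # illustrative values

x = rng.normal(A, np.sqrt(sigma2), size=(trials, N))
A_hat = x.mean(axis=1)                 # sample mean for each trial

# Mean and variance of A_hat should match A and sigma^2/N.
print(A_hat.mean(), A)                 # ~ 1.5
print(A_hat.var(), sigma2 / N)         # ~ 0.16

# If A_hat is Gaussian, about 95% of standardized values fall within 1.96.
z = (A_hat - A) / np.sqrt(sigma2 / N)
print(np.mean(np.abs(z) < 1.96))       # ~ 0.95
```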
P5
For the problem described in Problem 3, show that as $N \to \infty$, $\hat{A} \to A$ by using the results of Problem 4.
1. To do so, prove that
$$\lim_{N \to \infty} \Pr \left\{ |\hat{A} - A| > \epsilon \right\} = 0$$
for any $\epsilon > 0$. In this case, the estimator $\hat{A}$ is said to be *consistent*.
2. Investigate what happens if the alternative estimator
$$\hat{A} = \frac{1}{2N} \sum_{n=0}^{N-1} x[n]$$
is used instead.
Solve 1
Since $\hat A \sim \mathcal{N}(A, \sigma^2/N)$, standardizing gives
$$
\Pr\left\{ |\hat{A} - A| > \epsilon \right\}
= \Pr\left\{ \left| \frac{\hat{A} - A}{\sqrt{\sigma^2 / N}} \right| > \frac{\epsilon}{\sqrt{\sigma^2 / N}} \right\}
= 2\,\Pr\left( Z > \frac{\epsilon}{\sqrt{\sigma^2 / N}} \right)
$$
where $Z \sim \mathcal{N}(0,1)$. As $N \rightarrow \infty$, the threshold $\epsilon/\sqrt{\sigma^2/N} = \epsilon\sqrt{N}/\sigma \rightarrow \infty$, so the probability goes to $0$ and $\hat A$ is consistent.
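This limit can be illustrated numerically; the sketch below (assuming NumPy, with illustrative choices of $A$, $\sigma^2$, $\epsilon$, and the trial count) estimates $\Pr\{|\hat A - A| > \epsilon\}$ for growing $N$:

```python
import numpy as np

rng = np.random.default_rng(4)
A, sigma2, eps, trials = 1.0, 1.0, 0.1, 20_000  # illustrative values

for N in (10, 100, 1000):
    x = rng.normal(A, np.sqrt(sigma2), size=(trials, N))
    A_hat = x.mean(axis=1)
    # Empirical Pr{|A_hat - A| > eps} shrinks toward 0 as N grows.
    print(N, np.mean(np.abs(A_hat - A) > eps))
```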
Solve 2
The new estimator is
$$
\hat A=\frac{1}{2N}\sum_{n=0}^{N-1}x[n],\qquad
E[\hat A]=\frac{1}{2N}E\left[\sum_{n=0}^{N-1}x[n]\right]=\frac{1}{2N}\cdot N\cdot A = \frac{A}{2}\neq A
$$
The new $\hat A$ is a biased estimator. Its variance is
$$
\text{Var}(\hat A)=\frac{1}{(2N)^2}\sum_{n=0}^{N-1}\text{Var}(x[n])=\frac{1}{4N^2}\cdot N \cdot \sigma^2=\frac{\sigma^2}{4N}\neq\frac{\sigma^2}{N}
$$
Although the variance still vanishes as $N \rightarrow \infty$, the estimator converges to $A/2$ rather than $A$, so it is not consistent.
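A simulation sketch of this failure mode (assuming NumPy; $A$, $\sigma^2$, and the trial count are illustrative) shows the estimate collapsing onto $A/2$ instead of $A$:

```python
import numpy as np

rng = np.random.default_rng(5)
A, sigma2, trials = 2.0, 1.0, 20_000  # illustrative values

for N in (10, 100, 1000):
    x = rng.normal(A, np.sqrt(sigma2), size=(trials, N))
    A_alt = x.sum(axis=1) / (2 * N)    # the alternative 1/(2N) estimator
    # The mean stays near A/2 = 1.0 (not A = 2.0) while the spread shrinks,
    # so the estimator converges to the wrong value.
    print(N, A_alt.mean(), A_alt.var(), sigma2 / (4 * N))
```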
P6
This problem illustrates what happens to an unbiased estimator when it undergoes a nonlinear transformation. In Problem 3, if we choose to estimate the unknown parameter $\theta = A^2$ by
$$\hat\theta=\left(\frac{1}{N}\sum_{n=0}^{N-1}x[n]\right)^2,$$
can we say that the estimator is unbiased? What happens as $N \rightarrow \infty$?
Solve
From Problem 4, the sample mean $\hat A$ satisfies
$$
E[\hat A]=A,\qquad \text{Var}(\hat A)=\frac{\sigma^2}{N}
$$
Check $E[\hat\theta]$: with $x[n]\sim\mathcal{N}(A, \sigma^2)$ and $\hat\theta = \hat A^2$,
$$
E[\hat\theta]=E\left[\left(\frac{1}{N}\sum_{n=0}^{N-1}x[n]\right)^2\right]
=E[\hat A^2]=\text{Var}(\hat A)+(E[\hat A])^2=\frac{\sigma^2}{N}+A^2
$$
$$\therefore \hat\theta \text{ is a biased estimator.}$$
Check $E[\hat\theta]$ when $N\rightarrow\infty$:
$$
E[\hat\theta]=\frac{\sigma^2}{N}+A^2\rightarrow A^2
$$
$$\therefore \hat\theta \text{ is asymptotically unbiased.}$$
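The bias term $\sigma^2/N$ can be seen shrinking in a short simulation sketch (assuming NumPy; $A$, $\sigma^2$, and the trial count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
A, sigma2, trials = 2.0, 4.0, 20_000  # illustrative values

for N in (5, 50, 500):
    x = rng.normal(A, np.sqrt(sigma2), size=(trials, N))
    theta_hat = x.mean(axis=1) ** 2    # (sample mean)^2 estimates A^2
    # The empirical mean of theta_hat tracks A^2 + sigma^2/N: biased for
    # finite N, but the bias vanishes as N grows.
    print(N, theta_hat.mean(), A ** 2 + sigma2 / N)
```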
Bonus
Unbiased Estimator for DC Level in White Gaussian Noise
Consider the observation
$$x[n]=A+w[n],\qquad n=0,1,\dots,N-1$$
where $A$ is the parameter to be estimated and $w[n]$ is WGN.

The parameter $A$ can take on any value in the interval $-\infty < A < \infty$.
Then, a reasonable estimator for the average value of $x[n]$ is
$$\hat A=\frac{1}{N}\sum_{n=0}^{N-1}x[n]$$
that is, the sample mean.
Due to the linearity of the expectation operator,
$$
E[\hat A]=E\left[\frac{1}{N}\sum_{n=0}^{N-1}x[n]\right]
=\frac{1}{N}\sum_{n=0}^{N-1}E[x[n]]
=\frac{1}{N}\sum_{n=0}^{N-1} A
=A
$$
for all $A$.
$$\therefore \text{The sample mean is unbiased.}$$
All content is based on the Detection and Estimation lecture by Prof. Eui-seok Hwang at GIST.