P1
Let $X_0, X_1, X_2,\dots,X_{N-1}$ be a random sample of an exponential random variable $X$ with an unknown parameter $\alpha$, corresponding to the mean of $X$:

$$f_X(x;\alpha)=\frac{1}{\alpha}e^{-x/\alpha}$$

When $\alpha$ is to be estimated, find the Cramér-Rao lower bound (CRLB).
Solution
$$\text{CRLB}=\frac{1}{I(\alpha)}$$
Log Likelihood

$$\begin{aligned}
\ln p(\mathbf{x};\alpha)&=\sum_{i=0}^{N-1}\ln f_X(X_i;\alpha)\\
&=\sum_{i=0}^{N-1}\ln\left(\frac{1}{\alpha}e^{-X_i/\alpha}\right)\\
&=\sum_{i=0}^{N-1}\left(-\ln\alpha-\frac{X_i}{\alpha}\right)\\
&=-N\ln\alpha-\frac{1}{\alpha}\sum_{i=0}^{N-1}X_i
\end{aligned}$$
1st Derivative of Log Likelihood

$$\frac{\partial\ln p(\mathbf{x};\alpha)}{\partial\alpha}=-\frac{N}{\alpha}+\frac{1}{\alpha^2}\sum_{i=0}^{N-1}X_i$$
Fisher Information Function

$$I(\alpha)=-E\left[\frac{\partial^2 \ln p(\mathbf{x};\alpha)}{\partial\alpha^2}\right],\qquad
\frac{\partial^2\ln p(\mathbf{x};\alpha)}{\partial\alpha^2}=\frac{N}{\alpha^2}-\frac{2}{\alpha^3}\sum_{i=0}^{N-1} X_i$$
Since $E[X_i]=\alpha$,

$$I(\alpha)=-E\left[\frac{N}{\alpha^2}-\frac{2}{\alpha^3}\sum_{i=0}^{N-1}X_i\right]=-\frac{N}{\alpha^2}+\frac{2}{\alpha^3}\cdot N\alpha=\frac{N}{\alpha^2}
\qquad\therefore\ \text{CRLB}=\frac{\alpha^2}{N}$$
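As a quick numerical sanity check, here is a minimal sketch (the values of $\alpha$ and $N$, and the choice of the sample mean as estimator, are assumptions for illustration) comparing the empirical variance against $\alpha^2/N$:

```python
import numpy as np

# Monte Carlo sketch: the sample mean estimates the exponential mean alpha,
# and its variance should match the CRLB alpha^2 / N derived above.
rng = np.random.default_rng(0)
alpha, N, trials = 2.0, 50, 200_000

samples = rng.exponential(scale=alpha, size=(trials, N))
alpha_hat = samples.mean(axis=1)            # sample-mean estimator of alpha

print("empirical var :", alpha_hat.var())   # approx 0.08
print("CRLB alpha^2/N:", alpha**2 / N)      # exactly 0.08
```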
P2
Let $Y_0,Y_1,Y_2,\dots,Y_{N-1}$ be a random sample of a Gaussian random variable with mean $\alpha+\beta x_i$ and variance $1$, where the constants $x_0,x_1,\dots,x_{N-1}$ are known, whereas $\alpha$ and $\beta$ are unknown parameters. Derive the Fisher information matrix for the CRLB of $\theta=[\alpha\;\ \beta]^T$.
Solution
PDF
mean of $Y_i$: $\alpha+\beta x_i$, variance of $Y_i$: $1$

$$f_Y(y_i;\alpha,\beta)=\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}(y_i-\alpha-\beta x_i)^2\right)$$
Log Likelihood Function
$$\ln p(\mathbf{y};\alpha,\beta)=\sum_{i=0}^{N-1}\ln f_Y(y_i;\alpha,\beta)=-\frac{N}{2}\ln(2\pi)-\frac{1}{2}\sum_{i=0}^{N-1}(y_i-\alpha-\beta x_i)^2$$
1st Derivative with respect to $\alpha$

$$\frac{\partial \ln p(\mathbf{y};\alpha,\beta)}{\partial\alpha}=\sum_{i=0}^{N-1}(y_i-\alpha-\beta x_i)$$
1st Derivative with respect to $\beta$

$$\frac{\partial \ln p(\mathbf{y};\alpha,\beta)}{\partial\beta}=\sum_{i=0}^{N-1}(y_i-\alpha-\beta x_i)\,x_i$$
2nd Derivative with respect to $\alpha$

$$\frac{\partial^2 \ln p(\mathbf{y};\alpha,\beta)}{\partial\alpha^2}=-N$$
2nd Derivative with respect to $\beta$

$$\frac{\partial^2 \ln p(\mathbf{y};\alpha,\beta)}{\partial\beta^2}=-\sum_{i=0}^{N-1}x_i^2$$
Mixed 2nd Derivative with respect to $\alpha$ and $\beta$

$$\frac{\partial^2 \ln p(\mathbf{y};\alpha,\beta)}{\partial\alpha\,\partial\beta}=-\sum_{i=0}^{N-1}x_i$$
Fisher Information Matrix

$$I(\theta)=-E\left[\frac{\partial^2\ln p(\mathbf{y};\theta)}{\partial\theta\,\partial\theta^T}\right]=\begin{bmatrix} N & \sum_{i=0}^{N-1} x_i \\[0.2cm] \sum_{i=0}^{N-1} x_i & \sum_{i=0}^{N-1} x_i^2 \end{bmatrix}$$

(The second derivatives are deterministic, so taking $-E[\cdot]$ simply flips their signs; the entries of $I(\theta)$ are positive.)
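A minimal sketch of how one might evaluate this matrix numerically; the regressors $x_i$ and the value of $N$ below are assumptions for illustration. The diagonal of $I(\theta)^{-1}$ gives the CRLBs for $\alpha$ and $\beta$:

```python
import numpy as np

# Build the 2x2 Fisher information matrix above for assumed regressors x_i,
# then read the CRLBs off the diagonal of its inverse.
N = 20
x = np.arange(N, dtype=float)        # assumed known constants x_0..x_{N-1}

I = np.array([[N,        x.sum()],
              [x.sum(),  (x**2).sum()]])
crlb = np.linalg.inv(I)              # CRLB matrix for theta = [alpha, beta]^T

print("CRLB(alpha):", crlb[0, 0])
print("CRLB(beta) :", crlb[1, 1])
```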
P3
The data $x[n]=Ar^n+w[n]$ for $n=0,1,\dots,N-1$ are observed, where $w[n]$ is WGN with variance $\sigma^2$ and $r>0$ is known. Find the CRLB for $A$. Show that an efficient estimator exists and find its variance. What happens to the variance as $N\rightarrow\infty$ for various values of $r$?
PDF
$$f(x[n];A)=\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(x[n]-Ar^n)^2}{2\sigma^2}\right)$$
Log Likelihood Function
$$\ln p(\mathbf{x};A)=\sum_{n=0}^{N-1}\ln f(x[n];A)=-\frac{N}{2}\ln(2\pi\sigma^2)-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}(x[n]-Ar^n)^2$$
1st Derivative of Log Likelihood Function
$$\frac{\partial\ln p(\mathbf{x};A)}{\partial A}=\frac{1}{\sigma^2}\sum_{n=0}^{N-1}(x[n]-Ar^n)r^n$$
2nd Derivative of Log Likelihood Function
$$\frac{\partial^2 \ln p(\mathbf{x};A)}{\partial A^2}=-\frac{1}{\sigma^2}\sum_{n=0}^{N-1}r^{2n}$$
Fisher Information Function
$$I(A)=\frac{1}{\sigma^2}\sum_{n=0}^{N-1}r^{2n}$$
CRLB
$$\text{var}(\hat A) \geq \frac{1}{I(A)}=\frac{\sigma^2}{\sum_{n=0}^{N-1}r^{2n}}$$
Efficient Estimator and Variance
The score factors as $\frac{\partial\ln p(\mathbf{x};A)}{\partial A}=I(A)\,(\hat A-A)$, which is exactly the CRLB equality condition, so an efficient estimator exists: it is the value of $A$ that sets the 1st derivative of the log likelihood to zero.

$$\hat A = \frac{\sum_{n=0}^{N-1}x[n]\,r^n}{\sum_{n=0}^{N-1}r^{2n}},\qquad \text{var}(\hat A)=\frac{1}{I(A)}=\frac{\sigma^2}{\sum_{n=0}^{N-1}r^{2n}}=\text{CRLB}$$
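A Monte Carlo sketch (the values of $A$, $r$, $\sigma$, and $N$ are assumptions for illustration) to check that the empirical variance of $\hat A$ matches the CRLB:

```python
import numpy as np

# Monte Carlo check that the efficient estimator A_hat attains the CRLB.
rng = np.random.default_rng(1)
A, r, sigma, N, trials = 3.0, 0.9, 1.0, 30, 100_000

rn = r ** np.arange(N)                      # r^n, n = 0..N-1
x = A * rn + sigma * rng.standard_normal((trials, N))
A_hat = x @ rn / (rn @ rn)                  # efficient estimator above

print("empirical var:", A_hat.var())
print("CRLB         :", sigma**2 / (rn @ rn))
```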
What happens as $N\rightarrow\infty$
$$\sum_{n=0}^{N-1}r^{2n}=\frac{1-r^{2N}}{1-r^2},\quad\text{for }r\neq1$$

When $r<1$ and $N\rightarrow\infty$, $r^{2N}\rightarrow0$, so the variance approaches a nonzero floor:

$$\text{var}(\hat A)=\frac{\sigma^2(1-r^2)}{1-r^{2N}}\rightarrow\sigma^2(1-r^2)$$

When $r=1$:

$$\sum_{n=0}^{N-1}r^{2n}=N,\qquad \text{var}(\hat A)=\frac{\sigma^2}{N}\rightarrow0$$

When $r>1$, $\sum_{n=0}^{N-1}r^{2n}=\frac{r^{2N}-1}{r^2-1}\rightarrow\infty$, so $\text{var}(\hat A)\rightarrow0$ exponentially fast.
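The three regimes can be seen numerically; this sketch evaluates the CRLB for a few assumed values of $r$ and growing $N$ (with $\sigma=1$):

```python
import numpy as np

# Evaluate CRLB = sigma^2 / sum r^{2n} for growing N in the three regimes.
sigma = 1.0
for r in (0.9, 1.0, 1.1):
    for N in (10, 100, 1000):
        crlb = sigma**2 / np.sum(r ** (2 * np.arange(N)))
        print(f"r={r:<4} N={N:<5} CRLB={crlb:.3e}")
# r=0.9 flattens near sigma^2 (1 - r^2) = 0.19; r=1 decays like 1/N;
# r=1.1 collapses toward 0 exponentially fast.
```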
P4
Prove that
$$\frac{1}{N}\sum_{n=0}^{N-1}\cos(4\pi f_0 n+2\phi)\approx0$$
What conditions on $f_0$ are required for this to hold? Note that
$$\sum_{n=0}^{N-1}\cos(\alpha n+\beta)=\text{Re}\left(\sum_{n=0}^{N-1}\exp[j(\alpha n+\beta)]\right)$$
and use the geometric progression sum formula.
Solution
Complex representation of cosine

$$\cos(\theta)=\text{Re}(e^{j\theta})$$

$$\sum_{n=0}^{N-1}\cos(4\pi f_0 n+2\phi)=\text{Re}\left(\sum_{n=0}^{N-1}\exp(j(4\pi f_0 n+2\phi))\right)=\text{Re}\left(e^{j2\phi}\sum_{n=0}^{N-1}e^{j4\pi f_0 n}\right)$$

The problem reduces to evaluating the sum $\sum_{n=0}^{N-1}e^{j4\pi f_0 n}$.
Applying the geometric series sum formula

$$\sum_{n=0}^{N-1}r^n=\frac{1-r^N}{1-r},\quad r\neq1$$

With $r=e^{j4\pi f_0}$:

$$\sum_{n=0}^{N-1}e^{j4\pi f_0 n}=\frac{1-e^{j4\pi f_0 N}}{1-e^{j4\pi f_0}}$$
Deriving the condition
Provided $e^{j4\pi f_0}\neq1$, i.e. $f_0$ is not a multiple of $\frac{1}{2}$, the magnitude of the geometric sum is bounded independently of $N$:

$$\left|\sum_{n=0}^{N-1}e^{j4\pi f_0 n}\right|=\left|\frac{1-e^{j4\pi f_0 N}}{1-e^{j4\pi f_0}}\right|=\left|\frac{\sin(2\pi f_0 N)}{\sin(2\pi f_0)}\right|\leq\frac{1}{|\sin(2\pi f_0)|}$$

Therefore

$$\frac{1}{N}\left|\sum_{n=0}^{N-1}\cos(4\pi f_0 n+2\phi)\right|\leq\frac{1}{N\,|\sin(2\pi f_0)|}\rightarrow0$$

so the approximation holds as long as $f_0$ is not equal to (or very close to) $0$ or $\frac{1}{2}$. Near those values $\sin(2\pi f_0)\approx0$, the sum grows like $N$, and the average does not vanish.
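A quick numerical check of the condition (the phase $\phi$ and the particular $f_0$ values are arbitrary assumptions):

```python
import numpy as np

# Evaluate (1/N) * sum cos(4*pi*f0*n + 2*phi) for several f0.
N, phi = 1000, 0.3
n = np.arange(N)
for f0 in (0.13, 0.25, 0.49, 0.5):
    avg = np.mean(np.cos(4 * np.pi * f0 * n + 2 * phi))
    print(f"f0={f0:<5} average={avg:+.5f}")
# Away from 0 and 1/2 the average is ~0; at f0 = 0.5 every term equals
# cos(2*pi*n + 2*phi) = cos(2*phi), so the average stays at cos(2*phi).
```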
P5
We observe two samples of a DC level in correlated Gaussian noise

$$x[0]=A+w[0],\qquad x[1]=A+w[1]$$

where $\mathbf{w}=[w[0]\;\ w[1]]^T$ is zero mean with covariance matrix

$$\mathbf{C} = \sigma^2 \begin{bmatrix} 1 & \rho \\ \rho & 1 \end{bmatrix}$$
The parameter $\rho$ is the correlation coefficient between $w[0]$ and $w[1]$.
Compute the CRLB for $A$ and compare it to the case when $w[n]$ is WGN ($\rho=0$).
Also, explain what happens when $\rho \rightarrow \pm1$.
Finally, comment on the additivity property of the Fisher information for nonindependent observations.
Solution

$$x[0]=A+w[0],\qquad x[1]=A+w[1]$$
CRLB

$$I(A)=\frac{\partial\boldsymbol\mu^T}{\partial A}\mathbf{C}^{-1}\frac{\partial\boldsymbol\mu}{\partial A},\qquad
\boldsymbol\mu=[A\;\ A]^T\;\Rightarrow\;\frac{\partial\boldsymbol\mu}{\partial A}=[1\;\ 1]^T$$

$$\mathbf{C}^{-1}=\frac{1}{\sigma^2(1-\rho^2)}\begin{bmatrix} 1 & -\rho \\ -\rho & 1 \end{bmatrix}$$

$$I(A)=[1\;\ 1]\,\frac{1}{\sigma^2(1-\rho^2)}\begin{bmatrix} 1 & -\rho \\ -\rho & 1 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix}
=\frac{2-2\rho}{\sigma^2(1-\rho^2)}=\frac{2}{\sigma^2(1+\rho)}$$

$$\text{Var}(\hat A)\geq\frac{\sigma^2(1+\rho)}{2}=\text{CRLB}$$
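A small sketch checking the quadratic-form computation against the closed form $2/(\sigma^2(1+\rho))$ for a few assumed $\rho$ values:

```python
import numpy as np

# Check I(A) = dmu^T C^{-1} dmu against 2 / (sigma^2 (1 + rho)).
sigma = 1.0
dmu = np.ones(2)                       # d mu / dA = [1, 1]^T
for rho in (-0.9, 0.0, 0.5, 0.9):
    C = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
    I_A = dmu @ np.linalg.solve(C, dmu)
    print(f"rho={rho:+.1f}  I(A)={I_A:.4f}  closed form={2/(sigma**2*(1+rho)):.4f}")
```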
$\rho=0$ (WGN)

$$I(A)=\frac{2}{\sigma^2},\qquad \text{Var}(\hat A) \geq \frac{\sigma^2}{2}=\text{CRLB}$$

This matches the usual WGN result $\sigma^2/N$ with $N=2$: for independent samples the Fisher information adds, $I(A)=2\cdot\frac{1}{\sigma^2}$.
$\rho \rightarrow \pm 1$
If $\rho \rightarrow 1$:

$$I(A)=\frac{2}{\sigma^2(1+\rho)}\rightarrow\frac{1}{\sigma^2},\qquad \text{Var}(\hat A)\geq\sigma^2$$

The two samples share the same noise, so the second observation adds no information: the CRLB is the same as for a single sample.

If $\rho \rightarrow -1$:

$$I(A)=\frac{2}{\sigma^2(1+\rho)}\rightarrow\infty,\qquad \text{CRLB}\rightarrow0$$

With $\rho=-1$ the noises cancel, $x[0]+x[1]=2A$ exactly, so $A$ can be determined perfectly.
Conclusion

The CRLB $\frac{\sigma^2(1+\rho)}{2}$ increases monotonically with $\rho$: it is $\frac{\sigma^2}{2}$ for $\rho=0$, approaches $\sigma^2$ as $\rho\rightarrow1$, and approaches $0$ as $\rho\rightarrow-1$. Fisher information is additive only for independent observations: for $\rho\neq0$, $I(A)=\frac{2}{\sigma^2(1+\rho)}$ is not $2\cdot\frac{1}{\sigma^2}$, ranging from a single sample's worth of information as $\rho\rightarrow1$ to unbounded information as $\rho\rightarrow-1$.
P6
Consider a generalization of the line fitting problem described in Problem 4, termed polynomial or curve fitting.
The data model is

$$x[n]=\sum_{k=0}^{p-1}A_k n^k+w[n]$$

for $n=0,1,\dots,N-1$.
As before, $w[n]$ is WGN with variance $\sigma^2$.
It is desired to estimate $\{A_0,A_1,\dots,A_{p-1}\}$.
Find the Fisher Information matrix for this problem
Solution
$$\mu[n]=\sum_{k=0}^{p-1}A_k n^k,\qquad \frac{\partial\mu[n]}{\partial A_k}=n^k$$

$$I(\mathbf{A})=\frac{1}{\sigma^2}\sum_{n=0}^{N-1}\frac{\partial\mu[n]}{\partial \mathbf{A}}\frac{\partial\mu[n]}{\partial \mathbf{A}}^T
=\frac{1}{\sigma^2}\sum_{n=0}^{N-1}\begin{bmatrix} 1 & n & n^2 & n^3 & \cdots & n^{p-1} \\ n & n^2 & n^3 & n^4 & \cdots & n^p \\ n^2 & n^3 & n^4 & n^5 & \cdots & n^{p+1} \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ n^{p-1} & n^p & n^{p+1} & n^{p+2} & \cdots & n^{2p-2} \end{bmatrix}$$

$$[I(\mathbf{A})]_{i,j}=\frac{1}{\sigma^2}\sum_{n=0}^{N-1}n^{i+j},\qquad i,j=0,1,\dots,p-1$$
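A sketch of building this matrix numerically (the values of $p$, $N$, $\sigma$ are assumptions); note $I(\mathbf{A})=\frac{1}{\sigma^2}\mathbf{H}^T\mathbf{H}$ where $\mathbf{H}$ is the Vandermonde matrix with entries $n^k$:

```python
import numpy as np

# Build the p x p Fisher information matrix [I]_{ij} = (1/sigma^2) sum_n n^{i+j}.
p, N, sigma = 3, 50, 1.0
n = np.arange(N, dtype=float)

H = np.vander(n, p, increasing=True)     # H[n, k] = n^k
I = H.T @ H / sigma**2                   # [I]_{ij} = sum_n n^{i+j} / sigma^2

print(np.allclose(I[1, 2], (n**3).sum() / sigma**2))  # True: entry (1,2) is sum n^3
```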
P7
It is desired to estimate the total power $P_0$ of a WSS random process, whose PSD is given as

$$P_{xx}(f)=P_0Q(f)$$

where

$$\int_{-\frac{1}{2}}^{\frac{1}{2}}Q(f)\,df=1$$

and $Q(f)$ is known.
If $N$ observations are available, find the CRLB for the total power using the exact form as well as the asymptotic approximation, and compare.
Solution
$$P_0=\int_{-\frac{1}{2}}^{\frac{1}{2}}P_{xx}(f)\,df=P_0\int_{-1/2}^{1/2}Q(f)\,df=P_0\cdot 1$$
Asymptotic Fisher Information

Using the asymptotic Fisher information for a Gaussian WSS process,

$$I(P_0)=\frac{N}{2}\int_{-1/2}^{1/2}\left(\frac{\partial\ln P_{xx}(f)}{\partial P_0}\right)^2 df$$

With $P_{xx}(f)=P_0Q(f)$, $\ln P_{xx}(f)=\ln P_0+\ln Q(f)$, so $\frac{\partial\ln P_{xx}(f)}{\partial P_0}=\frac{1}{P_0}$ and

$$I(P_0)=\frac{N}{2}\int_{-1/2}^{1/2}\frac{1}{P_0^2}\,df=\frac{N}{2P_0^2}
\qquad\Rightarrow\qquad \text{Var}(\hat P_0)\geq\frac{2P_0^2}{N}=\text{CRLB}$$

Exact form

Writing the observations as a zero-mean Gaussian vector $\mathbf{x}\sim\mathcal{N}(\mathbf{0},\mathbf{C}(P_0))$ with $\mathbf{C}(P_0)=P_0\mathbf{R}$, where the $N\times N$ autocorrelation matrix $\mathbf{R}$ is generated by $Q(f)$ and does not depend on $P_0$, the exact Fisher information is

$$I(P_0)=\frac{1}{2}\text{tr}\left[\left(\mathbf{C}^{-1}\frac{\partial\mathbf{C}}{\partial P_0}\right)^{2}\right]=\frac{1}{2}\text{tr}\left[\left(\frac{1}{P_0}\mathbf{I}\right)^{2}\right]=\frac{N}{2P_0^2}$$

Comparison: the exact and asymptotic CRLBs coincide here, $\text{Var}(\hat P_0)\geq\frac{2P_0^2}{N}$, because $\ln P_{xx}$ depends on $P_0$ only through the additive term $\ln P_0$ (equivalently, $\mathbf{R}$ cancels in $\mathbf{C}^{-1}\partial\mathbf{C}/\partial P_0$), so the asymptotic approximation introduces no error.
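To make the exact computation concrete, the sketch below evaluates $\frac{1}{2}\text{tr}[(\mathbf{C}^{-1}\partial\mathbf{C}/\partial P_0)^2]$ for the assumed special case $Q(f)=1$ (white, so $\mathbf{R}=\mathbf{I}$); since $\mathbf{R}$ cancels, any valid $Q$ gives the same value:

```python
import numpy as np

# Exact Fisher information I(P0) = (1/2) tr[(C^{-1} dC/dP0)^2] for C = P0 * R.
# Q(f) = 1 (R = I) is an assumed example; R cancels, so the result is general.
P0, N = 4.0, 16
R = np.eye(N)                       # autocorrelation matrix for Q(f) = 1
C = P0 * R
dC = R                              # dC / dP0

M = np.linalg.solve(C, dC)          # equals (1/P0) * I
I_exact = 0.5 * np.trace(M @ M)

print("exact I(P0):", I_exact)           # 0.5
print("N/(2 P0^2) :", N / (2 * P0**2))   # 0.5
```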