[DetnEst] Assignment 9

KBC · December 13, 2024

Detection and Estimation


P1

Show that the NP detector for a Gaussian rank-one signal, i.e., a Gaussian random process whose covariance matrix is \( \mathbf{C}_s = \sigma_A^2 \mathbf{h} \mathbf{h}^T \), embedded in WGN with variance \( \sigma^2 \), can be written as

\[ T'(\mathbf{x}) = \left( \sum_{n=0}^{N-1} x[n] h[n] \right)^2 \]

Also, determine \( P_{FA} \) and \( P_D \).
Hint: the test statistic is a scaled \( \chi_1^2 \) random variable under both \( \mathcal{H}_0 \) and \( \mathcal{H}_1 \).

Problem Setup

We are tasked with deriving the Neyman-Pearson (NP) detector for a Gaussian rank-one signal embedded in white Gaussian noise (WGN). The covariance matrix of the signal is given by:

\[ \mathbf{C}_s = \sigma_A^2 \mathbf{h} \mathbf{h}^T \]

Observations:

  • Signal Model:

    • Under \( \mathcal{H}_0 \): noise only, \( \mathbf{x} \sim \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I}) \)
    • Under \( \mathcal{H}_1 \): signal + noise, \( \mathbf{x} \sim \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I} + \sigma_A^2 \mathbf{h} \mathbf{h}^T) \)
  • Test Statistic: both hypotheses are zero-mean Gaussian with covariances \( \mathbf{C}_0 = \sigma^2 \mathbf{I} \) and \( \mathbf{C}_1 = \sigma^2 \mathbf{I} + \sigma_A^2 \mathbf{h} \mathbf{h}^T \), so the data-dependent part of the log-likelihood ratio is the quadratic form \( \tfrac{1}{2} \mathbf{x}^T ( \mathbf{C}_0^{-1} - \mathbf{C}_1^{-1} ) \mathbf{x} \). By the matrix inversion lemma,

    \[ \mathbf{C}_1^{-1} = \frac{1}{\sigma^2} \left( \mathbf{I} - \frac{\sigma_A^2 \mathbf{h} \mathbf{h}^T}{\sigma^2 + \sigma_A^2 \|\mathbf{h}\|^2} \right), \qquad \mathbf{x}^T \left( \mathbf{C}_0^{-1} - \mathbf{C}_1^{-1} \right) \mathbf{x} = \frac{\sigma_A^2}{\sigma^2 \left( \sigma^2 + \sigma_A^2 \|\mathbf{h}\|^2 \right)} \left( \mathbf{h}^T \mathbf{x} \right)^2. \]

    The positive scale factor and the constant terms can be absorbed into the threshold, so the NP detector reduces to comparing

    \[ T'(\mathbf{x}) = \left( \sum_{n=0}^{N-1} x[n] h[n] \right)^2 = \left( \mathbf{h}^T \mathbf{x} \right)^2 \]

    with a threshold \( \eta \).

Step 1: Distribution of the Test Statistic

Under \( \mathcal{H}_0 \):

  • Since \( \mathbf{x} \sim \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I}) \), the projection \( \mathbf{h}^T \mathbf{x} \) is Gaussian:

    \[ \mathbf{h}^T \mathbf{x} \sim \mathcal{N}(0, \sigma^2 \|\mathbf{h}\|^2) \]
  • Thus, the test statistic \( T'(\mathbf{x}) \) follows a scaled chi-squared distribution with 1 degree of freedom:

    \[ T'(\mathbf{x}) \sim \sigma^2 \|\mathbf{h}\|^2 \chi_1^2 \]

Under \( \mathcal{H}_1 \):

  • Since \( E[A] = 0 \), the projection \( \mathbf{h}^T \mathbf{x} \) is still zero mean:

    \[ E[\mathbf{h}^T \mathbf{x}] = 0 \]
  • The variance is:

    \[ \text{Var}(\mathbf{h}^T \mathbf{x}) = \mathbf{h}^T \left( \sigma^2 \mathbf{I} + \sigma_A^2 \mathbf{h} \mathbf{h}^T \right) \mathbf{h} = \sigma^2 \|\mathbf{h}\|^2 + \sigma_A^2 \|\mathbf{h}\|^4 \]
  • Hence \( \mathbf{h}^T \mathbf{x} \sim \mathcal{N}(0, \sigma^2 \|\mathbf{h}\|^2 + \sigma_A^2 \|\mathbf{h}\|^4) \), and \( T'(\mathbf{x}) \) is again a scaled (central) chi-squared random variable, only with a larger scale:

    \[ T'(\mathbf{x}) \sim \left( \sigma^2 \|\mathbf{h}\|^2 + \sigma_A^2 \|\mathbf{h}\|^4 \right) \chi_1^2 \]

    This matches the hint: because the signal itself is zero mean, the statistic is a scaled \( \chi_1^2 \) under both hypotheses, not a noncentral one.
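As a numerical sanity check on the two scale factors, the sketch below estimates the variance of \( \mathbf{h}^T \mathbf{x} \) under each hypothesis by Monte Carlo (the values of \( N \), \( \sigma^2 \), \( \sigma_A^2 \), and \( \mathbf{h} \) are arbitrary illustrations, not from the problem statement):

```python
import numpy as np

# Illustrative values (not from the problem statement)
rng = np.random.default_rng(0)
N, sigma2, sigmaA2 = 8, 1.5, 2.0
h = rng.standard_normal(N)
h2 = h @ h                      # ||h||^2
M = 200_000                     # Monte Carlo trials

# H0: x = w only; the projection h^T x has variance sigma^2 ||h||^2
w = rng.normal(0.0, np.sqrt(sigma2), size=(M, N))
y0 = w @ h

# H1: x = A h + w with A ~ N(0, sigma_A^2); the projection stays zero mean,
# but its variance grows to sigma^2 ||h||^2 + sigma_A^2 ||h||^4
A = rng.normal(0.0, np.sqrt(sigmaA2), size=M)
y1 = (A[:, None] * h + w) @ h

print(y0.var(), sigma2 * h2)
print(y1.var(), sigma2 * h2 + sigmaA2 * h2**2)
```

Both empirical variances should land within Monte Carlo error of the predicted scales, and both empirical means stay near zero.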

Step 2: Determine \( P_{FA} \) and \( P_D \)

Probability of False Alarm (\( P_{FA} \)):

  • Under \( \mathcal{H}_0 \), \( T'(\mathbf{x}) / (\sigma^2 \|\mathbf{h}\|^2) \sim \chi_1^2 \)
  • For a threshold \( \eta \), the probability of false alarm is:
    \[ P_{FA} = P(T'(\mathbf{x}) > \eta \mid \mathcal{H}_0) = Q_{\chi_1^2}\left( \frac{\eta}{\sigma^2 \|\mathbf{h}\|^2} \right) = 2Q\left( \sqrt{\frac{\eta}{\sigma^2 \|\mathbf{h}\|^2}} \right) \]

Probability of Detection (\( P_D \)):

  • Under \( \mathcal{H}_1 \), \( T'(\mathbf{x}) / (\sigma^2 \|\mathbf{h}\|^2 + \sigma_A^2 \|\mathbf{h}\|^4) \sim \chi_1^2 \)
  • For the same threshold \( \eta \), the probability of detection is:
    \[ P_D = P(T'(\mathbf{x}) > \eta \mid \mathcal{H}_1) = Q_{\chi_1^2}\left( \frac{\eta}{\sigma^2 \|\mathbf{h}\|^2 + \sigma_A^2 \|\mathbf{h}\|^4} \right) = 2Q\left( \sqrt{\frac{\eta}{\sigma^2 \|\mathbf{h}\|^2 + \sigma_A^2 \|\mathbf{h}\|^4}} \right) \]

Here \( Q_{\chi_1^2} \) is the right-tail probability (CCDF) of the central chi-squared distribution with one degree of freedom and \( Q \) is the standard Gaussian right-tail probability; writing \( \chi_1^2 = Z^2 \) with \( Z \sim \mathcal{N}(0, 1) \) gives \( Q_{\chi_1^2}(u) = P(|Z| > \sqrt{u}) = 2Q(\sqrt{u}) \).


Summary of Results

  1. The test statistic \( T'(\mathbf{x}) \) is:

    \[ T'(\mathbf{x}) = \left( \mathbf{h}^T \mathbf{x} \right)^2 \]
  2. Under \( \mathcal{H}_0 \):

    \[ T'(\mathbf{x}) \sim \sigma^2 \|\mathbf{h}\|^2 \chi_1^2 \]
  3. Under \( \mathcal{H}_1 \):

    \[ T'(\mathbf{x}) \sim \left( \sigma^2 \|\mathbf{h}\|^2 + \sigma_A^2 \|\mathbf{h}\|^4 \right) \chi_1^2 \]
  4. Probabilities:

    • False Alarm:
      \[ P_{FA} = 2Q\left( \sqrt{\frac{\eta}{\sigma^2 \|\mathbf{h}\|^2}} \right) \]
    • Detection:
      \[ P_D = 2Q\left( \sqrt{\frac{\eta}{\sigma^2 \|\mathbf{h}\|^2 + \sigma_A^2 \|\mathbf{h}\|^4}} \right) \]
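These closed-form probabilities can be checked against a Monte Carlo simulation. The sketch below uses arbitrary illustrative values for \( N \), \( \sigma^2 \), \( \sigma_A^2 \), \( \mathbf{h} \), and the threshold (none come from the problem statement), and evaluates the \( \chi_1^2 \) tail as \( \mathrm{erfc}(\sqrt{u/2}) \), which equals \( 2Q(\sqrt{u}) \):

```python
import math
import numpy as np

# Illustrative values (not from the problem statement)
rng = np.random.default_rng(1)
N, sigma2, sigmaA2 = 8, 1.5, 2.0
h = rng.standard_normal(N)
h2 = h @ h                            # ||h||^2
eta = 3.0 * sigma2 * h2               # an arbitrary threshold
M = 200_000

def chi2_1_sf(u):
    """Right tail of chi^2_1: P(Z^2 > u) = 2Q(sqrt(u)) = erfc(sqrt(u/2))."""
    return math.erfc(math.sqrt(u / 2.0))

# Monte Carlo estimates of P_FA and P_D
w = rng.normal(0.0, np.sqrt(sigma2), size=(M, N))
A = rng.normal(0.0, np.sqrt(sigmaA2), size=M)
T0 = (w @ h) ** 2                     # statistic under H0 (noise only)
T1 = ((A[:, None] * h + w) @ h) ** 2  # statistic under H1 (signal + noise)

Pfa_mc = (T0 > eta).mean()
Pd_mc = (T1 > eta).mean()
Pfa = chi2_1_sf(eta / (sigma2 * h2))
Pd = chi2_1_sf(eta / (sigma2 * h2 + sigmaA2 * h2**2))
print(Pfa_mc, Pfa)                    # both near erfc(sqrt(3/2)) ~ 0.083
print(Pd_mc, Pd)
```

The empirical rates should match the analytic expressions to Monte Carlo accuracy, with \( P_D > P_{FA} \) since the \( \mathcal{H}_1 \) scale is strictly larger.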

P2

A rank-one signal is a random signal whose mean is zero and whose covariance matrix has rank one. As such, the covariance matrix can be written as \( \mathbf{C}_s = \mathbf{u} \mathbf{u}^T \), where \( \mathbf{u} \) is an \( N \times 1 \) vector.
Show that the signal \( s[n] = A h[n] \) for \( n = 0, 1, \ldots, N-1 \), where \( h[n] \) is a deterministic sequence and \( A \) is a random variable with \( E(A) = 0 \) and \( \text{var}(A) = \sigma_A^2 \), is a rank-one signal.

Problem Statement

A rank-one signal satisfies the following:
1. Mean \( E[s[n]] = 0 \),
2. Covariance matrix \( \mathbf{C}_s \) of rank one:

\[ \mathbf{C}_s = \mathbf{u} \mathbf{u}^T \]

where \( \mathbf{u} \) is an \( N \times 1 \) vector.

We need to verify that the signal \( s[n] = A h[n] \) is a rank-one signal, where:

  • \( h[n] \) is a deterministic sequence,
  • \( A \) is a random variable with:
    \[ E[A] = 0 \quad \text{and} \quad \text{Var}(A) = \sigma_A^2. \]

Step 1: Expressing the Signal

The signal \( s[n] \) is defined as:

\[ s[n] = A h[n], \quad n = 0, 1, \ldots, N-1 \]

In vector form, this can be written as:

\[ \mathbf{s} = A \mathbf{h}, \]

where:

  • \( \mathbf{s} = [s[0], s[1], \ldots, s[N-1]]^T \),
  • \( \mathbf{h} = [h[0], h[1], \ldots, h[N-1]]^T \).

Step 2: Mean of the Signal

The mean of \( \mathbf{s} \) is given by:

\[ E[\mathbf{s}] = E[A \mathbf{h}]. \]

Since \( \mathbf{h} \) is deterministic and \( E[A] = 0 \):

\[ E[\mathbf{s}] = E[A] \mathbf{h} = 0 \cdot \mathbf{h} = \mathbf{0}. \]

Thus, the mean of the signal is zero:

\[ E[s[n]] = 0. \]

Step 3: Covariance Matrix of the Signal

The covariance matrix of \( \mathbf{s} \) is defined as:

\[ \mathbf{C}_s = E[\mathbf{s} \mathbf{s}^T]. \]

Substitute \( \mathbf{s} = A \mathbf{h} \):

\[ \mathbf{C}_s = E[(A \mathbf{h})(A \mathbf{h})^T]. \]

Since \( A \) is a scalar and \( \mathbf{h} \) is deterministic, the expectation factors:

\[ \mathbf{C}_s = E[A^2] \mathbf{h} \mathbf{h}^T. \]

Since \( E[A^2] = \text{Var}(A) = \sigma_A^2 \) (because \( E[A] = 0 \)):

\[ \mathbf{C}_s = \sigma_A^2 \mathbf{h} \mathbf{h}^T. \]

Step 4: Rank of the Covariance Matrix

The matrix \( \mathbf{C}_s = \sigma_A^2 \mathbf{h} \mathbf{h}^T \) is a rank-one matrix because it is formed as the outer product of the vector \( \mathbf{h} \) with itself. Specifically:
1. The rank of \( \mathbf{h} \mathbf{h}^T \) is 1 if \( \mathbf{h} \neq \mathbf{0} \), since every column is a scalar multiple of \( \mathbf{h} \).
2. Scaling by \( \sigma_A^2 > 0 \) does not change the rank.

Thus, \( \mathbf{C}_s \) is a rank-one matrix; it has the required form \( \mathbf{C}_s = \mathbf{u} \mathbf{u}^T \) with \( \mathbf{u} = \sigma_A \mathbf{h} \).
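The rank-one structure is easy to confirm numerically. The sketch below (with an arbitrary illustrative \( N \), \( \sigma_A^2 \), and \( \mathbf{h} \), none from the problem statement) draws many realizations of \( \mathbf{s} = A\mathbf{h} \), forms the sample covariance, and compares it with \( \sigma_A^2 \mathbf{h}\mathbf{h}^T \):

```python
import numpy as np

# Illustrative values (not from the problem statement)
rng = np.random.default_rng(0)
N, sigmaA2 = 6, 2.0
h = rng.standard_normal(N)
M = 500_000

# Draw M realizations of s = A h with E[A] = 0, var(A) = sigma_A^2
A = rng.normal(0.0, np.sqrt(sigmaA2), size=M)
S = A[:, None] * h                   # row m is the m-th realization of s
C_hat = S.T @ S / M                  # sample covariance (mean is zero)
C_theory = sigmaA2 * np.outer(h, h)  # sigma_A^2 h h^T

print(np.linalg.matrix_rank(C_theory))  # 1: the outer product has rank one
print(np.abs(C_hat - C_theory).max())   # sampling error, shrinks with M
```

Note that every realization of \( \mathbf{s} \) is a scalar multiple of the same \( \mathbf{h} \), so the sample covariance is itself exactly rank one, regardless of \( M \).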


Conclusion

The signal \( s[n] = A h[n] \) satisfies:
1. \( E[s[n]] = 0 \),
2. The covariance matrix \( \mathbf{C}_s = \sigma_A^2 \mathbf{h} \mathbf{h}^T \) has rank one.

Therefore, \( s[n] = A h[n] \) is a rank-one signal.


P3

We observe two independent samples \( x[n] \) for \( n = 0, 1 \) from the exponential PDF:

\[ p(x[n]) = \begin{cases} \lambda \exp(-\lambda x[n]) & \text{for } x[n] > 0, \\ 0 & \text{for } x[n] < 0, \end{cases} \]

where \( \lambda \) is unknown and \( \lambda > 0 \).

The hypothesis testing problem is:

  • \( H_0: \lambda = \lambda_0 \)
  • \( H_1: \lambda > \lambda_0 \)

We aim to determine:
1. If a uniformly most powerful (UMP) test exists,
2. The test statistic \( T(x) \),
3. \( P_{FA} \) as a function of the threshold.


Step 1: Likelihood Ratio Test (LRT)

Joint PDF

The two samples \( x[0] \) and \( x[1] \) are independent and identically distributed (IID). The joint PDF is:

\[ p(x[0], x[1] \mid \lambda) = \lambda^2 \exp\left( -\lambda (x[0] + x[1]) \right), \quad x[0], x[1] > 0. \]

Likelihood Ratio

For any \( \lambda_1 > \lambda_0 \), the likelihood ratio (LR) is:

\[ \Lambda(x) = \frac{p(x[0], x[1] \mid \lambda_1)}{p(x[0], x[1] \mid \lambda_0)} = \frac{\lambda_1^2 \exp(-\lambda_1 (x[0] + x[1]))}{\lambda_0^2 \exp(-\lambda_0 (x[0] + x[1]))}. \]

Simplify:

\[ \Lambda(x) = \left( \frac{\lambda_1}{\lambda_0} \right)^2 \exp\left( -(\lambda_1 - \lambda_0)(x[0] + x[1]) \right). \]

Taking the logarithm (a monotonic transformation), we can write:

\[ \ln \Lambda(x) = 2 \ln\left( \frac{\lambda_1}{\lambda_0} \right) - (\lambda_1 - \lambda_0) S, \]

where \( S = x[0] + x[1] \) is the sufficient statistic.
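The monotonicity of the log-LR in \( S \) is the key fact for the UMP argument below, and it can be illustrated directly; the values of \( \lambda_0 \) and \( \lambda_1 \) here are arbitrary choices with \( \lambda_1 > \lambda_0 \), not from the problem statement:

```python
import numpy as np

# Illustrative values with lambda_1 > lambda_0 (not from the problem statement)
lam0, lam1 = 1.0, 2.5
S = np.linspace(0.1, 10.0, 200)     # grid of values of S = x[0] + x[1]

# ln Lambda(x) = 2 ln(lam1/lam0) - (lam1 - lam0) S
log_LR = 2 * np.log(lam1 / lam0) - (lam1 - lam0) * S

# Strictly decreasing in S: thresholding Lambda is equivalent to thresholding S
print(np.all(np.diff(log_LR) < 0))  # True
```

Because the slope \( -(\lambda_1 - \lambda_0) \) is negative for every \( \lambda_1 > \lambda_0 \), large values of \( \Lambda(x) \) always correspond to small values of \( S \).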


Step 2: Uniformly Most Powerful (UMP) Test

Neyman-Pearson Lemma

For the one-sided alternative \( H_1: \lambda > \lambda_0 \), a UMP test exists if, for every \( \lambda_1 > \lambda_0 \), the NP test of \( \lambda_0 \) against \( \lambda_1 \) reduces to the same decision rule, i.e., one that does not depend on \( \lambda_1 \). This holds here because the likelihood ratio is a monotonic function of the sufficient statistic \( S = x[0] + x[1] \).

From the likelihood ratio:

  • \( \Lambda(x) \) increases as \( S \) decreases, since \( \lambda_1 - \lambda_0 > 0 \).

Thus, comparing \( \Lambda(x) \) to a threshold is equivalent to comparing \( S \) to a threshold, whatever the value of \( \lambda_1 \). The UMP test therefore exists, and the test statistic is:

\[ T(x) = S = x[0] + x[1]. \]

Step 3: Test Threshold and Decision Rule

The decision rule for the UMP test is:

  • Decide \( H_1 \) (reject \( H_0 \)) if \( T(x) < \eta \), where \( \eta \) is the threshold. Small values of \( S \) are evidence for a larger \( \lambda \), since \( E[S] = 2/\lambda \).

Step 4: False Alarm Probability (\( P_{FA} \))

Under \( H_0 \), \( \lambda = \lambda_0 \), and the sufficient statistic \( S = x[0] + x[1] \) is the sum of two IID exponential random variables, which follows a Gamma (Erlang) distribution with shape 2:

\[ f_S(s \mid \lambda_0) = \lambda_0^2 s \exp(-\lambda_0 s), \quad s > 0. \]

The cumulative distribution function (CDF) is:

\[ F_S(s \mid \lambda_0) = 1 - \exp(-\lambda_0 s)(1 + \lambda_0 s). \]

Because we decide \( H_1 \) when \( T(x) < \eta \), the false alarm probability is the lower tail:

\[ P_{FA} = P(T(x) < \eta \mid H_0) = F_S(\eta \mid \lambda_0) = 1 - \exp(-\lambda_0 \eta)(1 + \lambda_0 \eta). \]
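This closed-form \( P_{FA} \) can be verified by simulation. The sketch below uses arbitrary illustrative values of \( \lambda_0 \) and \( \eta \) (not from the problem statement) and compares the empirical rate of \( S < \eta \) with the Erlang CDF:

```python
import numpy as np

# Illustrative values (not from the problem statement)
rng = np.random.default_rng(0)
lam0, eta = 1.0, 0.5
M = 500_000

# S = x[0] + x[1] with x[n] ~ Exp(lam0); decide H1 when S < eta
# (NumPy's exponential takes the scale parameter, i.e., the mean 1/lam0)
S = rng.exponential(1.0 / lam0, size=(M, 2)).sum(axis=1)
Pfa_mc = (S < eta).mean()
Pfa = 1.0 - np.exp(-lam0 * eta) * (1.0 + lam0 * eta)
print(Pfa_mc, Pfa)                  # both near 0.090
```

In practice one would invert the formula numerically to find the \( \eta \) that achieves a target \( P_{FA} \).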

Summary

  1. UMP Test: A UMP test exists.
  2. Test Statistic: \( T(x) = x[0] + x[1] \), deciding \( H_1 \) when \( T(x) < \eta \).
  3. False Alarm Probability:
    \[ P_{FA} = 1 - \exp(-\lambda_0 \eta)(1 + \lambda_0 \eta). \]

The threshold \( \eta \) can be set to achieve a desired \( P_{FA} \).
