P1
Let $(X_0, X_1, \dots, X_{N-1})$ be $N$ observations of a normal random variable $X$ with mean $\mu$ and variance $\sigma^2$, where $\mu$ is unknown. Assume that $\mu$ is itself a normal random variable with mean $\mu_1$ and variance $\sigma_1^2$.
Find the Bayes estimate of $\mu$.
Solution
$$
\hat \mu=E(\mu|\text{x})=\int\mu\, p(\mu|\text{x})\,d\mu\\[0.2cm]
p(\mu|\text{x})=\frac{p(\text{x}|\mu)p(\mu)}{p(\text{x})}=\frac{p(\text{x}|\mu)p(\mu)}{\int p(\text{x}|\mu)p(\mu)d\mu}\\[0.2cm]
p(\text{x}|\mu)=\prod^{N-1}_{n=0} \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left[-\frac{1}{2\sigma^2}\left(x[n]-\mu\right)^2\right]=\frac{1}{(2\pi\sigma^2)^{N/2}}\exp\left[-\frac{1}{2\sigma^2}\sum^{N-1}_{n=0}(x[n]-\mu)^2\right]\\[0.3cm]
p(\mu)=\frac{1}{\sqrt{2\pi\sigma^2_1}}\exp\left[-\frac{1}{2\sigma^2_1}(\mu-\mu_1)^2\right]\\[0.2cm]
\rightarrow p(\mu|\text{x})=\frac{\frac{1}{(2\pi\sigma^2)^{N/2}\sqrt{2\pi\sigma^2_1}}\exp\left[-\frac{1}{2\sigma^2}\sum^{N-1}_{n=0}(x[n]-\mu)^2-\frac{1}{2\sigma^2_1}(\mu-\mu_1)^2\right]}{\int\frac{1}{(2\pi\sigma^2)^{N/2}\sqrt{2\pi\sigma^2_1}}\exp\left[-\frac{1}{2\sigma^2}\sum^{N-1}_{n=0}(x[n]-\mu)^2-\frac{1}{2\sigma^2_1}(\mu-\mu_1)^2\right]d\mu}\\[0.3cm]
-\frac{1}{2\sigma^2}\sum^{N-1}_{n=0}(x[n]-\mu)^2-\frac{1}{2\sigma^2_1}(\mu-\mu_1)^2\\[0.2cm]
=\mu^2\left(-\frac{N}{2\sigma^2}-\frac{1}{2\sigma^2_1}\right)+\mu\left(\frac{1}{\sigma^2}\sum^{N-1}_{n=0}x[n]+\frac{\mu_1}{\sigma^2_1}\right)-\frac{\sum x^2[n]}{2\sigma^2}-\frac{\mu^2_1}{2\sigma^2_1}\\[0.3cm]
\color{red}{\text{Let }\sigma^2_{\mu|\text{x}}=\frac{1}{\frac{N}{\sigma^2}+\frac{1}{\sigma^2_1}},\;\mu_{\mu|\text{x}}=\left(\frac{1}{\sigma^2}\sum^{N-1}_{n=0}x[n]+\frac{\mu_1}{\sigma^2_1}\right)\sigma^2_{\mu|\text{x}}}
$$
$$
=-\frac{1}{2}\frac{1}{\sigma^2_{\mu|\text{x}}}(\mu^2-2\mu_{\mu|\text{x}}\mu+\mu_{\mu|\text{x}}^2)\color{blue}{+\frac{1}{2}\cdot\frac{\mu^2_{\mu|\text{x}}}{\sigma^2_{\mu|\text{x}}}-\frac{\sum x^2[n]}{2\sigma^2}-\frac{\mu^2_1}{2\sigma^2_1}} \cdots \text{does not depend on }\mu
$$
$$
\rightarrow p(\mu|\text{x})=\frac{\exp\left[-\frac{1}{2\sigma^2_{\mu|\text{x}}}(\mu-\mu_{\mu|\text{x}})^2\right]}{\int\exp\left[-\frac{1}{2\sigma^2_{\mu|\text{x}}}(\mu-\mu_{\mu|\text{x}})^2\right]d\mu}=\frac{1}{\sqrt{2\pi\sigma^2_{\mu|\text{x}}}}\exp\left[-\frac{1}{2\sigma^2_{\mu|\text{x}}}(\mu-\mu_{\mu|\text{x}})^2\right]\cdots\text{Gaussian}\\[0.3cm]
\rightarrow \hat \mu=E[\mu|\text{x}]=\mu_{\mu|\text{x}}=\left(\frac{1}{\sigma^2}\sum^{N-1}_{n=0}x[n]+\frac{\mu_1}{\sigma^2_1}\right)\left(\frac{1}{\frac{N}{\sigma^2}+\frac{1}{\sigma^2_1}}\right)\\[0.3cm]
\therefore \hat \mu=\frac{\sigma^2_1\sum^{N-1}_{n=0}x[n]+\mu_1\sigma^2}{N\sigma^2_1+\sigma^2}
$$
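The closed-form estimate can be sanity-checked against a brute-force numerical posterior mean. The parameter values below ($\mu_1$, $\sigma^2$, $\sigma_1^2$, $N$) are illustrative assumptions, not from the problem:

```python
# Compare the closed-form Bayes estimate with a numerical posterior mean.
import numpy as np

rng = np.random.default_rng(0)
mu_true, sigma2, mu1, sigma1_2, N = 1.3, 2.0, 0.0, 4.0, 10  # hypothetical values
x = rng.normal(mu_true, np.sqrt(sigma2), N)

# Closed-form posterior mean derived above.
mu_hat = (sigma1_2 * x.sum() + mu1 * sigma2) / (N * sigma1_2 + sigma2)

# Numerical posterior mean: integrate mu * p(x|mu) p(mu) on a fine grid.
mu_grid = np.linspace(-10, 10, 200_001)
log_post = (-((x[:, None] - mu_grid) ** 2).sum(axis=0) / (2 * sigma2)
            - (mu_grid - mu1) ** 2 / (2 * sigma1_2))
w = np.exp(log_post - log_post.max())          # unnormalized posterior
mu_num = (mu_grid * w).sum() / w.sum()         # grid spacing cancels in the ratio

print(abs(mu_hat - mu_num))                    # difference should be tiny
```

For a diffuse prior ($\sigma_1^2 \gg \sigma^2/N$) the estimate approaches the sample mean, and for a sharp prior it approaches $\mu_1$, as the weighted-average form suggests.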
P2
Suppose our observations $y[n]$, $n=0,1,\dots,N-1$, are i.i.d. Laplacian (two-sided exponential) random variables with rate $\theta$,
$$
p(y|\theta)=\frac{\theta}{2}e^{-\theta|y|}
$$
where $\theta$ is unknown with a prior density
$$
p(\theta)=\begin{cases} \frac{1}{\theta} & 1\leq\theta\leq e\\ 0 & \text{otherwise}\end{cases}
$$
i) Find the MAP estimator of $\theta$.
ii) Find the MMSE estimator of $\theta$.
Solution
$$
\text{i) maximize }\ln p(\text{y}|\theta)+\ln p(\theta)\\[0.2cm]
=\ln\left[\frac{\theta^N}{2^N}\exp\left[-\theta\sum^{N-1}_{n=0}|y[n]|\right]\right]+\ln\frac{1}{\theta}\\[0.2cm]
=N\ln\theta-N\ln 2-\theta\sum^{N-1}_{n=0}|y[n]|-\ln\theta\\[0.2cm]
\frac{\partial}{\partial\theta}\left(\ln p(\text{y}|\theta)+\ln p(\theta)\right)=\frac{N}{\theta}-\sum^{N-1}_{n=0}|y[n]|-\frac{1}{\theta}=0\\[0.2cm]
\rightarrow \hat \theta=\frac{N-1}{\sum^{N-1}_{n=0}|y[n]|}
$$

This is the MAP estimate provided it lies in the prior's support $[1,e]$; the posterior is unimodal in $\theta$, so if the stationary point falls outside $[1,e]$ the MAP estimate is the nearer endpoint.
ii) For the posterior we take a single observation ($N=1$; for general $N$ the factor $\theta^{N-1}$ remains and the integrals involve incomplete gamma functions). Write $S=|y|$:

$$
p(\theta|y)=\frac{p(y|\theta)p(\theta)}{\int p(y|\theta)p(\theta)d\theta}
=\begin{cases}\dfrac{\frac{1}{2}e^{-\theta S}}{\int^e_1\frac{1}{2}e^{-\theta S}d\theta} & 1\leq\theta\leq e\\ 0 & \text{otherwise}\end{cases}
=\begin{cases}\dfrac{S\,e^{-\theta S}}{e^{-S}-e^{-eS}} & 1\leq\theta\leq e\\ 0 & \text{otherwise}\end{cases}
$$

since $\int^e_1\frac{1}{2}e^{-\theta S}d\theta=\frac{1}{2S}\left(e^{-S}-e^{-eS}\right)$. Then, integrating by parts,

$$
E(\theta|y)=\frac{S}{e^{-S}-e^{-eS}}\int^e_1\theta\, e^{-\theta S}d\theta\\[0.3cm]
\int^e_1\theta\, e^{-\theta S}d\theta=\left[-\frac{\theta}{S}e^{-\theta S}\right]^e_1+\frac{1}{S}\int^e_1 e^{-\theta S}d\theta
=\frac{1}{S}\left(e^{-S}-e^{1-eS}\right)+\frac{1}{S^2}\left(e^{-S}-e^{-eS}\right)\\[0.3cm]
\rightarrow \hat\theta=E(\theta|y)=\frac{e^{-S}-e^{1-eS}}{e^{-S}-e^{-eS}}+\frac{1}{S}\cdots\text{MMSE}
$$
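The single-observation posterior mean can be checked by numerical integration; $S = 1.7$ below is an illustrative value of $|y|$, not from the problem:

```python
# Numerical sanity check of the single-observation posterior mean:
# p(theta|y) ∝ (theta/2) e^{-theta*S} * (1/theta) = (1/2) e^{-theta*S} on [1, e].
import numpy as np

S = 1.7                                     # hypothetical |y|
theta = np.linspace(1.0, np.e, 1_000_001)
w = np.exp(-theta * S)                      # unnormalized posterior (constants cancel)

def trapezoid(y, x):
    """Composite trapezoidal rule (avoids NumPy version differences)."""
    return float(((y[1:] + y[:-1]) * np.diff(x)).sum() / 2)

mmse_num = trapezoid(theta * w, theta) / trapezoid(w, theta)

# Closed form obtained by integration by parts.
mmse_cf = ((np.exp(-S) - np.exp(1 - np.e * S))
           / (np.exp(-S) - np.exp(-np.e * S)) + 1 / S)

print(mmse_num, mmse_cf)                    # the two values should agree closely
```

Note the estimate always lands inside $[1, e]$, as a posterior mean over that support must.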
P3
The data $x[n]$ for $n=0,1,\dots,N-1$ are observed, each sample having the conditional PDF
$$
p(x[n]|\theta)=\begin{cases}\exp[-(x[n]-\theta)] & x[n]>\theta\\ 0 & x[n]<\theta\end{cases}
$$
and conditioned on $\theta$ the observations are independent. The prior PDF is
$$
p(\theta)=\begin{cases}\exp(-\theta) & \theta>0\\ 0 & \theta<0\end{cases}
$$
Find the MMSE estimator of $\theta$.
Solution
$$
p(\text{x}|\theta)=\exp\left[-\sum^{N-1}_{n=0}x[n]+N\theta\right]\cdot u[\min(x[n])-\theta]\\[0.2cm]
p(\theta)=\exp(-\theta)u[\theta]\\[0.3cm]
\rightarrow p(\text{x}|\theta)\, p(\theta)=\exp\left[-\sum^{N-1}_{n=0}x[n]+(N-1)\theta\right]\cdot u[\min(x[n])-\theta]\,u[\theta]\\[0.3cm]
\rightarrow p(\theta|\text{x})=\frac{p(\text{x}|\theta)p(\theta)}{\int p(\text{x}|\theta)p(\theta)d\theta}=\frac{\exp[(N-1)\theta]\,u[\min(x[n])-\theta]\,u[\theta]}{\int^{\min x[n]}_0\exp[(N-1)\theta]d\theta}
=\frac{e^{(N-1)\theta}\,u[\min(x[n])-\theta]\,u[\theta]}{\frac{1}{N-1}\left(e^{(N-1)\min x[n]}-1\right)}\\[0.3cm]
\rightarrow E[\theta|\text{x}]=\int^{\min x[n]}_0\theta\cdot \frac{(N-1)\, e^{(N-1)\theta}}{e^{(N-1)\min x[n]}-1}d\theta
=\frac{N-1}{e^{(N-1)\min x[n]}-1}\int^{\min x[n]}_0\theta\, e^{(N-1)\theta}d\theta\\[0.3cm]
=\frac{N-1}{e^{(N-1)\min x[n]}-1}\left[\left.\frac{\theta}{N-1}e^{(N-1)\theta}\right|^{\min x[n]}_0-\int^{\min x[n]}_0 \frac{1}{N-1}e^{(N-1)\theta}d\theta\right]\\[0.3cm]
=\frac{N-1}{e^{(N-1)\min x[n]}-1}\left[\frac{\min x[n]}{N-1}e^{(N-1)\min x[n]}-\frac{1}{(N-1)^2}e^{(N-1)\min x[n]}+\frac{1}{(N-1)^2}\right]\\[0.3cm]
=\frac{\min x[n]\,e^{(N-1) \min x[n]}}{e^{(N-1) \min x[n]}-1}-\frac{e^{(N-1)\min x[n]}}{(N-1)\left(e^{(N-1)\min x[n] }-1\right)}+\frac{1}{(N-1)\left(e^{(N-1)\min x[n]}-1\right)}\\[0.3cm]
=\frac{\min x[n]\,e^{(N-1)\min x[n]}}{e^{(N-1)\min x[n]}-1}-\frac{1}{N-1}\\[0.3cm]
=\frac{\min x[n]}{1-e^{-(N-1)\min x[n]}}-\frac{1}{N-1}
$$
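The final closed form can be verified numerically against the posterior $p(\theta|\text{x}) \propto e^{(N-1)\theta}$ on $[0, \min x[n]]$; the values of $N$ and $\min x[n]$ below are illustrative assumptions:

```python
# Numerical sanity check of the P3 MMSE estimator.
import numpy as np

N, M = 5, 0.8                 # hypothetical N and min(x[n])
a = N - 1
theta = np.linspace(0.0, M, 1_000_001)
w = np.exp(a * theta)         # unnormalized posterior on [0, min x[n]]

def trapezoid(y, x):
    """Composite trapezoidal rule (avoids NumPy version differences)."""
    return float(((y[1:] + y[:-1]) * np.diff(x)).sum() / 2)

mmse_num = trapezoid(theta * w, theta) / trapezoid(w, theta)
mmse_cf = M / (1 - np.exp(-a * M)) - 1 / a   # closed form derived above

print(mmse_num, mmse_cf)      # the two values should agree closely
```

Since the posterior weight grows toward $\theta = \min x[n]$, the estimate sits below but near $\min x[n]$, which the check confirms.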