In the linear model, the MVUE was easy to find: compute the covariance matrix and the CRLB, and if you got lucky you could write down the MVUE, which would also be an efficient estimator (meeting the CRLB). More generally, the MVUE may exist even when no efficient estimator does. A systematic way of determining the MVUE, if it exists, is introduced next.
Step 1: Find a sufficient statistic for the parameter to be estimated, using the Neyman-Fisher factorization theorem.
Step 2: Check whether the sufficient statistic is also complete. If it is not complete, we can say nothing more about the MVUE.
Step 3: Find the MVUE in one of two ways using the Rao-Blackwell-Lehmann-Scheffe (RBLS) theorem:
- find a function of the sufficient statistic that yields an unbiased estimator; by completeness of the statistic, this will yield the MVUE, or
- take any unbiased estimator and condition it on the sufficient statistic.
Problem: the MVUE often does not exist or cannot be found!
Solution: if the PDF is known, then the MLE can always be used!
(The MLE is one of the most popular practical estimation methods.)
- The MLE is optimal for large enough data records, but not necessarily optimal for small data sizes.
- It applies even when the CRLB theorem or the RBLS theorem is not applicable (e.g., unknown variance), i.e., when neither the CRLB approach (which, when it applies, gives an efficient estimator) nor the MVUE-via-sufficient-statistics route (find a sufficient statistic by the Neyman-Fisher factorization theorem, verify completeness, then find a function of it that produces an unbiased estimator) can be carried through.
- When the MVUE cannot be found, we instead propose an estimator that is approximately optimal. There is no guarantee of optimality: better estimators may exist, but finding them may not be easy.
- The MLE may be biased, but it is asymptotically unbiased.
- When linearization works (large data records), the CRLB is achieved.
Likelihood interpretation: what is the probability that I would observe the data set I actually got?
The MLE is the value of θ that maximizes the likelihood function p(x; θ).
Note: equivalently, the MLE maximizes the log-likelihood function ln p(x; θ).
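As a concrete sketch of maximizing the log-likelihood, consider the classic DC-level-in-WGN model x[n] = A + w[n] (the specific values and variable names below are illustrative assumptions, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed model: DC level A in white Gaussian noise, x[n] = A + w[n]
A_true, sigma2, N = 1.0, 2.0, 1000
x = A_true + rng.normal(0.0, np.sqrt(sigma2), N)

# Log-likelihood of A (constants dropped): -(1/(2*sigma2)) * sum((x[n]-A)^2)
def log_likelihood(A):
    return -np.sum((x - A) ** 2) / (2 * sigma2)

# Setting the derivative with respect to A to zero gives the sample mean
A_mle = np.mean(x)

# The closed-form maximizer beats any nearby candidate value
assert log_likelihood(A_mle) >= log_likelihood(A_mle + 0.01)
print(A_mle)
```

Here maximizing the likelihood and maximizing the log-likelihood give the same estimate, since the logarithm is monotonic.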
General Analytical Procedure to Find the MLE

Differentiate the log-likelihood function with respect to θ, set the derivative to zero, and solve. We are still discussing classical estimation (θ is not a random variable).
- In Bayesian estimation, θ is treated as a random variable with a prior probability p(θ);
- there the MAP estimator maximizes p(x|θ)p(θ) (see Chap. 11).
Example: unknown variance, so the CRLB theorem and the RBLS theorem are not applicable. Differentiate the log-likelihood, set it to zero, and solve; finally, check the second derivative to determine whether the stationary point is a maximum or a minimum.
The MLE is asymptotically unbiased, asymptotically efficient (i.e., it achieves the CRLB), and asymptotically has a Gaussian PDF. If an efficient estimator exists, the ML procedure finds it! In that case the MLE achieves the CRLB exactly.
Asymptotic properties of the MLE: under regularity conditions, the MLE of the unknown parameter θ is asymptotically distributed (for large data records) according to θ̂ ~ N(θ, 1/I(θ)), where I(θ) is the Fisher information evaluated at the true value of θ, the unknown parameter.
Regularity conditions: existence of the derivatives of the log-likelihood function, and nonzero Fisher information. The proof rests on the central limit theorem.
Example: DC level in AWGN. Run one Monte Carlo (MC) simulation for each parameter value of interest; for WGN having unit variance the scenario is set by the SNR, so do one MC simulation for each SNR value and compare against the asymptotic properties of the MLE.
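A minimal Monte Carlo check of these asymptotic properties, again using the DC-level-in-WGN example (trial counts and values are my own assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed setup: estimate DC level A from N samples of A + WGN, many trials
A_true, sigma2, N, trials = 1.0, 1.0, 500, 2000
estimates = np.empty(trials)
for t in range(trials):
    x = A_true + rng.normal(0.0, np.sqrt(sigma2), N)
    estimates[t] = x.mean()          # MLE of the DC level

# CRLB = 1/I(A); Fisher information for this model is N/sigma2
crlb = sigma2 / N
print(estimates.mean())              # ~ A_true  (asymptotically unbiased)
print(estimates.var(), crlb)         # empirical variance ~ CRLB (efficient)
```

The histogram of `estimates` would also look Gaussian, matching the asymptotic PDF claim.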
For example, std → 1/2 implies var → 1/4.
MLE of the sinusoidal phase in WGN: the MVUE approach fails, so try the MLE. The MLE is the phase value that maximizes the likelihood, equivalently minimizes the least-squares cost J(φ). The MLE is a function of the sufficient statistics. The asymptotic PDF of the estimator then follows from the general asymptotic result.
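A sketch of the sinusoidal-phase MLE, assuming known amplitude and frequency (the specific values below are illustrative; the closed-form estimator uses the two sufficient statistics sum x·cos and sum x·sin):

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed model: x[n] = A*cos(2*pi*f0*n + phi) + w[n], A and f0 known
A, f0, phi_true, N = 1.0, 0.08, 0.6, 200
n = np.arange(N)
x = A * np.cos(2 * np.pi * f0 * n + phi_true) + rng.normal(0, 0.1, N)

# Minimizing J(phi) = sum (x[n] - A*cos(2*pi*f0*n + phi))^2 gives the
# estimator phi_hat = -arctan( sum x*sin / sum x*cos )
phi_hat = -np.arctan2(np.sum(x * np.sin(2 * np.pi * f0 * n)),
                      np.sum(x * np.cos(2 * np.pi * f0 * n)))
print(phi_hat)   # close to phi_true
```

Note that the data enter only through the two correlations, which is what "the MLE is a function of the sufficient statistics" means here.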
Given the PDF p(x; θ), but we want an estimate of the transformed parameter α = g(θ).
What is the MLE for α?
Two cases:
1. g is a one-to-one function: the MLE of α is simply α̂ = g(θ̂).
2. g is not a one-to-one function: we need to define a modified likelihood function
   p̄_T(x; α) = max over {θ : g(θ) = α} of p(x; θ).
MLE of α: maximizing p̄_T(x; α) results in α̂ = g(θ̂), i.e., the MLE of the transformed parameter is the transform of the MLE of the original parameter. This is the invariance property.
When g maps two values of θ to the same α, the MLE of α is the value that maximizes the maximum between the two branch likelihoods.
Invariance property of the MLE: the MLE of α = g(θ) is given by α̂ = g(θ̂), where θ̂ is the MLE of θ, found by maximizing p(x; θ). If g is not a one-to-one function, then α̂ maximizes the modified likelihood function p̄_T(x; α), defined as above.
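A quick illustration of the invariance property (the model and values are my own assumptions): for zero-mean WGN, the MLE of the variance σ² is (1/N)·sum x², and since g(σ²) = sqrt(σ²) is one-to-one on σ² > 0, the MLE of σ is just the transform of that estimate.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed data: zero-mean Gaussian with true sigma = 2
N = 10000
x = rng.normal(0.0, 2.0, N)

var_mle = np.mean(x ** 2)        # MLE of sigma^2 (maximizes p(x; sigma^2))
sigma_mle = np.sqrt(var_mle)     # MLE of sigma, by the invariance property
print(var_mle, sigma_mle)
```

No separate maximization over σ is needed; that is the practical content of invariance.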
Numerical determination of the MLE: a key advantage of the MLE is that we can find it for a given data set numerically.
- Grid search: works when the parameter is confined to a finite interval; it fails over an infinite interval.
- Iterative methods: needed for parameters that can span an infinite interval. They may converge only to a local maximum, so a good initial guess is important.
Starting from an initial estimate, there are convergence issues: the iteration may not converge, or may converge, but to a local maximum.
A good initial guess is needed (use a rough grid search to initialize, or use multiple initializations).
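A sketch of the "rough grid search to initialize, then refine" idea, using sinusoidal frequency estimation as an assumed example (the likelihood there, the periodogram, has many local maxima, so a blind iterative search easily gets stuck):

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed model: unit-amplitude sinusoid of unknown frequency in WGN
f_true, N = 0.123, 256
n = np.arange(N)
x = np.cos(2 * np.pi * f_true * n) + rng.normal(0, 0.5, N)

def periodogram(f):
    # Maximizing |sum x[n] e^{-j2pi f n}|^2 / N maximizes the likelihood over f
    return np.abs(np.sum(x * np.exp(-2j * np.pi * f * n))) ** 2 / N

# Step 1: rough grid search over (0, 0.5) to get a good initial estimate
grid = np.linspace(0.01, 0.49, 481)
f0 = grid[np.argmax([periodogram(f) for f in grid])]

# Step 2: refine locally around the coarse maximum
fine = np.linspace(f0 - 1e-3, f0 + 1e-3, 2001)
f_hat = fine[np.argmax([periodogram(f) for f in fine])]
print(f_hat)
```

Skipping step 1 and iterating from an arbitrary starting frequency would typically land on a sidelobe, i.e., a local maximum.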
Newton-Raphson method: iterate on the zero of the log-likelihood derivative using the Hessian. It may not converge; and even when it does, it finds a stationary point that is not necessarily the global maximum, possibly only a local maximum or even a local minimum. A good initial guess is important.
Method of scoring: replace the Hessian by its expectation, the negative Fisher information. Taking the expectation provides a more stable iteration.
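A minimal sketch of the method of scoring, using a Cauchy location parameter as an assumed example (chosen because its Fisher information is known in closed form: 1/2 per sample, so I(θ) = N/2; this example is my own, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed data: N Cauchy samples centered at theta_true
theta_true, N = 1.5, 400
x = theta_true + rng.standard_cauchy(N)

def score(theta):
    # Derivative of the Cauchy log-likelihood: sum 2(x-theta)/(1+(x-theta)^2)
    d = x - theta
    return np.sum(2 * d / (1 + d ** 2))

theta = np.median(x)         # robust initial guess (stand-in for a grid search)
for _ in range(50):
    # Scoring step: replace the Hessian by its expectation -I(theta) = -N/2
    theta = theta + score(theta) / (N / 2)
print(theta)
```

Plain Newton-Raphson would use the sample Hessian here, which can be near zero or positive for bad starting points; the constant expected Hessian is what makes the scoring iteration more stable.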
Asymptotic properties of the MLE (vector parameter): under regularity conditions, the MLE of the unknown parameter vector θ is asymptotically distributed (for large data records) according to θ̂ ~ N(θ, I⁻¹(θ)), where I(θ) is the Fisher information matrix evaluated at the true value of the unknown parameter. Hence the MLE is asymptotically unbiased and efficient; the proof again uses the central limit theorem. The invariance property also holds for the vector case.

EM (expectation-maximization) algorithm: built on complete- and incomplete-data concepts. The objective function to minimize decouples when expressed in terms of the complete data. The complete-to-incomplete data transformation is not unique. The MLE is the θ that maximizes the incomplete-data likelihood; the EM algorithm would like to maximize the complete-data likelihood, but the complete data are not available, so it maximizes the conditional expectation of the complete-data log-likelihood given the observed (incomplete) data.

[Figure: PSD and ACF of the observed process]
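A minimal EM sketch of the complete/incomplete-data idea, using a two-component Gaussian mixture as a stand-in example (known unit variances and equal weights; only the two means are estimated; this example is my own, not the superimposed-signal problem of the notes). The hidden component labels play the role of the complete data.

```python
import numpy as np

rng = np.random.default_rng(6)

# Assumed data: mixture of N(-2,1) and N(+2,1), labels z never observed
mu_true = np.array([-2.0, 2.0])
z = rng.integers(0, 2, 1000)              # hidden labels (complete data)
x = rng.normal(mu_true[z], 1.0)           # observed (incomplete) data

mu = np.array([-0.5, 0.5])                # initial guess
for _ in range(100):
    # E-step: posterior responsibility of component 1 for each sample
    w0 = np.exp(-0.5 * (x - mu[0]) ** 2)
    w1 = np.exp(-0.5 * (x - mu[1]) ** 2)
    r = w1 / (w0 + w1)
    # M-step: weighted means maximize the expected complete-data likelihood;
    # note the objective decouples into one update per component
    mu = np.array([np.sum((1 - r) * x) / np.sum(1 - r),
                   np.sum(r * x) / np.sum(r)])
print(mu)
```

The decoupling in the M-step is the point: with the (expected) complete data in hand, each component's parameter is updated independently.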
Likelihood function: for a known signal with unknown delay in WGN, the data-dependent term of the Gaussian log-likelihood is the correlation between the received data and the known signal. So, the MLE implementation is based on cross-correlation: "correlate" the received signal with the transmitted signal.
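A sketch of the correlator implementation for delay estimation (signal shape, lengths, and noise level below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed scenario: known transmitted signal s, received at unknown delay n0
s = rng.normal(0, 1, 64)                 # known transmitted signal
n0_true = 100                            # true integer delay in samples
x = rng.normal(0, 0.5, 300)              # received data: WGN background...
x[n0_true:n0_true + 64] += s             # ...plus the delayed signal

# MLE of the delay: correlate the received signal with the transmitted
# signal at every candidate lag and pick the peak
corr = np.array([np.dot(x[k:k + 64], s) for k in range(len(x) - 64)])
n0_hat = int(np.argmax(corr))
print(n0_hat)
```

The exhaustive lag search is exactly the finite-interval grid search discussed above, which is why the correlator is a practical MLE implementation.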
