Lecture 10. A) Finding UMVU Estimators


Finding UMVU estimators

In the previous lecture we introduced the Rao-Blackwell theorem, which can be used to reduce the variance of an existing estimator while preserving its mean. This immediately implies that a UMVU estimator must be a function of a sufficient statistic: otherwise, one could Rao-Blackwellize it by conditioning on a sufficient statistic (one always exists, since the whole sample is itself sufficient) and obtain a more efficient estimator, because conditioning strictly reduces the variance unless the estimator is already a function of the conditioning statistic.
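To make this concrete, here is a minimal simulation sketch (the Bernoulli setting, sample size, and parameter value are chosen purely for illustration, not taken from the lecture). We start from the crude unbiased estimator [math]\widehat{\theta}=X_1[/math] and Rao-Blackwellize it by conditioning on the sufficient statistic [math]T=\sum_{i=1}^{n}X_i[/math]; for i.i.d. Bernoulli data this conditional expectation is the sample mean [math]T/n[/math], and the simulation confirms that both estimators share the same mean while the Rao-Blackwellized one has far smaller variance.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 20, 0.3, 100_000          # illustrative values only

# Draw `reps` independent samples of size n from Bernoulli(p).
x = rng.binomial(1, p, size=(reps, n))

# Crude unbiased estimator: the first observation alone.
theta_hat = x[:, 0]

# Rao-Blackwellized estimator: E[X_1 | T] with T = sum(X_i) equals T/n,
# the sample mean (given the number of successes, each observation is
# equally likely to be one of them).
theta_tilde = x.sum(axis=1) / n

print("means    :", theta_hat.mean(), theta_tilde.mean())   # both close to p
print("variances:", theta_hat.var(), theta_tilde.var())     # ~p(1-p) vs ~p(1-p)/n
</syntaxhighlight>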

While the Rao-Blackwell theorem lets us improve on an existing estimator, it does not by itself tell us how to produce a UMVU estimator. It turns out that, under certain conditions, Rao-Blackwellization does produce the unique UMVU estimator. These conditions are given by the Lehmann-Scheffé Theorem.

Lehmann-Scheffé Theorem

Let [math]T[/math] be a sufficient and complete statistic for [math]\theta[/math]. Then, if [math]\widehat{\theta}[/math] is an unbiased estimator of [math]\theta[/math],

[math]\widetilde{\theta}=E_{\theta}\left(\left.\widehat{\theta}\right|T\right)[/math] is the unique UMVU estimator.

We will define what it means for a statistic to be complete soon.
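As a preview of how the theorem is used (taking for granted, for now, that the statistic below is indeed complete), let [math]X_1,\ldots,X_n[/math] be i.i.d. Bernoulli([math]p[/math]) and take [math]T=\sum_{i=1}^{n}X_i[/math], which is sufficient and complete for [math]p[/math]. The crude estimator [math]\widehat{p}=X_1[/math] is unbiased, and conditioning on [math]T[/math] gives

[math]\widetilde{p}=E_{p}\left(\left.X_1\right|T\right)=\frac{T}{n}=\bar{X},[/math]

because, by symmetry, [math]E_{p}\left(\left.X_i\right|T\right)[/math] is the same for every [math]i[/math] and the [math]n[/math] terms sum to [math]T[/math]. By the Lehmann-Scheffé Theorem, the sample mean [math]\bar{X}[/math] is therefore the unique UMVU estimator of [math]p[/math].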

A question that immediately arises is whether UMVU estimators are always unique. The answer is yes. The proof is by contradiction: if two different UMVU estimators existed, their arithmetic mean would also be unbiased, and a direct variance calculation (sketched below) shows that its variance cannot exceed the common minimum, with equality only if the two estimators coincide almost surely.
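In slightly more detail, suppose [math]\widehat{\theta}_1[/math] and [math]\widehat{\theta}_2[/math] were both UMVU, so that both are unbiased with the same minimal variance [math]V[/math]. Their average is unbiased as well, and

[math]\mathrm{Var}\left(\frac{\widehat{\theta}_1+\widehat{\theta}_2}{2}\right)=\frac{1}{4}\mathrm{Var}\left(\widehat{\theta}_1\right)+\frac{1}{4}\mathrm{Var}\left(\widehat{\theta}_2\right)+\frac{1}{2}\mathrm{Cov}\left(\widehat{\theta}_1,\widehat{\theta}_2\right)\leq\frac{V}{4}+\frac{V}{4}+\frac{V}{2}=V,[/math]

where the inequality follows from the Cauchy-Schwarz bound [math]\mathrm{Cov}\left(\widehat{\theta}_1,\widehat{\theta}_2\right)\leq V[/math]. A strict inequality would contradict the minimality of [math]V[/math], while equality forces perfect correlation, which together with equal means and variances implies [math]\widehat{\theta}_1=\widehat{\theta}_2[/math] almost surely. Hence two distinct UMVU estimators cannot exist.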

The Lehmann-Scheffé Theorem thus says that Rao-Blackwellizing an unbiased estimator with respect to a sufficient and complete statistic yields the UMVU estimator.

The intuition is as follows: if Rao-Blackwellization based on a given statistic always yields the same estimator, no matter which unbiased estimator we start from, then that estimator must be the UMVU. Indeed, for any other candidate, we could Rao-Blackwellize it and obtain that same estimator, whose variance is no higher than the candidate's. Completeness of the statistic is precisely what guarantees this uniqueness.
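To see this uniqueness in action, here is a small sketch (again in the illustrative Bernoulli setting, and using [math]2X_1-X_2[/math] purely as a second, made-up unbiased estimator of [math]p[/math]). Enumerating all possible samples exactly, Rao-Blackwellizing either unbiased estimator on the complete sufficient statistic [math]T=\sum_{i}X_i[/math] produces the very same function of [math]T[/math], namely [math]T/n[/math].

<syntaxhighlight lang="python">
from itertools import product

import numpy as np

n, p = 5, 0.3  # small n so all 2^n samples can be enumerated exactly

# Every possible Bernoulli sample of size n and its probability.
samples = np.array(list(product([0, 1], repeat=n)))
successes = samples.sum(axis=1)
probs = p ** successes * (1 - p) ** (n - successes)

T = successes                              # sufficient (and complete) statistic
est1 = samples[:, 0]                       # X_1, unbiased for p
est2 = 2 * samples[:, 0] - samples[:, 1]   # 2*X_1 - X_2, also unbiased for p

def rao_blackwellize(est):
    """Return the exact map t -> E[est | T = t]."""
    return {t: float((probs[T == t] * est[T == t]).sum() / probs[T == t].sum())
            for t in range(n + 1)}

print(rao_blackwellize(est1))  # {0: 0.0, 1: 0.2, 2: 0.4, ...}, i.e. t/n
print(rao_blackwellize(est2))  # identical: both collapse to t/n
</syntaxhighlight>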