Full Lecture 9


Point Estimation (cont.)

Example: Uniform

Suppose [math]X_{i}\overset{iid}{\sim}U\left(0,\theta\right)[/math] where [math]\theta\gt 0[/math] is unknown.

The likelihood function equals [math]L\left(\left.\theta\right|x_{1}..x_{n}\right)=\Pi_{i=1}^{n}f\left(\left.x_{i}\right|\theta\right)=\Pi_{i=1}^{n}\frac{1}{\theta}1\left(0\leq x_{i}\leq\theta\right)[/math]

Since [math]x_{i}[/math]’s are draws from [math]U\left(0,\theta\right)[/math], the [math]0\leq x_{i}[/math] constraint will always be satisfied. However, since we are uncertain about the true value of [math]\theta[/math], the upper constraint may be binding.

This yields the following likelihood: [math]\Pi_{i=1}^{n}\frac{1}{\theta}1\left(0\leq x_{i}\leq\theta\right)=\Pi_{i=1}^{n}\frac{1}{\theta}1\left(x_{i}\leq\theta\right)=\frac{1}{\theta^{n}}1\left(x_{\left(n\right)}\leq\theta\right)[/math]

Notice that [math]L\left(\left.\cdot\right|x_{1}..x_{n}\right)[/math] is not differentiable at [math]\theta=x_{\left(n\right)}[/math]. We separate the problem:

  • [math]L\left(\left.\cdot\right|x_{1}..x_{n}\right)=0[/math] if [math]\theta\lt x_{\left(n\right)}[/math]; this reflects the impossibility of observing a value above [math]\theta[/math].
  • [math]L\left(\left.\cdot\right|x_{1}..x_{n}\right)=\frac{1}{\theta^{n}}[/math] if [math]\theta\geq x_{\left(n\right)}[/math]; this is decreasing in [math]\theta[/math], so the constraint binds at the optimum and [math]\widehat{\theta}_{ML}=x_{\left(n\right)}[/math].

Notice that the maximum likelihood estimator differs from the method of moments estimator: [math]\widehat{\theta}_{ML}=x_{\left(n\right)}[/math], while [math]\widehat{\theta}_{MM}=2\overline{x}[/math].

Unlike with the method of moments, we can never obtain an estimate such that [math]x_{i}\gt \widehat{\theta}_{ML}[/math] for some observation [math]i[/math]: the maximum likelihood estimate is always at least as large as every observation.

However, as we will discuss later, there is also some bad news.

The fact that we can never obtain [math]\widehat{\theta}_{ML}\gt \theta_{0}[/math], where [math]\theta_{0}[/math] is the true value of parameter [math]\theta[/math], means that the maximum likelihood estimator systematically underestimates the true parameter value.
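The following is a minimal simulation sketch of this point (not part of the original lecture; it assumes NumPy, and the true [math]\theta_{0}[/math], sample size, and number of replications are arbitrary illustrative choices). It compares the sampling behavior of [math]\widehat{\theta}_{ML}=x_{\left(n\right)}[/math] and [math]\widehat{\theta}_{MM}=2\overline{x}[/math].

```python
import numpy as np

# Sketch: compare the ML and MM estimators of theta for U(0, theta).
# The true theta, sample size, and number of replications are arbitrary choices.
rng = np.random.default_rng(0)
theta0, n, reps = 2.0, 20, 100_000

x = rng.uniform(0.0, theta0, size=(reps, n))
theta_ml = x.max(axis=1)         # maximum likelihood: sample maximum
theta_mm = 2.0 * x.mean(axis=1)  # method of moments: twice the sample mean

print("mean of ML estimates:", theta_ml.mean())  # below theta0: downward bias
print("mean of MM estimates:", theta_mm.mean())  # approximately theta0: unbiased
```

With [math]n=20[/math], the average of the ML estimates should be close to [math]E\left(X_{\left(n\right)}\right)=\frac{n}{n+1}\theta_{0}\approx1.90[/math], while the average of the MM estimates should be close to [math]\theta_{0}=2[/math].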


Evaluating Estimators

A good estimator of [math]\theta[/math] is close to [math]\theta[/math] in some probabilistic sense. For reasons of convenience, the leading criterion is the mean squared error:

The mean squared error (MSE) of an estimator [math]\widehat{\theta}[/math] of [math]\theta\in\Theta\subseteq\mathbb{R}[/math] is a function of [math]\theta[/math] given by

[math]MSE_{\theta}\left(\widehat{\theta}\right)=E_{\theta}\left[\left(\theta-\widehat{\theta}\right)^{2}\right][/math]

where, from here on, we use the notation [math]E_{\theta}\left[\cdot\right]=E_{\theta}\left[\left.\cdot\right|\theta\right][/math]; that is, the subscript now indicates the variable being conditioned on (previously, it indicated the variable of integration). So,

[math]MSE_{\theta}\left(\widehat{\theta}\right)=E_{\theta}\left[\left(\theta-\widehat{\theta}\right)^{2}\right]=E\left[\left.\left(\theta-\widehat{\theta}\right)^{2}\right|\theta\right][/math]

The interpretation is that the MSE gives us the expected squared difference between our estimator and a specific value of [math]\theta[/math], which we usually take to be the true value.

MSE is mostly popular due to its tractability. When [math]\theta[/math] is a vector of parameters, we employ the vector version instead: [math]MSE_{\theta}\left(\widehat{\theta}\right)=E_{\theta}\left[\left(\theta-\widehat{\theta}\right).\left(\theta-\widehat{\theta}\right)^{'}\right][/math]

The vector version of the MSE produces a matrix. To compare two estimator vectors, we compare these matrices: we say one MSE is (weakly) lower than another if the difference of the two matrices is positive semi-definite (i.e., [math]z'.M.z\geq0,\,\forall z[/math]). We will confine ourselves to the scalar case most of the time.
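As a sketch of this matrix comparison (not in the original lecture; the bivariate normal setup and all parameter values are arbitrary illustrative choices), one can estimate the two MSE matrices by simulation and check that the eigenvalues of their difference are non-negative:

```python
import numpy as np

# Sketch: compare the matrix MSEs of two estimators of a bivariate mean
# (the first observation alone vs. the sample mean vector). All values
# below are arbitrary illustrative choices.
rng = np.random.default_rng(1)
theta0 = np.array([1.0, -0.5])
sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])
n, reps = 10, 50_000

x = rng.multivariate_normal(theta0, sigma, size=(reps, n))  # shape (reps, n, 2)
est1 = x[:, 0, :]      # estimator 1: first observation only
est2 = x.mean(axis=1)  # estimator 2: sample mean vector

def mse_matrix(est):
    d = theta0 - est
    return d.T @ d / reps  # Monte Carlo estimate of E[(theta - est)(theta - est)']

diff = mse_matrix(est1) - mse_matrix(est2)
print(np.linalg.eigvalsh(diff))  # eigenvalues should be >= 0 (up to simulation noise)
```

Since both estimators are unbiased, the difference should be close to [math]\left(1-\frac{1}{n}\right)[/math] times the covariance matrix of a single observation, which is indeed positive semi-definite.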

If [math]MSE_{\theta}\left(\widehat{\theta}_{1}\right)\gt MSE_{\theta}\left(\widehat{\theta}_{2}\right)[/math] for all values of [math]\theta[/math], we are tempted to say that [math]\widehat{\theta}_{2}[/math] is better, since on average it is closer to [math]\theta[/math], whatever value [math]\theta[/math] takes. However, we may feel differently if [math]\widehat{\theta}_{2}[/math] systematically underestimates (or overestimates) [math]\theta[/math].

In order to take this into account we now introduce the concept of bias:

[math]Bias_{\theta}\left(\widehat{\theta}\right)=E_{\theta}\left(\theta-\widehat{\theta}\right)[/math]

Whenever [math]E_{\theta}\left(\theta-\widehat{\theta}\right)=0[/math] - or equivalently, [math]E_{\theta}\left(\widehat{\theta}\right)=\theta[/math] - we say estimator [math]\widehat{\theta}[/math] is unbiased.

What follows is a fundamental result about the decomposition of the MSE:

[math]MSE_{\theta}\left(\widehat{\theta}\right)=Var_{\theta}\left(\widehat{\theta}\right)+Bias_{\theta}\left(\widehat{\theta}\right)^{2}[/math]

This means that, among estimators with a given MSE, there is a tradeoff between bias and variance.

The proof of the result is obtained by adding and subtracting [math]E_{\theta}\left(\widehat{\theta}\right)[/math]:

[math]\begin{aligned} MSE_{\theta}\left(\widehat{\theta}\right)=E_{\theta}\left[\left(\theta-\widehat{\theta}\right)^{2}\right] & =E_{\theta}\left[\left(\theta-\widehat{\theta}+E_{\theta}\left(\widehat{\theta}\right)-E_{\theta}\left(\widehat{\theta}\right)\right)^{2}\right]\\ & =E_{\theta}\left[\left(\widehat{\theta}-E_{\theta}\left(\widehat{\theta}\right)\right)^{2}\right]+\underset{=\left(\theta-E_{\theta}\left(\widehat{\theta}\right)\right)^{2}}{\underbrace{E_{\theta}\left[\left(E_{\theta}\left(\widehat{\theta}\right)-\theta\right)^{2}\right]}}+\underset{=0}{\underbrace{2E_{\theta}\left[\left(E_{\theta}\left(\widehat{\theta}\right)-\widehat{\theta}\right)\left(\theta-E_{\theta}\left(\widehat{\theta}\right)\right)\right]}}\\ & =Var_{\theta}\left(\widehat{\theta}\right)+Bias_{\theta}\left(\widehat{\theta}\right)^{2}\end{aligned}[/math]
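A quick numerical sanity check of this decomposition (not part of the lecture; it reuses the uniform ML estimator from the earlier example, with arbitrary values for [math]\theta_{0}[/math], [math]n[/math], and the number of replications):

```python
import numpy as np

# Sketch: check MSE = Var + Bias^2 numerically for the uniform ML estimator
# X_(n) from the earlier example. All values below are arbitrary choices.
rng = np.random.default_rng(2)
theta0, n, reps = 2.0, 10, 200_000

theta_ml = rng.uniform(0.0, theta0, size=(reps, n)).max(axis=1)

mse = np.mean((theta0 - theta_ml) ** 2)
var = np.var(theta_ml)
bias = np.mean(theta0 - theta_ml)

print(mse, var + bias ** 2)  # the two numbers coincide up to floating-point error
```

The two printed numbers should coincide (up to floating-point error), since the decomposition also holds exactly for the simulated sample moments.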

We now define an efficient estimator:

Let [math]W[/math] be a collection of estimators of [math]\theta\in\Theta[/math]. An estimator [math]\widehat{\theta}[/math] is efficient relative to [math]W[/math] if [math]MSE_{\theta}\left(\widehat{\theta}\right)\leq MSE_{\theta}\left(w\right),\,\forall\theta\in\Theta,\,\forall w\in W[/math].

In order to find a “best” estimator, we have to restrict [math]W[/math] in some way (otherwise, we can often find many estimators with equal MSE, by exploiting the bias/variance tradeoff).


Minimum Variance Estimators

We usually focus our attention on unbiased estimators: those that, on average, produce the correct result.

The collection of unbiased estimators is [math]W_{u}=\left\{ w:\,Bias_{\theta}\left(w\right)=0,\,Var_{\theta}\left(w\right)\lt \infty,\,\forall\theta\in\Theta\right\}[/math].

So, if [math]\widehat{\theta}\in W_{u}[/math], then [math]MSE_{\theta}\left(\widehat{\theta}\right)=Var_{\theta}\left(\widehat{\theta}\right).[/math]

We can now define a type of minimum variance estimator:

An estimator [math]\widehat{\theta}\in W_{u}[/math] of [math]\theta[/math] is a uniform minimum-variance unbiased (UMVU) estimator of [math]\theta[/math] if it is efficient relative to [math]W_{u}[/math].

The minimum-variance unbiased part of UMVU should be clear: among the unbiased estimators, [math]\widehat{\theta}[/math] is “MVU” if it achieves the lowest variance. The “uniform” part means that [math]\widehat{\theta}[/math] attains this minimum variance for every value that [math]\theta[/math] may take: it is MVU if [math]\theta=4[/math], if [math]\theta=-3[/math], etc.

It is often possible to identify UMVU estimators; the tool for doing so is the Rao-Blackwell theorem. Before presenting it, we need to introduce an additional concept.


Sufficient Statistics

Let [math]X_{1}..X_{n}[/math] be a random sample from a distribution with pmf/pdf [math]f\left(\left.\cdot\right|\theta\right)[/math], where [math]\theta\in\Theta[/math] is unknown.

A statistic [math]T=T\left(X_{1}..X_{n}\right)[/math] is a sufficient statistic for parameter [math]\theta[/math] if the conditional pmf/pdf of [math]\left(X_{1}..X_{n}\right)[/math] given [math]T[/math] does not depend on [math]\theta[/math], i.e.,

[math]f\left(\left.X\right|\theta,T\right)=f\left(\left.X\right|T\right)[/math].

The reason why we are interested in sufficient statistics will be clear once we present the Rao-Blackwell theorem. However, it is worth thinking a bit about the meaning of sufficient statistics first.

Intuitively, a sufficient statistic summarizes all the information in the sample that is useful for characterizing [math]\theta[/math]. Once we condition on a sufficient statistic, nothing is left in the sample that can further inform us about [math]\theta[/math].
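As a quick illustration of the definition (not in the lecture), consider iid Bernoulli draws, for which the sum [math]T=\sum_{i=1}^{n}X_{i}[/math] is sufficient for the success probability [math]p[/math]: conditional on [math]T[/math], the sample's distribution no longer involves [math]p[/math]. A small simulation sketch (all parameter values are arbitrary illustrative choices):

```python
import numpy as np

# Sketch: for iid Bernoulli(p) draws, T = sum(X_i) is sufficient for p.
# Conditional on T = t, the sample's distribution (summarized here by X_1)
# should not depend on p. Parameter values are arbitrary illustrative choices.
rng = np.random.default_rng(3)
n, t, reps = 5, 2, 400_000

for p in (0.3, 0.7):
    x = rng.random((reps, n)) < p   # reps Bernoulli(p) samples of size n
    keep = x.sum(axis=1) == t       # condition on the sufficient statistic
    print(p, x[keep, 0].mean())     # estimates P(X_1 = 1 | T = t) = t/n = 0.4 for both p
```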

Intuitive Example: Uniform

Consider the uniform sample pdf

[math]f_{X_{1}..X_{n}}\left(\left.x\right|\theta\right)=\frac{1}{\theta^{n}}1\left(x_{1}\leq\theta\wedge x_{2}\leq\theta\wedge...\wedge x_{n}\leq\theta\right)[/math].

In order to estimate [math]\theta[/math], one could write down the likelihood function based on [math]f_{X}\left(\left.x\right|\theta\right)[/math], which uses each observation in the sample. However, note that it is sufficient to have information about the maximum observation, [math]X_{\left(n\right)}[/math].

The previous pdf can also be written as [math]f_{X}\left(\left.x\right|\theta\right)=\frac{1}{\theta^{n}}1\left(x_{\left(n\right)}\leq\theta\right)[/math], meaning that a researcher employing maximum likelihood will obtain the same estimate of [math]\theta[/math] independently of whether she observes the whole sample or simply the sample maximum.

Indeed, [math]X_{\left(n\right)}[/math] is a sufficient statistic for [math]\theta[/math]: it carries all the information in the sample that is relevant for [math]\theta[/math].

Example: Normal

The pdf of a normal random variable with mean [math]\mu[/math] and variance 1 is [math]f_{\left.X_{i}\right|\mu}\left(x\right)=\frac{1}{\sqrt{2\pi}}\exp\left\{ -\frac{1}{2}\left(x-\mu\right)^{2}\right\}[/math].

We also know that the mean of [math]n[/math] such normal random variables - each with mean [math]\mu[/math] and variance 1 - is distributed as [math]\overline{X}\sim N\left(\mu,\frac{1}{n}\right)[/math].

Let us now check whether [math]\overline{X}[/math] is a sufficient statistic for [math]\mu[/math].

We can obtain [math]f_{\left.X_{i}\right|\mu,\overline{X}}[/math] by using the conditional normal formula

[math]\begin{aligned} \mu_{\left.X_{i}\right|\overline{X}} & =\mu_{X_{i}}+\frac{\sigma_{12}}{\sigma_{\overline{X}}^{2}}\left(\overline{x}-\mu_{\overline{X}}\right)\\ \sigma_{\left.X_{i}\right|\overline{X}}^{2} & =\sigma_{X_{i}}^{2}-\frac{\sigma_{12}^{2}}{\sigma_{\overline{X}}^{2}}\end{aligned}[/math]

where

[math]\sigma_{12}[/math] is the covariance between [math]X_{i}[/math] and [math]\overline{X}[/math].

This covariance equals

[math]\sigma_{12}=Cov\left(X_{i},\overline{X}\right)=Cov\left(X_{i},\frac{1}{n}\sum_{j=1}^{n}X_{j}\right)=0+0+...+\frac{1}{n}Cov\left(X_{i},X_{i}\right)=\frac{1}{n}[/math].

In addition, [math]\mu_{X_{i}}=\mu_{\overline{X}}=\mu[/math], [math]\sigma_{X_{i}}^{2}=1[/math], and [math]\sigma_{\overline{X}}^{2}=\frac{1}{n}[/math] such that

[math]f_{\left.X_{i}\right|\mu,\overline{X}}\left(x\right)=N\left(\mu_{\left.X_{i}\right|\overline{X}},\sigma_{\left.X_{i}\right|\overline{X}}^{2}\right)=N\left(\mu+\left(\overline{x}-\mu\right),1-\frac{\frac{1}{n^{2}}}{\frac{1}{n}}\right)=N\left(\overline{x},1-\frac{1}{n}\right)[/math], which does not depend on [math]\mu[/math].

The same is true for the whole sample: conditional on [math]\overline{X}[/math], the joint distribution of [math]\left(X_{1}..X_{n}\right)[/math] is a (degenerate) multivariate normal centered at [math]\left(\overline{x},...,\overline{x}\right)[/math] whose covariance matrix depends only on [math]n[/math] (note that the [math]X_{i}[/math]'s are no longer independent once we condition on their mean), so [math]f_{\left.X_{1}..X_{n}\right|\mu,\overline{X}}\left(x\right)=f_{\left.X_{1}..X_{n}\right|\overline{X}}\left(x\right)[/math]: the distribution of the sample, once conditioned on [math]\overline{X}[/math], does not depend on [math]\mu[/math].

Hence, [math]\overline{X}[/math] is a sufficient statistic for [math]\mu[/math], and the effect of [math]\mu[/math] on a maximum likelihood estimator - through its role in generating the data - takes place only through its effect on [math]\overline{X}[/math].
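One implication of this calculation is easy to check by simulation (a sketch, not in the lecture): since [math]\left.X_{1}\right|\overline{X}=\overline{x}\sim N\left(\overline{x},1-\frac{1}{n}\right)[/math] regardless of [math]\mu[/math], the difference [math]X_{1}-\overline{X}[/math] is distributed [math]N\left(0,1-\frac{1}{n}\right)[/math] whatever [math]\mu[/math] is. The values of [math]\mu[/math], [math]n[/math], and the number of replications below are arbitrary illustrative choices.

```python
import numpy as np

# Sketch: if X_1 | Xbar = xbar is N(xbar, 1 - 1/n), then X_1 - Xbar is
# N(0, 1 - 1/n) whatever mu is. The values of mu, n, and the number of
# replications below are arbitrary illustrative choices.
rng = np.random.default_rng(4)
n, reps = 8, 200_000

for mu in (-3.0, 4.0):
    x = rng.normal(mu, 1.0, size=(reps, n))
    d = x[:, 0] - x.mean(axis=1)    # X_1 - Xbar
    print(mu, d.mean(), d.var())    # mean ~ 0 and variance ~ 1 - 1/n = 0.875 for both mu
```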


Rao-Blackwell Theorem

The Rao-Blackwell theorem allows us to take an existing estimator, and create a more efficient one. In order to do this, one requires a sufficient statistic.

The theorem states the following:

Let [math]\widehat{\theta}\in W_{u}[/math] and let [math]T[/math] be a sufficient statistic for [math]\theta[/math].

Then,

  • [math]\widetilde{\theta}=E\left(\left.\widehat{\theta}\right|T\right)\in W_{u}[/math]
  • [math]Var_{\theta}\left(\widetilde{\theta}\right)\leq Var_{\theta}\left(\widehat{\theta}\right),\,\forall\theta\in\Theta[/math]

The new estimator [math]\widetilde{\theta}[/math] is the expected value of a previous one, [math]\widehat{\theta}[/math], conditioning on statistic [math]T[/math]. As we will see, the conditioning preserves the mean (so that if [math]\widehat{\theta}[/math] is unbiased, so is [math]\widetilde{\theta}[/math]), and reduces variance.

Let us first open up the formula for the new estimator:

[math]\begin{aligned} \widetilde{\theta}\left(t\right)=E\left(\left.\widehat{\theta}\right|T=t\right) & =\int_{-\infty}^{\infty}\widehat{\theta}\left(x\right)f_{\left.X\right|\theta,T=t}\left(x\right)dx\\ & =\int_{-\infty}^{\infty}\widehat{\theta}\left(x\right)f_{\left.X\right|T=t}\left(x\right)dx\end{aligned}[/math]

where the second equality follows from the fact that [math]T[/math] is a sufficient statistic. This clarifies why we require a sufficient statistic [math]T[/math] to apply the Rao-Blackwell theorem: if this were not the case, the expectation [math]E\left(\left.\widehat{\theta}\right|T\right)[/math] would be a function of [math]\theta[/math], which cannot be an estimator by definition (an estimator must be computable from the data alone).

We now prove the theorem:

  • [math]E_{\theta}\left(\widetilde{\theta}\right)=E_{\theta}\left(E\left(\left.\widehat{\theta}\right|T\right)\right)\underset{L.I.E.}{\underbrace{=}}E_{\theta}\left(\widehat{\theta}\right)\underset{\widehat{\theta}\in W_{u}}{\underbrace{=}}\theta.[/math]
  • [math]Var_{\theta}\left(\widetilde{\theta}\right)=Var_{\theta}\left(E\left(\left.\widehat{\theta}\right|T\right)\right)\underset{C.V.I.}{\underbrace{=}}Var_{\theta}\left(\widehat{\theta}\right)-E_{\theta}\left(Var\left(\left.\widehat{\theta}\right|T\right)\right)[/math]. Because [math]E_{\theta}\left(Var\left(\left.\widehat{\theta}\right|T\right)\right)\geq0[/math], [math]Var_{\theta}\left(E\left(\left.\widehat{\theta}\right|T\right)\right)\leq Var_{\theta}\left(\widehat{\theta}\right)[/math].

The operation of producing an estimator via the conditional expectation on a sufficient statistic is often called Rao-Blackwellization.
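A simulation sketch of Rao-Blackwellization for the uniform example (not in the lecture): start from the crude unbiased estimator [math]2X_{1}[/math] and condition on the sufficient statistic [math]T=X_{\left(n\right)}[/math]. A standard calculation, not shown here, gives [math]E\left(\left.2X_{1}\right|X_{\left(n\right)}\right)=\frac{n+1}{n}X_{\left(n\right)}[/math]; the true [math]\theta_{0}[/math], [math]n[/math], and number of replications below are arbitrary choices.

```python
import numpy as np

# Sketch of Rao-Blackwellization for U(0, theta): start from the crude unbiased
# estimator 2*X_1 and condition on the sufficient statistic T = X_(n). A standard
# calculation (not shown in the lecture) gives E(2*X_1 | X_(n)) = (n + 1)/n * X_(n).
# The true theta, n, and the number of replications are arbitrary choices.
rng = np.random.default_rng(5)
theta0, n, reps = 2.0, 10, 200_000

x = rng.uniform(0.0, theta0, size=(reps, n))
crude = 2.0 * x[:, 0]                # unbiased, but very noisy
rao_b = (n + 1) / n * x.max(axis=1)  # Rao-Blackwellized estimator

print(crude.mean(), rao_b.mean())  # both close to theta0: unbiasedness is preserved
print(crude.var(), rao_b.var())    # the variance drops sharply after conditioning
```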


Factorization Theorem

As we saw in the example of the Normal distribution, it can be tedious to find and verify a sufficient statistic for a parameter. Luckily, the factorization theorem makes this easy, provided the pmf/pdf of the sample is available:

Let [math]X_{1}..X_{n}[/math] be a random sample from a distribution with pmf/pdf [math]f\left(\left.\cdot\right|\theta\right)[/math], where [math]\theta\in\Theta[/math] is unknown.

A statistic [math]T=T\left(X_{1}..X_{n}\right)[/math] is sufficient for [math]\theta[/math] if and only if there exist functions [math]g\left(\cdot\right)[/math] and [math]h\left(\cdot\right)[/math] s.t. [math]\Pi_{i=1}^{n}f\left(\left.x_{i}\right|\theta\right)=g\left(\left.T\left(x_{1},...,x_{n}\right)\right|\theta\right).h\left(x_{1},...,x_{n}\right)[/math] for every [math]\left(x_{1}..x_{n}\right)\in\mathbb{R}^{n}[/math] and every [math]\theta\in\Theta[/math].

Example: Uniform

Suppose [math]X_{i}\overset{iid}{\sim}U\left(0,\theta\right)[/math] such that the joint pdf equals

[math]\Pi_{i=1}^{n}f\left(\left.x_{i}\right|\theta\right)=\underset{g\left(\left.x_{\left(n\right)}\right|\theta\right)}{\underbrace{\frac{1}{\theta^{n}}.1\left(x_{\left(n\right)}\leq\theta\right)}}.\underset{h\left(x_{1}..x_{n}\right)}{\underbrace{1\left(x_{\left(1\right)}\geq0\right)}}[/math]

Hence, [math]x_{\left(n\right)}[/math] is a sufficient statistic for [math]\theta[/math]. One intuition for this result is that the maximization of the likelihood function w.r.t. [math]\theta[/math] depends on the data only through [math]x_{\left(n\right)}[/math], since [math]h\left(x_{1}..x_{n}\right)[/math] does not involve [math]\theta[/math] and therefore does not affect the estimator.
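For comparison with the earlier Normal example (variance 1), the factorization theorem delivers the sufficiency of [math]\overline{x}[/math] with much less work. The following derivation is not in the lecture; it only uses the identity [math]\sum_{i=1}^{n}\left(x_{i}-\mu\right)^{2}=\sum_{i=1}^{n}\left(x_{i}-\overline{x}\right)^{2}+n\left(\overline{x}-\mu\right)^{2}[/math]:

[math]\Pi_{i=1}^{n}f\left(\left.x_{i}\right|\mu\right)=\left(2\pi\right)^{-n/2}\exp\left\{ -\frac{1}{2}\sum_{i=1}^{n}\left(x_{i}-\mu\right)^{2}\right\} =\underset{g\left(\left.\overline{x}\right|\mu\right)}{\underbrace{\exp\left\{ -\frac{n}{2}\left(\overline{x}-\mu\right)^{2}\right\} }}.\underset{h\left(x_{1}..x_{n}\right)}{\underbrace{\left(2\pi\right)^{-n/2}\exp\left\{ -\frac{1}{2}\sum_{i=1}^{n}\left(x_{i}-\overline{x}\right)^{2}\right\} }}[/math]

Hence [math]\overline{x}[/math] is a sufficient statistic for [math]\mu[/math], in agreement with the conditional-distribution calculation above.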