Full Lecture 17

From Significant Statistics

Ordinary Least Squares

Suppose we have some data [math]\left\{ x_{i},y_{i}\right\} _{i=1}^{N}[/math]. We would like to relate it through a line, i.e.,

[math]y_{i}\approx\beta_{0}+\beta_{1}x_{i}.[/math]

An intuitive estimator minimizes the distance between [math]y_{i}[/math] and [math]\beta_{0}+\beta_{1}x_{i}[/math], for example,

[math]\min_{\beta_{0},\beta_{1}}\,\sum_{i=1}^{n}\left(y_{i}-\beta_{0}-\beta_{1}x_{i}\right)^{2}[/math]

The quadratic distance is especially tractable, hence its use. Calculating the first order conditions,

[math]\begin{aligned} & \left\{ \begin{array}{c} foc\left(\beta_{0}\right):\,\sum_{i=1}^{n}-2\left(y_{i}-\beta_{0}-\beta_{1}x_{i}\right)=0\\ foc\left(\beta_{1}\right):\,\sum_{i=1}^{n}-2x_{i}\left(y_{i}-\beta_{0}-\beta_{1}x_{i}\right)=0 \end{array}\right.\\ \Leftrightarrow & \left\{ \begin{array}{c} \beta_{0}=\frac{\sum y_{i}}{n}-\beta_{1}\frac{\sum x_{i}}{n}=\overline{y}-\beta_{1}\overline{x}\\ \sum x_{i}y_{i}-\beta_{1}\sum x_{i}^{2}-n\overline{x}\beta_{0}=0 \end{array}\right.\\ \Leftrightarrow & \left\{ \begin{array}{c} \\ \sum x_{i}y_{i}-\beta_{1}\sum x_{i}^{2}-n\overline{x}\left(\overline{y}-\beta_{1}\overline{x}\right)=0 \end{array}\right.\\ \Leftrightarrow & \left\{ \begin{array}{c} \\ \beta_{1}=\frac{\sum x_{i}y_{i}-n\overline{x}\overline{y}}{\sum x_{i}^{2}-n\overline{x}^{2}}=\frac{\sum\left(x_{i}-\overline{x}\right)\left(y_{i}-\overline{y}\right)}{\sum\left(x_{i}-\overline{x}\right)^{2}} \end{array}.\right.\end{aligned}[/math]

So, we have learned that

[math]\widehat{\beta_{0}}^{OLS}=\overline{y}-\widehat{\beta_{1}}^{OLS}\overline{x}.[/math]

[math]\widehat{\beta_{1}}^{OLS}=\frac{Cov\left(x_{i},y_{i}\right)}{Var\left(x_{i}\right)}.[/math]

The expression for the slope parameter is interesting: it is the sample covariance between [math]x_{i}[/math] and [math]y_{i}[/math] divided by the sample variance of [math]x_{i}[/math], i.e., it measures how much [math]y_{i}[/math] moves with [math]x_{i}[/math] per unit of variation in [math]x_{i}[/math].

Some Remarks

  • After estimating [math]\beta[/math], we can predict [math]y_{i}[/math] via

[math]\widehat{y_{i}}=\widehat{\beta_{0}}^{OLS}+\widehat{\beta_{1}}^{OLS}x_{i}[/math]

  • We can also define the prediction errors (residuals) as [math]\widehat{\varepsilon_{i}}=y_{i}-\widehat{y_{i}}[/math], which can be written as

[math]\begin{aligned} \widehat{\varepsilon}_{i} & =y_{i}-\widehat{y_{i}}\\ & =y_{i}-\left(\widehat{\beta_{0}}^{OLS}+\widehat{\beta_{1}}^{OLS}x_{i}\right)\end{aligned}[/math]

These estimated errors are the vertical distances between the estimated line and each data point, indexed by [math]i[/math].

  • Notice also that we can compute the sample variance of the estimated errors, [math]Var\left(\widehat{\varepsilon_{i}}\right)[/math]. This statistic gives a sense of how far the estimated line is from the data points, as illustrated in the sketch below.
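
As a concrete illustration (a minimal sketch with made-up numbers, not part of the lecture data), the closed-form estimates, predictions, residuals, and residual variance can be computed directly:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical data, for illustration only
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

x_bar, y_bar = x.mean(), y.mean()

# Slope: sample covariance of (x, y) over sample variance of x
beta1_hat = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
# Intercept: y_bar - beta1_hat * x_bar
beta0_hat = y_bar - beta1_hat * x_bar

y_hat = beta0_hat + beta1_hat * x   # predictions
eps_hat = y - y_hat                 # estimated errors (residuals)
resid_var = eps_hat.var()           # sample variance of the residuals

print(beta0_hat, beta1_hat, resid_var)
</syntaxhighlight>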


Normal Linear Model

In the previous example, the notion of random variable wasn’t mentioned. We simply wanted to draw a predictive line along some points. In this example, we introduce a few features:

  • We will include multiple regressors/independent variables, [math]x_{i1},x_{i2},...,x_{iK}[/math].
  • These regressors are treated as constants. When we do hypothesis testing, for instance, we will assume they remain the same no matter how many experiments we run. For example, suppose we would like to regress [math]y_{i}[/math] on the months of a year, i.e., [math]1..12[/math]: these numbers won’t change once we collect information for different years.
  • We will assume that the errors are normally distributed. This is a big difference from the previous example: We are stating that [math]\varepsilon_{i}[/math] are themselves random variables. In a sense, they are a primitive of this model.
  • We will denote matrices by uppercase letters (e.g., [math]X[/math]).
  • We represent vectors by lowercase letters (e.g., [math]y[/math], [math]x_{1}[/math]).
  • We define

[math]x_{i}=\underset{\left(K\times1\right)}{\left[\begin{array}{c} x_{i1}\\ x_{i2}\\ \vdots\\ x_{iK} \end{array}\right]}[/math]

such that each vector [math]x_{i}[/math] contains the regressors for observation [math]i[/math]. For example, if [math]i[/math] is an individual, then [math]x_{i}[/math] could contain his/her age, gender, income, etc.

We will also define, for each observation, [math]y_{i}[/math] and [math]\varepsilon_{i}\sim N\left(0,\sigma^{2}\right)[/math], s.t.

[math]y_{i}=\beta_{1}x_{i1}+\beta_{2}x_{i2}+...+\beta_{K}x_{iK}+\varepsilon_{i}[/math]

For each observation [math]i[/math], there exists a random variable [math]\varepsilon_{i}[/math]. Once this variable is added to a weighted sum of parameters [math]\left(\beta\right)[/math] and regressors [math]\left(x_{i}\right)[/math], it yields the variable [math]y_{i}[/math].

We can rewrite the equation above in a more compact form:

[math]\underset{\left(1\times1\right)}{y_{i}}=\underset{\left(1\times K\right)}{x_{i}^{'}}\underset{\left(K\times1\right)}{\beta}+\underset{\left(1\times1\right)}{\varepsilon_{i}}[/math]

Matrix Notation

It is possible to stack the equation above across observations. Let

[math]y=\left[\begin{array}{c} y_{1}\\ y_{2}\\ \vdots\\ y_{N} \end{array}\right];\,\varepsilon=\left[\begin{array}{c} \varepsilon_{1}\\ \varepsilon_{2}\\ \vdots\\ \varepsilon_{N} \end{array}\right];\,X=\left[\begin{array}{c} x_{1}^{'}\\ x_{2}^{'}\\ \vdots\\ x_{N}^{'} \end{array}\right]=\left[\begin{array}{ccc} x_{11} & \cdots & x_{1K}\\ x_{21} & \cdots & x_{2K}\\ \vdots & & \vdots\\ x_{N1} & \cdots & x_{NK} \end{array}\right][/math]

In this case, we can rewrite the linear model for the whole sample as

[math]\underset{\left(N\times1\right)}{y}=\underset{\left(N\times K\right)}{X}\underset{\left(K\times1\right)}{\beta}+\underset{\left(N\times1\right)}{\varepsilon}[/math]

We will make a few additional assumptions:

  • [math]\left\{ x_{i},y_{i}\right\} _{i=1}^{N}[/math] are i.i.d., with first finite moments.
  • [math]\left.\varepsilon_{i}\right|X\sim N\left(0,\sigma^{2}\right)[/math]

We will model the conditional distribution of [math]y[/math] as if [math]X[/math] was fixed. In other words, matrix [math]X[/math] has constants that never change. For example, if we drew a different random sample, we would observe different [math]y[/math]’s, but the same [math]X[/math]. (The reason we would observe different [math]y[/math]’s is because of the draws of [math]\varepsilon[/math]’s).

Log-likelihood of [math]Y[/math] conditional on [math]X[/math]

Notice that because [math]y[/math] equals a constant times parameters plus a normal random variable, [math]y[/math] is itself normally distributed:

[math]\left.y_{i}\right|x_{i}\sim N\left(x_{i}^{'}\beta,\sigma^{2}\right)[/math]

The log-likelihood of [math]y[/math] equals

[math]l\left(\beta,\sigma^{2}\right)=\sum_{i=1}^{N}\left\{ -\frac{1}{2}\log\left(2\pi\right)-\frac{1}{2}\log\left(\sigma^{2}\right)-\frac{1}{2\sigma^{2}}\left(y_{i}-x_{i}^{'}\beta\right)^{2}\right\}[/math]

Note that [math]\widehat{\beta}_{OLS}=\widehat{\beta}_{ML}=\text{argmax}_{\beta}\,l\left(\beta,\sigma^{2}\right)[/math], i.e., the solution for [math]\widehat{\beta}[/math] in the normal linear model is the same as the one from the OLS problem.
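
As a quick numerical check (a sketch with simulated data; the sample size, parameter values, and seed below are arbitrary), one can maximize the log-likelihood above over [math]\beta[/math] numerically and verify that the answer coincides with the closed-form solution derived in the next subsection:

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated data: N observations, K regressors (including a constant)
N, K = 200, 3
X = np.column_stack([np.ones(N), rng.normal(size=(N, K - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
sigma2 = 1.5
y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2), size=N)

def neg_loglik(beta):
    # Negative of l(beta, sigma^2), holding sigma^2 fixed;
    # the maximizer over beta does not depend on sigma^2.
    resid = y - X @ beta
    return 0.5 * N * np.log(2 * np.pi * sigma2) + resid @ resid / (2 * sigma2)

beta_ml = minimize(neg_loglik, x0=np.zeros(K)).x
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)   # (X'X)^{-1} X'y

print(np.allclose(beta_ml, beta_ols, atol=1e-4))   # True, up to optimizer tolerance
</syntaxhighlight>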

Matrix Derivation

We will find the vector [math]\widehat{\beta}_{ML}[/math] by minimizing the sum of squares (from the log-likelihood above, maximizing over [math]\beta[/math] is equivalent to minimizing [math]\sum_{i}\left(y_{i}-x_{i}^{'}\beta\right)^{2}[/math]),

[math]\begin{aligned} SSR & =\varepsilon^{'}\varepsilon\\ & =\left(y-X\beta\right)^{'}\left(y-X\beta\right)\\ & =y^{'}y-\underset{\left(1\times1\right)}{\beta^{'}X^{'}y}-\underset{\left(1\times1\right)}{y^{'}X\beta}+\beta^{'}X^{'}X\beta\\ & =y^{'}y-2y^{'}X\beta+\beta^{'}X^{'}X\beta\end{aligned}[/math]

where we have used the fact that [math]\left(X\beta\right)^{'}=\beta^{'}X^{'}[/math], and that a scalar equals its transpose, so [math]\beta^{'}X^{'}y=y^{'}X\beta[/math].

Notice that [math]y^{'}y[/math] does not depend on [math]\beta[/math], so we are left with the problem [math]\widehat{\beta}_{ML}=\text{argmin}_{\beta}\,-2y^{'}X\beta+\beta^{'}X^{'}X\beta[/math].

Taking the first-order condition,

[math]\begin{aligned} foc\left(\beta\right):\, & \left(-2y^{'}X\right)+2\beta^{'}X^{'}X=0\\ \Leftrightarrow & \beta^{'}X^{'}X=y^{'}X\\ \Leftrightarrow & X^{'}X\beta=X^{'}y\\ & \widehat{\beta}_{ML}=\left(X^{'}X\right)^{-1}X^{'}y\end{aligned}[/math]

where we have used the facts that [math]\frac{d}{dv}Av=A[/math] and, for symmetric [math]A[/math], [math]\frac{d}{dv}v^{'}Av=2v^{'}A[/math]. (You can also use the transposed convention for vector derivatives; it works as long as you remain consistent with whichever convention you adopt.) Above, we have assumed that [math]X^{'}X[/math] is invertible. It is also useful to note that [math]X^{'}X[/math] is symmetric, which is what allows the quadratic-form derivative rule.

We have found the ML estimator, which is consistent with our previous OLS example.
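
In code, this estimator can be computed by solving the normal equations [math]X^{'}X\beta=X^{'}y[/math] rather than forming the inverse explicitly (a minimal sketch, assuming [math]X^{'}X[/math] is invertible; the data are simulated):

<syntaxhighlight lang="python">
import numpy as np

def ols_beta(X, y):
    # Solve X'X beta = X'y; numerically preferable to inverting X'X explicitly,
    # but mathematically the same estimator (X'X)^{-1} X'y.
    return np.linalg.solve(X.T @ X, X.T @ y)

# Illustrative usage with simulated data
rng = np.random.default_rng(1)
N = 100
X = np.column_stack([np.ones(N), rng.normal(size=N)])
y = X @ np.array([0.5, 1.5]) + rng.normal(size=N)
print(ols_beta(X, y))   # close to [0.5, 1.5]
</syntaxhighlight>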

Distribution of [math]\widehat{\beta_{OLS}}[/math]

Let’s write the expression for [math]\widehat{\beta}[/math], opening up [math]y[/math], which is really just a function of [math]\beta[/math], [math]\varepsilon[/math] and [math]X[/math]:

[math]\begin{aligned} \widehat{\beta}_{ML} & =\left(X^{'}X\right)^{-1}X^{'}y\\ & =\left(X^{'}X\right)^{-1}X^{'}\left(X\beta+\varepsilon\right)\\ & =\beta+\left(X^{'}X\right)^{-1}X^{'}\varepsilon\end{aligned}[/math]

The result above implies that [math]\widehat{\beta}_{ML}[/math] is a linear combination of normal random variables, given [math]X[/math]. Moreover, the estimator has a mean and variance (remember that the only random variable is [math]\varepsilon[/math]):

[math]E_{\beta}\left(\widehat{\beta}\right)=\beta+\left(X^{'}X\right)^{-1}X^{'}E\left(\varepsilon\right)=\beta[/math]

and

[math]\begin{aligned} Var_{\beta}\left(\widehat{\beta}\right) & =Var\left(\beta+\left(X^{'}X\right)^{-1}X^{'}\varepsilon\right)\\ & =Var\left(\left(X^{'}X\right)^{-1}X^{'}\varepsilon\right)\\ & =\left(X^{'}X\right)^{-1}X^{'}Var\left(\varepsilon\right)X\left(X^{'}X\right)^{-1}\\ & =\left(X^{'}X\right)^{-1}X^{'}E\left(\varepsilon\varepsilon^{'}\right)X\left(X^{'}X\right)^{-1}\end{aligned}[/math]

where [math]E\left(\varepsilon\varepsilon^{'}\right)[/math] is the covariance matrix of [math]\varepsilon[/math], given that [math]E\left(\varepsilon\right)=0[/math]:

[math]Var\left(\varepsilon\right)=E\left(\varepsilon\varepsilon^{'}\right)=\left[\begin{array}{cccc} \sigma_{\varepsilon}^{2} & & & 0\\ & \sigma_{\varepsilon}^{2}\\ & & \ddots\\ 0 & & & \sigma_{\varepsilon}^{2} \end{array}\right]=\sigma^{2}I_{N},[/math]

where [math]I_{N}[/math] is the identity matrix with size [math]\left(N\times N\right)[/math].

Continuing,

[math]\begin{aligned} Var_{\beta}\left(\widehat{\beta}\right) & =\left(X^{'}X\right)^{-1}X^{'}E\left(\varepsilon\varepsilon^{'}\right)X\left(X^{'}X\right)^{-1}\\ & =\left(X^{'}X\right)^{-1}X^{'}\sigma^{2}I_{N}X\left(X^{'}X\right)^{-1}\\ & =\left(X^{'}X\right)^{-1}X^{'}X\left(X^{'}X\right)^{-1}\sigma^{2}\\ & =\left(X^{'}X\right)^{-1}\sigma^{2}.\end{aligned}[/math]

So, we have learned that

[math]\left.\widehat{\beta}_{ML}\right|X\sim N\left(\beta,\left(X^{'}X\right)^{-1}\sigma^{2}\right)[/math]

To reiterate,

  • We require the rank of [math]X[/math] to be [math]K[/math], s.t. [math]\left(X^{'}X\right)^{-1}[/math] exists.
  • [math]X[/math] is considered as fixed, so that all calculations take [math]X[/math] as constant.
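
A sketch of how the conditional variance [math]\left(X^{'}X\right)^{-1}\sigma^{2}[/math] and the implied standard errors might be computed in practice, replacing the unknown [math]\sigma^{2}[/math] with a residual-based estimate (the [math]N-K[/math] divisor is discussed later in the lecture; the data below are simulated):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
N, K = 500, 3
X = np.column_stack([np.ones(N), rng.normal(size=(N, K - 1))])
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + rng.normal(scale=2.0, size=N)   # true sigma = 2

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ (X.T @ y)                 # (X'X)^{-1} X'y

resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (N - K)           # estimate of sigma^2

var_beta_hat = sigma2_hat * XtX_inv            # estimated Var(beta_hat | X)
std_errors = np.sqrt(np.diag(var_beta_hat))
print(beta_hat, std_errors)
</syntaxhighlight>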


Asymptotic Properties of OLS

We now allow:

  • [math]X[/math] to be random, and
  • [math]\varepsilon[/math] to not necessarily be normally distributed.

In this case, we will need additional assumptions to be able to produce [math]\widehat{\beta}[/math]:

  • [math]\left\{ y_{i},x_{i}\right\}[/math] is a random sample.
  • Strict Exogeneity: [math]E\left(\left.\varepsilon_{i}\right|X\right)=0,\,i=1..N[/math].
  • Homoskedasticity: [math]E\left(\left.\varepsilon_{i}^{2}\right|X\right)=\sigma^{2},\,i=1..N[/math] and [math]E\left(\left.\varepsilon_{i}\varepsilon_{j}\right|X\right)=0,\,\forall i,j=1..N,i\neq j.[/math]

Implications of Strict Exogeneity

First, notice that if [math]E\left(\left.\varepsilon_{i}\right|X\right)=0[/math], then [math]E\left(\varepsilon\right)=0[/math]:

[math]E\left(\left.\varepsilon_{i}\right|X\right)=0\Rightarrow E\left(E\left(\left.\varepsilon_{i}\right|X\right)\right)=E\left(0\right)\Rightarrow E\left(\varepsilon_{i}\right)=0.[/math]

In other words, if the conditional expectation of [math]\varepsilon_{i}[/math] given any [math]X[/math] is zero, then the expectation of [math]\varepsilon_{i}[/math] also needs to be zero. This assumption implies that [math]\varepsilon_{i}[/math] is uncorrelated with [math]x_{i1}[/math], [math]x_{i2}[/math], ..., and also [math]x_{1k}[/math], [math]x_{2k}[/math], etc.

Second, the strict exogeneity assumption implies the orthogonality condition [math]E\left(x_{jk}\varepsilon_{i}\right)=0,\,\forall\,j,k[/math]. (i.e., no matter how you pick [math]x[/math]’s by selecting [math]j[/math] and [math]k[/math], the result is uncorrelated with [math]\varepsilon_{i}[/math]).

To see this, note that for each element, by the law of iterated expectations (where [math]E\left(\left.\varepsilon_{i}\right|x_{jk}\right)=0[/math] follows from strict exogeneity),

[math]E\left(x_{jk}\varepsilon_{i}\right)=E\left[E\left(\left.x_{jk}\varepsilon_{i}\right|x_{jk}\right)\right]=E\left[x_{jk}\underset{=0}{\underbrace{E\left(\left.\varepsilon_{i}\right|x_{jk}\right)}}\right]=0.[/math]

Stacking these elements over [math]k[/math],

[math]E\left(x_{j}\varepsilon_{i}\right)=\left[\begin{array}{c} E\left(x_{j1}\varepsilon_{i}\right)\\ E\left(x_{j2}\varepsilon_{i}\right)\\ \vdots\\ E\left(x_{jK}\varepsilon_{i}\right) \end{array}\right]=\left[\begin{array}{c} 0\\ 0\\ \vdots\\ 0 \end{array}\right],\,\forall i,j=1..N.[/math]

Asymptotic Distribution

First, notice that

  • [math]X^{'}X=\sum_{i=1}^{N}x_{i}x_{i}^{'}[/math].
  • [math]X^{'}\varepsilon=\sum_{i=1}^{N}x_{i}\varepsilon_{i}[/math].

It is possible to prove that under the assumptions above,

[math]\sqrt{N}\left(\widehat{\beta}_{OLS}-\beta\right)\overset{\sim}{\sim}N\left(0,Q^{-1}\sigma^{2}\right)[/math]

where [math]Q=\text{plim}\,\frac{X^{'}X}{N}[/math].

This is relatively intuitive given our previous example. Yet, it is extremely useful: As long as we satisfy the assumptions laid out before, we can conduct hypothesis tests for OLS even if the distribution of [math]\varepsilon[/math] is unknown (up to some moments).

Proof

Note that

[math]\begin{aligned} \sqrt{N}\left(\widehat{\beta}-\beta\right) & =\sqrt{N}\left(\beta+\left(X^{'}X\right)^{-1}X^{'}\varepsilon-\beta\right)\\ & =\sqrt{N}\left(X^{'}X\right)^{-1}X^{'}\varepsilon\frac{N}{N}\\ & =\left(\frac{X^{'}X}{N}\right)^{-1}\frac{1}{\sqrt{N}}X^{'}\varepsilon\end{aligned}[/math]

While we will not show this here, we assume that [math]\frac{X^{'}X}{N}\overset{p}{\rightarrow}Q[/math], where [math]Q[/math] is a matrix. Notice that this is plausible: [math]\frac{X^{'}X}{N}=\frac{1}{N}\sum_{i=1}^{N}x_{i}x_{i}^{'}[/math] is a sample average of [math]\left(K\times K\right)[/math] matrices, so as [math]N[/math] increases its dimension stays [math]\left(K\times K\right)[/math] and, under i.i.d. sampling, a law of large numbers gives convergence to [math]E\left(x_{i}x_{i}^{'}\right)[/math].

By a matrix version of Slutsky’s theorem, it follows that

[math]\left(\frac{X^{'}X}{N}\right)^{-1}\overset{p}{\rightarrow}Q^{-1}[/math]

As for the second factor, it involves [math]\varepsilon[/math], so we expect it to converge in distribution rather than in probability. Let

[math]\frac{1}{\sqrt{N}}X^{'}\varepsilon=\sqrt{N}\frac{1}{N}\sum x_{i}\varepsilon_{i}=\sqrt{N}\overline{w}[/math]

where [math]w_{i}=x_{i}\varepsilon_{i}[/math]. Then,

[math]E\left(\overline{w}\right)=E\left(\frac{1}{N}\sum x_{i}\varepsilon_{i}\right)=0[/math]

[math]\begin{aligned} Var\left(\overline{w}\right) & =Var\left(\frac{1}{N}\sum x_{i}\varepsilon_{i}\right)=\frac{1}{N^{2}}\sum_{i=1}^{N}E\left(x_{i}E\left[\left.\varepsilon_{i}^{2}\right|x_{i}\right]x_{i}^{'}\right)\\ & =\frac{1}{N^{2}}\sigma^{2}\sum_{i=1}^{N}E\left(x_{i}x_{i}^{'}\right)\\ & =\frac{\sigma^{2}}{N}E\left(\frac{X^{'}X}{N}\right)\\ & =\frac{\sigma^{2}}{N}Q.\end{aligned}[/math]

By the CLT,

[math]\sqrt{N}\left(\overline{w}-E\left(\overline{w}\right)\right)\overset{d}{\rightarrow}N\left(0,\sigma^{2}Q\right)[/math]

and by Slutsky’s theorem,

[math]\begin{aligned} \sqrt{N}\left(\widehat{\beta}-\beta\right) & =\underset{\overset{\overset{p}{\rightarrow}}{Q}}{\underbrace{\left(\frac{X^{'}X}{N}\right)^{-1}}}\underset{\overset{\overset{d}{\rightarrow}}{N\left(0,\sigma^{2}Q\right)}}{\underbrace{\sqrt{N}\overline{w}}}\\ \overset{\sim}{\sim} & N\left(0,Q^{-1}\sigma^{2}QQ^{-1}\right)\\ = & N\left(0,Q^{-1}\sigma^{2}\right)\end{aligned}[/math]
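
A small Monte Carlo sketch (all parameter values are made up) can illustrate the result: even with non-normal errors, the standardized OLS slope estimate behaves approximately like a standard normal for moderately large [math]N[/math]:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
N, R = 500, 2000                 # sample size and number of replications
beta = np.array([1.0, 2.0])
z_stats = np.empty(R)

for r in range(R):
    X = np.column_stack([np.ones(N), rng.normal(size=N)])
    eps = rng.uniform(-1.0, 1.0, size=N)        # non-normal, mean-zero errors
    y = X @ beta + eps
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ (X.T @ y)
    resid = y - X @ b
    sigma2_hat = resid @ resid / (N - 2)
    se_slope = np.sqrt(sigma2_hat * XtX_inv[1, 1])
    z_stats[r] = (b[1] - beta[1]) / se_slope    # approximately N(0, 1)

# Fraction of |z| > 1.96 should be close to the nominal 5%
print(np.mean(np.abs(z_stats) > 1.96))
</syntaxhighlight>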

Some Remarks

In practice, to estimate the asymptotic variance [math]Q^{-1}\sigma^{2}[/math], we use

[math]\widehat{\sigma^{2}}_{unbiased}=\frac{1}{N-K}\sum_{i=1}^{N}\left(y_{i}-x_{i}^{'}\widehat{\beta}\right)^{2}[/math]

or

[math]\widehat{\sigma^{2}}_{MLE}=\frac{1}{N}\sum_{i=1}^{N}\left(y_{i}-x_{i}^{'}\widehat{\beta}\right)^{2}[/math]

and

[math]\widehat{Q^{-1}}=\left(\frac{X^{'}X}{N}\right)^{-1}[/math].

A note on the variance of [math]\varepsilon_{i}[/math]

In the proof above, we have assumed that [math]Var\left(\varepsilon_{i}\right)=\sigma^{2}[/math]. However, it is possible that [math]Var\left(\varepsilon_{i}\right)=\sigma_{i}^{2}[/math] (heteroskedasticity). In this case, the step [math]\frac{1}{N^{2}}\sum_{i=1}^{N}E\left(x_{i}E\left[\left.\varepsilon_{i}^{2}\right|x_{i}\right]x_{i}^{'}\right)=\frac{1}{N^{2}}\sigma^{2}\sum_{i=1}^{N}E\left(x_{i}x_{i}^{'}\right)[/math]

does not hold. Instead, it is possible to show that

[math]\sqrt{N}\left(\widehat{\beta}-\beta\right)\overset{\sim}{\sim}N\left(0,\Omega\right)[/math]

where

[math]\Omega=E\left(x_{i}x_{i}^{'}\right)^{-1}E\left(\varepsilon_{i}^{2}x_{i}x_{i}^{'}\right)E\left(x_{i}x_{i}^{'}\right)^{-1},[/math]

and

[math]\widehat{\Omega}=\left(\frac{X^{'}X}{N}\right)^{-1}\left(\frac{1}{N}\sum x_{i}x_{i}^{'}\widehat{\varepsilon}_{i}^{2}\right)\left(\frac{X^{'}X}{N}\right)^{-1}.[/math]

The estimator above is called the Huber-Eicker-White estimator (or some variation using one or two of these names).

An issue remains: How do we obtain [math]\widehat{\varepsilon}_{i}^{2}[/math]?

It turns out that [math]\widehat{\beta}_{OLS}[/math] is consistent even if [math]\varepsilon_{i}[/math] is heteroskedastic: it suffices that [math]E\left[\left(X^{'}X\right)^{-1}X^{'}\varepsilon\right]=0[/math], which is guaranteed by strict exogeneity. So, one can use the OLS residuals [math]\widehat{\varepsilon}_{i}=y_{i}-x_{i}^{'}\widehat{\beta}_{OLS}[/math] to produce the estimator [math]\widehat{\Omega}[/math], and then perform valid asymptotic hypothesis tests.
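
A sketch of the Huber-Eicker-White computation, using the OLS residuals as just described (the heteroskedastic data-generating process below is invented for illustration):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
N = 1000
X = np.column_stack([np.ones(N), rng.normal(size=N)])
beta = np.array([1.0, 0.5])
eps = rng.normal(size=N) * (0.5 + np.abs(X[:, 1]))   # error variance depends on the regressor
y = X @ beta + eps

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)         # OLS is still consistent
e_hat = y - X @ beta_hat                             # OLS residuals

bread = np.linalg.inv(X.T @ X / N)                   # (X'X / N)^{-1}
meat = (X * e_hat[:, None] ** 2).T @ X / N           # (1/N) sum x_i x_i' e_i^2
Omega_hat = bread @ meat @ bread                     # robust asymptotic variance

robust_se = np.sqrt(np.diag(Omega_hat) / N)          # standard errors of beta_hat
print(beta_hat, robust_se)
</syntaxhighlight>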


Bootstrapping

The origin of the term “bootstrapping” may relate to someone pulling themselves up by their own boot straps/laces. In a sense, it means making do with little or nothing. Here is the idea: Suppose you would like to conduct a hypothesis test, but do not know the distribution of the test statistic. Even if it converges to a normal distribution, its asymptotic variance (i.e., the test statistic’s variance as [math]n[/math] tends to infinity) may be unknown.

Consider the following approach: If one has enough data, then the distribution in the sample is representative of the distribution of the population. So, one may pretend that the sample itself is the population, and draw from that sample as if one was drawing from the population.

The bootstrap technique can be applied to MLE in the following way. Given a sample of size [math]N[/math], [math]\left\{ y_{i},x_{i}\right\} _{i=1}^{N}[/math]:

  • Estimate

[math]\widehat{\theta}_{ML}=\text{argmax}_{\theta}\,f\left(\left.y\right|X,\theta\right)[/math]

as usual.

  • Calculate the test statistic of interest, [math]T[/math]. (We could use the LRT, for example; notice that we do not know its distribution, nor the appropriate critical value).
  • Then, resample (with replacement) [math]N[/math] pairs [math]\left\{ y_{i},x_{i}\right\}[/math] to get a new (bootstrap) sample [math]\left\{ y_{j}^{b},x_{j}^{b}\right\} _{j=1}^{N}[/math]. Do this [math]B[/math] times, such that each sample can be indexed by [math]b\in\left\{ 1,..,B\right\} .[/math]
  • For each bootstrap sample, estimate

[math]\widehat{\theta}_{b}=\text{argmax}_{\theta}\,f\left(\left.y^{b}\right|X^{b},\theta\right)[/math]

  • Calculate the test statistic of interest for each estimation, [math]T^{b}[/math].

While we do not know the distribution of the test statistic, we can approximate it, since we computed it many times from resamples of our own data. Moreover, we can now build confidence intervals for the test statistic (we just need to pick [math]\underline{t},\overline{t}[/math] s.t. 95% of the test statistics [math]T^{b}[/math] fall in the interval), and in the case of the LRT, we can reject the null hypothesis if [math]T[/math] is higher than at least 95% of the [math]T^{b}[/math] statistics we drew. Notice that such a test commits a type 1 error with (approximately) 5% probability, as is standard.
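
A sketch of the resampling loop described above, using an OLS slope coefficient as the statistic of interest in place of the LRT purely for illustration (the data and seed are made up):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(5)

# Original (simulated) sample of size N
N = 200
X = np.column_stack([np.ones(N), rng.normal(size=N)])
y = X @ np.array([1.0, 0.8]) + rng.normal(size=N)

def statistic(Xs, ys):
    # Statistic of interest T: here, the OLS slope (a stand-in for a general test statistic)
    return np.linalg.solve(Xs.T @ Xs, Xs.T @ ys)[1]

T = statistic(X, y)                     # statistic on the original sample

B = 999
T_boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, N, size=N)    # resample pairs (y_i, x_i) with replacement
    T_boot[b] = statistic(X[idx], y[idx])

# 95% percentile interval from the bootstrap distribution
t_lo, t_hi = np.percentile(T_boot, [2.5, 97.5])
print(T, (t_lo, t_hi))
</syntaxhighlight>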