Lecture 17. B) Normal Linear Model

Normal Linear Model

In the previous example, the notion of a random variable was not mentioned; we simply wanted to draw a predictive line through some points. In this example, we introduce a few features:

  • We will include multiple regressors/independent variables, [math]x_{i1},x_{i2},...,x_{iK}[/math].
  • These regressors are constants. When we do hypothesis testing, for example, we will assume they remain constant, no matter how many experiments we run. For instance, suppose we would like to regress [math]y_{i}[/math] on the months of a year, i.e., [math]1,\ldots,12[/math]; these numbers won’t change once we collect information for different years.
  • We will assume that the errors are normally distributed. This is a big difference from the previous example: We are stating that [math]\varepsilon_{i}[/math] are themselves random variables. In a sense, they are a primitive of this model.
  • We will denote matrices by uppercase letters (e.g., [math]X[/math]).
  • We represent vectors by lowercase letters (e.g., [math]y[/math], [math]x_{1}[/math]).
  • We define

[math]x_{i}=\underset{\left(K\times1\right)}{\left[\begin{array}{c} x_{i1}\\ x_{i2}\\ \vdots\\ x_{iK} \end{array}\right]}[/math]

such that each vector [math]x_{i}[/math] contains the regressors for observation [math]i[/math]. For example, if [math]i[/math] is an individual, then [math]x_{i}[/math] could contain his/her age, gender, income, etc.

We will also define, for each observation, the variable [math]y_{i}[/math] through the following model, where [math]\varepsilon_{i}\sim N\left(0,\sigma^{2}\right)[/math]:

[math]y_{i}=\beta_{1}x_{i1}+\beta_{2}x_{i2}+...+\beta_{K}x_{iK}+\varepsilon_{i}[/math]

For each observation [math]i[/math], there exists a random variable [math]\varepsilon_{i}[/math]. Once this variable is added to the weighted sum of regressors [math]\left(x_{i}\right)[/math], with weights given by the parameters [math]\left(\beta\right)[/math], it yields the variable [math]y_{i}[/math].

We can rewrite the equation above in a more compact form:

[math]\underset{\left(1\times1\right)}{y_{i}}=\underset{\left(1\times K\right)}{x_{i}^{'}}\underset{\left(K\times1\right)}{\beta}+\underset{\left(1\times1\right)}{\varepsilon_{i}}[/math]

Matrix Notation

It is possible to stack the equation above across observations. Let

[math]y=\left[\begin{array}{c} y_{1}\\ y_{2}\\ \vdots\\ y_{N} \end{array}\right];\,\varepsilon=\left[\begin{array}{c} \varepsilon_{1}\\ \varepsilon_{2}\\ \vdots\\ \varepsilon_{N} \end{array}\right];\,X=\left[\begin{array}{c} x_{1}^{'}\\ x_{2}^{'}\\ \vdots\\ x_{N}^{'} \end{array}\right]=\left[\begin{array}{ccc} x_{11} & \cdots & x_{1K}\\ x_{21} & \cdots & x_{2K}\\ \vdots & & \vdots\\ x_{N1} & \cdots & x_{NK} \end{array}\right][/math]

In this case, we can rewrite the linear model for the whole sample as

[math]\underset{\left(N\times1\right)}{y}=\underset{\left(N\times K\right)}{X}\underset{\left(K\times1\right)}{\beta}+\underset{\left(N\times1\right)}{\varepsilon}[/math]
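
To make the setup concrete, here is a minimal simulation sketch (assuming NumPy, with purely illustrative values for [math]N[/math], [math]K[/math], [math]\beta[/math] and [math]\sigma[/math]) that generates data from the stacked model and checks that it matches the per-observation equation:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
N, K = 100, 3                         # illustrative sample size and number of regressors
beta = np.array([1.0, -2.0, 0.5])     # hypothetical "true" coefficient vector
sigma = 1.5                           # hypothetical error standard deviation

X = rng.normal(size=(N, K))           # design matrix; row i holds x_i'
eps = rng.normal(0.0, sigma, size=N)  # eps_i ~ N(0, sigma^2), i.i.d.
y = X @ beta + eps                    # stacked model: y = X beta + eps

# Row by row, this is exactly y_i = x_i' beta + eps_i:
assert np.allclose(y, [X[i] @ beta + eps[i] for i in range(N)])
</syntaxhighlight>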

We will make a few additional assumptions:

  • [math]\left\{ x_{i},y_{i}\right\} _{i=1}^{N}[/math] are i.i.d., with finite first moments.
  • [math]\left.\varepsilon_{i}\right|X\sim N\left(0,\sigma^{2}\right)[/math]

We will model the conditional distribution of [math]y[/math] as if [math]X[/math] were fixed. In other words, matrix [math]X[/math] contains constants that never change. For example, if we drew a different random sample, we would observe different [math]y[/math]’s, but the same [math]X[/math]. (We would observe different [math]y[/math]’s because of the different draws of the [math]\varepsilon[/math]’s.)

Log-likelihood of [math]Y[/math] conditional on [math]X[/math]

Notice that because [math]y_{i}[/math] equals constants (the regressors) times parameters plus a normal random variable, [math]y_{i}[/math] is itself normally distributed:

[math]\left.y_{i}\right|x_{i}\sim N\left(x_{i}^{'}\beta,\sigma^{2}\right)[/math]

The log-likelihood of [math]y[/math] equals

[math]l\left(\beta,\sigma^{2}\right)=\sum_{i=1}^{N}\left\{ -\frac{1}{2}\log\left(2\pi\right)-\frac{1}{2}\log\left(\sigma^{2}\right)-\frac{1}{2\sigma^{2}}\left(y_{i}-x_{i}^{'}\beta\right)^{2}\right\}[/math]

Note that [math]\widehat{\beta}_{OLS}=\widehat{\beta}_{ML}=\text{argmax}_{\beta}\,l\left(\beta,\sigma^{2}\right)[/math], i.e., the solution for [math]\widehat{\beta}[/math] in the normal linear model is the same as the one from OLS: [math]\beta[/math] only enters the log-likelihood through the last term, so maximizing over [math]\beta[/math] amounts to minimizing [math]\sum_{i=1}^{N}\left(y_{i}-x_{i}^{'}\beta\right)^{2}[/math].
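
As a quick numerical illustration (a sketch assuming NumPy and SciPy, with hypothetical values for the sample size, [math]\beta[/math] and [math]\sigma^{2}[/math]), one can maximize this log-likelihood over [math]\beta[/math] and verify that the result coincides with the least-squares solution:

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
N, K = 200, 3
beta_true = np.array([1.0, -2.0, 0.5])   # hypothetical true coefficients
sigma2 = 2.0                             # hypothetical error variance, treated as known here
X = rng.normal(size=(N, K))
y = X @ beta_true + rng.normal(0.0, np.sqrt(sigma2), size=N)

def loglik(beta):
    # l(beta, sigma^2) = sum_i [ -0.5 log(2 pi) - 0.5 log(sigma^2) - (y_i - x_i'beta)^2 / (2 sigma^2) ]
    resid = y - X @ beta
    return np.sum(-0.5 * np.log(2 * np.pi) - 0.5 * np.log(sigma2) - resid**2 / (2 * sigma2))

beta_ml = minimize(lambda b: -loglik(b), x0=np.zeros(K)).x   # maximize l over beta
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)             # ordinary least squares
print(np.allclose(beta_ml, beta_ols, atol=1e-4))             # True (up to optimizer tolerance)
</syntaxhighlight>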

Matrix Derivation

We will find the vector [math]\widehat{\beta}_{ML}[/math] by minimizing the sum of squared residuals (which, as noted above, is equivalent to maximizing the log-likelihood over [math]\beta[/math]),

[math]\begin{aligned} SSR & =\varepsilon^{'}\varepsilon\\ & =\left(y-X\beta\right)^{'}\left(y-X\beta\right)\\ & =y^{'}y-\underset{\left(1\times1\right)}{\beta^{'}X^{'}y}-\underset{\left(1\times1\right)}{y^{'}X\beta}+\beta^{'}X^{'}X\beta\\ & =y^{'}y-2y^{'}X\beta+\beta^{'}X^{'}X\beta\end{aligned}[/math]

where we have used the fact that [math]\left(X\beta\right)^{'}=\beta^{'}X^{'}[/math], and that [math]\beta^{'}X^{'}y[/math] and [math]y^{'}X\beta[/math] are [math]\left(1\times1\right)[/math] scalars that are each other's transpose, hence equal.
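
A small sanity check of this expansion (a sketch assuming NumPy, with arbitrary illustrative values for [math]X[/math], [math]y[/math] and a candidate [math]\beta[/math]):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
N, K = 50, 3
X = rng.normal(size=(N, K))
y = rng.normal(size=N)
beta = rng.normal(size=K)             # an arbitrary candidate beta, not an estimate

e = y - X @ beta                      # residual vector for this beta
ssr_direct = e @ e                                                  # (y - X beta)'(y - X beta)
ssr_expanded = y @ y - 2 * (y @ X @ beta) + beta @ (X.T @ X) @ beta
assert np.isclose(ssr_direct, ssr_expanded)
</syntaxhighlight>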

Notice that [math]y^{'}y[/math] does not depend on [math]\beta[/math], so we are left with the problem [math]\widehat{\beta}_{ML}=\text{argmin}_{\beta}-2y^{'}X\beta+\beta^{'}X^{'}X\beta[/math]

Taking the first-order condition,

[math]\begin{aligned} foc\left(\beta\right):\, & \left(-2y^{'}X\right)+2\beta^{'}X^{'}X=0\\ \Leftrightarrow & \beta^{'}X^{'}X=y^{'}X\\ \Leftrightarrow & X^{'}X\beta=X^{'}y\\ & \widehat{\beta}_{ML}=\left(X^{'}X\right)^{-1}X^{'}y\end{aligned}[/math]

where we have used the facts that [math]\frac{d}{dv}\left(Av\right)=A[/math] and [math]\frac{d}{dv}\left(v^{'}Av\right)=2v^{'}A[/math] for symmetric [math]A[/math]. (You can also use the transposed convention for vector derivatives; it works as long as you remain consistent with whichever convention you adopt.) Above, we have assumed that [math]X^{'}X[/math] is invertible. It is also useful to note that [math]X^{'}X[/math] is symmetric, which is what allows us to apply the second derivative rule.

We have found the ML estimator, which is consistent with our previous OLS example.
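
A minimal sketch of the closed-form computation (assuming NumPy and simulated data with hypothetical coefficients), cross-checked against NumPy's built-in least-squares routine:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
N, K = 200, 3
beta = np.array([1.0, -2.0, 0.5])        # hypothetical true coefficients
X = rng.normal(size=(N, K))
y = X @ beta + rng.normal(0.0, 1.5, size=N)

# beta_hat = (X'X)^{-1} X'y; solving the normal equations avoids forming the inverse explicitly.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Cross-check against NumPy's least-squares routine.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(beta_hat, beta_lstsq)
</syntaxhighlight>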

Distribution of [math]\widehat{\beta}_{OLS}[/math]

Let’s write the expression for [math]\widehat{\beta}[/math], expanding [math]y[/math], which is really just a function of [math]\beta[/math], [math]\varepsilon[/math] and [math]X[/math]:

[math]\begin{aligned} \widehat{\beta}_{ML} & =\left(X^{'}X\right)^{-1}X^{'}y\\ & =\left(X^{'}X\right)^{-1}X^{'}\left(X\beta+\varepsilon\right)\\ & =\beta+\left(X^{'}X\right)^{-1}X^{'}\varepsilon\end{aligned}[/math]

The result above implies that [math]\widehat{\beta}_{ML}[/math] is a linear combination of normal random variables, given [math]X[/math]. Moreover, the estimator has a mean and variance (remember that the only random variable is [math]\varepsilon[/math]):

[math]E_{\beta}\left(\widehat{\beta}\right)=\beta+\left(X^{'}X\right)^{-1}X^{'}E\left(\varepsilon\right)=\beta[/math]

and

[math]\begin{aligned} Var_{\beta}\left(\widehat{\beta}\right) & =Var\left(\beta+\left(X^{'}X\right)^{-1}X^{'}\varepsilon\right)\\ & =Var\left(\left(X^{'}X\right)^{-1}X^{'}\varepsilon\right)\\ & =\left(X^{'}X\right)^{-1}X^{'}Var\left(\varepsilon\right)X\left(X^{'}X\right)^{-1}\\ & =\left(X^{'}X\right)^{-1}X^{'}E\left(\varepsilon\varepsilon^{'}\right)X\left(X^{'}X\right)^{-1}\end{aligned}[/math]

where [math]E\left(\varepsilon\varepsilon^{'}\right)[/math] is the covariance matrix of [math]\varepsilon[/math], given that [math]E\left(\varepsilon\right)=0[/math]:

[math]Var\left(\varepsilon\right)=E\left(\varepsilon\varepsilon^{'}\right)=\left[\begin{array}{cccc} \sigma^{2} & & & 0\\ & \sigma^{2}\\ & & \ddots\\ 0 & & & \sigma^{2} \end{array}\right]=\sigma^{2}I_{N},[/math]

where [math]I_{N}[/math] is the identity matrix with size [math]\left(N\times N\right)[/math].

Continuing,

[math]\begin{aligned} Var_{\beta}\left(\widehat{\beta}\right) & =\left(X^{'}X\right)^{-1}X^{'}E\left(\varepsilon\varepsilon^{'}\right)X\left(X^{'}X\right)^{-1}\\ & =\left(X^{'}X\right)^{-1}X^{'}\left(\sigma^{2}I_{N}\right)X\left(X^{'}X\right)^{-1}\\ & =\sigma^{2}\left(X^{'}X\right)^{-1}X^{'}X\left(X^{'}X\right)^{-1}\\ & =\sigma^{2}\left(X^{'}X\right)^{-1}.\end{aligned}[/math]
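
As a quick numerical check of this collapse (a sketch assuming NumPy and an arbitrary simulated design with an illustrative [math]\sigma^{2}[/math]):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
N, K, sigma2 = 200, 3, 2.0                  # illustrative values
X = rng.normal(size=(N, K))
XtX_inv = np.linalg.inv(X.T @ X)

# The "sandwich" (X'X)^{-1} X' (sigma^2 I_N) X (X'X)^{-1} collapses to sigma^2 (X'X)^{-1}.
sandwich = XtX_inv @ X.T @ (sigma2 * np.eye(N)) @ X @ XtX_inv
assert np.allclose(sandwich, sigma2 * XtX_inv)
</syntaxhighlight>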

So, we have learned that

[math]\left.\widehat{\beta}_{ML}\right|X\sim N\left(\beta,\sigma^{2}\left(X^{'}X\right)^{-1}\right)[/math]
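
A Monte Carlo sketch of this result (assuming NumPy, with hypothetical values for [math]\beta[/math] and [math]\sigma[/math]): the design [math]X[/math] is drawn once and held fixed, only the errors are redrawn across samples, and the sample mean and covariance of the resulting estimates are compared with [math]\beta[/math] and [math]\sigma^{2}\left(X^{'}X\right)^{-1}[/math]:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(5)
N, K, R = 100, 3, 20_000                  # sample size, regressors, Monte Carlo replications
beta = np.array([1.0, -2.0, 0.5])         # hypothetical true coefficients
sigma = 1.5
X = rng.normal(size=(N, K))               # the design is drawn once and then held fixed
XtX_inv = np.linalg.inv(X.T @ X)

beta_hats = np.empty((R, K))
for r in range(R):
    eps = rng.normal(0.0, sigma, size=N)  # only the errors are redrawn across samples
    y = X @ beta + eps
    beta_hats[r] = np.linalg.solve(X.T @ X, X.T @ y)

print(beta_hats.mean(axis=0))             # approximately beta
print(np.cov(beta_hats, rowvar=False))    # approximately sigma^2 (X'X)^{-1}
print(sigma**2 * XtX_inv)
</syntaxhighlight>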

To reiterate,

  • We require the rank of [math]X[/math] to be [math]K[/math], s.t. [math]\left(X^{'}X\right)^{-1}[/math] exists.
  • [math]X[/math] is treated as fixed, so that all calculations take [math]X[/math] as constant.