# Multicollinearity

Consider the case where $X$ is given by:

$X=\overset{\begin{array}{ccc} \beta_{0} & \beta_{1} & \beta_{2}\end{array}}{\left[\begin{array}{ccc} 1 & 1 & 0\\ 1 & 0 & 1\\ 1 & 1 & 0\\ 1 & 0 & 1\\ 1 & 1 & 0 \end{array}\right]}$

Notice that $\beta_{0}=0;\beta_{1}=1;\beta_{2}=1$ predict the same value of $y_{i}$ as $\beta_{0}=1;\beta_{1}=0;\beta_{2}=0$. In this case, there is no unique solution for $\widehat{\beta}_{OLS}=\text{argmin}_{\beta}\left(y-X\beta\right)^{'}\left(y-X\beta\right).$
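This rank deficiency is easy to verify numerically. A short NumPy sketch (the matrix is the one above; the rest is just a check):

```python
import numpy as np

# The design matrix from the example above: a constant plus two
# indicator columns that sum to the constant column.
X = np.array([[1, 1, 0],
              [1, 0, 1],
              [1, 1, 0],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)

print(np.linalg.matrix_rank(X))   # 2, not 3: the columns are linearly dependent
print(np.linalg.det(X.T @ X))     # effectively 0: (X'X) is singular

# The two coefficient vectors from the text produce identical fitted values:
b_a = np.array([0.0, 1.0, 1.0])
b_b = np.array([1.0, 0.0, 0.0])
print(np.allclose(X @ b_a, X @ b_b))  # True
```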

Issues may also arise when two variables are almost collinear. In this case, it can become challenging to identify the parameters of two highly correlated variables separately. Moreover, perturbing the data may move significance from one parameter to the other, and often only one of the two parameters will be significant (although removing the significant regressor will make the parameter of the remaining regressor significant).
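A small simulation (hypothetical data; the noise scale and sample size are arbitrary choices) illustrates the point: with two nearly collinear regressors, the individual coefficient estimates swing wildly across redraws of the noise, while their sum is estimated precisely.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)   # almost collinear with x1

b1s, b2s = [], []
for _ in range(200):
    y = x1 + x2 + rng.normal(size=n)  # true effects: 1 and 1
    X = np.column_stack([x1, x2])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    b1s.append(b[0])
    b2s.append(b[1])

b1s, b2s = np.array(b1s), np.array(b2s)
print(np.std(b1s), np.std(b2s))   # each coefficient has a large spread
print(np.std(b1s + b2s))          # the joint effect is estimated precisely
```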

When one cares only about prediction, separating the two coefficients may not be crucial: the joint effect is what the researcher is interested in, and it matters little that the effects of each regressor are hard to separate. On the other hand, if one cares about the specific effects, then there is no easy way around the problem.

Often, multicollinearity arises because of the so-called “dummy trap”. In the example above, $x_{1}$ could indicate young age and $x_{2}$ old age. By adding a dummy for every possible case as well as a constant, we have effectively introduced “too many cases” and induced multicollinearity. The fix is to drop one of the dummies: if we include only $x_{1}$, an observation where $x_{1}=0$ means that the individual is old, and that effect will be captured by $\beta_{0}$. When $x_{1}=1$, the effect of an individual being young is characterized by $\beta_{0}+\beta_{1}$. Alternatively, one could keep both dummies and remove the constant from the model.
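A quick check of the dummy trap and its two fixes (the group labels are illustrative):

```python
import numpy as np

# A constant plus dummies for every category is rank deficient;
# dropping one dummy, or dropping the constant, restores full column rank.
young = np.array([1, 0, 1, 0, 1], dtype=float)
old = 1.0 - young
const = np.ones(5)

X_trap = np.column_stack([const, young, old])
X_drop_dummy = np.column_stack([const, young])  # "old" absorbed by the constant
X_drop_const = np.column_stack([young, old])    # one intercept per group

print(np.linalg.matrix_rank(X_trap))        # 2: rank deficient (3 columns)
print(np.linalg.matrix_rank(X_drop_dummy))  # 2: full column rank
print(np.linalg.matrix_rank(X_drop_const))  # 2: full column rank
```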

If multicollinearity is not due to the dummy trap, one can use regression methods that shrink coefficients (e.g., ridge regression), eliminate variables according to specific criteria (e.g., the lasso), or combine them (e.g., principal components regression). These techniques are especially useful in regressions with many variables, sometimes even when $K\gt N$. However, they do not solve the issue of separately identifying the effect of each variable. Ideally, collecting more or better data will solve the problem.
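As a minimal sketch of one such method: ridge regression replaces $X^{'}X$ with $X^{'}X+\lambda I$, which is invertible even under perfect collinearity. The outcome values and the penalty $\lambda=0.1$ below are arbitrary choices for illustration.

```python
import numpy as np

# The rank-deficient dummy-trap design matrix from the earlier example.
X = np.array([[1, 1, 0],
              [1, 0, 1],
              [1, 1, 0],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
y = np.array([1.0, 2.0, 1.5, 2.5, 1.0])  # arbitrary outcomes

lam = 0.1  # arbitrary penalty; in practice chosen by cross-validation
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
print(beta_ridge)  # a unique (biased) estimate despite perfect collinearity
```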

# Partitioned Regression

Partitioned regression is a method to understand how some parameters in OLS depend on others. Consider the decomposition of the linear regression equation

\begin{aligned} & y=X\beta+\varepsilon\\ \Leftrightarrow & y=\left[\begin{array}{cc} X_{1} & X_{2}\end{array}\right]\left[\begin{array}{c} \beta_{1}\\ \beta_{2} \end{array}\right]+\varepsilon\end{aligned}

where we are effectively partitioning the regressor matrix into blocks $X_{1}$ and $X_{2}$ and, correspondingly, the parameter vector $\beta$ into two sub-vectors $\beta_{1}$ and $\beta_{2}$.

What is $\widehat{\beta}_{1}$?

Starting with the OLS normal equation,

\begin{aligned} & X^{'}X\beta=X^{'}y\\ \Leftrightarrow & \left[\begin{array}{c} X_{1}^{'}\\ X_{2}^{'} \end{array}\right]\left[\begin{array}{cc} X_{1} & X_{2}\end{array}\right]\left[\begin{array}{c} \beta_{1}\\ \beta_{2} \end{array}\right]=\left[\begin{array}{c} X_{1}^{'}\\ X_{2}^{'} \end{array}\right]y\\ \Leftrightarrow & \left[\begin{array}{cc} X_{1}^{'}X_{1} & X_{1}^{'}X_{2}\\ X_{2}^{'}X_{1} & X_{2}^{'}X_{2} \end{array}\right]\left[\begin{array}{c} \beta_{1}\\ \beta_{2} \end{array}\right]=\left[\begin{array}{c} X_{1}^{'}\\ X_{2}^{'} \end{array}\right]y\\ \Leftrightarrow & \left\{ \begin{array}{c} X_{1}^{'}X_{1}\beta_{1}+X_{1}^{'}X_{2}\beta_{2}=X_{1}^{'}y\\ X_{2}^{'}X_{1}\beta_{1}+X_{2}^{'}X_{2}\beta_{2}=X_{2}^{'}y \end{array}\right.\end{aligned}

Define the equations immediately above as (1) and (2). Now, premultiply equation (1) by $X_{2}^{'}X_{1}\left(X_{1}^{'}X_{1}\right)^{-1}$ to obtain equation (3):

\begin{aligned} & X_{2}^{'}X_{1}\left(X_{1}^{'}X_{1}\right)^{-1}X_{1}^{'}X_{1}\beta_{1}+X_{2}^{'}X_{1}\left(X_{1}^{'}X_{1}\right)^{-1}X_{1}^{'}X_{2}\beta_{2}=X_{2}^{'}X_{1}\left(X_{1}^{'}X_{1}\right)^{-1}X_{1}^{'}y\\ \Leftrightarrow & X_{2}^{'}X_{1}\beta_{1}+X_{2}^{'}X_{1}\left(X_{1}^{'}X_{1}\right)^{-1}X_{1}^{'}X_{2}\beta_{2}=X_{2}^{'}X_{1}\left(X_{1}^{'}X_{1}\right)^{-1}X_{1}^{'}y\end{aligned}

Subtracting equation (3) from equation (2) yields:

$\left(X_{2}^{'}X_{2}-X_{2}^{'}X_{1}\left(X_{1}^{'}X_{1}\right)^{-1}X_{1}^{'}X_{2}\right)\beta_{2}=\left[X_{2}^{'}-X_{2}^{'}X_{1}\left(X_{1}^{'}X_{1}\right)^{-1}X_{1}^{'}\right]y$

Now, let $P_{1}=X_{1}\left(X_{1}^{'}X_{1}\right)^{-1}X_{1}^{'}$, to get

\begin{aligned} & X_{2}^{'}\left(I-P_{1}\right)X_{2}\beta_{2}=X_{2}^{'}\left(I-P_{1}\right)y\\ \Leftrightarrow & \widehat{\beta_{2}}=\left[X_{2}^{'}\left(I-P_{1}\right)X_{2}\right]^{-1}X_{2}^{'}\left(I-P_{1}\right)y\end{aligned}
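This formula can be verified numerically. The sketch below uses arbitrary simulated data and checks that the $\beta_{2}$ block computed via $I-P_{1}$ matches the corresponding block of the full OLS solution:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])  # constant + one regressor
X2 = rng.normal(size=(n, 2))
y = rng.normal(size=n)

# Full OLS on [X1 X2]
X = np.hstack([X1, X2])
beta_full = np.linalg.solve(X.T @ X, X.T @ y)

# Partitioned formula: beta_2 = [X2'(I-P1)X2]^{-1} X2'(I-P1)y
P1 = X1 @ np.linalg.solve(X1.T @ X1, X1.T)  # projection onto Col(X1)
M1 = np.eye(n) - P1
beta2 = np.linalg.solve(X2.T @ M1 @ X2, X2.T @ M1 @ y)

print(np.allclose(beta2, beta_full[2:]))  # True
```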

In order to interpret this equation, we need to understand the meaning of matrix $P_{1}$. In linear algebra, this matrix is called a projection matrix.

## Projections

Let $P_{X}=X\left(X^{'}X\right)^{-1}X^{'}$. When multiplied by a vector, matrix $P_{X}$ yields another vector that can be obtained by a weighted sum of vectors in $X$. Consider the following representation, which applies to the case where $N=3$ and $K=2$.

When multiplied by vector $y$, matrix $P_{X}$ yields vector $P_{X}y$, which lives in the column space of $X$. This column space, $Col\left(X\right)$, is the set of all vectors that can be obtained as weighted sums of the columns of $X$. In fact, notice that $P_{X}y=X\left(X^{'}X\right)^{-1}X^{'}y=X\widehat{\beta}_{OLS}$, i.e., it is the OLS prediction of $y$.

As for matrix $I-P_{X}$ , this matrix produces a vector that is orthogonal to the column space of $X$. In fact, it is given by the vertical dashed vector in the figure above. Notice that

$\left(I-P_{X}\right)y=y-\widehat{y}=\widehat{\varepsilon},$

i.e., this matrix produces the vector of estimated residuals, which is orthogonal (in the geometric sense) to the column space of $X$.

Projection matrices are symmetric ($P_{X}^{'}=P_{X}$) and idempotent ($P_{X}P_{X}=P_{X}$), the latter meaning that repeated self-multiplication always yields the projection matrix itself.
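The properties above can be checked directly on simulated data (dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 6, 3
X = rng.normal(size=(N, K))
y = rng.normal(size=N)

PX = X @ np.linalg.solve(X.T @ X, X.T)          # projection onto Col(X)
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)    # OLS coefficients

print(np.allclose(PX, PX.T))                    # symmetric
print(np.allclose(PX @ PX, PX))                 # idempotent
print(np.allclose(PX @ y, X @ beta_hat))        # P_X y is the OLS prediction
resid = (np.eye(N) - PX) @ y
print(np.allclose(X.T @ resid, 0))              # residuals orthogonal to Col(X)
```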

## Partitioned Regression (cont.)

With the knowledge of projection matrices, equation

$\widehat{\beta_{2}}=\left[X_{2}^{'}\left(I-P_{1}\right)X_{2}\right]^{-1}X_{2}^{'}\left(I-P_{1}\right)y$

can be rewritten as

$\widehat{\beta_{2}}=\left[X_{2}^{*'}X_{2}^{*}\right]^{-1}X_{2}^{*'}y^{*}$

where

$X_{2}^{*}=\left(I-P_{1}\right)X_{2}$ and $y^{*}=\left(I-P_{1}\right)y$ (notice that we are using the symmetry and idempotence of $I-P_{1}$).

Notice that $y^{*}=\left(I-P_{1}\right)y$ are the residuals from regressing $y$ on $X_{1}$, and $X_{2}^{*}=\left(I-P_{1}\right)X_{2}$ are the residuals from regressing each of the variables in $X_{2}$ on $X_{1}$. Finally, $\widehat{\beta_{2}}$ is obtained from regressing the residuals $y^{*}$ on $X_{2}^{*}$.
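This residual-on-residual interpretation can be verified on simulated data (the coefficients below are arbitrary choices): regress $y$ on $X_{1}$, regress each column of $X_{2}$ on $X_{1}$, then regress the $y$-residuals on the $X_{2}$-residuals.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 80
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
X2 = rng.normal(size=(n, 2))
y = X1 @ np.array([1.0, 2.0]) + X2 @ np.array([0.5, -1.0]) + rng.normal(size=n)

# Residuals from regressing y, and each column of X2, on X1
y_star = y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]
X2_star = X2 - X1 @ np.linalg.lstsq(X1, X2, rcond=None)[0]

# Regressing the residuals recovers the beta_2 block of the full regression
beta2_fwl = np.linalg.lstsq(X2_star, y_star, rcond=None)[0]
beta_full = np.linalg.lstsq(np.hstack([X1, X2]), y, rcond=None)[0]
print(np.allclose(beta2_fwl, beta_full[2:]))  # True
```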

More than a clarification of how OLS operates, partitioned regression (this result is known as the Frisch–Waugh–Lovell theorem) can be used to characterize the variance of two-stage estimators, in which first-stage estimates are plugged into a second stage that produces additional estimates. It can also be used to inform variable selection problems.

# Gauss-Markov Theorem

The Gauss–Markov theorem is an important result for the OLS estimator. It does not depend on asymptotics or normality assumptions. It states that, in the linear regression model - which does include the homoskedasticity assumption - $\widehat{\beta}_{OLS}$ is the best linear unbiased estimator (BLUE) of $\beta$, i.e., the minimum-variance estimator among all linear unbiased estimators.

The proof is not hard. We consider the case where $X$ is fixed (i.e., all expressions are conditioned on $X$; $\varepsilon$ is the random variable).

## Proof

Let

\begin{aligned} \widehat{\beta}_{OLS} & =\left(X^{'}X\right)^{-1}X^{'}y\\ \widetilde{\beta} & =Cy\end{aligned}

where $\widetilde{\beta}$ is some alternative linear estimator, defined by some matrix $C$ with dimensions $\left(K\times N\right)$.

For $\widetilde{\beta}$ to be unbiased, we require that

\begin{aligned} & E_{\beta}\left(\widetilde{\beta}\right)=\beta\\ \Leftrightarrow & E_{\beta}\left(Cy\right)=\beta\\ \Leftrightarrow & E_{\beta}\left(C\left(X\beta+\varepsilon\right)\right)=\beta\\ \Leftrightarrow & CX\beta=\beta\end{aligned}

because $E_{\beta}\left(\varepsilon\right)=0$. Notice that for the equation above to hold, we require $CX=I$.

Let us now calculate the variance of $\widetilde{\beta}$:

$Var_{\beta}\left(\widetilde{\beta}\right)=Var_{\beta}\left(C\left(X\beta+\varepsilon\right)\right)=Var_{\beta}\left(C\varepsilon\right)=C\,Var\left(\varepsilon\right)C^{'}=CC^{'}\sigma^{2}$

Now, define $D$ as the difference between the “slope” of $\widetilde{\beta}$ and $\widehat{\beta}_{OLS}$, s.t. $D=C-\left(X^{'}X\right)^{-1}X^{'}$. Using this definition, we can rewrite $Var_{\beta}\left(\widetilde{\beta}\right)$ as

\begin{aligned} Var_{\beta}\left(\widetilde{\beta}\right) & =CC^{'}\sigma^{2}\\ & =\left(D+\left(X^{'}X\right)^{-1}X^{'}\right)\left(D+\left(X^{'}X\right)^{-1}X^{'}\right)^{'}\sigma^{2}\\ & =DD^{'}\sigma^{2}+\underset{=0}{\underbrace{DX\left(X^{'}X\right)^{-1}\sigma^{2}+\left(X^{'}X\right)^{-1}X^{'}D^{'}\sigma^{2}}}+\underset{Var\left(\widehat{\beta}_{OLS}\right)}{\underbrace{\left(X^{'}X\right)^{-1}\sigma^{2}}}\end{aligned}

The last term equals the variance of the OLS estimator. The second and third terms each equal zero, because

$\left.\begin{array}{c} CX=I\\ CX=DX+I \end{array}\right\} \Rightarrow DX=0$

where the first equation is an implication of unbiasedness, and the second one follows from postmultiplying the definition of $D$ by $X$.

Hence, we have learned that

$Var_{\beta}\left(\widetilde{\beta}\right)=Var\left(\widehat{\beta}_{OLS}\right)+DD^{'}\sigma^{2}.$

Because $DD^{'}$ is a positive semidefinite matrix by construction, $Var_{\beta}\left(\widetilde{\beta}\right)\geq Var\left(\widehat{\beta}_{OLS}\right)$ in the positive semidefinite sense.
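The construction in the proof can be checked numerically: build an alternative linear unbiased estimator $\widetilde{\beta}=Cy$ with $C=\left(X^{'}X\right)^{-1}X^{'}+D$ where $DX=0$, and verify that the variance difference $DD^{'}\sigma^{2}$ is positive semidefinite. The matrices below are arbitrary simulated choices.

```python
import numpy as np

rng = np.random.default_rng(4)
N, K = 10, 3
X = rng.normal(size=(N, K))

A = np.linalg.solve(X.T @ X, X.T)   # the OLS "slope" (X'X)^{-1}X'
Z = rng.normal(size=(K, N))         # arbitrary starting matrix
D = Z - Z @ X @ np.linalg.solve(X.T @ X, X.T)  # projected so that DX = 0
C = A + D                           # an alternative linear estimator

print(np.allclose(D @ X, 0))        # DX = 0, so unbiasedness is preserved
print(np.allclose(C @ X, np.eye(K)))            # CX = I
diff = C @ C.T - A @ A.T            # variance difference divided by sigma^2
print(np.min(np.linalg.eigvalsh(diff)) >= -1e-8)  # positive semidefinite
```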

Finally, note that we did not make distributional assumptions about $\varepsilon$ beyond its first two moments. If $\varepsilon\sim N\left(0,\sigma^{2}I\right)$, then $\widehat{\beta}_{OLS}$ attains the Cramér-Rao lower bound: In this case, $\widehat{\beta}_{OLS}$ is the best unbiased estimator (BUE), i.e., not even non-linear unbiased estimators can be more efficient.