Lecture 14. B) Law of Large Numbers
Theorem: Weak Law of Large Numbers
Let [math]X_{1},X_{2},\ldots[/math] be a sequence of uncorrelated random variables, where [math]E\left(X_{i}\right)=\mu[/math] and [math]Var\left(X_{i}\right)\leq\sigma^{2}\lt \infty[/math].
Then,
[math]\overline{X}_{n}=\frac{1}{n}\sum_{i=1}^{n}X_{i}\overset{p}{\rightarrow}\mu.[/math]
This result is probably not very surprising, and in a sense, we have been using it intuitively for the method-of-moments estimator.
As [math]n[/math] increases, the sample mean converges in probability to the population mean [math]E\left(X_{i}\right)=\mu[/math].
The proof follows from Chebyshev's inequality. Because the [math]X_{i}[/math] are uncorrelated, [math]Var\left(\overline{X}_{n}\right)=\frac{1}{n^{2}}\sum_{i=1}^{n}Var\left(X_{i}\right)\leq\frac{\sigma^{2}}{n}[/math], so
[math]Pr\left(\left|\overline{X}_{n}-\mu\right|\geq\varepsilon\right)\leq\frac{Var\left(\overline{X}_{n}\right)}{\varepsilon^{2}}\leq\frac{\sigma^{2}}{n\varepsilon^{2}}[/math] and hence [math]\lim_{n\rightarrow\infty}\,Pr\left(\left|\overline{X}_{n}-\mu\right|\geq\varepsilon\right)\leq\lim_{n\rightarrow\infty}\,\frac{\sigma^{2}}{n\varepsilon^{2}}=0,\,\forall\varepsilon\gt 0.[/math]
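As a quick sanity check (my illustration, not part of the lecture), the following Python sketch estimates [math]Pr\left(\left|\overline{X}_{n}-\mu\right|\geq\varepsilon\right)[/math] by simulation for Bernoulli draws and compares it with the Chebyshev bound above; the distribution, [math]\varepsilon[/math], and sample sizes are illustrative choices.
[code]
import numpy as np

# Illustrative setup (not from the lecture): Bernoulli(0.5) draws,
# so mu = 0.5 and sigma^2 = 0.25; eps and the sample sizes are arbitrary.
rng = np.random.default_rng(0)
mu, sigma2, eps, reps = 0.5, 0.25, 0.05, 100_000

for n in (100, 1_000, 10_000):
    # Binomial(n, mu)/n is exactly the sample mean of n Bernoulli(mu) draws
    xbar = rng.binomial(n, mu, size=reps) / n
    p_hat = np.mean(np.abs(xbar - mu) >= eps)   # empirical tail probability
    bound = sigma2 / (n * eps**2)               # Chebyshev bound sigma^2/(n eps^2)
    print(f"n={n:>6}  Pr(|Xbar-mu|>=eps) ~ {p_hat:.4f}  Chebyshev bound = {bound:.4f}")
[/code]
The empirical tail probability shrinks toward zero as [math]n[/math] grows and always sits below the (often loose) Chebyshev bound, exactly as the proof predicts.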
Notice that we did not restrict [math]X_{i}[/math] to be a random sample. If we are willing to impose that condition, we can prove a stronger result:
Theorem: Strong Law of Large Numbers
Let [math]X_{1},X_{2},\ldots[/math] be a random sample with mean [math]\mu[/math]. Then,
[math]\overline{X}_{n}\overset{a.s.}{\rightarrow}\mu.[/math]
Although this result applies to any random sample, it does not require that [math]\sigma^{2}[/math] exist; without a finite variance, [math]\overline{X}_{n}[/math] converges almost surely but may fail to converge in quadratic mean.
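To see what this caveat means in practice, here is a minimal simulation sketch (again my illustration, not from the lecture). NumPy's pareto draws from a Pareto II (Lomax) distribution, which for shape [math]a=1.5[/math] has finite mean [math]\mu=1/(a-1)=2[/math] but infinite variance; the running sample mean still settles down to [math]\mu[/math], just slowly because of the heavy tails.
[code]
import numpy as np

# Illustrative heavy-tailed example: Lomax (Pareto II) with shape a = 1.5,
# so E(X_i) = 1/(a-1) = 2 exists but Var(X_i) is infinite.
rng = np.random.default_rng(1)
a = 1.5
mu = 1.0 / (a - 1.0)
x = rng.pareto(a, size=1_000_000)                    # heavy-tailed draws
running_mean = np.cumsum(x) / np.arange(1, x.size + 1)

for n in (10**3, 10**4, 10**5, 10**6):
    print(f"n={n:>8}  Xbar_n = {running_mean[n - 1]:.4f}  (mu = {mu})")
[/code]
The sample path wanders noticeably more than in the finite-variance case, but it still approaches [math]\mu[/math], as the strong law guarantees.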