Full Lecture 7


Random Sample

Let [math]X=\left(X_{1}..X_{n}\right)[/math] be an n-dimensional random vector. The random variables [math]X_{1}..X_{n}[/math] constitute a random sample if they are (mutually) independent and have identical (marginal) distributions.

We usually refer to such variables as being i.i.d.: Independent and Identically Distributed. To reiterate, these variables share the same distribution and are mutually independent (which is stronger than merely being uncorrelated).

It follows that if [math]X[/math] is a random sample from a distribution with cdf [math]F\left(\cdot\right)[/math], then

[math]F_{X_{1}..X_{n}}\left(x_{1}..x_{n}\right)\underset{(independence)}{\underbrace{=}}\prod_{i=1}^{n}F_{X_{i}}\left(x_{i}\right)\underset{(F_{X_{i}}=F,\,\forall i)}{\underbrace{=}}\prod_{i=1}^{n}F\left(x_{i}\right)[/math].

Note that the same multiplicative result applies to the pmf and the pdf.
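
As a quick numerical illustration (a minimal Python sketch, not part of the original notes, assuming numpy is available; the Uniform(0,1) distribution and the evaluation point are arbitrary choices), we can check the factorization empirically for two i.i.d. draws:

import numpy as np

# Sketch: empirically check that F_{X1,X2}(a, b) = F(a) * F(b) for i.i.d. U(0,1) draws.
rng = np.random.default_rng(0)
n_reps = 200_000
x1 = rng.uniform(0, 1, n_reps)
x2 = rng.uniform(0, 1, n_reps)

a, b = 0.3, 0.7
joint = np.mean((x1 <= a) & (x2 <= b))  # empirical P(X1 <= a, X2 <= b)
product = a * b                         # F(a) * F(b), since F(x) = x for U(0,1)
print(joint, product)                   # the two values should be close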


Statistics

Let [math]X_{1}..X_{n}[/math] be a random sample and let [math]T:\mathbb{R}^{n}\rightarrow\mathbb{R}^{k}[/math] be a function (for some [math]k\geq 1[/math]).

The random variable [math]Y=T\left(X_{1}..X_{n}\right)[/math] is called a statistic, and its distribution is called the sampling distribution of [math]Y[/math].

Some Examples

  • The sample mean is [math]\overline{X}=\frac{1}{n}\sum_{i=1}^{n}X_{i}[/math].
  • The sample variance is [math]s^{2}=\frac{1}{n-1}\sum_{i=1}^{n}\left(X_{i}-\overline{X}\right)^{2}[/math].
  • The sample standard deviation is [math]s=\sqrt{s^{2}}[/math].

Notice that each of the statistics above is a random variable. Each random sample of [math]X[/math]s will yield a slightly different sample mean, sample variance, etc.
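
To make this concrete, here is a small illustration (a Python sketch assuming numpy; the [math]N\left(0,1\right)[/math] distribution and sample size of 20 are arbitrary choices). Each fresh sample produces a different value of each statistic:

import numpy as np

# Sketch: draw several random samples and compute the statistics defined above.
rng = np.random.default_rng(1)
for rep in range(3):
    x = rng.normal(loc=0.0, scale=1.0, size=20)  # one random sample of size n = 20
    xbar = x.mean()                              # sample mean
    s2 = x.var(ddof=1)                           # sample variance (note the 1/(n-1) factor)
    s = np.sqrt(s2)                              # sample standard deviation
    print(rep, xbar, s2, s)                      # the values differ from sample to sample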

At this point you may be wondering about the [math]\frac{1}{n-1}[/math] factor in the formula for the sample variance. We will explain that shortly.

Because these statistics are random variables in their own right, they have moments of their own. Here are a few:

Expectation of the Sample Mean

  • [math]E\left(\overline{X}\right)=E\left(\frac{1}{n}\sum_{i=1}^{n}X_{i}\right)=\frac{1}{n}\sum_{i=1}^{n}\underset{=\mu}{\underbrace{E\left(X_{i}\right)}}=\frac{n\mu}{n}=\mu.[/math]

Variance of the Sample Mean

  • [math]Var\left(\overline{X}\right)=Var\left(\frac{1}{n}\sum_{i=1}^{n}X_{i}\right)=\frac{1}{n^{2}}Var\left(\sum_{i=1}^{n}X_{i}\right)\underset{(independence)}{\underbrace{=}}\frac{1}{n^{2}}\sum_{i=1}^{n}Var\left(X_{i}\right)=\frac{n\sigma^{2}}{n^{2}}=\frac{\sigma^{2}}{n}[/math].

The variance result is interesting and intuitive: as the sample size increases, the variance of the sample mean decreases. For example, suppose you took 100 draws of [math]X_{i}[/math] many times over, and each time calculated the mean. (In Excel, each column would contain 100 draws of [math]X_{i}[/math], and a final row would calculate the mean of each column.) The variance of those column means decreases with the number of draws per column (in our case, 100). If we increased the number of draws per column to 1,000,000, the column means would all be very similar, and so the variance of those means would be further reduced.

The result on [math]Var\left(\overline{X}\right)[/math] tells us the specific rate at which the variance of the mean decreases with [math]n[/math].
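
A small simulation (a Python sketch assuming numpy; the choice of [math]N\left(2,3^{2}\right)[/math], the number of replications, and the sample sizes are illustrative) confirms both results above: the average of the sample means stays near [math]\mu[/math], while their variance tracks [math]\frac{\sigma^{2}}{n}[/math].

import numpy as np

# Sketch: for each sample size n, draw many samples, take the mean of each,
# and compare the variance of those means with sigma^2 / n.
rng = np.random.default_rng(2)
mu, sigma, n_reps = 2.0, 3.0, 10_000
for n in (10, 100, 1000):
    means = rng.normal(mu, sigma, size=(n_reps, n)).mean(axis=1)
    print(n, means.mean(), means.var(), sigma**2 / n)
    # means.mean() is close to mu; means.var() is close to sigma^2 / n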

Consider also the following well-known result, for which we provide the derivation in detail:

Expectation of [math]s^{2}[/math]

  • [math]E\left(s^{2}\right)=E\left[\frac{1}{n-1}\sum_{i=1}^{n}\left(X_{i}-\overline{X}\right)^{2}\right]=\frac{1}{n-1}E\left[\sum_{i=1}^{n}\left(X_{i}^{2}-2X_{i}\overline{X}+\overline{X}^{2}\right)\right]=\frac{1}{n-1}E\left[\sum_{i=1}^{n}X_{i}^{2}-2\overline{X}\underset{=n\overline{X}}{\underbrace{\sum_{i=1}^{n}X_{i}}}+n\overline{X}^{2}\right]=\frac{1}{n-1}E\left[\sum_{i=1}^{n}X_{i}^{2}-n\overline{X}^{2}\right][/math][math]=\frac{1}{n-1}\left(nE\left(X_{i}^{2}\right)-nE\left(\overline{X}^{2}\right)\right)=\frac{1}{n-1}\left(n\left(\mu^{2}+\sigma^{2}\right)-n\left(\mu^{2}+\frac{\sigma^{2}}{n}\right)\right)=\frac{n\sigma^{2}-\sigma^{2}}{n-1}=\sigma^{2}[/math]

We have used the fact that [math]E\left(X_{i}^{2}\right)=Var\left(X_{i}\right)+E\left(X_{i}\right)^{2}[/math] and [math]E\left(\overline{X}^{2}\right)=Var\left(\overline{X}\right)+E\left(\overline{X}\right)^{2}[/math].

It may be surprising that [math]E\left(s^{2}\right)=\sigma^{2}[/math], given that the denominator is [math]n-1[/math] rather than [math]n[/math]. The reason the [math]n-1[/math] denominator in [math]s^{2}[/math] yields an expectation of [math]\sigma^{2}[/math] is that the draws of [math]X_{i}[/math] are, on average, closer to their own sample average ([math]\overline{X}[/math]) than to the population mean, [math]E\left(X\right)[/math]. The sum of squared deviations around [math]\overline{X}[/math] is therefore systematically a bit too small, and we must divide by a smaller number than we would if we knew the true population mean.
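
A simulation along the same lines (again a Python sketch assuming numpy; the parameters are illustrative) makes the role of the [math]n-1[/math] denominator visible: dividing by [math]n-1[/math] averages out to [math]\sigma^{2}[/math], while dividing by [math]n[/math] systematically underestimates it.

import numpy as np

# Sketch: compare the 1/(n-1) and 1/n versions of the sample variance over many samples.
rng = np.random.default_rng(3)
mu, sigma, n, n_reps = 0.0, 2.0, 5, 100_000
samples = rng.normal(mu, sigma, size=(n_reps, n))
s2_unbiased = samples.var(axis=1, ddof=1)  # divides by n - 1
s2_biased = samples.var(axis=1, ddof=0)    # divides by n
print(s2_unbiased.mean(), s2_biased.mean(), sigma**2)
# the ddof=1 average is close to sigma^2 = 4; the ddof=0 average is close to (n-1)/n * sigma^2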


Order Statistics

Let [math]X_{1}..X_{n}[/math] be a random sample. The order statistics are the sample values placed in ascending order, i.e.,

[math]X_{\left(1\right)}=\min_{i\leq n}X_{i}\leq X_{\left(2\right)}\leq...\leq X_{\left(n\right)}=\max_{i\leq n}X_{i}[/math]

This is a perhaps unexpected, but often useful, type of statistic. For instance, we can ask what the distribution of the maximum of a random sample is.

For example, if we drew many sets of 30 draws each of [math]X\sim N\left(0,1\right)[/math], what would be the distribution of the maximum of each set (observed across the sets)?

Distribution of the Maximum

The cdf of the maximum of a random sample from a distribution with cdf [math]F\left(\cdot\right)[/math] equals

[math]F_{X_{\left(n\right)}}\left(x\right)=P\left(X_{\left(n\right)}\leq x\right)=P\left(X_{1}\leq x,X_{2}\leq x,...,X_{n}\leq x\right)=P\left(X_{1}\leq x\right)P\left(X_{2}\leq x\right)...P\left(X_{n}\leq x\right)=F\left(x\right)^{n}[/math].
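
Returning to the earlier example with [math]n=30[/math] draws of [math]X\sim N\left(0,1\right)[/math], a short simulation (a Python sketch assuming numpy; [math]\Phi[/math], the standard normal cdf, is computed here with math.erf) compares the empirical cdf of the maximum with [math]\Phi\left(x\right)^{30}[/math]:

import math
import numpy as np

def std_normal_cdf(x):
    # Phi(x), the standard normal cdf, via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Sketch: empirical cdf of the maximum of n = 30 i.i.d. N(0,1) draws vs. F(x)^n.
rng = np.random.default_rng(4)
n, n_reps = 30, 100_000
maxima = rng.normal(0.0, 1.0, size=(n_reps, n)).max(axis=1)
for x in (1.0, 2.0, 3.0):
    print(x, np.mean(maxima <= x), std_normal_cdf(x) ** n)
    # the empirical P(X_(n) <= x) should be close to Phi(x)^30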

The distribution of the lowest order statistic can be calculated via a similar method: the minimum exceeds [math]x[/math] only if every draw does, so [math]F_{X_{\left(1\right)}}\left(x\right)=1-\left(1-F\left(x\right)\right)^{n}[/math].

Distribution of Order Statistics

In general, the distribution of the [math]r[/math]-th order statistic is given by

[math]F_{X_{\left(r\right)}}\left(x\right)=P\left(X_{\left(r\right)}\leq x\right)=\sum_{j=r}^{n}\left(\begin{array}{c} n\\ j \end{array}\right)F\left(x\right)^{j}\left(1-F\left(x\right)\right)^{n-j}[/math]

where the binomial structure is apparent.

For each value of [math]j[/math], starting at [math]r[/math], we sum the probability of observing exactly [math]j[/math] values at or below [math]x[/math] and [math]n-j[/math] values above.

For example, with [math]n=30[/math] as before, [math]P\left(X_{\left(r\right)}\leq x\right)=P\left(X_{\left(r\right)}\leq x\wedge X_{\left(r+1\right)}\gt x\right)+P\left(X_{\left(r+1\right)}\leq x\wedge X_{\left(r+2\right)}\gt x\right)+...+P\left(X_{\left(n\right)}\leq x\right)[/math]: the sum over the binomial terms is simply the sum of the probabilities of the mutually exclusive cases (exactly [math]j[/math] values at or below [math]x[/math], for [math]j=r,...,n[/math]) that imply [math]X_{\left(r\right)}\leq x[/math].
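
The formula can be evaluated directly (a Python sketch assuming numpy and Python's math.comb; the choices of a Uniform(0,1) sample, [math]n=30[/math], and [math]r=5[/math] are illustrative), and simulating the [math]r[/math]-th smallest value agrees with it:

import math
import numpy as np

def order_stat_cdf(x, r, n, F):
    # P(X_(r) <= x) = sum_{j=r}^{n} C(n, j) * F(x)^j * (1 - F(x))^(n - j)
    return sum(math.comb(n, j) * F(x) ** j * (1 - F(x)) ** (n - j) for j in range(r, n + 1))

# Sketch: compare the formula with a simulation for U(0,1), where F(x) = x on [0, 1].
rng = np.random.default_rng(5)
n, r, n_reps = 30, 5, 100_000
samples = np.sort(rng.uniform(0, 1, size=(n_reps, n)), axis=1)
rth_smallest = samples[:, r - 1]  # the r-th order statistic of each sample
x = 0.2
print(order_stat_cdf(x, r, n, F=lambda t: t), np.mean(rth_smallest <= x))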


Statistical Inference

This point marks the end of our introduction to the probability tools we need. Our goal now shifts from situations where the distributions are known and the outcomes are unknown to situations where we observe the outcomes but not the distributions (or know them only up to some parameters). We will keep denoting random variables by capital letters, and will denote observed outcomes by lowercase letters. Some examples:

  • We may observe [math]x_{1}...x_{n}[/math] where [math]X_{i}\sim Ber\left(p\right)[/math] where [math]p\in\left(0,1\right)[/math] is unknown.
  • We may observe [math]x_{1}...x_{n}[/math] where [math]X_{i}\sim U\left(0,\theta\right)[/math] where [math]\theta\gt 0[/math] is unknown.
  • We may observe [math]x_{1}...x_{n}[/math] where [math]X_{i}\sim N\left(\mu,\sigma^{2}\right)[/math] where [math]\mu\in\mathbb{R}[/math], [math]\sigma^{2}\gt 0[/math], or both are unknown.

We will consider three types of statistical inference:

  • Point Estimation
    • In this case, we want to single out one distribution (specifically, the parameters of the distribution).
  • Hypothesis Testing
    • In this case, we want to evaluate a specific theory (for example, that [math]\mu=0[/math]).
  • Interval Estimation
    • In this case, we want to isolate which values of [math]\theta[/math] are plausible.