Lecture 7. D) Statistical Inference
Statistical Inference
This point marks the end of our introduction of the needed probability tools. Our goal now shifts from situations where the distributions are known and the outcomes are unknown to situations where we observe the outcomes but the distributions are known only up to some parameters. We will keep denoting random variables by capital letters, and will denote observed outcomes by lowercase letters. Some examples (a simulation sketch follows the list):
- We may observe [math]x_{1},\ldots,x_{n}[/math] where [math]X_{i}\sim Ber\left(p\right)[/math] and [math]p\in\left(0,1\right)[/math] is unknown.
- We may observe [math]x_{1},\ldots,x_{n}[/math] where [math]X_{i}\sim U\left(0,\theta\right)[/math] and [math]\theta\gt 0[/math] is unknown.
- We may observe [math]x_{1},\ldots,x_{n}[/math] where [math]X_{i}\sim N\left(\mu,\sigma^{2}\right)[/math] and [math]\mu\in\mathbb{R}[/math], [math]\sigma^{2}\gt 0[/math], or both are unknown.
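To make the setup concrete, here is a minimal NumPy sketch (not from the lecture) that simulates data for each of the three examples. The parameter values used to generate the data are arbitrary choices made only for illustration; the inference problem is to recover them from the observed [math]x_{1},\ldots,x_{n}[/math] alone.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n = 100

# In practice these parameter values are unknown; we fix them here only to generate data.
p_true, theta_true, mu_true, sigma_true = 0.3, 2.0, 1.0, 2.0  # arbitrary, for illustration

x_ber = rng.binomial(1, p_true, size=n)           # X_i ~ Ber(p)
x_unif = rng.uniform(0.0, theta_true, size=n)     # X_i ~ U(0, theta)
x_norm = rng.normal(mu_true, sigma_true, size=n)  # X_i ~ N(mu, sigma^2), sigma = 2 so sigma^2 = 4

# The inference task: recover the unknown parameters from the observed samples alone.
print(x_ber.mean())                       # natural guess for p
print(x_unif.max())                       # natural guess for theta
print(x_norm.mean(), x_norm.var(ddof=1))  # natural guesses for mu and sigma^2
</syntaxhighlight>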
We will consider three types of statistical inference (a sketch illustrating all three follows the list):
- Point Estimation
- In this case, we want to single out one distribution (specifically, the parameters of the distribution).
- Hypothesis Testing
- In this case, we want to evaluate a specific theory (for example, that [math]\mu=0[/math]).
- Interval Estimation
- In this case, we want to identify a range of parameter values (e.g. of [math]\theta[/math]) that are plausible given the data.
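As an illustration (not part of the lecture's formal development), the following sketch applies all three types of inference to the normal example with unknown [math]\mu[/math]: the sample mean as a point estimate, a one-sample t-test of the theory [math]\mu=0[/math], and a 95% t-based confidence interval. SciPy is assumed to be available, and the simulated data are purely hypothetical.
<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(1.0, 2.0, size=50)  # observed data; to the analyst, mu and sigma^2 are unknown

# Point estimation: single out one value for mu.
mu_hat = x.mean()

# Hypothesis testing: evaluate the specific theory that mu = 0.
t_stat, p_value = stats.ttest_1samp(x, popmean=0.0)

# Interval estimation: a 95% confidence interval of plausible values for mu.
se = x.std(ddof=1) / np.sqrt(len(x))
ci_low, ci_high = stats.t.interval(0.95, df=len(x) - 1, loc=mu_hat, scale=se)

print(f"point estimate: {mu_hat:.3f}")
print(f"t-test of mu=0: t={t_stat:.3f}, p-value={p_value:.4f}")
print(f"95% CI for mu: ({ci_low:.3f}, {ci_high:.3f})")
</syntaxhighlight>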