Lecture 16. A) Bayesian Inference

Bayesian Inference

In the classical (frequentist) approach, a probability is regarded as a long-run frequency or propensity. We imagine drawing repeated random samples from an infinite population, and the parameter [math]\theta[/math] is treated as fixed but unknown.

In the Bayesian approach, a probability is a subjective degree of belief. Parameters themselves are regarded as random variables, and beliefs about them are updated in light of the data.

Ingredients

  • Model for the data given the parameters, [math]f_{\left.X\right|p}\left(\left.x\right|p\right)[/math].
  • Prior distribution of the parameters, [math]f_{p}\left(p\right)[/math].
  • Bayes' theorem (illustrated in the worked example below): [math]\underset{\text{posterior distribution}}{\underbrace{f_{\left.p\right|X}\left(\left.p\right|x\right)}}=\frac{f_{X,p}\left(x,p\right)}{f_{X}\left(x\right)}=\frac{\overset{\text{likelihood function}}{\overbrace{f_{\left.X\right|p}\left(\left.x\right|p\right)}}\cdot\overset{\text{prior distribution}}{\overbrace{f_{p}\left(p\right)}}}{\int f_{\left.X\right|p}\left(\left.x\right|p\right)f_{p}\left(p\right)\,dp}[/math]
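
As a standard worked example (a textbook case, not tied to any dataset in these notes): suppose [math]X_{1},\ldots,X_{n}[/math] are i.i.d. Bernoulli[math]\left(p\right)[/math] and the prior is [math]p\sim\text{Beta}\left(a,b\right)[/math]. With [math]s=\sum_{i}x_{i}[/math] successes, the numerator of Bayes' theorem is proportional to [math]p^{s}\left(1-p\right)^{n-s}\cdot p^{a-1}\left(1-p\right)^{b-1}[/math], so the posterior is [math]\left.p\right|x\sim\text{Beta}\left(a+s,b+n-s\right)[/math]. The prior and posterior belong to the same family; such a prior is called conjugate.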

In this framework, estimating a parameter means finding the posterior [math]f_{\left.p\right|X}[/math] given observed data [math]x_{1},\ldots,x_{n}[/math]. Unlike in the classical approach, the parameter is now distributed according to the prior [math]f_{p}\left(\cdot\right)[/math], although we'll keep writing it in lowercase. In actual estimation, the prior may encode the researcher's past experience or data from previous experiments; in practice, it is often chosen to be relatively uninformative.

The posterior distribution is the updated distribution of the parameters, conditional on the observed data. We can use it to generate point estimates, for example by taking the posterior mean of the parameter.
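
To make the updating step concrete, the following is a minimal numerical sketch (not from the lecture): it approximates the posterior on a grid of [math]p[/math] values, assuming a flat Beta(1, 1) prior and a small made-up Bernoulli sample. All variable names and the data are illustrative.

```python
import numpy as np

# Grid approximation of the posterior f(p | x) for i.i.d. Bernoulli data.
# Illustrative setup: flat Beta(1, 1) prior on p, made-up sample of 10 trials.
data = np.array([1, 1, 0, 1, 1, 1, 0, 1, 0, 1])
n, s = len(data), int(data.sum())                 # n trials, s successes

p_grid = np.linspace(0.001, 0.999, 999)           # candidate values of p
dp = p_grid[1] - p_grid[0]
prior = np.ones_like(p_grid)                      # flat (uninformative) prior
likelihood = p_grid**s * (1 - p_grid)**(n - s)    # f(x | p)

# Bayes' theorem: posterior is proportional to likelihood * prior;
# the denominator f(x) is approximated by a Riemann sum over the grid.
unnorm = likelihood * prior
posterior = unnorm / (unnorm.sum() * dp)

# Point estimate: posterior mean E[p | x].
post_mean = (p_grid * posterior).sum() * dp
print(f"posterior mean of p: {post_mean:.3f}")    # ~ (1 + 7) / (2 + 10) = 0.667
```

With the flat prior, the grid result agrees with the closed-form conjugate answer above: the posterior is Beta(1 + s, 1 + n - s), whose mean here is 8/12, or about 0.667.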