Full Lecture 5

Parametric Families of Distributions

When working in statistics, it is often useful to draw conclusions that apply to many distributions at once. To that end, we now define classes of distributions, often referred to as families.

Exponential Family

The set of pmfs/pdfs [math]\left\{ f\left(\left.\cdot\right|\theta\right):\theta\in\Theta\right\}[/math] is called an exponential family if [math]f\left(\left.x\right|\theta\right)=h\left(x\right)c\left(\theta\right)\exp\left[\sum_{i=1}^{K}\omega_{i}\left(\theta\right)t_{i}\left(x\right)\right],\,x\in\mathbb{R},\theta\in\Theta[/math]

where

[math]h:\mathbb{R}\rightarrow\mathbb{R}_{+},c:\Theta\rightarrow\mathbb{R}_{++},\omega_{i}:\Theta\rightarrow\mathbb{R}\,\forall i,t_{i}:\mathbb{R}\rightarrow\mathbb{R}\,\forall i[/math] and some [math]K\geq1[/math].

Normal Distribution

The normal distribution is part of the exponential family, as we now show:

[math]\begin{aligned} f\left(\left.x\right|\mu,\sigma^{2}\right) & =\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac{\left(x-\mu\right)^{2}}{2\sigma^{2}}\right)\\ & =\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac{1}{2\sigma^{2}}\left(x^{2}+\mu^{2}-2\mu x\right)\right)\\ & =\underset{h\left(x\right)}{\underbrace{1}}\cdot\underset{c\left(\mu,\sigma^{2}\right)}{\underbrace{\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac{\mu^{2}}{2\sigma^{2}}\right)}}\underset{\exp\left[\sum_{i=1}^{K}\omega_{i}\left(\theta\right)t_{i}\left(x\right)\right]}{\underbrace{\exp\left(-\frac{x^{2}}{2\sigma^{2}}+\frac{\mu}{\sigma^{2}}x\right)}}\end{aligned}[/math]

where

[math]\omega_{1}\left(\mu,\sigma^{2}\right)=-\frac{1}{2\sigma^{2}};t_{1}\left(x\right)=x^{2};\omega_{2}\left(\mu,\sigma^{2}\right)=\frac{\mu}{\sigma^{2}};t_{2}\left(x\right)=x[/math].
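As a sanity check, this factorization can be verified numerically. Below is a minimal sketch (assuming NumPy and SciPy are available; the values of [math]\mu[/math], [math]\sigma[/math], and the evaluation grid are arbitrary choices) comparing the factored form against SciPy's normal pdf:

<syntaxhighlight lang="python">
# Numerical check: h(x) * c(mu, sigma^2) * exp(w1*t1(x) + w2*t2(x))
# should reproduce the N(mu, sigma^2) pdf.
import numpy as np
from scipy.stats import norm

mu, sigma = 1.5, 2.0
x = np.linspace(-5.0, 8.0, 7)                         # arbitrary evaluation points

h = 1.0                                               # h(x) = 1
c = np.exp(-mu**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)  # c(mu, sigma^2)
w1, t1 = -1.0 / (2 * sigma**2), x**2                  # omega_1 and t_1(x) = x^2
w2, t2 = mu / sigma**2, x                             # omega_2 and t_2(x) = x

factored = h * c * np.exp(w1 * t1 + w2 * t2)
assert np.allclose(factored, norm.pdf(x, loc=mu, scale=sigma))
</syntaxhighlight>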

Bernoulli Distribution

The Bernoulli distribution also belongs to the exponential family:

[math]\begin{aligned} f\left(\left.x\right|p\right) & =\begin{cases} p, & x=1\\ 1-p, & x=0\\ 0, & \text{otherwise} \end{cases}\\ & =\begin{cases} p^{x}\left(1-p\right)^{1-x}, & x\in\left\{ 0,1\right\} \\ 0, & \text{otherwise} \end{cases}\\ & =1\left(x\in\left\{ 0,1\right\} \right)p^{x}\left(1-p\right)^{1-x}\end{aligned}[/math] where [math]1\left(\cdot\right)[/math] is the indicator function.

Factorization yields:

[math]\begin{aligned} f\left(\left.x\right|p\right) & =1\left(x\in\left\{ 0,1\right\} \right)p^{x}\left(1-p\right)^{1-x}\\ & =1\left(x\in\left\{ 0,1\right\} \right)p^{x}\left(1-p\right)\left(1-p\right)^{-x}\\ & =1\left(x\in\left\{ 0,1\right\} \right)\left(1-p\right)\left(\frac{p}{1-p}\right)^{x}\\ & =\underset{h\left(x\right)}{\underbrace{1\left(x\in\left\{ 0,1\right\} \right)}}\underset{c\left(p\right)}{\underbrace{\left(1-p\right)}}\exp\left(\underset{\omega_{1}}{\underbrace{\log\left(\frac{p}{1-p}\right)}}\underset{t_{1}}{\underbrace{x}}\right)\end{aligned}[/math]
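The same kind of numerical check works here. A minimal sketch (again assuming NumPy; [math]p=0.3[/math] is an arbitrary choice):

<syntaxhighlight lang="python">
# Numerical check: (1 - p) * exp(log(p / (1 - p)) * x) should reproduce
# the Bernoulli pmf p^x * (1 - p)^(1 - x) on the support {0, 1}.
import numpy as np

p = 0.3
for x in (0, 1):
    h = 1.0                                 # indicator 1(x in {0, 1}), equal to 1 here
    c = 1.0 - p                             # c(p) = 1 - p
    w1, t1 = np.log(p / (1 - p)), x         # omega_1(p) and t_1(x) = x
    factored = h * c * np.exp(w1 * t1)
    assert np.isclose(factored, p**x * (1 - p)**(1 - x))
</syntaxhighlight>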

Remarks

  • The factor [math]c\left(\theta\right)[/math] is the normalizing constant of the pmf/pdf: since its role is to make the function sum (or integrate) to 1, it can always be recovered from the remaining factors.
  • The support of the pmfs/pdfs of an exponential family does not depend on [math]\theta[/math], i.e., [math]S_{X}=\left\{ x\in\mathbb{R}:f\left(\left.x\right|\theta\right)\gt 0\right\} =\left\{ x\in\mathbb{R}:h\left(x\right)\gt 0\right\}[/math]. Otherwise, the factorization into [math]h\left(x\right)c\left(\theta\right)[/math] would be impossible. For example, the uniform distribution with pdf [math]f_{X}\left(\left.x\right|a,b\right)=\frac{1}{b-a}1\left(a\leq x\leq b\right)[/math] does not belong to the exponential family, since it is impossible to separate [math]a[/math] and [math]b[/math] from [math]x[/math] in the indicator function.

Location-Scale Family

A (parametric) family [math]\mathcal{F}[/math] of pdfs is a location-scale family if it is of the form

[math]\mathcal{F}=\left\{ \frac{1}{\sigma}f\left(\frac{\cdot-\mu}{\sigma}\right):\mu\in\mathbb{R},\sigma\gt 0\right\}[/math]

where [math]f\left(\cdot\right)[/math] is the standard pdf of the family, [math]\mu[/math] is the location parameter and [math]\sigma[/math] is the scale parameter. The idea is that [math]\frac{1}{\sigma}f\left(\frac{\cdot-\mu}{\sigma}\right)[/math] is the pdf of [math]\mu+\sigma\widetilde{X}[/math] where [math]\widetilde{X}[/math] has pdf [math]f\left(\cdot\right)[/math].

Clearly, r.v.s distributed [math]N\left(\mu,\sigma^{2}\right)[/math] belong to the location-scale family generated by the [math]N\left(0,1\right)[/math] pdf. Similarly, r.v.s distributed [math]U\left(a,b\right)[/math] belong to the location-scale family generated by the [math]U\left(0,1\right)[/math] pdf.
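The defining identity is easy to verify numerically in the normal case. Below is a minimal sketch (assuming NumPy and SciPy; the values of [math]\mu[/math], [math]\sigma[/math], and the grid are arbitrary) comparing the [math]N\left(\mu,\sigma^{2}\right)[/math] pdf with [math]\frac{1}{\sigma}f\left(\frac{x-\mu}{\sigma}\right)[/math], where [math]f[/math] is the standard normal pdf:

<syntaxhighlight lang="python">
# Location-scale check: the pdf of mu + sigma * X~, with X~ ~ N(0, 1),
# equals (1 / sigma) * f((x - mu) / sigma).
import numpy as np
from scipy.stats import norm

mu, sigma = -1.0, 3.0
x = np.linspace(-10.0, 8.0, 9)              # arbitrary evaluation points

lhs = norm.pdf(x, loc=mu, scale=sigma)      # pdf of N(mu, sigma^2)
rhs = norm.pdf((x - mu) / sigma) / sigma    # (1 / sigma) * f((x - mu) / sigma)
assert np.allclose(lhs, rhs)
</syntaxhighlight>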

Pdfs that differ from the standard pdf only in the location parameter form its location family; those that differ only in the scale parameter form its scale family.



Chebyshev's Inequality

This inequality bounds the probability that a function of an r.v. takes large values; as we will see, a special case bounds how far an r.v. can stray from its mean. Suppose [math]X[/math] is an r.v. Then, for any [math]r\gt 0[/math] and any [math]g:\mathbb{R}\rightarrow\mathbb{R}_{++}[/math],

[math]P\left(g\left(X\right)\geq r\right)\leq\frac{E\left(g\left(X\right)\right)}{r}[/math].

Proof

The proof is relatively simple. Note that

[math]\begin{aligned} & \forall x\in\mathbb{R},\,r1\left(g\left(x\right)\geq r\right)\leq g\left(x\right)\\ \Leftrightarrow & \forall x\in\mathbb{R},1\left(g\left(x\right)\geq r\right)\leq\frac{g\left(x\right)}{r}\\ \Rightarrow & \underset{=P\left(g\left(X\right)\geq r\right)}{\underbrace{E\left[1\left(g\left(X\right)\geq r\right)\right]}}\leq E\left(\frac{g\left(X\right)}{r}\right)\end{aligned}[/math]

Implications

From this result, we can derive some popular implications:

  • [math]P\left(\left|X\right|\geq r\right)\leq\frac{E\left(\left|X\right|\right)}{r},\,\forall r\gt 0[/math] - here, [math]g\left(x\right)=\left|x\right|[/math]. This is often referred to as Markov's inequality.
  • [math]P\left(\left|X-\mu\right|\geq r\right)\leq\frac{E\left(\left|X-\mu\right|\right)}{r},\,\forall r\gt 0[/math] - here, [math]g\left(x\right)=\left|x-\mu\right|[/math].
  • From the previous inequality, we can obtain an expression that relates deviations from the mean to the variance:

[math]\begin{aligned} & P\left(\left|X-\mu\right|\geq\varepsilon\right)\\ = & P\left(\left(X-\mu\right)^{2}\geq\varepsilon^{2}\right)\leq\frac{E\left(\left(X-\mu\right)^{2}\right)}{\varepsilon^{2}}=\frac{Var\left(X\right)}{\varepsilon^{2}}\end{aligned}[/math] which implies that [math]P\left(\left|X-\mu\right|\geq\varepsilon\right)\leq\frac{Var\left(X\right)}{\varepsilon^{2}}[/math]. This is the classical statement of Chebyshev's inequality, obtained by taking [math]g\left(x\right)=\left(x-\mu\right)^{2}[/math] and [math]r=\varepsilon^{2}[/math].

We have established a bound on how much [math]X[/math] can vary around its mean, as a function of its variance.
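A quick Monte Carlo illustration of this bound (a sketch assuming NumPy; the exponential distribution and the value of [math]\varepsilon[/math] are arbitrary choices):

<syntaxhighlight lang="python">
# Monte Carlo illustration of P(|X - mu| >= eps) <= Var(X) / eps^2
# for X ~ Exponential with scale 2 (mean 2, variance 4).
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=1_000_000)
mu, var, eps = 2.0, 4.0, 3.0

empirical = np.mean(np.abs(x - mu) >= eps)  # estimated tail probability, ~0.08
bound = var / eps**2                        # Chebyshev bound, ~0.44
assert empirical <= bound
</syntaxhighlight>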



Multiple Random Variables

An n-dimensional vector [math]X=\left(X_{1},\ldots,X_{n}\right)'[/math] is a random vector if [math]X_{1},\ldots,X_{n}[/math] are random variables (defined on the same probability space).

We will mostly discuss bivariate distributions, but the results generally extend to the n-dimensional case.

Joint CDF

The joint cdf of random vector [math]\left(X,Y\right)[/math] is the function [math]F_{X,Y}:\mathbb{R}^{2}\rightarrow\left[0,1\right][/math], given by

[math]F_{X,Y}\left(x,y\right)=P\left(X\leq x,Y\leq y\right),\forall\left(x,y\right)'\in\mathbb{R}^{2}.[/math]

Joint PMF/PDF

  • [math]\left(X,Y\right)'[/math] is discrete if [math]\exists f_{X,Y}:\mathbb{R}^{2}\rightarrow[0,1][/math] s.t. [math]F_{X,Y}\left(x,y\right)=\sum_{s\leq x}\sum_{t\leq y}f_{X,Y}\left(s,t\right),\,\forall\left(x,y\right)'\in\mathbb{R}^{2}.[/math]
  • [math]\left(X,Y\right)'[/math] is continuous if [math]\exists f_{X,Y}:\mathbb{R}^{2}\rightarrow\mathbb{R}_{+}[/math] s.t. [math]F_{X,Y}\left(x,y\right)=\int_{-\infty}^{x}\int_{-\infty}^{y}f_{X,Y}\left(s,t\right)dtds,\,\forall\left(x,y\right)'\in\mathbb{R}^{2}.[/math]

The remaining properties we discussed in the univariate case carry over: functions with the appropriate domains and codomains that sum (or integrate) to one are valid pmfs/pdfs. Expectations also extend in the natural way.

Let [math]g:\mathbb{R}^{2}\rightarrow\mathbb{R}[/math]. The expected value of [math]g\left(X,Y\right)[/math] is

[math]E\left(g\left(X,Y\right)\right)=\begin{cases} \sum_{\left(s,t\right)\in\mathbb{R}^{2}}g\left(s,t\right)f_{X,Y}\left(s,t\right), & \text{if}\,\left(X,Y\right)'\,\text{is discrete}\\ \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}g\left(s,t\right)f_{X,Y}\left(s,t\right)dtds, & \text{if}\,\left(X,Y\right)'\,\text{is continuous} \end{cases}[/math]
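As an illustration of the continuous case, the expectation can be computed by numerical double integration. Below is a minimal sketch (assuming SciPy; the choice of independent standard normal [math]X,Y[/math] and [math]g\left(x,y\right)=xy+y^{2}[/math] is arbitrary, with exact answer [math]E\left(XY\right)+E\left(Y^{2}\right)=0+1=1[/math]):

<syntaxhighlight lang="python">
# E[g(X, Y)] for independent standard normals via SciPy's dblquad.
from scipy.integrate import dblquad
from scipy.stats import norm

def integrand(t, s):                   # dblquad passes the inner variable first
    g = s * t + t**2                   # g(s, t) = s * t + t^2
    f = norm.pdf(s) * norm.pdf(t)      # joint pdf of independent N(0, 1) r.v.s
    return g * f

# Integrate s (outer) and t (inner) over [-10, 10], effectively the whole line.
expectation, _ = dblquad(integrand, -10, 10, lambda s: -10, lambda s: 10)
print(expectation)                     # approximately 1
</syntaxhighlight>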