# Full Lecture 4

There exist many distributions. You can create your own right now. However, over time, some distributions have revealed themselves as particularly useful, and so it's good to keep track of them. We only discuss univariate distributions in this lecture. We start by presenting a few discrete distributions, and then discuss three continuous distributions.

# Bernoulli Distribution

An r.v. $X$ has a Bernoulli distribution with parameter $p\in\left[0,1\right]$ if $X$ is discrete with p.m.f. $f_{X}\left(x\right)=\begin{cases} p, & if\,x=1\\ 1-p, & if\,x=0\\ 0 & otherwise \end{cases}$

The Bernoulli distribution captures the outcome of a binary experiment. For example, one could throw a biased coin, with probability of Heads equal to $p$.

## Mean

$E\left(X\right)=\sum_{x\in S}xf_{X}\left(\left.x\right|p\right)=0\cdot f_{X}\left(\left.0\right|p\right)+1\cdot f_{X}\left(\left.1\right|p\right)=p,\,$ where $S=\left\{ x:f_{X}\left(\left.x\right|p\right)\gt 0\right\}$ is the support of $X$.

## Variance

$Var\left(X\right)=E\left[\left(X-E\left(X\right)\right)^{2}\right]=E\left(X^{2}\right)-E\left(X\right)^{2}=\underset{E\left(X^{2}\right)=0^{2}\cdot f_{X}\left(\left.0\right|p\right)+1^{2}\cdot f_{X}\left(\left.1\right|p\right)}{\underbrace{p}}-p^{2}=p\left(1-p\right)$

## MGF

\begin{aligned} M_{X}\left(t\right) & =E\left(\exp\left(tX\right)\right)=\sum_{x\in S}\exp\left(tx\right)f_{X}\left(\left.x\right|p\right)\\ & =\exp\left(t\cdot0\right)f_{X}\left(\left.0\right|p\right)+\exp\left(t\cdot1\right)f_{X}\left(\left.1\right|p\right)\\ & =1-p+p\exp\left(t\right)\end{aligned}
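As a quick sanity check, the mean, variance, and MGF can be computed directly from the pmf. This is a minimal sketch; the value $p=0.3$ is an arbitrary choice.

```python
import math

p = 0.3  # arbitrary parameter choice
support = [0, 1]
pmf = {0: 1 - p, 1: p}

# E(X) and Var(X) from the definition of expectation over the support
mean = sum(x * pmf[x] for x in support)
var = sum(x ** 2 * pmf[x] for x in support) - mean ** 2

def mgf(t):
    # M_X(t) = sum over the support of exp(t*x) * f_X(x)
    return sum(math.exp(t * x) * pmf[x] for x in support)

print(mean, var)  # p and p*(1 - p)
```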

# Binomial Distribution

An r.v. $X$ follows a Binomial distribution with parameters $n\in\mathbb{N}$, $p\in\left[0,1\right]$ if $X$ is discrete with pmf

$f_{X}\left(\left.x\right|n,p\right)=\begin{cases} \left(\begin{array}{c} n\\ x \end{array}\right)p^{x}\left(1-p\right)^{n-x}, & if\,x\in\left\{ 0,1,...,n\right\} \\ 0, & otherwise \end{cases}$

where $\left(\begin{array}{c} n\\ x \end{array}\right)$ is called the binomial coefficient, and is defined by $\left(\begin{array}{c} n\\ x \end{array}\right)=\frac{n!}{x!\left(n-x\right)!}$

The Binomial distribution characterizes the number of successes in a binary (Bernoulli) experiment repeated independently $n$ times. Parameter $n$ is the number of trials, $p$ is the probability of success, and $x$ is the realized number of successes. If $X\sim Bin\left(1,p\right)$, then $X\sim Ber\left(p\right)$.

## Mean

$E\left(X\right)=np$

## Variance

$Var\left(X\right)=np\left(1-p\right)$

## MGF

$M_{X}\left(t\right)=\left(1-p+p\exp\left(t\right)\right)^{n}$

Notice that the expressions above are clearly related to their Bernoulli analogues.
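One way to see this relation numerically is to check that the Binomial MGF equals the Bernoulli MGF raised to the power $n$. A minimal sketch, with $n=10$, $p=0.3$ as arbitrary choices:

```python
import math

n, p = 10, 0.3  # arbitrary parameter choices

def pmf(x):
    # Binomial pmf: C(n, x) * p^x * (1-p)^(n-x)
    return math.comb(n, x) * p ** x * (1 - p) ** (n - x)

mean = sum(x * pmf(x) for x in range(n + 1))
var = sum(x ** 2 * pmf(x) for x in range(n + 1)) - mean ** 2

def mgf(t):
    # E(exp(tX)) computed directly from the pmf
    return sum(math.exp(t * x) * pmf(x) for x in range(n + 1))

print(mean, var)  # np and np(1 - p)
```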

# Poisson Distribution

An r.v. $X$ follows a Poisson distribution with parameter $\lambda\gt 0$, if $X$ is discrete with pmf $f_{X}\left(x\right)=\begin{cases} \exp\left(-\lambda\right)\frac{\lambda^{x}}{x!}, & x\in\mathbb{N}_{0}\\ 0, & otherwise \end{cases}$

The Poisson distribution characterizes a process with constant arrival rate, $\lambda$ (expressed as number of arrivals per unit of time).

Fun fact: $Bin\left(n,p\right)\simeq Pois\left(np\right)$ for $n$ large and $p$ small (so that $np$ is moderate).
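This approximation can be illustrated numerically by comparing the two pmfs. A minimal sketch ($n=1000$, $p=0.002$ are arbitrary choices, so $\lambda=np=2$; the sum is truncated at 40 since both pmfs are negligible beyond that):

```python
import math

n, p = 1000, 0.002  # arbitrary choices: n large, p small
lam = n * p

def binom_pmf(x):
    return math.comb(n, x) * p ** x * (1 - p) ** (n - x)

def pois_pmf(x):
    return math.exp(-lam) * lam ** x / math.factorial(x)

# Total variation distance between the two pmfs; a small value means
# the distributions are close (tail terms beyond x = 40 are negligible)
tv = 0.5 * sum(abs(binom_pmf(x) - pois_pmf(x)) for x in range(40))
print(tv)
```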

## Mean

$E\left(X\right)=\sum_{x=0}^{\infty}xf_{X}\left(x\right)=\sum_{x=0}^{\infty}x\exp\left(-\lambda\right)\frac{\lambda^{x}}{x!}=\sum_{x=1}^{\infty}\exp\left(-\lambda\right)\frac{\lambda^{x}}{\left(x-1\right)!}=\lambda\sum_{x=1}^{\infty}\underset{f_{X}\left(\left.x-1\right|\lambda\right)}{\underbrace{\exp\left(-\lambda\right)\frac{\lambda^{x-1}}{\left(x-1\right)!}}}=\lambda\underset{=1}{\underbrace{\sum_{t=0}^{\infty}f_{X}\left(\left.t\right|\lambda\right)}}=\lambda$.

## Variance

$Var\left(X\right)=\lambda$ since $\underset{2nd\,factorial\,moment\,of\,X}{\underbrace{E\left(X\left(X-1\right)\right)}}=\sum_{x=0}^{\infty}x\left(x-1\right)\exp\left(-\lambda\right)\frac{\lambda^{x}}{x!}=\sum_{x=2}^{\infty}\exp\left(-\lambda\right)\frac{\lambda^{x}}{\left(x-2\right)!}=\lambda^{2}\sum_{x=2}^{\infty}\exp\left(-\lambda\right)\frac{\lambda^{x-2}}{\left(x-2\right)!}=\lambda^{2}$ and $Var\left(X\right)=E\left(X^{2}\right)-E\left(X\right)^{2}=E\left(X\left(X-1\right)\right)+E\left(X\right)-E\left(X\right)^{2}=\lambda^{2}+\lambda-\lambda^{2}=\lambda$.

## MGF

$M_{X}\left(t\right)=\exp\left(\lambda\left(\exp\left(t\right)-1\right)\right)$
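The factorial-moment trick above is easy to verify numerically by truncating the Poisson series. A minimal sketch ($\lambda=3.5$ is an arbitrary choice; terms beyond $x=100$ are negligible):

```python
import math

lam = 3.5  # arbitrary parameter choice

def pmf(x):
    return math.exp(-lam) * lam ** x / math.factorial(x)

xs = range(100)  # truncation; remaining mass is negligible
mean = sum(x * pmf(x) for x in xs)
second_factorial = sum(x * (x - 1) * pmf(x) for x in xs)  # E(X(X-1))
var = second_factorial + mean - mean ** 2

print(mean, var)  # both approximately lambda
```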

# Uniform Distribution on $\left[a,b\right]$

An r.v. $X$ follows a uniform distribution $U\left(a,b\right)$ if $X$ is continuous with pdf $f_{X}\left(x\right)=\begin{cases} \frac{1}{b-a}, & x\in\left[a,b\right]\\ 0, & otherwise \end{cases}$

Under the Uniform distribution, all values in $\left[a,b\right]$ are “equally likely.”

Notice that if $X\sim U\left(a,b\right)$, then $X=\left(b-a\right)\widetilde{X}+a$ where $\widetilde{X}\sim U\left(0,1\right)$, and $f_{\widetilde{X}}\left(x\right)=1\left(x\in\left[0,1\right]\right)$.

## Mean

$E\left(\widetilde{X}\right)=\int_{0}^{1}xdx=\frac{1}{2}$.

So, $E\left(X\right)=E\left(\left(b-a\right)\widetilde{X}+a\right)=\left(b-a\right)E\left(\widetilde{X}\right)+a=\frac{a+b}{2}$

## Variance

$Var\left(\widetilde{X}\right)=E\left(\widetilde{X}^{2}\right)-E\left(\widetilde{X}\right)^{2}=\int_{0}^{1}x^{2}dx-\left(\frac{1}{2}\right)^{2}=\frac{1}{3}-\frac{1}{4}=\frac{1}{12}$.

So, $Var\left(X\right)=Var\left(\left(b-a\right)\widetilde{X}+a\right)=\left(b-a\right)^{2}Var\left(\widetilde{X}\right)=\frac{\left(b-a\right)^{2}}{12}$.

## MGF

$M_{X}\left(t\right)=\exp\left(at\right)M_{\widetilde{X}}\left(\left(b-a\right)t\right)=...=\frac{\exp\left(bt\right)-\exp\left(at\right)}{\left(b-a\right)t}$
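The location-scale relation $X=\left(b-a\right)\widetilde{X}+a$ also gives a simple way to simulate from $U\left(a,b\right)$ and check the mean and variance by Monte Carlo. A minimal sketch ($a=2$, $b=5$ are arbitrary choices):

```python
import random

random.seed(0)  # fixed seed for reproducibility
a, b = 2.0, 5.0  # arbitrary endpoint choices

# Simulate X = (b - a) * U + a with U ~ U(0, 1)
draws = [(b - a) * random.random() + a for _ in range(200_000)]
mc_mean = sum(draws) / len(draws)
mc_var = sum((x - mc_mean) ** 2 for x in draws) / len(draws)

print(mc_mean, mc_var)  # close to (a + b)/2 and (b - a)^2 / 12
```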

# Gamma Distribution

An r.v. $X$ follows a Gamma distribution with parameters $\alpha,\beta\gt 0$ if $X$ is continuous with pdf $f_{X}\left(x\right)=\begin{cases} \frac{1}{\Gamma\left(\alpha\right)\beta^{\alpha}}x^{\alpha-1}\exp\left(-\frac{x}{\beta}\right), & x\gt 0\\ 0, & otherwise \end{cases}$ where $\Gamma\left(\cdot\right)$ is the gamma function, given by $\Gamma\left(\alpha\right)=\int_{0}^{\infty}t^{\alpha-1}\exp\left(-t\right)dt,\,\alpha\gt 0$. The Gamma function is a natural extension of the factorial operation, because $\Gamma\left(\alpha+1\right)=\alpha\Gamma\left(\alpha\right)$ and $\Gamma\left(1\right)=1$, which imply that $\Gamma\left(n+1\right)=n!\,\forall n\in\mathbb{N}.$ The Gamma distribution is especially useful for Bayesian estimation, which we will cover later.

This is a good time to describe a common property of pdfs. Notice that over its support, the function $\frac{1}{\Gamma\left(\alpha\right)\beta^{\alpha}}x^{\alpha-1}\exp\left(-\frac{x}{\beta}\right)$ has some factors that depend on $x$, and others that do not: $f_{X}\left(x\right)=\frac{1}{\Gamma\left(\alpha\right)\beta^{\alpha}}x^{\alpha-1}\exp\left(-\frac{x}{\beta}\right)=\underset{normalizing\,constant}{\underbrace{\frac{1}{\Gamma\left(\alpha\right)\beta^{\alpha}}}}\cdot\underset{kernel\,of\,pdf}{\underbrace{x^{\alpha-1}\exp\left(-\frac{x}{\beta}\right)}}$ The normalizing constant does not depend on $x$. It is there simply to make sure that the function integrates to one. From this, we immediately learn that $\int_{0}^{\infty}x^{\alpha-1}\exp\left(-\frac{x}{\beta}\right)dx=\Gamma\left(\alpha\right)\beta^{\alpha}.$
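The kernel identity can be confirmed by numerical integration. A minimal sketch using `scipy.integrate.quad` ($\alpha=2.5$, $\beta=1.5$ are arbitrary choices):

```python
import math
from scipy.integrate import quad

alpha, beta = 2.5, 1.5  # arbitrary parameter choices

# Integrate the kernel x^(alpha - 1) * exp(-x / beta) over (0, inf)
kernel_integral, _ = quad(lambda x: x ** (alpha - 1) * math.exp(-x / beta),
                          0, math.inf)

normalizer = math.gamma(alpha) * beta ** alpha
print(kernel_integral, normalizer)  # the two should agree
```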

## Mean

Consider first the r.v. $\widetilde{X}\sim Gam\left(\alpha,1\right)$, and note that $X=\beta\widetilde{X}\sim Gam\left(\alpha,\beta\right)$. Then,

$E\left(\widetilde{X}\right)=\int_{0}^{\infty}xf_{\widetilde{X}}\left(\left.x\right|\alpha,1\right)dx=\int_{0}^{\infty}x\frac{1}{\Gamma\left(\alpha\right)}x^{\alpha-1}\exp\left(-x\right)dx=\frac{1}{\Gamma\left(\alpha\right)}\frac{\Gamma\left(\alpha+1\right)}{\Gamma\left(\alpha+1\right)}\int_{0}^{\infty}x^{\alpha}\exp\left(-x\right)dx=\frac{\Gamma\left(\alpha+1\right)}{\Gamma\left(\alpha\right)}\underset{=\int_{0}^{\infty}f_{\widetilde{X}}\left(\left.x\right|\alpha+1,1\right)dx=1}{\underbrace{\int_{0}^{\infty}\frac{1}{\Gamma\left(\alpha+1\right)}x^{\alpha}\exp\left(-x\right)dx}}=\alpha$.

So, $E\left(X\right)=E\left(\beta\widetilde{X}\right)=\beta E\left(\widetilde{X}\right)=\alpha\beta$.

## Variance

$Var\left(\widetilde{X}\right)=E\left(\widetilde{X}^{2}\right)-E\left(\widetilde{X}\right)^{2}=...=\alpha$, so $Var\left(X\right)=\alpha\beta^{2}$.

## MGF

$M_{X}\left(t\right)=\left(1-\beta t\right)^{-\alpha},\,t\lt \frac{1}{\beta}$

# Normal Distribution

Recall: Random variable $X$ follows a normal distribution $N\left(\mu,\sigma^{2}\right)$ if it is continuous with pdf $f_{X}\left(\left.x\right|\mu,\sigma^{2}\right)=\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac{\left(x-\mu\right)^{2}}{2\sigma^{2}}\right),x\in\mathbb{R}$

The normal distribution is by far the most important continuous distribution. Its main claim to fame is that it can be shown (as we will, later) that averages of a large number of random variables, under some conditions, are normally distributed. This result is called the central limit theorem.

Note that if $X\sim N\left(\mu,\sigma^{2}\right)$, $X=\mu+\sigma\widetilde{X}$ where $\widetilde{X}\sim N\left(0,1\right)$.

## CDF

The cdf of the normal distribution does not admit a closed-form representation. However, we do use a shorthand that relies on the $N\left(0,1\right)$ distribution:

$F_{X}\left(x\right)=P\left(X\leq x\right)=P\left(\mu+\sigma\widetilde{X}\leq x\right)=P\left(\widetilde{X}\leq\frac{x-\mu}{\sigma}\right)=\Phi\left(\frac{x-\mu}{\sigma}\right)$, where $\Phi\left(\cdot\right)$ is the standard normal cdf, i.e.,

$\Phi\left(x\right)=\int_{-\infty}^{x}\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{t^{2}}{2}\right)dt,\,x\in\mathbb{R}$.
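The standardization identity $F_{X}\left(x\right)=\Phi\left(\frac{x-\mu}{\sigma}\right)$ is what numerical libraries exploit. A minimal sketch using `scipy.stats.norm` ($\mu=1$, $\sigma=2$, $x=2.5$ are arbitrary choices):

```python
from scipy.stats import norm

mu, sigma, x = 1.0, 2.0, 2.5  # arbitrary choices

lhs = norm.cdf(x, loc=mu, scale=sigma)  # F_X(x) for X ~ N(mu, sigma^2)
rhs = norm.cdf((x - mu) / sigma)        # Phi((x - mu) / sigma)
print(lhs, rhs)  # identical by standardization
```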

# Dirac delta function

The Dirac delta function is not really a pdf. However, it can sometimes be useful when working with mass points. It is defined as $\delta:\mathbb{R}\rightarrow\mathbb{R}\cup\left\{ +\infty\right\}$, s.t.,

$\delta\left(x\right)=\begin{cases} +\infty, & x=0\\ 0, & otherwise \end{cases}$

and

$\int_{-\infty}^{\infty}\delta\left(x\right)dx=1$

This behaves like a valid pdf, except it is not a proper function (its codomain includes infinity).

## Sifting Property

This property is especially useful. It states:

$\int_{-\infty}^{\infty}f\left(x\right)\delta\left(x-\alpha\right)dx=f\left(\alpha\right)$

as long as $f\left(\cdot\right)$ is continuous at $\alpha$.

## Sketch of the Proof

Let

$g\left(x\right)=\begin{cases} \frac{1}{2\Delta}, & -\Delta\leq x\leq\Delta\\ 0, & otherwise \end{cases}$

and notice that $g\left(x\right)$ is a pdf with support $\left[-\Delta,\Delta\right]$.

In addition, notice that $\lim_{\Delta\rightarrow0}\,g\left(x\right)=\delta\left(x\right).$

Then,

\begin{aligned} \int_{-\infty}^{\infty}f\left(x\right)\delta\left(x-\alpha\right)dx & =\int_{-\infty}^{\infty}\lim_{\Delta\rightarrow0}\,f\left(x\right)g\left(x-\alpha\right)dx\\ & =\lim_{\Delta\rightarrow0}\int_{-\infty}^{\infty}f\left(x\right)g\left(x-\alpha\right)dx\\ & =\lim_{\Delta\rightarrow0}\,\frac{1}{2\Delta}\int_{-\infty}^{\infty}f\left(x\right)1\left[-\Delta\leq x-\alpha\leq\Delta\right]dx\\ & =\lim_{\Delta\rightarrow0}\,\frac{1}{2\Delta}\int_{\alpha-\Delta}^{\alpha+\Delta}f\left(x\right)dx\\ & =\lim_{\Delta\rightarrow0}\,\frac{F\left(\alpha+\Delta\right)-F\left(\alpha-\Delta\right)}{2\Delta}\\ & =f\left(\alpha\right)\end{aligned}

Clearly, some conditions are needed for the steps above to be valid. Also, when facing a proper integral, it is possible to show that $\int_{\underline{x}}^{\overline{x}}f\left(x\right)\delta\left(x-\alpha\right)dx=f\left(\alpha\right)1\left(\underline{x}\leq\alpha\leq\overline{x}\right).$
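The box-kernel argument above is easy to see numerically: the average of $f$ over $\left[\alpha-\Delta,\alpha+\Delta\right]$ approaches $f\left(\alpha\right)$ as $\Delta\rightarrow0$. A minimal sketch ($f\left(x\right)=\sin\left(x\right)$ and $\alpha=0.7$ are arbitrary choices):

```python
import math
from scipy.integrate import quad

f = math.sin   # arbitrary continuous test function
alpha = 0.7    # arbitrary evaluation point

def box_average(delta):
    # (1 / (2*delta)) * integral of f over [alpha - delta, alpha + delta],
    # i.e., the integral of f against the box kernel g from the proof
    val, _ = quad(f, alpha - delta, alpha + delta)
    return val / (2 * delta)

for delta in (1.0, 0.1, 0.001):
    print(delta, box_average(delta))  # approaches sin(0.7) as delta shrinks
```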

## Example

Let

$Y=\begin{cases} 1, & \text{w.p. }\alpha\\ U\left(0,1\right) & \text{w.p. }1-\alpha \end{cases}$

The distribution of $Y$ is neither continuous nor discrete. It has a mass point at 1; otherwise, it is a uniform distribution on the $\left[0,1\right]$ support.

The “pdf” of $Y$ can be written as

$f_{Y}\left(y\right)=\alpha\delta\left(y-1\right)+\left(1-\alpha\right)1\left(y\in\left[0,1\right]\right)$

The expectation of $Y$ can be calculated as:

\begin{aligned} E\left(Y\right) & =\int_{-\infty}^{\infty}y\left(\alpha\delta\left(y-1\right)+\left(1-\alpha\right)1\left(y\in\left[0,1\right]\right)\right)dy\\ & =\alpha\int_{-\infty}^{\infty}y\delta\left(y-1\right)dy+\left(1-\alpha\right)\int_{0}^{1}ydy\\ & =\alpha\cdot1+\left(1-\alpha\right)\left.\frac{y^{2}}{2}\right|_{0}^{1}\\ & =\alpha+\frac{1-\alpha}{2}\\ & =\frac{1+\alpha}{2}\end{aligned}

The result makes sense: when $\alpha$ approaches 1, $Y$ converges to a mass point on $1$, and $E\left(Y\right)=1$. When $\alpha$ approaches zero, $Y$ converges to a standard uniform distribution, and $E\left(Y\right)=\frac{1}{2}$.
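The mixture can also be simulated directly, which confirms $E\left(Y\right)=\frac{1+\alpha}{2}$ by Monte Carlo. A minimal sketch ($\alpha=0.3$ is an arbitrary choice):

```python
import random

random.seed(0)  # fixed seed for reproducibility
alpha = 0.3     # arbitrary mixing probability

def draw_y():
    # with probability alpha return the mass point at 1,
    # otherwise return a U(0, 1) draw
    return 1.0 if random.random() < alpha else random.random()

n = 200_000
mc_mean = sum(draw_y() for _ in range(n)) / n
print(mc_mean)  # close to (1 + alpha) / 2
```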