
# Correspondence Theorem

Let $P_{X}\left(\cdot\right)$ and $P_{Y}\left(\cdot\right)$ be probability functions defined on $\mathcal{B}\left(\mathbf{R}\right)$, and let $F_{X}\left(\cdot\right)$ and $F_{Y}\left(\cdot\right)$ be the associated cdfs. Then,

$P_{X}\left(\cdot\right)=P_{Y}\left(\cdot\right)$ iff $F_{X}\left(\cdot\right)=F_{Y}\left(\cdot\right)$.

The correspondence theorem assures us that we can restrict ourselves to cdfs: working with them loses nothing relative to working with the probability functions themselves.

## CDFs

Function $F:\mathbf{R}\rightarrow\left[0,1\right]$ is a cdf if it satisfies the following conditions:

• $\lim_{x\rightarrow-\infty}F\left(x\right)=0$
• $\lim_{x\rightarrow+\infty}F\left(x\right)=1$
• $F\left(\cdot\right)$ is non-decreasing
• $F\left(\cdot\right)$ is right-continuous (this can be shown by using probability functions of intervals)
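The four conditions above can be checked numerically for a concrete cdf. The sketch below (function name `norm_cdf` is ours) uses the standard normal cdf, $\Phi(x)=\frac{1}{2}\left(1+\operatorname{erf}\left(x/\sqrt{2}\right)\right)$, and verifies the limits and monotonicity on a grid:

```python
import math

def norm_cdf(x):
    """Standard normal cdf via the error function: Phi(x) = (1 + erf(x/sqrt(2))) / 2."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Limits at -infinity and +infinity, checked at large finite arguments.
assert norm_cdf(-40.0) < 1e-12
assert norm_cdf(40.0) > 1.0 - 1e-12

# Non-decreasing on a grid of points.
grid = [x / 10.0 for x in range(-100, 101)]
assert all(norm_cdf(a) <= norm_cdf(b) for a, b in zip(grid, grid[1:]))
```

Right-continuity is automatic here since $\Phi$ is continuous everywhere; for discrete random variables, it is exactly the convention that places the jump's value at the jump point.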

## Nature of RVs

We now define the nature of a random variable:

Random variable $X$ is discrete if $\exists f_{X}:\mathbf{R}\rightarrow\left[0,1\right]$, positive on at most a countable set, s.t. $F_{X}\left(x\right)=\sum_{t\leq x}f_{X}\left(t\right),x\in\mathbf{R}$

Function $f_{X}$ is called the probability mass function (pmf).

Random variable $X$ is continuous if $\exists f_{X}:\mathbf{R}\rightarrow\mathbf{R}_{+}$ s.t. $F_{X}\left(x\right)=\int_{-\infty}^{x}f_{X}\left(t\right)dt,x\in\mathbf{R}$

Any such $f_{X}$ is called a probability density function (pdf). Notice that, unlike pmfs, multiple pdfs can be consistent with a given cdf: any two pdfs that differ only on a set of (Lebesgue) measure zero yield the same cdf.

Another interesting remark is that a continuous random variable assigns probability zero to every individual value, i.e., $P_{X}\left(\left\{ x\right\} \right)=0,\forall x\in\mathbf{R}$.

## Examples

### Coin tossing

$F_{X}\left(x\right)=\begin{cases} 0, & x\lt 0\\ \frac{1}{2}, & 0\leq x\lt 1\\ 1, & x\geq1 \end{cases}$ In this case, $X$ is discrete and $F_{X}$ is a step function (this always occurs for discrete r.v.s). The probability mass function is equal to $f_{X}\left(x\right)=\begin{cases} \frac{1}{2}, & x\in\left\{ 0,1\right\} \\ 0, & \text{otherwise} \end{cases}$
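The relationship $F_{X}\left(x\right)=\sum_{t\leq x}f_{X}\left(t\right)$ can be made concrete for the coin toss. A minimal sketch (function names are ours) builds the step-function cdf by summing the pmf over support points up to $x$:

```python
def coin_pmf(x):
    # pmf of a fair-coin indicator: mass 1/2 at 0 and mass 1/2 at 1.
    return 0.5 if x in (0, 1) else 0.0

def coin_cdf(x):
    # F_X(x) = sum of the pmf over support points t <= x.
    return sum(coin_pmf(t) for t in (0, 1) if t <= x)

assert coin_cdf(-0.5) == 0.0   # below the support
assert coin_cdf(0.3) == 0.5    # between the two jumps
assert coin_cdf(2.0) == 1.0    # above the support
```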

### Uniform distribution on (0,1)

$F_{X}\left(x\right)=\begin{cases} 0, & x\lt 0\\ x, & 0\leq x\lt 1\\ 1, & x\geq1 \end{cases}$ where $X$ is continuous.

Moreover, both $f_{X}\left(x\right)=\begin{cases} 1, & x\in\left[0,1\right]\\ 0, & \text{otherwise} \end{cases}$ and $f_{X}\left(x\right)=\begin{cases} 1, & x\in\left(0,1\right)\\ 0, & \text{otherwise} \end{cases}$ are consistent pdfs.
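The claim that both pdfs are consistent can be verified numerically: since they differ only at the two endpoints (a measure-zero set), integrating either one recovers the same cdf $F_{X}\left(x\right)=x$ on $\left(0,1\right)$. A sketch using a midpoint Riemann sum (all names are ours):

```python
def pdf_closed(t):
    # Density 1 on the closed interval [0, 1].
    return 1.0 if 0.0 <= t <= 1.0 else 0.0

def pdf_open(t):
    # Density 1 on the open interval (0, 1); differs only at the endpoints.
    return 1.0 if 0.0 < t < 1.0 else 0.0

def cdf_from_pdf(pdf, x, n=100000):
    # Midpoint Riemann sum of the pdf from 0 up to x (the pdf vanishes below 0).
    lo, hi = 0.0, min(max(x, 0.0), 1.0)
    if hi <= lo:
        return 0.0
    h = (hi - lo) / n
    return sum(pdf(lo + (i + 0.5) * h) for i in range(n)) * h

for x in (0.25, 0.5, 0.9):
    assert abs(cdf_from_pdf(pdf_closed, x) - x) < 1e-6
    assert abs(cdf_from_pdf(pdf_open, x) - x) < 1e-6
```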

### Normal distribution

A r.v. $X$ has a standard normal distribution, $X\sim N\left(0,1\right)$, if it is continuous with pdf $f_{X}\left(x\right)=\frac{1}{\sqrt{2\pi}}e^{-\frac{x^{2}}{2}},x\in\mathbf{R}$
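The normalizing constant $\frac{1}{\sqrt{2\pi}}$ is exactly what makes this pdf integrate to one. A quick numerical sanity check (the function name is ours; the tails beyond $\pm 10$ contribute negligibly):

```python
import math

def norm_pdf(x):
    # Standard normal density: exp(-x^2/2) / sqrt(2*pi).
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

# Midpoint Riemann sum over [-10, 10], which captures essentially all the mass.
n = 200000
lo, hi = -10.0, 10.0
h = (hi - lo) / n
total = sum(norm_pdf(lo + (i + 0.5) * h) for i in range(n)) * h
assert abs(total - 1.0) < 1e-9
```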

## PMFs and PDFs

Notice that pmfs, and in an analogous sense pdfs, ‘add up’ to one. A theorem states that this normalization condition is not just necessary but also sufficient. For the pmf,

$f:\mathbf{R}\rightarrow\left[0,1\right]$, positive on at most a countable set, is the pmf of a discrete r.v. iff $\sum_{x\in\mathbf{R}}f\left(x\right)=1$

And for the pdf,

$f:\mathbf{R}\rightarrow\mathbf{R}_{+}$ is the pdf of a continuous r.v. iff $\int_{-\infty}^{\infty}f\left(x\right)dx=1$
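As an illustration of the pmf direction, the geometric-type function $f\left(k\right)=\left(\frac{1}{2}\right)^{k}$ on $\left\{ 1,2,\ldots\right\}$ sums to one and hence qualifies as a pmf by the theorem. A sketch (the truncation at 200 terms is ours; the remaining tail is below machine precision):

```python
# f(k) = (1/2)^k on {1, 2, ...} is a geometric series summing to 1,
# so by the theorem it is the pmf of some discrete random variable.
total = sum(0.5 ** k for k in range(1, 200))
assert abs(total - 1.0) < 1e-12
```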

It’s clear from the examples above that one can specify the distribution of a random variable either through its distribution function or through its probability mass/density function. Sometimes, however, it is advantageous to specify a distribution through a transformation. For example, let $Y=X^{2}$, where $X\sim N\left(0,1\right)$. This takes us to discussing transformations of random variables.
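For the example $Y=X^{2}$, the cdf of $Y$ can be derived directly from that of $X$: for $y\geq 0$, $F_{Y}\left(y\right)=P\left(-\sqrt{y}\leq X\leq\sqrt{y}\right)=2\Phi\left(\sqrt{y}\right)-1$. A Monte Carlo sketch (sample size and seed are arbitrary choices of ours) checks this formula against the empirical cdf of squared normal draws:

```python
import math
import random

def norm_cdf(x):
    # Standard normal cdf via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def cdf_Y(y):
    # For Y = X^2 with X ~ N(0,1):
    # F_Y(y) = P(-sqrt(y) <= X <= sqrt(y)) = 2*Phi(sqrt(y)) - 1, for y >= 0.
    return 2.0 * norm_cdf(math.sqrt(y)) - 1.0 if y > 0 else 0.0

random.seed(0)
samples = [random.gauss(0.0, 1.0) ** 2 for _ in range(200000)]
for y in (0.5, 1.0, 2.0):
    empirical = sum(s <= y for s in samples) / len(samples)
    assert abs(empirical - cdf_Y(y)) < 0.01
```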