# Correspondence Theorem

Let $P_{X}\left(\cdot\right)$ and $P_{Y}\left(\cdot\right)$ be probability functions defined on $\mathcal{B}\left(\mathbf{R}\right)$, and let $F_{X}\left(\cdot\right)$ and $F_{Y}\left(\cdot\right)$ be the associated cdfs. Then,

$P_{X}\left(\cdot\right)=P_{Y}\left(\cdot\right)$ iff $F_{X}\left(\cdot\right)=F_{Y}\left(\cdot\right).$

The correspondence theorem assures us that we can restrict ourselves to cdfs: relying on them does not restrict us in any way compared to using probability functions.

## CDFs

A function $F:\mathbf{R}\rightarrow\left[0,1\right]$ is a cdf if it satisfies the following conditions:

• $\lim_{x\rightarrow-\infty}F\left(x\right)=0.$
• $\lim_{x\rightarrow+\infty}F\left(x\right)=1.$
• $F\left(\cdot\right)$ is non-decreasing.
• $F\left(\cdot\right)$ is right-continuous (this can be shown by using probability functions of intervals).
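These conditions can be checked numerically for a concrete cdf. Below is a minimal sketch (assuming scipy is available) using the standard normal cdf as the example; the infinite limits are probed at large finite points:

```python
import numpy as np
from scipy.stats import norm

F = norm.cdf  # a known cdf, used here only as a test case

# Limits at -infinity and +infinity, probed at large finite points.
assert F(-40.0) < 1e-12
assert F(40.0) > 1 - 1e-12

# Non-decreasing on a fine grid.
xs = np.linspace(-5, 5, 1001)
assert np.all(np.diff(F(xs)) >= 0)

# Right-continuity: F(x + h) -> F(x) as h -> 0 from above.
x, h = 0.3, 1e-8
assert abs(F(x + h) - F(x)) < 1e-6
```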

## Nature of RVs

We now define the natures of random variables:

• Random variable $X$ is discrete if

$\exists f_{X}:\mathbf{R}\rightarrow\left[0,1\right]$ s.t. $F_{X}\left(x\right)=\sum_{t\leq x}f_{X}\left(t\right),x\in\mathbf{R}.$

Function $f_{X}$ is called the probability mass function (pmf).

• Random variable $X$ is continuous if

$\exists f_{X}:\mathbf{R}\rightarrow\mathbf{R}_{+}$ s.t. $F_{X}\left(x\right)=\int_{-\infty}^{x}f_{X}\left(t\right)dt,x\in\mathbf{R}.$

Any such $f_{X}$ is called a probability density function (pdf). Notice that, unlike pmfs, multiple pdfs are consistent with a given cdf: any two pdfs that differ only on a set of measure zero induce the same cdf.

Another interesting remark is that a continuous random variable assigns zero probability to any single value, i.e., $P_{X}\left(\left\{ x\right\} \right)=0,\forall x\in\mathbf{R}$.
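This follows from the continuity of the cdf: $P_{X}\left(\left\{ x\right\} \right)$ equals the jump of $F_{X}$ at $x$, which is zero. A quick numerical sketch (assuming scipy), using the standard normal at an arbitrary point:

```python
from scipy.stats import norm

# P({x}) = F(x) - lim_{h -> 0+} F(x - h): the size of the jump of the cdf
# at x. A continuous cdf has no jumps, so this is zero.
x, h = 1.3, 1e-9
jump = norm.cdf(x) - norm.cdf(x - h)
assert jump < 1e-8
```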

## Examples

### Coin tossing

$F_{X}\left(x\right)=\begin{cases} 0, & x\lt 0\\ \frac{1}{2}, & 0\leq x\lt 1\\ 1, & x\geq1 \end{cases}$

In this case, $X$ is discrete and $F_{X}$ is a step function (this always occurs for discrete r.v.s).

The probability mass function is equal to $f_{X}\left(x\right)=\begin{cases} \frac{1}{2}, & x\in\left\{ 0,1\right\} \\ 0, & \text{otherwise} \end{cases}$.
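The cdf can be rebuilt from this pmf by summing over the support, exactly as in the definition $F_{X}\left(x\right)=\sum_{t\leq x}f_{X}\left(t\right)$:

```python
# Coin-tossing pmf: mass 1/2 at each of the points 0 and 1.
def f_X(t):
    return 0.5 if t in (0, 1) else 0.0

def F_X(x):
    # Sum the pmf over the support points not exceeding x.
    return sum(f_X(t) for t in (0, 1) if t <= x)

assert F_X(-0.5) == 0.0   # below the support
assert F_X(0.5) == 0.5    # one atom captured
assert F_X(2.0) == 1.0    # both atoms captured
```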

### Uniform distribution on (0,1)

$F_{X}\left(x\right)=\begin{cases} 0, & x\lt 0\\ x, & 0\leq x\lt 1\\ 1, & x\geq1 \end{cases}$ where $X$ is continuous.

Moreover, both $f_{X}\left(x\right)=\begin{cases} 1, & x\in\left[0,1\right]\\ 0, & \text{otherwise} \end{cases}$ and $f_{X}\left(x\right)=\begin{cases} 1, & x\in\left(0,1\right)\\ 0, & \text{otherwise} \end{cases}$ are consistent pdfs.
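The two pdfs differ only at the endpoints, a set of measure zero, so they integrate to the same cdf. A numerical sketch (assuming scipy):

```python
from scipy.integrate import quad

# Two candidate uniform(0,1) pdfs differing only at the endpoints.
f_closed = lambda t: 1.0 if 0 <= t <= 1 else 0.0
f_open = lambda t: 1.0 if 0 < t < 1 else 0.0

# Both integrate to the same cdf, F_X(x) = x on [0, 1].
for x in (0.25, 0.5, 0.9):
    F_closed, _ = quad(f_closed, -1.0, x, points=[0.0])
    F_open, _ = quad(f_open, -1.0, x, points=[0.0])
    assert abs(F_closed - x) < 1e-7 and abs(F_open - x) < 1e-7
```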

### Normal distribution

A r.v. $X$ has a standard normal distribution, $X\sim N\left(0,1\right)$, if it is continuous with pdf

$f_{X}\left(x\right)=\frac{1}{\sqrt{2\pi}}e^{-\frac{x^{2}}{2}},x\in\mathbf{R}$
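As a sanity check, this pdf integrates to one over the real line, as the normalization result in the next section requires (a sketch assuming scipy):

```python
import numpy as np
from scipy.integrate import quad

# Standard normal pdf, written out explicitly.
f = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)

# Integrate over the whole real line; quad supports infinite limits.
total, _ = quad(f, -np.inf, np.inf)
assert abs(total - 1.0) < 1e-10
```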

## PMFs and PDFs

Notice that pmfs and, in a sense, pdfs ‘add up’ to one. There is a theorem stating that this result applies in both directions: every pmf/pdf adds up to one, and every such function that adds up to one is the pmf/pdf of some random variable.

For the pmf,

$f:\mathbf{R}\rightarrow\left[0,1\right]$ is the pmf of a discrete r.v. iff $\sum_{x\in\mathbf{R}}f\left(x\right)=1.$

And for the pdf,

$f:\mathbf{R}\rightarrow\mathbf{R}_{+}$ is the pdf of a continuous r.v. iff $\int_{-\infty}^{\infty}f\left(x\right)dx=1.$

## Remark

It’s clear from the examples above, that one can specify the distribution of a random variable by specifying its distribution function, or its probability mass/density function. Sometimes, however, it is advantageous to specify the distribution of a random variable by a transformation. For example, suppose $Y$ is defined as a random variable that follows the distribution of $X^{2}$, where $X\sim N\left(0,1\right)$. This takes us to discussing transformations of random variables. But first, we'll see a very useful result.

# Leibniz Rule

This rule can be useful in a series of domains. It states that

$\frac{d}{dx}\int_{a\left(x\right)}^{b\left(x\right)}f\left(x,t\right)dt=\int_{a\left(x\right)}^{b\left(x\right)}\frac{\partial}{\partial x}f\left(x,t\right)dt+f\left(x,b\left(x\right)\right)b'\left(x\right)-f\left(x,a\left(x\right)\right)a'\left(x\right).$

This means that the derivative of an integral can be written as the integral of a derivative, plus terms involving the integrand evaluated at the integration limits. The case where $b\left(x\right)$ and $a\left(x\right)$ are constant follows immediately, as the last two terms vanish.

For this rule to apply, we require $f\left(x,t\right)$ and its partial derivative w.r.t. $x$ to be continuous, and that both limits of integration are continuously differentiable. We also note that this rule can be derived from the chain rule of differentiation (see this proof).
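The rule can be verified numerically on a toy example; the particular choices of $f$, $a$, and $b$ below are ours, and scipy is assumed:

```python
from scipy.integrate import quad

# Toy example: f(x, t) = x * t**2, with limits a(x) = x and b(x) = x**2.
f = lambda x, t: x * t**2
df_dx = lambda x, t: t**2              # partial derivative of f w.r.t. x
a, da = (lambda x: x), (lambda x: 1.0)
b, db = (lambda x: x**2), (lambda x: 2.0 * x)

def I(x):
    value, _ = quad(lambda t: f(x, t), a(x), b(x))
    return value

# Left side: d/dx of the integral, via a central finite difference.
x, h = 2.0, 1e-5
lhs = (I(x + h) - I(x - h)) / (2 * h)

# Right side: Leibniz rule, term by term.
integral_term, _ = quad(lambda t: df_dx(x, t), a(x), b(x))
rhs = integral_term + f(x, b(x)) * db(x) - f(x, a(x)) * da(x)
assert abs(lhs - rhs) < 1e-4
```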

## Improper Integrals

When the integral is improper (i.e., at least one of the limits is infinite), Leibniz rule may fail despite all relevant functions being otherwise well-behaved.

At the crux of this problem is whether $\lim_{h\rightarrow0}\int_{0}^{\infty}\frac{f\left(x+h,t\right)-f\left(x,t\right)}{h}dt=\int_{0}^{\infty}\lim_{h\rightarrow0}\frac{f\left(x+h,t\right)-f\left(x,t\right)}{h}dt$.

Consider the example of the following function:

$f\left(x,t\right)=\frac{\sin\left(tx\right)}{t},$

plotted below:

First, notice that by calculating the expression of interest directly, $\frac{d}{dx}\int_{0}^{\infty}f\left(x,t\right)dt$, we learn how the area under $f\left(x,t\right)$ along the $t$ axis changes when $x$ is moved slightly. In order to see this, consider the following plot of $\frac{\sin\left(tx\right)}{t}$, shown now only for specific values of $t$ and $x$.

For now, we will focus on the blue sections, along which $x$ is fixed. We can think of $\frac{d}{dx}\int_{0}^{\infty}f\left(x,t\right)dt$ as first calculating the area under each of the blue curves, and then calculating how those areas change as a function of $x$. By calculating this expression directly, we obtain

$\frac{d}{dx}\int_{0}^{\infty}f\left(x,t\right)dt=\frac{d}{dx}\left[\frac{\pi}{2}\operatorname{sign}\left(x\right)\right],$

which equals zero for $x\neq0$ and is infinite at $x=0$. This is the correct answer: as $x$ changes slightly, the area under $f\left(x,t\right)$ remains constant, except at $x=0$, where it changes at an infinite rate.

Now, consider the alternative calculation, $\int_{0}^{\infty}\frac{\partial}{\partial x}f\left(x,t\right)dt$. In this case, we first calculate how much the function changes with small increments in $x$ for generic values of $t$. For example, we could be calculating the vertical differences in the endpoints of the orange lines of the plot above. Then, we add up these differences along $t$, by applying the integral.

The integrand of this expression is given by: $\frac{\partial}{\partial x}f\left(x,t\right)=\frac{\partial}{\partial x}\frac{\sin\left(tx\right)}{t}=\cos\left(tx\right)$. We have learned that the slope of $f$ along the $x$-axis is periodic. Function $\cos\left(tx\right)$ represents the information about the slopes, which we represent below through small line segments, along $x=5$:

A property of the cosine (and of other elementary trigonometric functions) is that, for a given $x$, the area 'underneath' is also periodic and does not vanish as $t$ approaches infinity. This is a problem: when we take the integral from zero to infinity, the area under $\cos\left(tx\right)$ does not converge.

Intuitively, the integral adds up the slopes in the $x$ direction, which keep rotating forever. If these slopes stabilized at some point (for example, if they all approached zero when $t$ was large), then the integral would also converge. However, because the slopes keep rotating as $t$ changes, the integral does not converge and Leibniz rule fails. This issue can only arise when at least one of the limits of integration is infinity.
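Both behaviors can be seen numerically (a sketch assuming scipy, with $x=5$ as in the plot above): the running integral of $\cos\left(tx\right)$ keeps oscillating, while the running integral of $\frac{\sin\left(tx\right)}{t}$, the sine integral, settles at $\frac{\pi}{2}$:

```python
import numpy as np
from scipy.special import sici

x = 5.0

# Running integral of cos(x t) over [0, T]: in closed form it equals
# sin(x T) / x, which keeps oscillating instead of settling as T grows.
T = np.array([10.0, 20.0, 40.0, 80.0])
running = np.sin(x * T) / x
assert running.max() - running.min() > 0.05  # no sign of convergence

# The integral of sin(x t)/t over [0, T], by contrast, is the sine
# integral Si(x T), which tends to pi/2 as T -> infinity.
Si, _ = sici(x * 1000.0)
assert abs(Si - np.pi / 2) < 1e-3
```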

When this issue does not arise (i.e., the integral of the partial derivative converges), the result

$\frac{d}{dx}\int_{a\left(x\right)}^{b\left(x\right)}f\left(x,t\right)dt=\int_{a\left(x\right)}^{b\left(x\right)}\frac{\partial}{\partial x}f\left(x,t\right)dt+f\left(x,b\left(x\right)\right)b'\left(x\right)-f\left(x,a\left(x\right)\right)a'\left(x\right)$

is valid. If $a\left(x\right)=\infty$ or $b\left(x\right)=\infty$, the corresponding boundary term drops out, since an infinite limit does not vary with $x$ (informally, $\frac{d\infty}{dx}=0$).

# Transformations of random variables

Suppose $Y=g\left(X\right)$, where $g:\mathbf{R}\rightarrow\mathbf{R}$ is a function and $X$ is an r.v. with cdf $F_{X}$.

Clearly, $Y$ is also a random variable. Its induced probability function is equal to $P_{Y}\left(\cdot\right)=P_{X}\circ g^{-1}$. When $X$ is discrete, it is usually simple to obtain the distribution of $Y$. This becomes more complicated in the continuous case.
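In the discrete case, $P_{Y}=P_{X}\circ g^{-1}$ amounts to summing the pmf over preimages. A small sketch, with our own choice of a uniform pmf on $\{-1,0,1\}$ and $g\left(x\right)=x^{2}$:

```python
from collections import defaultdict

# X uniform on {-1, 0, 1}; Y = g(X) = X**2.
f_X = {-1: 1 / 3, 0: 1 / 3, 1: 1 / 3}
g = lambda x: x**2

# Push the pmf forward: each value y collects the mass of its preimage g^{-1}({y}).
f_Y = defaultdict(float)
for x, p in f_X.items():
    f_Y[g(x)] += p

assert abs(f_Y[1] - 2 / 3) < 1e-12   # mass of {-1} and {1} combined
assert abs(f_Y[0] - 1 / 3) < 1e-12
```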

We consider the cases of strictly monotone transformations here. When transformations are not strictly monotone, the same procedure applies in a piecewise fashion (i.e., one needs to apply it repeatedly to different monotone sections of the transformation).

## Affine Transformations: CDF

• Suppose $Y=g\left(X\right)=aX+b,a\gt 0,b\in\mathbf{R}$.

In order to deduce $F_{Y}$, we use the probability functions of $X$ and $Y$. Notice first that $F_{Y}\left(y\right)=P\left(Y\leq y\right)$. This probability statement can be used to relate the cdf of $Y$ to the cdf of $X$:

$P\left(Y\leq y\right)=P\left(aX+b\leq y\right)=P\left(X\leq\frac{y-b}{a}\right)=F_{X}\left(\frac{y-b}{a}\right).$

This is a very useful result: we have related the cdf of the transformed r.v. $Y$ to the cdf of the original variable $X$. We have learned that the distribution of $Y$ is given by the distribution of $X$, evaluated at a transformed value of the function's argument.

• Now, suppose $Y=aX+b$ where $a\lt 0$. In this case, we obtain

$F_{Y}\left(y\right)=P\left(Y\leq y\right)=P\left(aX+b\leq y\right)=P\left(X\geq\frac{y-b}{a}\right)=1-P\left(X\leq\frac{y-b}{a}\right)=1-F_{X}\left(\frac{y-b}{a}\right)$.
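Both formulas can be checked against a distribution whose affine transforms are known in closed form. Below, $X\sim N\left(0,1\right)$ (our choice), so that $Y=aX+b$ is $N\left(b,a^{2}\right)$ (a sketch assuming scipy):

```python
from scipy.stats import norm

b, y = 1.0, 0.5

# a > 0: F_Y(y) = F_X((y - b) / a).
a = 2.0
assert abs(norm.cdf(y, loc=b, scale=a) - norm.cdf((y - b) / a)) < 1e-12

# a < 0: F_Y(y) = 1 - F_X((y - b) / a); note scale = |a|.
a = -2.0
assert abs(norm.cdf(y, loc=b, scale=abs(a)) - (1 - norm.cdf((y - b) / a))) < 1e-12
```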

## Affine Transformations: PDF

• Let $a\gt 0$ and $Y=aX+b$.

We know $F_{Y}\left(y\right)=F_{X}\left(\frac{y-b}{a}\right)$, and that $f_{Y}\left(y\right)=\frac{d}{dy}F_{Y}\left(y\right).$

By applying Leibniz rule, we obtain

$f_{Y}\left(y\right)=\frac{d}{dy}F_{X}\left(\frac{y-b}{a}\right)=f_{X}\left(\frac{y-b}{a}\right)\frac{d}{dy}\frac{y-b}{a}=f_{X}\left(\frac{y-b}{a}\right)\frac{1}{a}$.

• If, on the other hand, $a\lt 0$, we would have $F_{Y}\left(y\right)=1-F_{X}\left(\frac{y-b}{a}\right)$, and applying Leibniz rule yields $f_{Y}\left(y\right)=-f_{X}\left(\frac{y-b}{a}\right)\frac{1}{a}.$

We can write down both of these cases simultaneously, as

$f_{Y}\left(y\right)=f_{X}\left(\frac{y-b}{a}\right)\left|\frac{1}{a}\right|$, when $Y=aX+b$ and $a\neq 0$.

In general, as long as the transformation $Y=g\left(X\right)$ is monotonic, then

$f_{Y}\left(y\right)=f_{X}\left(g^{-1}\left(y\right)\right)\left|\frac{d}{dy}g^{-1}\left(y\right)\right|.$

When it is not, then one can simply apply the formula separately for each monotonic region.

Also, notice that the role of $g^{-1}\left(y\right)$ is to ensure that the result is expressed as a function of the argument of interest, $y$, rather than $x$.
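A sketch of the general formula for $g\left(x\right)=e^{x}$ with $X\sim N\left(0,1\right)$ (our choice), in which case $Y$ is standard lognormal and scipy provides $f_{Y}$ directly for comparison:

```python
import numpy as np
from scipy.stats import lognorm, norm

# Y = exp(X): g^{-1}(y) = log(y), and |d/dy g^{-1}(y)| = 1/y for y > 0,
# so f_Y(y) = f_X(log y) * (1 / y).
for y in (0.5, 1.0, 3.0):
    by_formula = norm.pdf(np.log(y)) / y
    assert abs(by_formula - lognorm.pdf(y, s=1)) < 1e-12
```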

There also exists a formula for transformations of multiple random variables. In that case, rather than multiplying the pdf by the absolute value of a single derivative, one multiplies it by the absolute value of the determinant of the Jacobian matrix of the inverse transformation.
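As an illustration (our choice of example, sketched assuming scipy), take $\left(U,V\right)=\left(X+Y,X-Y\right)$ with $X,Y$ i.i.d. $N\left(0,1\right)$. The inverse map is $x=\frac{u+v}{2}$, $y=\frac{u-v}{2}$, whose Jacobian determinant is $-\frac{1}{2}$, so the joint pdf picks up a factor $\frac{1}{2}$:

```python
import numpy as np
from scipy.stats import norm

def f_UV(u, v):
    # Multivariate change of variables: evaluate the joint pdf of (X, Y)
    # at the inverse map and multiply by |det J| = 1/2.
    x, y = (u + v) / 2, (u - v) / 2
    return norm.pdf(x) * norm.pdf(y) * 0.5

# Sanity check: U and V are independent N(0, 2), so the joint pdf factors.
for u, v in [(0.0, 0.0), (1.0, -0.5), (2.0, 1.0)]:
    expected = norm.pdf(u, scale=np.sqrt(2)) * norm.pdf(v, scale=np.sqrt(2))
    assert abs(f_UV(u, v) - expected) < 1e-12
```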