Lecture 4. H) Dirac delta function

From Significant Statistics


The Dirac delta function is not really a pdf, but it is often useful when working with mass points. It is defined as [math]\delta:\mathbb{R}\rightarrow\mathbb{R}\cup\left\{ +\infty\right\}[/math] such that

[math]\delta\left(x\right)=\begin{cases} +\infty, & x=0\\ 0, & otherwise \end{cases}[/math]

and

[math]\int_{-\infty}^{\infty}\delta\left(x\right)dx=1[/math]
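Both defining properties can be illustrated numerically with a rectangular approximation to [math]\delta[/math] (a Python sketch; the grid and the values of [math]\Delta[/math] are arbitrary choices):

```python
import numpy as np

def g(x, delta):
    """Rectangular approximation to delta(x): the U(-delta, delta) pdf."""
    return np.where(np.abs(x) <= delta, 1.0 / (2.0 * delta), 0.0)

x = np.linspace(-2.0, 2.0, 400001)
dx = x[1] - x[0]
for delta in [1.0, 0.1, 0.01]:
    area = g(x, delta).sum() * dx     # the total mass stays (approximately) 1...
    peak = float(g(0.0, delta))       # ...while the value at 0 grows without bound
    print(f"delta={delta:5.2f}  area={area:.4f}  g(0)={peak}")
```

As [math]\Delta\rightarrow0[/math], the area stays at 1 while the peak diverges, mimicking the two properties above.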

This would be a valid pdf, except that [math]\delta[/math] is not a real-valued function (its codomain includes infinity); rigorously, it is treated as a generalized function rather than an ordinary one.

Sifting Property

This property is especially useful. It states:

[math]\int_{-\infty}^{\infty}f\left(x\right)\delta\left(x-a\right)dx=f\left(a\right)[/math]

as long as [math]f\left(\cdot\right)[/math] is continuous at [math]a[/math].
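The sifting property can be checked numerically (a Python sketch; the choices [math]f=\cos[/math] and [math]a=0.5[/math] are arbitrary): replacing [math]\delta[/math] with a rectangular pdf of half-width [math]\Delta[/math], the integral approaches [math]f\left(a\right)[/math] as [math]\Delta[/math] shrinks.

```python
import numpy as np

def sift(f, a, delta, n=200001):
    """Approximate the sifting integral by (1/(2*delta)) * int_{a-delta}^{a+delta} f(x) dx."""
    x = np.linspace(a - delta, a + delta, n)
    dx = x[1] - x[0]
    return f(x).sum() * dx / (2.0 * delta)   # Riemann sum, then divide by the width

a = 0.5
for delta in [0.5, 0.05, 0.005]:
    print(f"delta={delta:6.3f}  integral={sift(np.cos, a, delta):.6f}  f(a)={np.cos(a):.6f}")
```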

Sketch of the Proof

Let

[math]g\left(x\right)=\begin{cases} \frac{1}{2\Delta}, & -\Delta\leq x\leq\Delta\\ 0, & otherwise \end{cases}[/math]

and notice that [math]g\left(x\right)[/math] is a pdf with support [math]\left[-\Delta,\Delta\right][/math].

In addition, notice that [math]\lim_{\Delta\rightarrow0}\,g\left(x\right)=\delta\left(x\right).[/math]

Then,

[math]\begin{aligned} \int_{-\infty}^{\infty}f\left(x\right)\delta\left(x-a\right)dx & =\int_{-\infty}^{\infty}\lim_{\Delta\rightarrow0}\,f\left(x\right)g\left(x-a\right)dx\\ & =\lim_{\Delta\rightarrow0}\int_{-\infty}^{\infty}f\left(x\right)g\left(x-a\right)dx\\ & =\lim_{\Delta\rightarrow0}\,\frac{1}{2\Delta}\int_{-\infty}^{\infty}f\left(x\right)1\left[-\Delta\leq x-a\leq\Delta\right]dx\\ & =\lim_{\Delta\rightarrow0}\,\frac{1}{2\Delta}\int_{a-\Delta}^{a+\Delta}f\left(x\right)dx\\ & =\lim_{\Delta\rightarrow0}\,\frac{F\left(a+\Delta\right)-F\left(a-\Delta\right)}{2\Delta}\\ & =f\left(a\right)\end{aligned}[/math]

Clearly, some conditions are needed for the steps above to be valid (in particular, for interchanging the limit and the integral in the second equality). In the last step, [math]F[/math] denotes an antiderivative of [math]f[/math], and the limit is the symmetric difference quotient of [math]F[/math] at [math]a[/math], which equals [math]f\left(a\right)[/math] because [math]f[/math] is continuous at [math]a[/math].

Also, for an integral over a finite interval [math]\left[\underline{x},\overline{x}\right][/math], it is possible to show that [math]\int_{\underline{x}}^{\overline{x}}f\left(x\right)\delta\left(x-a\right)dx=f\left(a\right)1\left(\underline{x}\leq a\leq\overline{x}\right).[/math]
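This finite-interval version can also be checked numerically (a Python sketch; the interval [math]\left[0,1\right][/math] and [math]f=\exp[/math] are arbitrary choices): when [math]a[/math] lies inside the interval the integral picks up [math]f\left(a\right)[/math]; when it lies outside, the integral vanishes.

```python
import numpy as np

def sift_on(f, a, lo, hi, delta=1e-3, n=400001):
    """Approximate int_lo^hi f(x) delta(x - a) dx with a rectangular approximation."""
    x = np.linspace(lo, hi, n)
    dx = x[1] - x[0]
    g = np.where(np.abs(x - a) <= delta, 1.0 / (2.0 * delta), 0.0)
    return (f(x) * g).sum() * dx

print(sift_on(np.exp, 0.5, 0.0, 1.0))  # a = 0.5 inside [0, 1]: approximately exp(0.5)
print(sift_on(np.exp, 2.0, 0.0, 1.0))  # a = 2.0 outside [0, 1]: exactly 0
```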

Example

Let

[math]Y=\begin{cases} 1, & \text{w.p. }\alpha\\ U\left(0,1\right) & \text{w.p. }1-\alpha \end{cases}[/math]

The distribution of [math]Y[/math] is neither continuous nor discrete: it has a mass point at 1 and is otherwise uniform on the support [math]\left[0,1\right][/math].

The “pdf” of [math]Y[/math] can be written as

[math]f_{Y}\left(y\right)=\alpha\delta\left(y-1\right)+\left(1-\alpha\right)1\left(y\in\left[0,1\right]\right)[/math]

The expectation of [math]Y[/math] can be calculated as:

[math]\begin{aligned} E\left(Y\right) & =\int_{-\infty}^{\infty}y\left(\alpha\delta\left(y-1\right)+\left(1-\alpha\right)1\left(y\in\left[0,1\right]\right)\right)dy\\ & =\alpha\int_{-\infty}^{\infty}y\delta\left(y-1\right)dy+\left(1-\alpha\right)\int_{0}^{1}ydy\\ & =\alpha\cdot1+\left(1-\alpha\right)\left.\frac{y^{2}}{2}\right|_{0}^{1}\\ & =\alpha+\frac{1-\alpha}{2}\\ & =\frac{1+\alpha}{2}\end{aligned}[/math]

The result makes sense: as [math]\alpha[/math] approaches 1, [math]Y[/math] converges to a mass point at [math]1[/math], and [math]E\left(Y\right)\rightarrow1[/math]; as [math]\alpha[/math] approaches zero, [math]Y[/math] converges to a standard uniform distribution, and [math]E\left(Y\right)\rightarrow\frac{1}{2}[/math].
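The calculation can also be sanity-checked by simulation (a Python sketch; the sample size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_Y(alpha, n):
    """Draw n samples of Y: the point mass at 1 w.p. alpha, U(0,1) w.p. 1 - alpha."""
    hits_mass = rng.random(n) < alpha                # which draws hit the mass point
    return np.where(hits_mass, 1.0, rng.random(n))   # otherwise uniform on [0, 1]

for alpha in [0.0, 0.3, 1.0]:
    y = sample_Y(alpha, 1_000_000)
    print(f"alpha={alpha:.1f}  sample mean={y.mean():.4f}  (1+alpha)/2={(1 + alpha) / 2:.4f}")
```

For each [math]\alpha[/math], the sample mean should be close to [math]\frac{1+\alpha}{2}[/math].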