Lecture 12. J) Neyman-Pearson Lemma
Neyman-Pearson Lemma
Let [math]X[/math] be a random sample with pmf/pdf [math]f\left(\left.\cdot\right|\theta\right)[/math], and consider the problem [math]H_{0}:\theta=\theta_{0}\text{ vs. }H_{1}:\theta=\theta_{1}.[/math]
The test that rejects [math]H_{0}[/math] if and only if
[math]f\left(\left.X\right|\theta_{1}\right)\gt k\,f\left(\left.X\right|\theta_{0}\right)[/math] for some [math]k\gt 0[/math]
is a UMP level [math]\alpha[/math] test of [math]H_{0}[/math] where
[math]\alpha=P_{\theta_{0}}\left(f\left(\left.X\right|\theta_{1}\right)\gt k\,f\left(\left.X\right|\theta_{0}\right)\right),[/math] the rejection probability computed under [math]H_{0}[/math].
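As a concrete illustration (not from the lecture), take [math]n[/math] i.i.d. [math]N\left(\theta,1\right)[/math] observations with [math]H_{0}:\theta=0[/math] vs. [math]H_{1}:\theta=1[/math]. The likelihood ratio is increasing in the sample mean, so the Neyman-Pearson rejection region "[math]LR\gt k[/math]" is equivalent to "[math]\bar{X}\gt c[/math]"; the sketch below picks the cutoff to give [math]\alpha=0.05[/math] and checks the type 1 error by simulation. The hardcoded constant is the standard normal quantile [math]\Phi^{-1}\left(0.95\right)[/math].

```python
import math
import random

def loglik_ratio(x, theta0=0.0, theta1=1.0):
    """log f(x|theta1) - log f(x|theta0) for i.i.d. N(theta, 1) data."""
    return sum((xi - theta0) ** 2 - (xi - theta1) ** 2 for xi in x) / 2.0

# For theta1 > theta0 the likelihood ratio is increasing in the sample
# mean, so "reject iff LR > k" is the same as "reject iff mean(x) > c".
# Choosing c = theta0 + z_{1-alpha}/sqrt(n) gives exact level alpha.
n, alpha = 20, 0.05
z = 1.6448536269514722            # Phi^{-1}(0.95)
c = 0.0 + z / math.sqrt(n)

rng = random.Random(0)
sims = 100_000
rejections = 0
for _ in range(sims):
    x = [rng.gauss(0.0, 1.0) for _ in range(n)]  # data generated under H0
    if sum(x) / n > c:
        rejections += 1
print(round(rejections / sims, 3))  # Monte Carlo type 1 error, close to 0.05
```

The same monotonicity argument is what makes the abstract condition [math]f\left(\left.X\right|\theta_{1}\right)\gt k\,f\left(\left.X\right|\theta_{0}\right)[/math] implementable in practice: one works with an equivalent cutoff on a simple statistic rather than with [math]k[/math] directly.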
This lemma shows that for a simple null vs. a simple alternative hypothesis, a UMP level [math]\alpha[/math] test exists and can be constructed explicitly.
It also gives a converse: any UMP level [math]\alpha[/math] test must have the form above (except possibly on the boundary set where [math]f\left(\left.X\right|\theta_{1}\right)=k\,f\left(\left.X\right|\theta_{0}\right)[/math]).
At this point, you may wonder why we care about UMP tests. After all, for any candidate test statistic, one can always choose a critical value that attains a given type 1 error probability at [math]\theta=\theta_{0}[/math].
The reason we care is that, among all tests with that type 1 error, the UMP test dominates in terms of type 2 errors: for every [math]\theta\in\Theta_{1}[/math], its type 2 error probability is no larger than that of any competing level [math]\alpha[/math] test. In this sense, the UMP test minimizes type 2 errors uniformly over [math]\Theta_{1}[/math].
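To see this numerically (an illustration, not part of the lecture), stay in the normal setting with [math]n=5[/math] and compare the Neyman-Pearson test (reject for large [math]\bar{X}[/math]) against another valid level-[math]0.05[/math] test, the two-sided test that rejects for large [math]\left|\bar{X}\right|[/math]. Both have type 1 error [math]0.05[/math] at [math]\theta_{0}=0[/math], but under [math]\theta_{1}=1[/math] the NP test rejects more often, i.e., has the smaller type 2 error. The hardcoded constants are [math]\Phi^{-1}\left(0.95\right)[/math] and [math]\Phi^{-1}\left(0.975\right)[/math].

```python
import math
import random

n, theta1 = 5, 1.0
c_np  = 1.6448536269514722 / math.sqrt(n)   # one-sided cutoff, level 0.05
c_two = 1.9599639845400545 / math.sqrt(n)   # two-sided cutoff, level 0.05

rng = random.Random(1)
sims = 100_000
power_np = power_two = 0
for _ in range(sims):
    # Sample mean of n i.i.d. N(theta1, 1) draws, simulated directly
    xbar = rng.gauss(theta1, 1.0 / math.sqrt(n))
    power_np  += xbar > c_np          # Neyman-Pearson (UMP) test rejects
    power_two += abs(xbar) > c_two    # two-sided level-0.05 test rejects
power_np  /= sims
power_two /= sims
print(power_np > power_two)  # prints True: the UMP test has higher power
```

The two-sided test "spends" part of its type 1 error budget on the region [math]\bar{X}\lt-c[/math], which is useless against [math]\theta_{1}=1[/math]; the NP test puts the entire budget where the alternative density is largest.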
Relationship with LRT
Consider the case [math]k\geq1[/math]. We can rewrite the test above as
[math]f\left(\left.X\right|\theta_{1}\right)\gt k\,f\left(\left.X\right|\theta_{0}\right)\iff\frac{\max_{\theta\in\left\{ \theta_{0},\theta_{1}\right\} }\,f\left(\left.X\right|\theta\right)}{f\left(\left.X\right|\theta_{0}\right)}\gt k,\text{ for }k\geq1.[/math]
The right-hand side is the LRT rejection region. The two conditions are equivalent for any [math]k\geq1[/math]: if the left-hand side holds, then [math]f\left(\left.X\right|\theta_{1}\right)\gt f\left(\left.X\right|\theta_{0}\right)[/math] (since [math]k\geq1[/math]), so the maximum is attained at [math]\theta_{1}[/math] and the ratio exceeds [math]k[/math]; conversely, the ratio can exceed [math]k\geq1[/math] only if the maximum is attained at [math]\theta_{1}[/math], which gives back the left-hand side.
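The equivalence can also be checked mechanically (a small sketch, not from the lecture): for [math]N\left(\theta,1\right)[/math] densities with [math]\theta_{0}=0[/math] and [math]\theta_{1}=1[/math], the Neyman-Pearson condition and the LRT condition agree at every point on a grid whenever [math]k\geq1[/math].

```python
import math

def npdf(x, mu):
    """N(mu, 1) density at x."""
    return math.exp(-(x - mu) ** 2 / 2) / math.sqrt(2 * math.pi)

theta0, theta1 = 0.0, 1.0
for k in (1.0, 1.5, 3.0):                     # the argument needs k >= 1
    for i in range(-30, 31):
        x = i / 10
        f0, f1 = npdf(x, theta0), npdf(x, theta1)
        np_rejects  = f1 > k * f0             # Neyman-Pearson region
        lrt_rejects = max(f0, f1) / f0 > k    # LRT region
        assert np_rejects == lrt_rejects
print("regions agree for every k >= 1 tested")
```

For [math]k\lt1[/math] the two regions genuinely differ: the LRT ratio is always at least [math]1\gt k[/math], so the LRT would reject everywhere, while the NP condition need not hold.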
So, for simple hypotheses, the LRT yields the UMP test. For composite hypotheses, a UMP test may not exist, but we will still be able to apply some optimality concept.