# Theorem: Bernstein-von Mises

Let $\widehat{\theta}_{B}$ be the Bayes point estimator, the posterior mean $\widehat{\theta}_{B}=E\left(\left.\theta\right|X\right)$, and let $\widehat{\theta}_{ML}$ be the MLE. Then,

$\sqrt{n}\left(\widehat{\theta}_{B}-\theta_{0}\right)\overset{d}{\rightarrow}N\left(0,I\left(\theta_{0}\right)^{-1}\right),\text{ where }\theta_{0}\text{ is the true value of }\theta.$

and

$\sqrt{n}\left(\widehat{\theta}_{B}-\widehat{\theta}_{ML}\right)\overset{p}{\rightarrow}0$

The second result is striking: even after scaling by $\sqrt{n}$, the difference between the Bayes and ML estimators converges in probability to zero, so the two estimators are asymptotically equivalent to first order.
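This equivalence can be seen in a simple conjugate example (my choice for illustration, not part of the theorem statement): for Bernoulli data with a Beta$(a,b)$ prior, the posterior mean is $(a+\sum x_i)/(a+b+n)$ while the MLE is the sample mean, and the scaled gap shrinks as $n$ grows.

```python
import numpy as np

# Beta-Bernoulli illustration: the posterior mean and the MLE differ by
# O(1/n), so sqrt(n) times their difference still vanishes as n grows.
rng = np.random.default_rng(0)
theta0 = 0.3        # true parameter, assumed for the simulation
a, b = 2.0, 2.0     # arbitrary Beta prior hyperparameters

for n in (100, 10_000, 1_000_000):
    x = rng.binomial(1, theta0, size=n)
    theta_ml = x.mean()                        # MLE: sample mean
    theta_bayes = (a + x.sum()) / (a + b + n)  # posterior mean
    print(n, np.sqrt(n) * abs(theta_bayes - theta_ml))
```

Running this, the printed scaled differences decrease toward zero, consistent with $\sqrt{n}\,(\widehat{\theta}_{B}-\widehat{\theta}_{ML})\overset{p}{\rightarrow}0$.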

In practice, researchers can use an estimate of $I\left(\theta\right)^{-1}$ based on the variance implied by the posterior density $f_{\left.\theta\right|X}$ for hypothesis testing. In more complicated settings, the prior need not belong to a conjugate family, in which case numerical methods are used to characterize the posterior, including drawing samples from it via the Gibbs sampler or the Metropolis-Hastings algorithm.
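A minimal random-walk Metropolis-Hastings sketch (my illustrative setup, not a specific model from the text): sampling the posterior of a normal mean with known unit variance under a diffuse $N(0, 10^2)$ prior, then estimating the posterior mean from the draws.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(1.5, 1.0, size=200)  # simulated data; true mean 1.5 is assumed

def log_posterior(theta):
    # Log prior N(0, 10^2) plus log likelihood of x ~ N(theta, 1),
    # each up to an additive constant (which cancels in the ratio below).
    log_prior = -theta**2 / (2 * 10.0**2)
    log_lik = -0.5 * np.sum((x - theta) ** 2)
    return log_prior + log_lik

theta = 0.0
draws = []
for _ in range(5000):
    proposal = theta + rng.normal(0, 0.2)  # symmetric random-walk proposal
    # Accept with probability min(1, posterior ratio)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    draws.append(theta)

posterior_mean = np.mean(draws[1000:])  # discard burn-in draws
posterior_var = np.var(draws[1000:])    # sample-based variance estimate
```

The variance of the retained draws plays the role of the posterior-implied variance mentioned above; with this diffuse prior and $n=200$, the posterior mean lands close to the sample mean, as the theorem predicts.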