Convergence in probability from the central limit theorem

Mathematics · Asked by secondrate on February 7, 2021

My question comes from the proof of the Delta method. One of the conditions states that $\sqrt{n}(Y_n - \theta) \rightarrow N(0,\sigma^2)$ in distribution for some sequence of random variables $Y_n$. The proof then uses the consequence that $Y_n \rightarrow \theta$ in probability. It is not clear to me how to prove this. This is my attempt:

By definition, we want to show that for every $\epsilon > 0$, $\lim_{n \rightarrow \infty} P(|Y_n - \theta| > \epsilon) = 0$.

We rewrite this as $P(|Y_n - \theta| > \epsilon) = 1 - \left( P(\sqrt{n}\,(Y_n - \theta) \le \sqrt{n}\,\epsilon) - P(\sqrt{n}\,(Y_n - \theta) < -\sqrt{n}\,\epsilon) \right)$.

I know that for any fixed $t \in \mathbb{R}$, $\lim_{n \rightarrow \infty} P(\sqrt{n}\,(Y_n - \theta) < t) = \Phi(t)$, where $\Phi$ is the CDF of the limiting $N(0,\sigma^2)$ distribution. But how do I proceed if $t$ is also a function of $n$?

It is tempting to say that if $F_n$ is the CDF of $\sqrt{n}\,(Y_n - \theta)$, then $F_n(\sqrt{n}\,\epsilon) \rightarrow 1$ as $n \rightarrow \infty$ by general properties of CDFs, but where is the fact that the limiting distribution is normal actually used, since that reasoning would seem to apply to any limiting distribution?
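
As a sanity check (an illustration only, not a proof), here is a quick simulation of a hypothetical concrete case: $Y_n$ is the sample mean of $n$ i.i.d. Exponential draws with mean $\theta = 2$, so the CLT gives $\sqrt{n}\,(Y_n - \theta) \rightarrow N(0, \theta^2)$ in distribution. The estimated probabilities $P(|Y_n - \theta| > \epsilon)$ do shrink toward $0$ as $n$ grows, which is exactly the convergence in probability I want to prove.

    import numpy as np

    # Illustration only, not a proof. Hypothetical concrete case: Y_n is the
    # sample mean of n i.i.d. Exponential draws with mean theta, so by the CLT
    # sqrt(n)(Y_n - theta) converges in distribution to N(0, theta^2).
    rng = np.random.default_rng(0)
    theta, eps, reps = 2.0, 0.1, 2000

    for n in [10, 100, 1000, 10000]:
        draws = rng.exponential(scale=theta, size=(reps, n))
        y_n = draws.mean(axis=1)                   # one realisation of Y_n per row
        prob = np.mean(np.abs(y_n - theta) > eps)  # Monte Carlo estimate of P(|Y_n - theta| > eps)
        print(f"n={n:>6}:  P(|Y_n - theta| > {eps}) ~ {prob:.4f}")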

2 Answers

If you know Slutsky's Theorem, which combines convergence in probability with convergence in distribution, you can just say that $$ Y_n = \frac{1}{\sqrt{n}} \cdot \sqrt{n}\,(Y_n - \theta) + \theta. $$ Now

  • $1/\sqrt{n}$, as a sequence of constants (which are just boring random variables), converges in probability to $0$
  • $\sqrt{n}\,(Y_n - \theta) \stackrel{d}{\rightarrow} N(0,\sigma^2)$ (given)
  • $\theta$, as a constant, "converges" in probability to $\theta$

Putting it all together, you get convergence of $Y_n$ in the weaker (convergence in distribution) sense: $$ Y_n = \frac{1}{\sqrt{n}} \cdot \sqrt{n}\,(Y_n - \theta) + \theta \stackrel{d}{\rightarrow} 0 \cdot N(0,\sigma^2) + \theta = \theta. $$

In general, convergence in distribution does not imply convergence in probability, but it does when the limit is a constant. So you can say that $$ Y_n \stackrel{d}{\rightarrow} \theta \quad \Longrightarrow \quad Y_n \stackrel{P}{\rightarrow} \theta. $$
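
(For completeness, a sketch of why that last implication holds; the notation $G_n$ for the CDF of $Y_n$ is introduced here. The limit CDF of the constant $\theta$ is continuous everywhere except at $\theta$, so for any $\epsilon > 0$, $$ P(|Y_n - \theta| > \epsilon) \le G_n(\theta - \epsilon) + 1 - G_n(\theta + \epsilon/2) \rightarrow 0 + 1 - 1 = 0, $$ since $\theta - \epsilon$ and $\theta + \epsilon/2$ are continuity points of that limit CDF.)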

Answered by Rubarb on February 7, 2021

Let $\epsilon > 0$ and $M \in (0,\infty)$. Then $|Y_n - \theta| > \epsilon$ implies that $\sqrt{n}\,|Y_n - \theta| > \sqrt{n}\,\epsilon > M$, provided $n$ is large enough that $\sqrt{n}\,\epsilon > M$. Hence, by the convergence in distribution of $\sqrt{n}\,(Y_n - \theta)$, $\limsup_{n \rightarrow \infty} P(|Y_n - \theta| > \epsilon) \le P(|X| > M)$, where $X \sim N(0,\sigma^2)$. You can make $P(|X| > M)$ as small as you wish by choosing $M$ large enough. Can you finish?
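
(In case it helps, one way to finish this sketch: the bound above holds for every $M > 0$, so $$ \limsup_{n \rightarrow \infty} P(|Y_n - \theta| > \epsilon) \le \inf_{M > 0} P(|X| > M) = 0, $$ and since probabilities are nonnegative, this forces $\lim_{n \rightarrow \infty} P(|Y_n - \theta| > \epsilon) = 0$.)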

Answered by Kavi Rama Murthy on February 7, 2021
