Max of $2$ independent random variables

Mathematics Asked by zestiria on December 17, 2020

Let:

  • $X_1, X_2$ be independent with the same law
  • $\operatorname{Var}(X_1) = \sigma^2$, $E(X_1) = 0$
  • $G$ be their cumulative distribution function
  • $X = \max(X_1, X_2)$, with cumulative distribution function $F$

We want to show that $E(X) = \int_{-\infty}^{\infty} [1 - G(t)]\, G(t)\, dt$.


My attempt:

$F(t) = P(X_1 < t,\, X_2 < t) = P(X_1 < t)\, P(X_2 < t) = G(t)^2$, so $E(X) = \int_{-\infty}^{\infty} 2t\, g(t)\, G(t)\, dt$, then do an integration by parts.

Or we can use $E[Z] = \int_{0}^{\infty} P(Z > t)\, dt$, but this works only if $Z \geq 0$.

$\bigl((G - G^2)^{-1}\bigr)' = -(g - 2gG)\, \frac{1}{(G - G^2)^2}$
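As a quick numerical sanity check of the identity (a minimal sketch, assuming for concreteness that $X_1, X_2$ are standard normal, so $G = \Phi$, and using NumPy/SciPy; this is only an illustration, not part of either approach above), both quantities below should be close to $1/\sqrt{\pi} \approx 0.5642$:

```python
# Sanity check: compare a Monte Carlo estimate of E[max(X1, X2)]
# with the integral of (1 - G(t)) G(t), here with G the standard normal CDF.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

rng = np.random.default_rng(0)
n = 1_000_000

# Monte Carlo estimate of E[max(X1, X2)] for X1, X2 i.i.d. N(0, 1).
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
mc_estimate = np.maximum(x1, x2).mean()

# Numerical value of the claimed integral with G = Phi.
integral, _ = quad(lambda t: (1.0 - norm.cdf(t)) * norm.cdf(t), -np.inf, np.inf)

print(f"Monte Carlo E[max(X1, X2)]: {mc_estimate:.4f}")  # about 0.564
print(f"Integral of (1 - G) G:      {integral:.4f}")     # about 0.5642 = 1/sqrt(pi)
```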

2 Answers

Let's prove it backwards:

$$\mathbb{E}[X] = \int_{-\infty}^{\infty} G(t)[1 - G(t)]\, dt = \int_{-\infty}^{\infty} G(t)\, dt - \int_{-\infty}^{\infty} G^2(t)\, dt =$$

$$tG(t)\Bigg|_{-\infty}^{\infty} - \int_{-\infty}^{\infty} t\, g(t)\, dt - tG^2(t)\Bigg|_{-\infty}^{\infty} + \int_{-\infty}^{\infty} 2t\, g(t)\, G(t)\, dt =$$

$$\underbrace{t\,[G(t) - G^2(t)]\Bigg|_{-\infty}^{\infty}}_{=0} - \underbrace{\mathbb{E}[X_1]}_{=0} + \int_{-\infty}^{\infty} 2t\, g(t)\, G(t)\, dt$$

  1. To prove that the first term is zero, you only have to compute the limits (it's easy; see the edit below).

  2. The second term is zero by the initial assumption $\mathbb{E}[X_1] = 0$.

  3. The third term is the result. It matches the expression you correctly obtained the other way, so the proof is finished.

Edit: further details on the limit calculation:

$$\lim_{t \rightarrow +\infty} \frac{t}{\frac{1}{G - G^2}} = \lim_{t \rightarrow +\infty} -\frac{(G - G^2)^2}{g - 2gG} = 0$$
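Alternatively, the boundary term can be bounded directly, a minimal sketch using only the assumptions $E(X_1) = 0$ and $\operatorname{Var}(X_1) = \sigma^2 < \infty$ (Chebyshev's inequality), without differentiating:

$$0 \le t\, G(t)[1 - G(t)] \le t\, [1 - G(t)] \le t\, P(|X_1| \ge t) \le \frac{\sigma^2}{t} \xrightarrow[t \to +\infty]{} 0,$$

and similarly $|t|\, G(t)[1 - G(t)] \le |t|\, G(t) \le |t|\, P(|X_1| \ge |t|) \le \sigma^2 / |t| \to 0$ as $t \to -\infty$.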

Answered by tommik on December 17, 2020

Let $X^+ = \max(X, 0)$ and $X^- = \max(-X, 0)$. Then $X^+ = \displaystyle\int_0^{\infty} \mathbb{1}(X > t)\, dt = \max\{a \ge 0 : X \ge a\}$ and $X^- = \displaystyle\int_{-\infty}^0 \mathbb{1}(X \le t)\, dt = \max\{a \ge 0 : -X \ge a\}$.

Now we have: $$\mathbb{E}(X) = \mathbb{E}(X^+ - X^-) = \mathbb{E}(X^+) - \mathbb{E}(X^-) = \ldots = \int_0^{\infty} (1 - F(x))\, dx - \int_{-\infty}^0 F(x)\, dx$$
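One way to fill in the omitted step is Tonelli's theorem applied to the indicator representations above (a minimal sketch in the same notation):

$$\mathbb{E}(X^+) = \mathbb{E}\int_0^{\infty} \mathbb{1}(X > t)\, dt = \int_0^{\infty} P(X > t)\, dt = \int_0^{\infty} (1 - F(t))\, dt,$$

$$\mathbb{E}(X^-) = \mathbb{E}\int_{-\infty}^0 \mathbb{1}(X \le t)\, dt = \int_{-\infty}^0 P(X \le t)\, dt = \int_{-\infty}^0 F(t)\, dt.$$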

Now for $F(x) = G^2(x)$ we have:

$$\mathbb{E}(X) = \mathbb{E}(\max(X_1, X_2)) = \int_0^{\infty} (1 - G(x))(1 + G(x))\, dx - \int_{-\infty}^0 G^2(x)\, dx$$

From this point it's easy to get your fact.

$Hint$: we have that $$\mathbb{E}(X) = \int_0^{\infty} (1 - G(x))G(x)\, dx + \int_0^{\infty} (1 - G(x))\, dx - \int_{-\infty}^0 G^2(x)\, dx +$$ $$+ \int_{-\infty}^0 (1 - G(x))G(x)\, dx - \int_{-\infty}^0 (1 - G(x))G(x)\, dx =$$ $$= \int_{-\infty}^{\infty} (1 - G(x))G(x)\, dx + R$$

And you need to prove: $$R = \int_0^{\infty} (1 - G(x))\, dx - \int_{-\infty}^0 G^2(x)\, dx - \int_{-\infty}^{0} (1 - G(x))G(x)\, dx \equiv 0$$
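Indeed, a minimal sketch: since $G^2(x) + (1 - G(x))G(x) = G(x)$, the last two integrals combine, and $R$ reduces to

$$R = \int_0^{\infty} (1 - G(x))\, dx - \int_{-\infty}^0 G(x)\, dx = \mathbb{E}(X_1) = 0,$$

by the same formula as above applied to $X_1$ (with $F$ replaced by $G$), together with the assumption $E(X_1) = 0$.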

Answered by openspace on December 17, 2020
