
Conditional and unconditional expectation for the variance of the error term in linear regression

Asked by mcgurck on Cross Validated, February 25, 2021

I’m working through the book Introductory Econometrics and stumbled across a statement regarding the variance of the error term, $u$, of a linear regression model, $y = \beta_0 + \beta_1 x + u$.

To give some context, two assumptions were introduced beforehand:

  1. Zero conditional mean, i.e. $E(u|x) = 0$, and
  2. Homoskedasticity, i.e. $Var(u|x) = \sigma^2$.

Then, the argument goes on as follows:

Because $Var(u|x) = E(u^2|x) - [E(u|x)]^2$ and $E(u|x) = 0$, $\sigma^2 = E(u^2|x)$, **which means $\sigma^2$ is also the unconditional expectation of $u^2$**.

While I understand the first part of the sentence, I have no idea where the bolded part comes from. It seems to say that because $E(u^2|x) = \sigma^2$ (i.e. the conditional expectation of $u^2$), it follows that $E(u^2) = \sigma^2$ (i.e. the unconditional expectation of $u^2$).
I might be missing something very basic here, but I can’t figure it out.

One Answer

It follows from the law of iterated expectations: for any random variable $Z$, $E[E[Z|X]] = E[Z]$. Applying this with $Z = u^2$, and noting that $E[u^2|X] = \sigma^2$ is a constant (so its expectation is just itself), gives

$$E[u^2] = E[E[u^2|X]] = E[\sigma^2] = \sigma^2$$
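
As a quick sanity check (not part of the book's argument), here is a minimal Monte Carlo sketch in Python. It assumes a hypothetical setup where $u$ is drawn as $N(0, \sigma^2)$ independently of $x$, so $E(u|x) = 0$ and $Var(u|x) = \sigma^2$ hold by construction; the sample second moment of $u$, both overall and within bins of $x$, should then come out close to $\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0       # true error standard deviation, so sigma^2 = 4
n = 1_000_000

# Hypothetical data-generating process: u ~ N(0, sigma^2), independent
# of x, so E(u|x) = 0 and Var(u|x) = sigma^2 by construction.
x = rng.uniform(-5.0, 5.0, size=n)
u = rng.normal(0.0, sigma, size=n)

# Unconditional second moment E[u^2]: should be close to sigma^2 = 4.
print("E[u^2] overall:", np.mean(u**2))

# Conditional second moments E[u^2 | x in bin]: each should also be
# close to sigma^2, which is homoskedasticity in action.
edges = np.linspace(-5.0, 5.0, 6)
labels = np.digitize(x, edges[1:-1])
for b in range(5):
    print(f"E[u^2 | bin {b}]:", np.mean(u[labels == b] ** 2))
```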

Correct answer by Ale on February 25, 2021
