Cross Validated Asked by Gabriel on July 28, 2020

I’m having trouble proving an intuitive result I found in these lecture notes I’m using for self-study (1.2.14 there).

Suppose $X$ is an $(\mathbb{S}, \mathcal{S})$-valued random variable (from $(\Omega, \mathcal{F})$), and furthermore $\mathcal{S} = \sigma(\mathcal{A})$. If $\mathcal{F}^X$ is the $\sigma$-algebra generated by $X$ in $\Omega$, we want to show that $\mathcal{F}^X = \sigma(\{X^{-1}(A) : A \in \mathcal{A}\})$.

It’s easy to prove that $\mathcal{F}^X \supset \sigma(\{X^{-1}(A) : A \in \mathcal{A}\})$, by noticing that (i) $\mathcal{F}^X$ is a $\sigma$-algebra, and that (ii) it contains $\{X^{-1}(A) : A \in \mathcal{A}\}$. But I believe I’m missing the right proof strategy for the other direction. Just appealing to the definitions and the tools developed so far (e.g. the $\pi$-$\lambda$ theorem) didn’t take me very far.

I think I get the spirit of the claim. Basically, it says that if you have a set of generators $\mathcal{A}$ of $\mathcal{S}$, to obtain $\mathcal{F}^X$ you can either take the inverse images of *all* sets generated by $\mathcal{A}$, or you can take the inverse images of just the sets in $\mathcal{A}$ and then use those to generate a $\sigma$-algebra. So, the order of the operations "taking inverse images" and "generating a $\sigma$-algebra" doesn’t matter. Is this understanding correct?
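In the finite case, that commuting claim can be checked mechanically: generate the $\sigma$-algebra on $\mathbb{S}$ from $\mathcal{A}$ and pull it back, then pull back only $\mathcal{A}$ and generate in $\Omega$, and compare. Here is a minimal sketch; the particular $\Omega$, $X$, and $\mathcal{A}$ below are illustrative assumptions, not taken from the notes:

```python
from itertools import combinations

def sigma_algebra(generators, universe):
    """Smallest sigma-algebra on a finite `universe` containing each generator:
    close under complement and pairwise union until nothing new appears."""
    fam = {frozenset(), frozenset(universe)} | {frozenset(g) for g in generators}
    changed = True
    while changed:
        changed = False
        for a in list(fam):
            comp = frozenset(universe) - a
            if comp not in fam:
                fam.add(comp)
                changed = True
        for a, b in combinations(list(fam), 2):
            if a | b not in fam:
                fam.add(a | b)
                changed = True
    return fam

def preimage(X, B, omega):
    """X^{-1}(B) for X given as a dict omega -> S."""
    return frozenset(w for w in omega if X[w] in B)

omega = set(range(6))
S = {0, 1, 2}
X = {0: 0, 1: 0, 2: 1, 3: 1, 4: 2, 5: 2}   # a sample map Omega -> S
A = [{0}, {1}]                              # generators of a sigma-algebra on S

# Pull back all of sigma(A) ...
lhs = {preimage(X, B, omega) for B in sigma_algebra(A, S)}
# ... versus generate from the pulled-back generators alone.
rhs = sigma_algebra([preimage(X, a, omega) for a in A], omega)
assert lhs == rhs
```

This is of course only a sanity check, not a proof: in the finite setting closure under complement and union terminates, whereas the general claim needs an argument about countable operations.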

Any hint on a direction that might work for the proof would be extremely appreciated!
