Mathematics Asked by Andrew Shedlock on January 3, 2022

A stochastic process $X = \{X_t\}_{t \geq 0}$ is a Wiener process if the following properties hold:

- $X_0 = 0$
- $X$ has independent increments: for any $n \in \mathbb{N}$ and any $0 \leq t_0 < t_1 < \ldots < t_n$, the increments $(X_{t_1}-X_{t_0}), (X_{t_2}-X_{t_1}), \ldots, (X_{t_n}-X_{t_{n-1}})$ are independent
- $X_t - X_s \sim \mathcal{N}(0, t-s)$ whenever $s \leq t$
- $t \mapsto X_t(\omega)$ is continuous for almost every sample path $\omega$

I am looking for a proof that these properties imply that the finite-dimensional distributions are given by the following formula: for $0 \leq t_1 < \ldots < t_n$ and any Borel sets $F_1, \ldots, F_n$,

$$P(X_{t_1} \in F_1, \ldots, X_{t_n} \in F_n) = \int_{F_1 \times \ldots \times F_n} p(t_1, 0, x_1)\, p(t_2 - t_1, x_1, x_2) \cdots p(t_n - t_{n-1}, x_{n-1}, x_n)\, dx_1 \ldots dx_n$$

where $p(t, x, y) = \frac{1}{\sqrt{2\pi t}}\exp\left(-\frac{(x-y)^2}{2t}\right)$ for $t > 0$ and $p(0,x,y) = \delta_x(y)$. We know from Kolmogorov's Extension Theorem that this family of finite-dimensional distributions yields a Wiener process; I am curious whether the properties go the other way, and would like a proof.

The easiest way to prove this Chapman-Kolmogorov relation is to note that it is equivalent to the fact that $(X_{t_1},\dots,X_{t_n})$ is multivariate normally distributed with mean $0$ and covariance matrix $C_{i,j}=t_{i\wedge j}$. To see this, observe that by properties 2 and 3, $(X_{t_1},X_{t_2}-X_{t_1},\dots,X_{t_n}-X_{t_{n-1}})$ is multivariate normal with mean $0$ and covariance matrix
$$ \bar{C}=\mathrm{diag}(t_1,\, t_2-t_1,\, \dots,\, t_n-t_{n-1}). $$
Now
$$ \begin{pmatrix} X_{t_1}\\ \vdots\\ X_{t_n} \end{pmatrix}=\begin{pmatrix} 1 & 0 & \cdots & 0\\ 1 & 1 & \cdots & 0\\ \vdots & & \ddots & \vdots\\ 1 & 1 & \cdots & 1 \end{pmatrix}\begin{pmatrix} X_{t_1}\\ X_{t_2}-X_{t_1}\\ \vdots\\ X_{t_n}-X_{t_{n-1}} \end{pmatrix} $$
and therefore
$$ C=\begin{pmatrix} 1 & 0 & \cdots & 0\\ 1 & 1 & \cdots & 0\\ \vdots & & \ddots & \vdots\\ 1 & 1 & \cdots & 1 \end{pmatrix}\bar{C}\begin{pmatrix} 1 & 1 & \cdots & 1\\ 0 & 1 & \cdots & 1\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \cdots & 1 \end{pmatrix}, $$
and one can easily check that this gives the required covariance matrix.
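The final matrix identity can be verified numerically. Below is a small sketch (plain Python, with arbitrarily chosen sample times as an assumption) that builds the lower-triangular matrix of ones $A$ and the diagonal increment covariance $\bar{C}$ from the answer, forms $C = A\bar{C}A^{\mathsf{T}}$, and checks entrywise that $C_{i,j} = t_{i\wedge j}$:

```python
# Sanity check of the identity C = A * diag(t_1, t_2 - t_1, ...) * A^T,
# where A is the lower-triangular matrix of ones and C[i][j] should be t_{min(i,j)}.

t = [0.5, 1.3, 2.0, 3.7]  # arbitrary increasing sample times (assumption)
n = len(t)

# Diagonal of C-bar: increment variances t_i - t_{i-1}, with t_0 = 0.
diffs = [t[0]] + [t[i] - t[i - 1] for i in range(1, n)]

# A[i][j] = 1 for j <= i, else 0.
A = [[1 if j <= i else 0 for j in range(n)] for i in range(n)]

# C = A @ diag(diffs) @ A^T, computed entrywise:
# C[i][j] = sum_k A[i][k] * diffs[k] * A[j][k].
C = [[sum(A[i][k] * diffs[k] * A[j][k] for k in range(n)) for j in range(n)]
     for i in range(n)]

# Each entry should equal t_{min(i,j)}.
for i in range(n):
    for j in range(n):
        assert abs(C[i][j] - t[min(i, j)]) < 1e-12
print("covariance check passed")
```

The check works for any choice of increasing times, since $C_{i,j} = \sum_{k \leq i \wedge j} (t_k - t_{k-1})$ telescopes to $t_{i \wedge j}$.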

Answered by julian on January 3, 2022
