
Strong Data Processing Inequality for capped channels

MathOverflow Asked by Thomas Dybdahl Ahle on December 18, 2021

Let $X$ and $Y$ be two $\rho$-correlated Gaussian vectors, such that $X, Y \sim N(0,1)^n$ and $E[X_i Y_i] = \rho$.
Let $M_X = f(X)$ and $M_Y = f(Y)$ be $k$-bit functions of $X$ and $Y$; that is, $H(M_X), H(M_Y) \le k$.

We then have that $M_X - X - Y - M_Y$ is a Markov chain.

By the standard Strong Data Processing Inequality (SDPI), we get $I(M_X; Y) \le \rho^2 I(M_X; X) \le \rho^2 k$, and likewise $I(M_Y; X) \le \rho^2 k$.

I would like to show that $$I(M_X ; M_Y) \le \rho^2 k / 2.$$

This is motivated by the fact that when $M_X$ and $M_Y$ are seen as approximations to $X$ and $Y$, one should need twice as good approximations when both $X$ and $Y$ are compressed as when only one of them is (the case $I(X; M_Y)$).
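The simplest instance of the conjecture can be checked in closed form. For $k = n = 1$ and one-bit sign quantizers, the classical orthant probability for a bivariate normal gives $P(\operatorname{sign} X = \operatorname{sign} Y) = 1/2 + \arcsin(\rho)/\pi$, and since both bits are uniform, $I(M_X; M_Y) = 1 - h(p_{\text{agree}})$ with $h$ the binary entropy. A minimal Python sketch (function names are mine, not from the question):

```python
import math

def binary_entropy(p):
    """Entropy in bits of a Bernoulli(p) variable."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def mi_sign_bits(rho):
    """I(M_X; M_Y) in bits for M_X = sign(X), M_Y = sign(Y), where
    (X, Y) is standard bivariate normal with correlation rho.
    Uses the orthant probability P(M_X = M_Y) = 1/2 + arcsin(rho)/pi.
    Both bits are uniform, so I = H(M_Y) - H(M_Y | M_X) = 1 - h(p_agree)."""
    p_agree = 0.5 + math.asin(rho) / math.pi
    return 1.0 - binary_entropy(p_agree)

for rho in (0.1, 0.3, 0.5, 0.9, 1.0):
    print(f"rho={rho:.1f}  I={mi_sign_bits(rho):.4f}  rho^2/2={rho**2 / 2:.4f}")
```

For small $\rho$ the sign-quantizer value stays below $\rho^2/2$ (asymptotically it behaves like $\tfrac{2}{\pi^2 \ln 2}\rho^2 \approx 0.29\,\rho^2$), while at $\rho = 1$ it reaches $1$ bit, which is the counterexample given in the answer below.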

I’m aware of the general result by Yury Polyanskiy and Yihong Wu, which lets one compute an end-to-end SDPI for a Markov chain (or graph). However, that result assumes each link loses some constant factor, whereas in my case the two links $M_X - X$ and $Y - M_Y$ are rate-capped rather than noisy.

Do you know any information-theoretical tricks, or known relations, that I’m unaware of which might be useful?

One Answer

What happens if $n = k = 1$ and $X = Y$? In this case $\rho = 1$. Let $M_X = 0$ if $X < 0$ and $1$ otherwise, and likewise $M_Y = 0$ if $Y < 0$ and $1$ otherwise. Then $M_X = M_Y$, so $I(M_X; M_Y) = H(M_X) - H(M_X mid M_Y) = 1 - 0 = 1 > \rho^2 k / 2 = 1/2$. This seems to contradict your wanted inequality.
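As a numerical sanity check of this counterexample, a plug-in estimate of $I(M_X; M_Y)$ from samples with $\rho = 1$ (the helper name `empirical_mi_bits` is mine) comes out at roughly one bit:

```python
import math
import random
from collections import Counter

def empirical_mi_bits(pairs):
    """Plug-in estimate of I(A; B) in bits from a list of (a, b) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    pa = Counter(a for a, _ in pairs)
    pb = Counter(b for _, b in pairs)
    mi = 0.0
    for (a, b), count in joint.items():
        p_ab = count / n
        # p_ab * log2( p_ab / (p_a * p_b) )
        mi += p_ab * math.log2(p_ab * n * n / (pa[a] * pb[b]))
    return mi

random.seed(0)
samples = []
for _ in range(100_000):
    x = random.gauss(0.0, 1.0)
    y = x  # rho = 1 means X = Y
    samples.append((x < 0, y < 0))

print(empirical_mi_bits(samples))  # roughly 1 bit, exceeding rho^2 k / 2 = 1/2
```

Since $M_X = M_Y$ here, the plug-in estimate is just the empirical entropy of the sign bit, which is close to $1$ for a near-even split.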

Answered by Mattias Andersson on December 18, 2021
