Intuitive explanation on why stochastic encoding performs better in channel coding

Computer Science Asked by Black Jack 21 on November 10, 2020

I am a little confused about stochastic encoding in channel coding. For example, in the identification problem (R. Ahlswede and G. Dueck, "Identification via channels"), the authors show that we can achieve nearly zero probability of error using only $O(\log \log M)$ bits, where $M$ is the total number of messages, whereas deterministic encoding would require at least $O(\log M)$ bits.
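To make the scaling concrete (a rough back-of-the-envelope restatement, with $C$ denoting the channel capacity): for identification over $n$ channel uses, the number of identifiable messages grows doubly exponentially,

$$M_{\text{id}}(n) \approx 2^{2^{nC}} \quad\Longrightarrow\quad n \approx \tfrac{1}{C}\log\log M,$$

whereas for ordinary message transmission it grows only singly exponentially,

$$M_{\text{tx}}(n) \approx 2^{nC} \quad\Longrightarrow\quad n \approx \tfrac{1}{C}\log M.$$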

Can anyone give an intuitive explanation of why stochastic encoding is so effective? One idea that seems to oppose the result: a stochastic code produces many possible codewords for a single message, which should make the decoder's job harder, since the codewords are now more 'tightly packed', i.e., there are many more valid codewords for the same number of transmitted bits.

One Answer

Well, I think you are conflating two things. It is correct that in many settings probabilistic coding (e.g., polar codes, convolutional codes, LDPC codes), assuming you know a good model for the channel's noise process, does much better than algebraic coding, intuitively because channels have memory and algebraic block codes ignore that memory.

However, Ahlswede's identification problem is quite different: it is NOT a traditional channel coding problem. There is a good review here.

The author states

In the problem of identification via channels, introduced by Ahlswede and Dueck, the receiver is only interested in testing whether a particular message was sent, but the encoder does not know which message the decoder wants. Errors are now considered in terms of false alarm and missed identification. It turns out that for this case, one can design systems such that the number of different messages one can identify grows doubly exponentially with the blocklength. The trick is that each message can map into a list of codewords and the encoder selects one randomly. As long as the fraction of the pairwise overlap of these lists is small, the error probabilities will be small.
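To see the trick concretely, here is a minimal toy simulation over a noiseless channel (a sketch only; the parameters `N`, `K`, `M` are illustrative choices, not from the paper). Each message is assigned a random list of codewords, the stochastic encoder picks one at random, and the receiver who cares about message $j$ simply checks membership in list $j$:

```python
import random

# N: number of available codewords (think 2^n for blocklength n)
# K: size of each message's codeword list
# M: number of messages. Since there are C(N, K) possible lists,
#    M can grow doubly exponentially in the blocklength.
N, K, M = 2000, 40, 500
rng = random.Random(0)

# Each message i is assigned a random K-subset of the codewords.
lists = [frozenset(rng.sample(range(N), K)) for _ in range(M)]

def encode(i):
    """Stochastic encoder: pick a codeword uniformly from message i's list."""
    return rng.choice(tuple(lists[i]))

def identify(j, y):
    """Receiver interested only in message j: was j sent?"""
    return y in lists[j]

# Missed identification never happens here (noiseless channel).
assert all(identify(i, encode(i)) for i in range(M))

# The false-alarm probability for a pair (i, j) is |S_i ∩ S_j| / K,
# which stays small as long as the lists overlap little.
worst = max(len(lists[i] & lists[j]) / K
            for i in range(50) for j in range(50) if i != j)
print(f"worst sampled pairwise false-alarm probability: {worst:.3f}")
```

Note that the lists are allowed to share codewords; what matters is only that no pair of lists overlaps much, which random subsets achieve with high probability.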

Answered by kodlu on November 10, 2020
