
LDPC for channel coding

Signal Processing · Asked on October 24, 2021

I am working on OFDM over a harsh channel, so channel coding is essential for achieving reliable communication.

According to my reading, LDPC is just about the best channel code we can use, but I have a question regarding that kind of coding.

Assume we are using LDPC with rate $1/2$, which means that the communication system uses half of the data rate for coding and the other half for data.

If the OFDM symbol has $1024$ sub-carriers with modulation order $M = 4$, that gives us a total of $2048$ bits per symbol. In that case, will the LDPC take the whole $2048$ bits together? I mean, will it take $128$ bits from the whole set for coding and the rest for data, OR will it work group by group, for example in groups of $6$ bits, coding them as $3$ for data and $3$ for parity?

Which one is right?

One Answer

According to my reading, LDPC is just about the best channel code we can use, but I have a question regarding that kind of coding.

Something being called "the best" should always instantly raise a mental flag for you, asking "under which conditions, and according to which measure?".

It is right that iterative LDPC decoders can achieve maximum-likelihood performance (i.e., be the best possible decoder), but only when

  1. the code is large, and
  2. there's an infinite number of iterations.

While 2. is never fulfilled, there's often a number of iterations after which gains are small enough for people to simply stop and limit complexity.
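
To make that iteration cap concrete, here is a minimal sketch (my own, not part of this answer) of a hard-decision bit-flipping decoder for a toy parity-check matrix: it stops early once all checks are satisfied and otherwise gives up after a fixed number of iterations. Real LDPC decoders use soft-decision belief propagation over much larger matrices, but the stopping logic is the same idea.

    import numpy as np

    # Toy (6,3) parity-check matrix -- far too small to be a real LDPC code,
    # but enough to show the decoding loop.
    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]], dtype=np.uint8)

    def bit_flip_decode(received, H, max_iters=20):
        """Gallager-style hard-decision bit flipping with an iteration cap."""
        r = received.copy()
        col_degree = H.sum(axis=0)            # how many checks each bit is in
        for iteration in range(max_iters):
            syndrome = (H @ r) % 2            # which parity checks fail
            if not syndrome.any():            # early stop: all checks satisfied
                return r, iteration
            # per bit: what fraction of its checks currently fail
            frac_failed = (H.T @ syndrome) / col_degree
            r[np.argmax(frac_failed)] ^= 1    # flip the most suspicious bit
        return r, max_iters                   # give up once the cap is reached

    # All-zero codeword with a single bit error in position 3.
    received = np.array([0, 0, 0, 1, 0, 0], dtype=np.uint8)
    decoded, iters = bit_flip_decode(received, H)
    print(decoded, "after", iters, "iterations")   # -> [0 0 0 0 0 0] after 1 iterations

With a real LDPC code one would feed soft log-likelihood ratios from the received OFDM subcarriers instead of hard bits, but the loop structure (check the syndrome, stop early, cap the iterations) stays the same.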

The first condition, however, is often a pretty complicated one to fulfill:

will the LDPC take the whole 2048 bits together?

That is a small LDPC code (128 bits would really be tiny, and I don't think I've seen any sensible OFDM application do that; lowest-rate IoT modes might be interested in doing that on the uplink, but that doesn't match the OFDM approach).

At a small size like 2048 bits, less complex codes and decoders might be comparable or even better suited to your use case and error model. (PS: Have an error model before deciding on the code you use! There was an awesome website, http://pretty-good-codes.org/ , sadly now offline, that compared many codes across different metrics.)

Try to put more bits into a single codeword if you want to harness the power of LDPC codes. For example, in DVB-T2 (which is an OFDM system), the short codewords are 16200 bits long and the normal ones are 64800 bits long.
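
Just to put those sizes in perspective (this is my arithmetic, not part of the original answer, and it assumes the 2048 coded bits per OFDM symbol from the question): a normal DVB-T2 frame at rate $1/2$ carries

$$k = n \cdot R = 64800 \cdot \tfrac{1}{2} = 32400$$

information bits, so a single codeword is spread across $64800 / 2048 \approx 31.6$ of the proposed OFDM symbols.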

I mean, will it take 128 bits from the whole set for coding and the rest for data, OR will it work group by group, for example in groups of 6 bits, coding them as 3 for data and 3 for parity?

Neither. You take blocksize · rate (so, in your proposed system, 2048 · 1/2 = 1024, but really, use larger blocks and established LDPC codes) information bits and encode them as one block. The result is usually not systematic, so you don't get separate redundancy bits: you get a codeword that is blocksize bits long and that usually doesn't contain the original bits in any plainly structured way. (Systematic LDPCs are usually undesirable.)
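
For the size bookkeeping only, here is a toy sketch (my own, and far too small to be an LDPC code) of "take blocksize · rate information bits and encode them as one": a generator matrix G over GF(2) turns a block of k information bits into one n-bit codeword. It is written in systematic form purely for brevity; as noted above, a practical LDPC codeword need not contain the information bits this plainly.

    import numpy as np

    n, rate = 6, 1 / 2
    k = int(n * rate)                     # 3 information bits per codeword

    # Generator matrix of a toy (6,3) rate-1/2 code; it pairs with the
    # parity-check matrix H from the decoder sketch above.
    G = np.array([[1, 0, 0, 1, 0, 1],
                  [0, 1, 0, 1, 1, 0],
                  [0, 0, 1, 0, 1, 1]], dtype=np.uint8)

    info_bits = np.array([1, 0, 1], dtype=np.uint8)   # one block of k bits
    codeword = (info_bits @ G) % 2                    # one block of n coded bits
    print(codeword)                                   # -> [1 0 1 1 1 0]

The point is only the bookkeeping: the encoder consumes all k = n · rate information bits of a block at once and emits a single n-bit codeword, exactly as described above.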

You will need to use a decoder to get the original bits from the codeword.

Answered by Marcus Müller on October 24, 2021
