
LDPC Decoder Examples

The following three examples are found in Sarah Johnson's paper Introducing Low-Density Parity-Check Codes on pages 21-39.  The discussions below go through the examples at a very high level without going into details about the mathematics involved or how computations are done within each decoding function.  It is helpful to understand the ideas at a high level first.  Details can be filled in later.

Although most LDPC decoder implementations are iterative, it is easier to understand how the decoders work by first looking at fully-parallel, non-iterative (flow-through) designs such as those shown below.

The first example is an erasure correcting LDPC decoder, the second a bit-flipping decoder and the third a soft-decision decoder.


Example 1: An Erasure Correcting LDPC Decoder

Assume

001011   was sent and
001xxx   was received.

For an erasure channel the decoder knows the 0's and 1's it receives are correct with 100% certainty but the x's are unknown or erased.  Sometimes "e" or "?" is used instead of "x" to indicate an erased bit.  (The erasure channel model is not very realistic because no real decoder can know with 100% certainty that the 1's and 0's it receives are correct.  However, the erasure channel model is useful as a first step in understanding how LDPC decoders work.)

The decoder also knows that certain subsets of received bits should sum to 0 modulo 2.

The following drawing is a block diagram for a parallelized and potentially pipelined LDPC erasure decoder, following the example in Sarah Johnson's paper starting on page 21.  This type of diagram is easier for me to understand than a Tanner graph.

[Figure: block diagram of the parallelized LDPC erasure decoder]

In the above example each Check Node has 3 inputs - 3 values from 3 different positions that should sum to 0.  Each Check Node "predicts" what the value in each position should be based on the values it receives in the other two positions.  Using the methodology in Sarah Johnson's paper, the value of each output bit is the modulo 2 sum of the other bits if both are known.  If any of the other bits are unknown, then the output bit is also unknown.  In this case the third erased bit can be determined to be a 1; the other erased bits remain unknown at this stage.

The Bit Nodes estimate what each bit should be based on their inputs.  In this simple example, if a Bit Node has a 0 or a 1 input, that value is passed through since received 0's and 1's are known to be correct with 100% certainty.  If a Bit Node receives both 0's and 1's as inputs, it has detected a decoding failure.  Decoding has also failed if all of the unknown x's have not been eliminated after some number of stages.
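A minimal sketch of these two node functions in Python (my own illustration of the rules described above, not code from the paper; bits are 0, 1, or None for an erasure):

    def check_node_output(other_bits):
        # Predict one bit as the modulo-2 sum of the other bits on the
        # check; the prediction is unknown if any of them is erased (None).
        if any(b is None for b in other_bits):
            return None
        return sum(other_bits) % 2

    def bit_node_output(channel_bit, predictions):
        # Pass through any known value.  Conflicting known inputs mean
        # decoding has failed.  (On an ideal erasure channel this
        # conflict cannot actually occur.)
        known = {b for b in [channel_bit] + predictions if b is not None}
        if len(known) > 1:
            raise ValueError("decoding failure: conflicting inputs")
        return known.pop() if known else None  # still erased if nothing known

    print(check_node_output([0, 1]))         # -> 1
    print(bit_node_output(None, [1, None]))  # -> 1 (erasure resolved)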

The following drawing is an alternative way of designing a flow-through LDPC decoder for the erasure channel.  In this decoder, the Check Nodes are designed so that if a 0 or 1 is presented to an input, that 0 or 1 is passed through to the corresponding output because it is known to be correct with 100% certainty; otherwise, the output is predicted based on the other inputs.
[Figure: alternative flow-through LDPC erasure decoder with pass-through Check Nodes]

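A minimal sketch of this pass-through Check Node (my own illustration, reusing check_node_output from the sketch above):

    def passthrough_check_node(inputs):
        # Pass known bits straight through; predict erased bits from
        # the other inputs on the check.
        outputs = []
        for i, b in enumerate(inputs):
            if b is not None:
                outputs.append(b)  # known with 100% certainty
            else:
                outputs.append(check_node_output(inputs[:i] + inputs[i + 1:]))
        return outputs

    print(passthrough_check_node([0, 1, None]))  # -> [0, 1, 1]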

Example 2:  A Bit-Flipping LDPC Decoder

Assume

001011  was sent and
011011  was received.

For the bit-flipping channel (the binary symmetric channel), 0's can be flipped to 1's and 1's to 0's.

In the following block diagram, the two decoding functions are again called "Check Nodes" and "Bit Nodes".  The Check Nodes predict what the values should be and the Bit Nodes estimate what the new, updated values should be.  If the majority of the Check Node predictions for a bit are opposite to the value received from the channel, then the Bit Node flips the received value, as shown in the following drawing.

[Figure: flow-through bit-flipping LDPC decoder]

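A minimal sketch of the majority-vote flipping rule at one Bit Node (my own illustration of the rule described above; the prediction values are made up for the example):

    def check_prediction(other_bits):
        # A Check Node's prediction for one bit: the modulo-2 sum of the
        # other bits on that check.
        return sum(other_bits) % 2

    def bit_node_flip(channel_bit, predictions):
        # Flip the received bit if the majority of Check Node predictions
        # disagree with it.
        disagreements = sum(1 for p in predictions if p != channel_bit)
        if disagreements > len(predictions) / 2:
            return 1 - channel_bit
        return channel_bit

    # A received 1 with predictions [0, 0, 1]: the majority says 0, so flip.
    print(bit_node_flip(1, [0, 0, 1]))  # -> 0
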

Example 3:  A Soft-Decision LDPC Decoder

Soft-decision decoders receive a sequence of probabilities rather than a sequence of bits, where each value indicates the probability that the bit in that position is a 1 (or a 0).  Probability values range from 0 to 1, while Log Likelihood Ratio (LLR) values range from minus infinity to plus infinity.  Conventional probability values can be mapped to LLR values and vice versa as indicated below.  The mapping from probability values to LLR values uses a natural logarithm (log_e), which allows probabilities to be multiplied by adding LLRs.  LLRs are also nice because a value of 0 indicates no confidence that the value is a 0 or a 1, and the more positive or negative the value is, the more confidence there is that the value is a 0 or a 1.  The sign of the LLR value indicates whether the value is more likely a 1 or more likely a 0.

[Figure: mapping between probability values and LLR values]

The above mapping is normally done in practice as shown below.

[Figure: how the probability-to-LLR mapping is done in practice]

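For reference, the mapping can be written directly from its definition.  This is a sketch using the common convention LLR = log_e(p(b=0)/p(b=1)), where a positive LLR means "more likely a 0"; sign conventions vary and the paper's may differ:

    import math

    def prob_to_llr(p1):
        # LLR from the probability p1 that the bit is a 1
        # (convention: LLR = ln(p0/p1), so positive means "likely 0").
        return math.log((1.0 - p1) / p1)

    def llr_to_prob(llr):
        # Inverse mapping back to the probability that the bit is a 1.
        return 1.0 / (1.0 + math.exp(llr))

    print(prob_to_llr(0.5))   # -> 0.0 (no confidence either way)
    print(llr_to_prob(-4.0))  # -> ~0.98 (confident the bit is a 1)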
LDPC decoders that operate on conventional probability values or on LLR values are almost identical to the bit-flipping decoder shown above, except that the node functions are more complicated mathematical functions and the data paths are wider, as illustrated with the wider, gray lines below.
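As a hint of what those more complicated functions look like, here is the standard sum-product Check Node rule in the probability domain (a generic sketch; the paper derives the exact update rules used in its example):

    import math

    def check_node_prob(other_probs):
        # Probability that a bit is a 1, given the probabilities that the
        # other bits on the check are 1, so that the check sums to 0 mod 2.
        prod = math.prod(1.0 - 2.0 * p for p in other_probs)
        return (1.0 - prod) / 2.0

    print(check_node_prob([0.9, 0.2]))  # -> 0.74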

A block diagram for an LDPC decoder operating on conventional probability values is shown below.

[Figure: LDPC decoder operating on conventional probability values]

Converters turn the final probabilities into bits, and, for non-iterative decoders, a final parity checker verifies that the output is a codeword.  Parity checks can also be done at each stage to allow early termination of the decoding process.
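A minimal sketch of the converter and final parity check (my own illustration; H stands for the code's parity-check matrix, one row per check):

    def to_bits(probs):
        # Hard decision: probability of being a 1 -> bit.
        return [1 if p > 0.5 else 0 for p in probs]

    def is_codeword(bits, H):
        # Every row of the parity-check matrix must sum to 0 modulo 2.
        return all(sum(h * b for h, b in zip(row, bits)) % 2 == 0 for row in H)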

The next step is to describe LDPC decoders which use Log-Likelihood Ratios (LLR values) instead of p values.  Using LLRs is supposed to simplify the implementation of LDPC decoders (the usual argument is that the Bit Node multiplications become simple additions of LLRs), but, at the present time, I do not see how they simplify the overall design.  It appears to me that using LLR values complicates the design because the Check Nodes then use hyperbolic tangent functions, and products are still required, so it is unclear to me how that simplifies things.
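For reference, the Check Node computation referred to above can be sketched as follows (the standard sum-product "tanh rule"; a generic sketch, not necessarily the exact form used in the paper):

    import math

    def check_node_llr(other_llrs):
        # LLR prediction for one bit from the other LLRs on the check:
        # a product of tanh's followed by an inverse tanh.
        prod = math.prod(math.tanh(l / 2.0) for l in other_llrs)
        return 2.0 * math.atanh(prod)

    # -> about -2.0 (the sign carries the predicted bit, the magnitude
    # the confidence)
    print(check_node_llr([3.0, -2.5]))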

A decoder operating on LLRs is shown below.

[Figure: LDPC decoder operating on LLR values]

Alternatively, an LLR decoder could be designed as shown below...

[Figure: alternative LLR decoder design]

The only difference between the above two drawings is the connections after the first row of Bit Nodes.

The argument in favor of the second decoder above is that, unless the channel is ultra-noisy, the outputs from the Check Nodes should be correct more often than they are incorrect, so including a Check Node's own "opinion" in the LLR that is sent back to it should increase the accuracy of that LLR more often than not.
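To make the difference concrete, here is a sketch of the two Bit Node variants (my own illustration; standard sum-product decoding uses the first, "extrinsic" form):

    def bit_node_extrinsic(channel_llr, check_llrs, dest):
        # Standard rule: the LLR sent back to check node 'dest' excludes
        # that check node's own contribution.
        return channel_llr + sum(llr for i, llr in enumerate(check_llrs)
                                 if i != dest)

    def bit_node_total(channel_llr, check_llrs):
        # Alternative rule: every check node gets the full sum,
        # including its own "opinion".
        return channel_llr + sum(check_llrs)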

It might also be possible to design a decoder as shown below which follows the same pattern as the alternative erasure decoder shown above.  If the confidence level of an LLR presented to a Check Node input is above some threshold, that LLR could be passed directly to the corresponding output; otherwise, the LLR output for that bit would be computed from the other LLRs input to that Check Node.  Alternatively, the LLR with the greatest confidence level could be chosen as the output.

[Figure: LLR decoder with thresholded pass-through Check Nodes]

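A minimal sketch of this thresholded pass-through rule (my own illustration, reusing check_node_llr from the earlier sketch; the threshold value is an arbitrary assumption):

    def passthrough_llr_check_node(llrs, threshold=4.0):
        # Pass high-confidence LLRs straight through; compute the rest
        # from the other LLRs on the check with the tanh rule.
        outputs = []
        for i, llr in enumerate(llrs):
            if abs(llr) >= threshold:
                outputs.append(llr)  # confident enough to pass through
            else:
                outputs.append(check_node_llr(llrs[:i] + llrs[i + 1:]))
        return outputs
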
Probability Value Widths

If the width of a probability value is 1 bit, then the probability becomes a hard decision which can only take on the values 0 or 1.  If we increase the width to two bits, then the values can be 00, 01, 10 and 11.  00 would be a 0 with more confidence than 01, and 11 would be a 1 with more confidence than 10.  Notice that there is no "on the fence", "middle of the road" value - it is either a 1 or a 0 with higher or lower confidence, as illustrated below.  It would be interesting to see what effect the width of the probability value has on decoding outcomes.

[Figure: 1-bit and 2-bit probability value representations]

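A minimal sketch of the 2-bit case described above (my own illustration; real decoders typically quantize LLRs, and the exact mapping is a design choice):

    def quantize_2bit(p1):
        # Map a probability of being a 1 onto the four 2-bit values:
        # 00 = strong 0, 01 = weak 0, 10 = weak 1, 11 = strong 1.
        # Note there is no "on the fence" middle value.
        if p1 < 0.25:
            return 0b00  # strong 0
        elif p1 < 0.5:
            return 0b01  # weak 0
        elif p1 < 0.75:
            return 0b10  # weak 1
        else:
            return 0b11  # strong 1

    print(quantize_2bit(0.6))  # -> 0b10, a low-confidence 1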