
Low Complexity Soft-Input Soft-Output Decoding

Algorithm for LDPC Codes Based on Euclidean

Distance

P. G. Farrell, L. J. Arnone, and J. Castiñeira Moreira

Abstract— A new antilog-sum algorithm for decoding error-correcting codes is

described. The soft-input, soft-output algorithm uses squared Euclidean distance as the

metric, does not require knowledge of the signal-to-noise ratio of the received signal,

and is less complex to implement than other soft-input, soft-output algorithms. The

results of simulations show that the performance is very close to that of the sum-product

algorithm.

Index Terms—Euclidean metric, LDPC codes, SISO decoding, soft-decision.

Introduction: Binary and non-binary error-control codes are now very widely used to

protect information transmitted over telecommunications links and networks. Soft-

distance metrics, such as Euclidean distance, are used to provide maximum likelihood

(ML) decoding of these codes, thus giving the user the best possible estimate of the

transmitted block or frame of information. ML decoders are soft-input, hard-output

(SIHO) devices, since no confidence estimate is provided of the individual output bit or

symbol values. There are many communication scenarios, however, where it is

necessary to have the best possible soft estimate of the individual bits or symbols. This

is maximum a posteriori probability (MAP) decoding, and is a soft-input, soft-output

(SISO) process. SISO decoding is required, for example, when using concatenated

coding or iterative decoding schemes, where the decoders must interchange soft

information in order to maximise the overall performance. SISO decoders use either bit

or symbol probability metrics, such as in a posteriori probability (APP) decoders, or more

often log-probability metrics, as in log-likelihood ratio (LLR) decoders.

Until recently, it was not known how to use soft distance directly as a metric in a SISO

decoder. However, it has now been shown [1] that soft distances can be combined by

taking minus the log of the sum of the antilogs of their negative values. The resulting

antilog-sum algorithm can then be applied to the trellis or factor (Tanner) graph of a

code to achieve SISO decoding. This novel soft-distance (SD) algorithm processes

information in a similar way to algorithms based on LLR techniques [2]-[4], but does not

require knowledge of the signal-to-noise ratio (SNR) of the received signal. This may in

some cases slightly reduce the performance of the new algorithm, but has advantages

when the noise distribution is not known or the SNR is difficult to determine. Also, a

modified form of the new algorithm, the simplified soft distance (SSD) algorithm, offers a

potential complexity advantage without loss of decoding performance.
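As a concrete illustration of this combining rule (a minimal Python sketch of our own, not taken from [1]; base-2 logarithms are assumed, consistent with the equations later in this Letter), two soft distances are combined by taking minus the log of the sum of the antilogs of their negative values:

    # Combine two soft distances: -log2(2^-d1 + 2^-d2).
    # The result is dominated by the smaller distance, so the rule acts as a
    # "soft minimum" over the candidate distances.
    import math

    def antilog_sum(d1: float, d2: float) -> float:
        return -math.log2(2.0 ** (-d1) + 2.0 ** (-d2))

    print(antilog_sum(1.2, 4.0))   # ~1.01, slightly below min(1.2, 4.0)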

Soft Distance (SD) Iterative Decoding Algorithm: The algorithm can be applied on either

the trellis or the factor (Tanner) graph [3] of a binary (or non-binary) error-correcting

code. Here we describe the algorithm as applied to the graph of a low-density parity-

check code. As usual, the graph has n symbol nodes and m parity nodes, with edges,

as defined by the parity-check matrix of the code, connecting the symbol and parity

nodes. Distance metric information will be passed

from symbol nodes to parity nodes (the horizontal step in the algorithm), and then from

parity nodes to symbol nodes (the vertical step), in an iterative manner.

Assuming the transmission of coded binary information $\mathbf{c} = (c_1\; c_2\; \ldots\; c_j\; \ldots\; c_n)$ in normalized polar format with signals of amplitudes $\pm 1$, and for a received vector $\mathbf{r} = (r_1\; r_2\; \ldots\; r_j\; \ldots\; r_n)$, we calculate the vectors:

$$d_0^2(j) = (r_j + 1)^2, \qquad d_1^2(j) = (r_j - 1)^2, \qquad j = 1, 2, \ldots, n \qquad (1)$$

These first estimates of the code symbols are used to initialise the algorithm by setting the following coefficients $q_{ij}^0$ and $q_{ij}^1$ at each symbol node:

$$q_{ij}^0 = d_0^2(j), \qquad q_{ij}^1 = d_1^2(j), \qquad j = 1, 2, \ldots, n, \quad i = 1, 2, \ldots, m \qquad (2)$$
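As an illustration only, the initialisation (1)-(2) might be coded as in the following Python sketch; it assumes the polar mapping of (1) (bit 0 sent as $-1$, bit 1 as $+1$), and the variable names and the dense 0/1 parity-check matrix H are our own choices, not the paper's.

    import numpy as np

    def initialise(r, H):
        # r: received vector of length n; H: m x n parity-check matrix (0/1 entries).
        d0 = (r + 1.0) ** 2          # squared Euclidean distance to the '0' signal (-1)
        d1 = (r - 1.0) ** 2          # squared Euclidean distance to the '1' signal (+1)
        q0 = H * d0[np.newaxis, :]   # q_ij^0 = d0(j) on every edge (i, j) of the graph
        q1 = H * d1[np.newaxis, :]   # q_ij^1 = d1(j)
        return d0, d1, q0, q1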

These are then passed to the parity-check nodes (first horizontal step) connected to

each symbol node, where the following coefficients are computed:

$$r_{ij}^0 = -\log_2\!\left( \sum_{s:\, s_j = 0} 2^{-\sum_{k \in N(i) \setminus j} q_{ik}^{s_k}} \right), \qquad r_{ij}^1 = -\log_2\!\left( \sum_{s:\, s_j = 1} 2^{-\sum_{k \in N(i) \setminus j} q_{ik}^{s_k}} \right) \qquad (3)$$

Here $N(i) \setminus j$ is the set of indexes of the symbol nodes connected to parity node $h_i$, excluding symbol node $s_j$. The inner summations add together the $q$-values corresponding to all the possible 0-state or 1-state configurations of the $N(i) \setminus j$ code symbols connected to parity-check node $h_i$, and the antilogs of the negatives of these coefficient sums are then added in the outer summation. $r_{ij}^0$ and $r_{ij}^1$ are then passed back (first vertical step) to symbol node $s_j$. At each symbol node $s_j$ the initial values of $q_{ij}^0$ and $q_{ij}^1$ are added to the $r$-values passed from each parity node connected to it, to form a second symbol estimate at each node:

$$q_{ij}^0 = d_0^2(j) + \sum_{k \in M(j) \setminus i} r_{kj}^0, \qquad q_{ij}^1 = d_1^2(j) + \sum_{k \in M(j) \setminus i} r_{kj}^1 \qquad (4)$$

Here $M(j) \setminus i$ is the set of indexes of all the parity nodes connected to symbol node $s_j$, excluding parity node $h_i$. These horizontal and vertical steps are then iterated an appropriate number of times to give a final estimate for each received symbol, given by:

$$\hat{d}_j^2 = \arg\min_x \left( d_x^2(j) + \sum_{k \in M(j)} r_{kj}^x \right), \qquad x = 0, 1 \qquad (5)$$

This decoding algorithm and the original sum-product (SP) algorithm [2, 3] have a

similar processing structure, but the novelty of the SD algorithm lies in its use of soft

(Euclidean) distance as the metric.
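To make the horizontal step (3) concrete, the following brute-force Python sketch (our own illustration, with our own variable names) computes $r_{ij}^0$ and $r_{ij}^1$ for a single parity node. It assumes, as in the sum-product algorithm that the SD algorithm parallels, that the configurations summed over are those satisfying the parity check, and it enumerates them explicitly, which is exponential in the node degree (the SSD transformations of the next Section avoid this).

    from itertools import product
    from math import log2

    def horizontal_step(q0_row, q1_row, neighbours):
        # q0_row[k], q1_row[k]: current q_ik^0, q_ik^1 for each symbol k in N(i).
        # neighbours: list of symbol indices connected to parity node i.
        r0, r1 = {}, {}
        for j in neighbours:
            others = [k for k in neighbours if k != j]         # N(i) \ j
            acc = {0: 0.0, 1: 0.0}
            for bits in product((0, 1), repeat=len(others)):   # configurations of N(i) \ j
                sj = sum(bits) % 2                             # s_j forced by even overall parity
                metric = sum(q0_row[k] if b == 0 else q1_row[k]
                             for k, b in zip(others, bits))    # sum of q_ik^{s_k}
                acc[sj] += 2.0 ** (-metric)                    # antilog of the negative sum
            r0[j] = -log2(acc[0])
            r1[j] = -log2(acc[1])
        return r0, r1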

Simplified Soft Distance (SSD) Algorithm: The SD antilog-sum algorithm described in

the previous Section can be simplified and improved in three main ways. Firstly, the

operations in expression (3) can be computed by using the following relation:

$$-\log_2\!\left( 2^{-|\alpha|} + 2^{-|\beta|} \right) = \min\!\left( |\alpha|, |\beta| \right) - \log_2\!\left( 1 + 2^{-\left|\,|\alpha| - |\beta|\,\right|} \right) \qquad (6)$$

Thus direct calculation of the logarithmic term in (6) can be avoided by using a look-up table over a suitable range of $\alpha$ and $\beta$ values. It is also possible to neglect the logarithmic term altogether, but the performance would then be significantly degraded.
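One possible form of such a look-up table is sketched below in Python; the table range and step size are illustrative choices of ours, not values taken from this Letter.

    from math import log2

    STEP = 0.25                       # table resolution (illustrative)
    TABLE = [log2(1.0 + 2.0 ** (-k * STEP)) for k in range(int(8 / STEP))]

    def combine(alpha: float, beta: float) -> float:
        # -log2(2^-|alpha| + 2^-|beta|) evaluated as a min plus a tabulated correction, as in (6).
        a, b = abs(alpha), abs(beta)
        idx = int(abs(a - b) / STEP)
        correction = TABLE[idx] if idx < len(TABLE) else 0.0   # term vanishes for large |a - b|
        return min(a, b) - correction

    print(combine(1.0, 1.5))   # ~0.23, matching the exact value given by (6)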

Secondly, determining the number of state configurations required to calculate the

expressions in (3) is a complex task. This problem can be controlled by adopting a

technique originally proposed by MacKay and Neal [3]. This involves working with sums

and differences of distance antilogs, as the following expressions for the horizontal step

indicate. We define:

$$Q_{ij}^x = 2^{-q_{ij}^x}, \qquad R_{ij}^x = 2^{-r_{ij}^x} \qquad (7)$$

and

$$Q_{ij}^+ = Q_{ij}^0 + Q_{ij}^1 = 2^{-a_{ij}}, \qquad Q_{ij}^- = Q_{ij}^0 - Q_{ij}^1 = (-1)^{s_{ij}}\, 2^{-b_{ij}} \qquad (8)$$

where $a_{ij} = -\log_2\!\big( Q_{ij}^+ \big)$, $b_{ij} = -\log_2\!\big| Q_{ij}^- \big|$, and $s_{ij} = 0$ if $q_{ij}^0 \le q_{ij}^1$, else $s_{ij} = 1$, which lead to:

$$R_{ij}^0 + R_{ij}^1 = \prod_{k \in N(i) \setminus j} Q_{ik}^+, \qquad R_{ij}^0 - R_{ij}^1 = \prod_{k \in N(i) \setminus j} Q_{ik}^- \qquad (9)$$

and

$$R_{ij}^0 = \frac{1}{2}\left( \prod_{k \in N(i) \setminus j} Q_{ik}^+ + \prod_{k \in N(i) \setminus j} Q_{ik}^- \right), \qquad R_{ij}^1 = \frac{1}{2}\left( \prod_{k \in N(i) \setminus j} Q_{ik}^+ - \prod_{k \in N(i) \setminus j} Q_{ik}^- \right) \qquad (10)$$

Similar expressions apply to the vertical step. Using these transformations it is no

longer necessary to determine the actual number of state configurations; all that is

required is to know whether the number of edges connected to a parity node is odd or

even, which significantly reduces the complexity. As seen above, decoding can then be

completed by using only comparisons and look-up tables.
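The Python sketch below shows how (7)-(10) can be used in the horizontal step. For readability it evaluates the logs and antilogs in floating point; in the fixed-point implementation the Letter has in mind, each of these pairwise combinations collapses into the comparisons and look-up table of relation (6). All names are our own.

    from math import log2

    def ssd_horizontal_step(q0_row, q1_row, neighbours):
        # Per-edge quantities of (7)-(8): a_ik, b_ik and the sign bit s_ik.
        a, b, s = {}, {}, {}
        for k in neighbours:
            Qp = 2.0 ** (-q0_row[k]) + 2.0 ** (-q1_row[k])      # Q_ik^+
            Qm = 2.0 ** (-q0_row[k]) - 2.0 ** (-q1_row[k])      # Q_ik^-
            a[k] = -log2(Qp)
            b[k] = -log2(abs(Qm)) if Qm != 0.0 else float("inf")
            s[k] = 0 if q0_row[k] <= q1_row[k] else 1

        r0, r1, tiny = {}, {}, 1e-300
        for j in neighbours:
            others = [k for k in neighbours if k != j]          # N(i) \ j
            A = sum(a[k] for k in others)                       # exponent of prod Q_ik^+
            B = sum(b[k] for k in others)                       # exponent of |prod Q_ik^-|
            sign = -1.0 if sum(s[k] for k in others) % 2 else 1.0
            R0 = 0.5 * (2.0 ** (-A) + sign * 2.0 ** (-B))       # equation (10)
            R1 = 0.5 * (2.0 ** (-A) - sign * 2.0 ** (-B))
            r0[j] = -log2(max(R0, tiny))                        # floor guards exact cancellation
            r1[j] = -log2(max(R1, tiny))
        return r0, r1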

Thirdly, though working with the sums and differences of the distance antilogs helps to

reduce overflow and underflow problems, we have found that overflow problems still

remain when using a relatively large number of decoding iterations. In order to avoid

this, the $R_{ij}^x$ values are normalized in the following way after each iteration:

$$R_{N,ij}^x = R_{ij}^x \left( \frac{2}{R_{ij}^0 + R_{ij}^1} \right), \qquad r_{ij}^x = -\log_2\!\left( R_{N,ij}^x \right) \qquad (11)$$

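A minimal sketch of this normalization step, assuming the scaling factor in (11) is two divided by the sum of the current pair of R-values:

    from math import log2

    def normalise(R0: float, R1: float):
        # Rescale so that the pair sums to 2, then return to the log domain.
        scale = 2.0 / (R0 + R1)
        return -log2(R0 * scale), -log2(R1 * scale)   # r_ij^0, r_ij^1 for the next iteration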
As a result of the above modifications, the SSD decoding algorithm is significantly less

complex than the basic SD algorithm, as shown in the next Section, but with no loss in

decoding performance.

Complexity Aspects: To analyse the complexity of the proposed algorithm, we define $t = M(j)_{av}$ as the average number of ones per column, and $v = N(i)_{av}$ as the average number of ones per row, in the parity-check matrix of the code. Typically we have $v = Nt/M$, so that for a rate-$1/2$ LDPC code $M = N/2$ and $v = 2t$. Table 1 compares the complexity of the different algorithms used in this Letter (for $t = 3$ and $v = 6$). The SSD algorithm has the lowest complexity in this comparison.
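As a rough illustration only (our own arithmetic, on the assumption that the Table 1 entries are per-iteration operation counts expressed per code bit $N$), the per-iteration totals for the $C_{LDPC}(1008, 504)$ code used in the simulations would be:

    N = 1008
    per_bit = {                      # per-N counts from Table 1 (t = 3, v = 6)
        "SD":    {"sums": 402, "comparisons": 90, "look-ups": 18},
        "SSD":   {"sums": 72,  "comparisons": 21, "look-ups": 12},
        "SP":    {"products": 36, "quotients": 6, "sums": 15},
        "LogSP": {"sums": 78,  "comparisons": 24, "look-ups": 21},
    }
    for name, ops in per_bit.items():
        print(name, {op: count * N for op, count in ops.items()})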

Simulation Results: Figure 1 shows the results of simulations of the $C_{LDPC}(1008, 504)$ LDPC code [5], decoded using either the proposed SSD algorithm or the LogSP algorithm [6], with 16, 40 and 400 iterations. Additional simulations were carried out for two shorter LDPC codes of different rates, the $C_{LDPC}(273, 191)$ and $C_{LDPC}(96, 32)$ codes [5], both decoded using either the proposed SSD algorithm or the LogSP algorithm with 16 iterations. Our normalization procedure avoids the overflow problems that usually appear when a large number of iterations is used, problems that could otherwise result in a slight degradation of the BER performance with respect to the LogSP algorithm. Normalization makes the BER performance effectively independent of the number of iterations. Therefore the BER performance of the proposed SSD decoding algorithm is the same as that of the LogSP algorithm for any number of iterations.

Conclusions: In this paper a squared Euclidean distance SISO decoder has been described, with simulation results for several LDPC codes. The algorithm has the advantage of significantly reducing the complexity of the original sum-product decoding algorithm [2]. The reduction in complexity arises because the proposed algorithm uses only additions, comparisons, and a look-up table, avoiding the use of quotients and products, operations that are costly in practical implementations, especially in Field Programmable Gate Array (FPGA) technology. Another important advantage of

the proposed decoder is that it does not require knowledge of the distribution and value

of the noise variance; that is, it does not require an estimate of the channel SNR.

The simplified version of the Soft Distance algorithm (SSD) offers even lower complexity than the logarithmic version of the SP algorithm (LogSP), but with essentially identical performance. The SSD algorithm therefore appears to be a very suitable Euclidean-metric SISO iterative decoding algorithm for use in a wide range of practical applications.

References:

1. FARRELL, P. G.: ‘Decoding Error-Control Codes with Soft Distance as the Metric’, Workshop on Mathematical Techniques in Coding Theory, Edinburgh, UK, 24th April 2008.

2. MACKAY, D. J. C. and NEAL, R. M.: ‘Near Shannon limit performance of low

density parity check codes’, Electronics Letters, 1997, 33, (6), pp. 457-458.

3. MACKAY, D. J. C.: ‘Good Error-Correcting Codes based on Very Sparse Matrices’, IEEE Transactions on Information Theory, 1999, IT-45, (2), pp. 399-431.

4. HAGENAUER, J. and PAPKE, L.: ‘Iterative Decoding of Binary Block and Convolutional Codes’, IEEE Transactions on Information Theory, 1996, 42, (2), pp. 429-445.

5. http://www.inference.phy.cam.ac.uk/mackay/CodesGallager.html.

6. ARNONE, L. J., GAYOSO, A., GONZÁLEZ, C. M. and CASTIÑEIRA MOREIRA, J.: ‘Sum-subtract fixed point LDPC logarithmic decoder’, Proceedings of the XI RPIC, Rio Cuarto, Argentina, September 2005, (1), pp. 8-9.

Authors’ affiliations:

P. G. Farrell (Lancaster University, United Kingdom). E-mail: Paddy.Farrell@virgin.net.

L. J. Arnone and J. Castiñeira Moreira (corresponding author), Electronics Department, Engineering School, Mar del Plata University, Mar del Plata, Argentina. E-mail: leoarn@fi.mdp.edu.ar, casti@fi.mdp.edu.ar.

Figure captions:

Fig. 1. Simulation of the BER performance of the $C_{LDPC}(1008, 504)$, $C_{LDPC}(273, 191)$ and $C_{LDPC}(96, 32)$ LDPC codes, decoded using the SSD or LogSP algorithms, for different numbers of iterations.

Table 1. Complexity of the SD, SSD, SP and LogSP algorithms.

Figure 1

[Plot: BER ($P_{be}$, from $10^0$ down to $10^{-5}$) versus $E_b/N_0$ (1 to 6 dB), showing curves for uncoded transmission, for the SSD and LogSP decoders applied to the $C_{LDPC}(1008, 504)$ code with 16 and 40 iterations, for the SSD decoder applied to the same code with 400 iterations, and for the SSD and LogSP decoders applied to the $C_{LDPC}(273, 191)$ and $C_{LDPC}(96, 32)$ codes with 16 iterations.]

Table 1

Operation         SD      SSD     SP (MacKay-Neal)   LogSP
Products          -       -       36N                -
Quotients         -       -       6N                 -
Sums              402N    72N     15N                78N
Comparisons       90N     21N     -                  24N
Look-up tables    18N     12N     -                  21N

