NOLD: A Neural-Network Optimized Low-Resolution Decoder for LDPC Codes

Lei Chu, Huanyu He, Ling Pei, and Robert C. Qiu



Abstract: The min-sum (MS) algorithm can decode low-density parity-check (LDPC) codes with low computational complexity at the cost of a slight performance loss. Quantizing the floating-point belief messages (i.e., the check-to-variable and variable-to-check messages) into low-resolution (i.e., 2–4 bit) versions is an effective way to realize hardware implementations of the min-sum decoder. However, such quantization can lead to severe performance degradation due to the finite-precision effect. In this paper, we propose a neural-network optimized low-resolution decoding (NOLD) algorithm for LDPC codes to deal with this problem. Specifically, the decoding parameters (i.e., the scaling factors and the quantization step) are optimized in a hybrid way, in which we concatenate a NOLD decoder with a customized neural network. In the proposed method, the learnable parameters associated with the decoding parameters are assigned to the corresponding neurons. Moreover, we design a new activation function whose outputs are close to those of the employed quantizer once the network parameters have been optimized off-line. Finally, the performance of the proposed method is verified by extensive experiments. For 2-bit decoding, the proposed approach significantly outperforms several conventional decoders at the expense of slightly increased off-line training time. Furthermore, the proposed method with 4-bit quantization incurs only 0.1 dB performance loss compared with the floating-point min-sum decoder at a coded bit-error rate of 10⁻⁵. We also show that the proposed NOLD decoder works over a wide range of channel conditions for both regular and irregular LDPC codes. Simulation code for reproducing the results is publicly available¹.

Index Terms:
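The abstract mentions an activation function whose outputs approximate those of the employed quantizer after off-line training. The paper's exact construction is not given here, so the following is only a minimal illustrative sketch of the general idea: a hard uniform quantizer next to a differentiable "soft staircase" surrogate built from shifted tanh functions, which approaches the hard quantizer as a temperature parameter grows. All function names, the `step`/`n_bits`/`temperature` parameters, and the tanh construction are assumptions for illustration, not the authors' method.

```python
import numpy as np

def uniform_quantize(x, step, n_bits):
    # Hard mid-tread uniform quantizer (illustrative, not the paper's exact one):
    # round to the nearest multiple of `step`, then clip to the n_bits range.
    max_level = (2 ** (n_bits - 1) - 1) * step
    return np.clip(np.round(x / step) * step, -max_level, max_level)

def soft_quantize(x, step, n_bits, temperature=10.0):
    # Differentiable surrogate: a staircase assembled from shifted tanh terms.
    # Each tanh contributes one step of height `step` centered between two
    # adjacent quantization levels; larger `temperature` sharpens the steps,
    # so the output approaches the hard quantizer while gradients still flow
    # during off-line training.
    levels = 2 ** (n_bits - 1) - 1
    y = np.zeros_like(x, dtype=float)
    for k in range(-levels, levels):
        y += 0.5 * step * (1.0 + np.tanh(temperature * (x - (k + 0.5) * step)))
    return y - levels * step
```

With a high temperature, the soft version matches the hard quantizer away from the step boundaries; at training time one would use a moderate temperature so the stairs stay smooth enough to back-propagate through.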