
Keyword Search Results

[Keyword] low-density parity-check code (39 hits)

Results 1-20 of 39

  • LDPC Codes for Communication Systems: Coding Theoretic Perspective Open Access

    Takayuki NOZAKI, Motohiko ISAKA
    INVITED SURVEY PAPER - Fundamental Theories for Communications
    Publicized: 2022/02/10  Vol: E105-B No:8  Page(s): 894-905

    Low-density parity-check (LDPC) codes are widely used in communication systems for their high error-correcting performance. This survey introduces the elements of LDPC codes: decoding algorithms, code construction, encoding algorithms, and several classes of LDPC codes.
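
    For readers new to the area, the sketch below shows the kind of decoding algorithm such a survey covers: a minimal log-domain sum-product (belief-propagation) decoder for a binary LDPC code over an AWGN channel. It is an illustrative sketch only; the tiny parity-check matrix, the noise level, and the iteration limit are assumptions, not material from the paper.

      import numpy as np

      def bp_decode(H, llr, max_iter=50):
          """Log-domain sum-product decoding of a binary LDPC code.

          H   : (m, n) parity-check matrix with entries in {0, 1}
          llr : channel log-likelihood ratios (positive values favour bit 0)
          """
          m, n = H.shape
          V = H * llr                            # variable-to-check messages (edges only)
          for _ in range(max_iter):
              # check-node update (tanh rule)
              T = np.tanh(np.clip(V, -30, 30) / 2.0)
              C = np.zeros_like(V)
              for c in range(m):
                  idx = np.flatnonzero(H[c])
                  for v in idx:
                      C[c, v] = 2.0 * np.arctanh(np.prod(T[c, idx[idx != v]]))
              # variable-node update and tentative hard decision
              total = llr + C.sum(axis=0)
              x_hat = (total < 0).astype(int)
              if not np.any(H @ x_hat % 2):      # all parity checks satisfied
                  return x_hat, True
              V = H * (total - C)                # pass extrinsic information only
          return x_hat, False

      # toy run: (7,4) Hamming code, all-zero codeword over an assumed AWGN channel
      H = np.array([[1, 1, 0, 1, 1, 0, 0],
                    [1, 0, 1, 1, 0, 1, 0],
                    [0, 1, 1, 1, 0, 0, 1]])
      rng = np.random.default_rng(0)
      sigma = 0.8
      received = 1.0 + sigma * rng.standard_normal(7)     # BPSK: bit 0 -> +1
      print(bp_decode(H, 2.0 * received / sigma**2))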

  • Parameters Estimation of Impulse Noise for Channel Coded Systems over Fading Channels

    Chun-Yin CHEN, Mao-Ching CHIU
    PAPER - Wireless Communication Technologies
    Publicized: 2021/01/18  Vol: E104-B No:7  Page(s): 903-912

    In this paper, we propose a robust parameter estimation algorithm for channel-coded systems based on low-density parity-check (LDPC) codes over fading channels with impulse noise. The estimated parameters are then used to generate bit log-likelihood ratios (LLRs) for a soft-input LDPC decoder. The expectation-maximization (EM) algorithm is used to estimate the parameters, including the channel gain and the parameters of the Bernoulli-Gaussian (B-G) impulse noise model. The parameters can be estimated accurately, and the average number of iterations of the proposed algorithm is acceptable. Simulation results show that, over a wide range of impulse noise power, the proposed algorithm approaches the optimal performance under different Rician channel factors and even under Middleton class-A (M-CA) impulse noise models.
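
    The paper estimates the channel gain jointly with the Bernoulli-Gaussian noise parameters; the sketch below shows only the core EM idea on noise-only samples, fitting a two-component zero-mean Gaussian mixture and reading off the impulse probability and variances. The initialisation, iteration count, and synthetic parameter values are assumptions, not taken from the paper.

      import numpy as np

      def em_bg_noise(r, iters=200):
          """EM fit of a two-component zero-mean Gaussian mixture to noise samples r,
          read as the Bernoulli-Gaussian model: background variance v0,
          background-plus-impulse variance v1, impulse probability p."""
          r2 = np.asarray(r) ** 2
          v0 = np.median(r2)                                   # crude initialisation
          v1 = np.mean(np.sort(r2)[-max(len(r2) // 10, 1):])   # mean of the largest 10% of r^2
          p = 0.1
          for _ in range(iters):
              # E-step: responsibility of the impulsive component for each sample
              n0 = (1 - p) * np.exp(-r2 / (2 * v0)) / np.sqrt(2 * np.pi * v0)
              n1 = p * np.exp(-r2 / (2 * v1)) / np.sqrt(2 * np.pi * v1)
              g1 = n1 / (n0 + n1)
              # M-step: update the mixture weight and the two variances
              p = g1.mean()
              v1 = (g1 * r2).sum() / g1.sum()
              v0 = ((1 - g1) * r2).sum() / (1 - g1).sum()
          return p, v0, max(v1 - v0, 0.0)          # impulse prob., sigma_g^2, sigma_i^2

      # synthetic check with assumed parameters
      rng = np.random.default_rng(1)
      N, p_true, var_g, var_i = 20000, 0.05, 0.1, 5.0
      impulses = rng.random(N) < p_true
      noise = rng.normal(0.0, np.sqrt(var_g), N) + impulses * rng.normal(0.0, np.sqrt(var_i), N)
      print(em_bg_noise(noise))                    # roughly (0.05, 0.1, 5.0)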

  • Fast Converging ADMM Penalized Decoding Method Based on Improved Penalty Function for LDPC Codes

    Biao WANG
    LETTER - Coding Theory
    Publicized: 2020/05/08  Vol: E103-A No:11  Page(s): 1304-1307

    For low-density parity-check (LDPC) codes, the penalized decoding method based on the alternating direction method of multipliers (ADMM) improves the decoding performance at low signal-to-noise ratios while keeping the decoding complexity low. Three effective techniques can increase the speed of ADMM penalized decoding: reducing the number of Euclidean projections, designing an effective penalty function, and selecting an appropriate layered scheduling strategy for message passing. To further increase the decoding speed, this paper designs a fast-converging ADMM penalized decoding method based on an improved penalty function, combined with a reduced number of Euclidean projections and a vertical layered scheduling strategy. Simulation results show that the proposed method not only improves the decoding performance but also reduces the average number of iterations and the average decoding time.

  • A Fast Iterative Check Polytope Projection Algorithm for ADMM Decoding of LDPC Codes by Bisection Method Open Access

    Yan LIN, Qiaoqiao XIA, Wenwu HE, Qinglin ZHANG
    LETTER - Information Theory
    Vol: E102-A No:10  Page(s): 1406-1410

    Linear programming (LP) decoding of low-density parity-check (LDPC) codes based on the alternating direction method of multipliers (ADMM) has lower complexity than the original LP decoding. However, ADMM-LP decoding is still limited by the computational complexity of the Euclidean projections onto the parity-check polytope. In this paper, we propose a bisection method iterative algorithm (BMIA) for projection onto the parity-check polytope that avoids the sorting operation and has linear complexity. In addition, the proposed algorithm converges more than three times as fast as the existing algorithm, and up to ten times as fast for high input dimensions.
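
    The following sketch conveys the bisection idea behind this kind of projection step (it is not the authors' exact BMIA): clip to the unit box, identify the single parity facet that can be violated, and, if it is violated, find the KKT multiplier by bisection. The iteration count and the toy input are assumptions.

      import numpy as np

      def project_check_polytope(v, iters=60):
          """Euclidean projection of v onto the check (parity) polytope
          conv{x in {0,1}^d : sum(x) even}, using bisection on the KKT multiplier."""
          v = np.asarray(v, dtype=float)
          z = np.clip(v, 0.0, 1.0)
          # the only parity facet that can be violated: theta = +1 on f, -1 elsewhere, |f| odd
          f = z > 0.5
          if f.sum() % 2 == 0:
              i = np.argmin(np.abs(z - 0.5))
              f[i] = ~f[i]
          theta = np.where(f, 1.0, -1.0)
          r = f.sum() - 1.0
          if theta @ z <= r:
              return z                             # box projection is already feasible
          # project onto the facet: u(beta) = clip(v - beta*theta, 0, 1), with beta
          # chosen so that theta @ u(beta) = r; the left side is monotone in beta
          lo, hi = 0.0, np.abs(v).max() + 1.0
          for _ in range(iters):
              beta = 0.5 * (lo + hi)
              if theta @ np.clip(v - beta * theta, 0.0, 1.0) > r:
                  lo = beta
              else:
                  hi = beta
          return np.clip(v - hi * theta, 0.0, 1.0)

      print(project_check_polytope([0.9, 0.8, 0.7]))   # pulled back onto the even-parity facet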

  • Spatially Coupled Low-Density Parity-Check Codes on Two-Dimensional Array Erasure Channel

    Gou HOSOYA, Hiroyuki YASHIMA
    PAPER - Coding theory and techniques
    Vol: E101-A No:12  Page(s): 2008-2017

    In this study, spatially coupled low-density parity-check (SC-LDPC) codes on the two-dimensional array erasure (2DAE) channel are devised, including a method for generating new SC-LDPC codes with a restriction on the check node constraint. A density evolution analysis confirms that the threshold of the proposed two-dimensional SC-LDPC code ensembles improves on that of the one-dimensional SC-LDPC code ensembles. We show that the BP threshold of the proposed codes can approach the corresponding maximum a posteriori (MAP) threshold of the original residual graph on the 2DAE channel. Moreover, we show that the rates of the residual graph of the two-dimensional LDPC block code ensemble are smaller than those of the one-dimensional LDPC block code ensemble. In other words, high performance can be obtained by choosing the two-dimensional SC-LDPC codes.
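
    For intuition on the density-evolution analysis used in such studies, the snippet below computes the BP threshold of an uncoupled (dv, dc)-regular LDPC ensemble on the ordinary one-dimensional binary erasure channel; the 2DAE channel and the coupled ensembles of the paper require a more elaborate recursion. The degrees, iteration limits, and tolerances are assumptions.

      def bp_threshold(dv, dc, tol=1e-6):
          """BP threshold of an uncoupled (dv, dc)-regular LDPC ensemble on the BEC,
          found by bisection on the channel erasure probability eps."""
          def converges(eps):
              x = eps
              for _ in range(20000):
                  x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
                  if x < 1e-12:
                      return True
              return False
          lo, hi = 0.0, 1.0
          while hi - lo > tol:
              mid = 0.5 * (lo + hi)
              lo, hi = (mid, hi) if converges(mid) else (lo, mid)
          return lo

      print(bp_threshold(3, 6))   # about 0.4294 for the (3,6)-regular ensemble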

  • Joint Iterative Decoding of Spatially Coupled Low-Density Parity-Check Codes for Position Errors in Racetrack Memories Open Access

    Ryo SHIBATA, Gou HOSOYA, Hiroyuki YASHIMA
    PAPER - Coding theory and techniques
    Vol: E101-A No:12  Page(s): 2055-2063

    Racetrack memory (RM) has attracted much attention. In RM, insertion and deletion (ID) errors, called position errors, occur as a result of an unstable reading process. In this paper, we first define a probabilistic channel model of ID errors in RM with multiple read-heads (RHs). Then, we propose a joint iterative decoding algorithm for spatially coupled low-density parity-check (SC-LDPC) codes over such a channel. We investigate the asymptotic behavior of SC-LDPC codes under the proposed decoding algorithm using density evolution (DE). With DE, we reveal the relationship between the number of RHs and the achievable information rates, along with the iterative decoding thresholds. The results show that increasing the number of RHs provides higher decoding performance, although the proposed decoding algorithm requires each codeword bit to be read only once regardless of the number of RHs. Moreover, we show the performance improvement obtained by adjusting the order of the SC-LDPC codeword bits in RM.

  • Efficient Early Termination Criterion for ADMM Penalized LDPC Decoder

    Biao WANG, Xiaopeng JIAO, Jianjun MU, Zhongfei WANG
    LETTER - Coding Theory
    Vol: E101-A No:3  Page(s): 623-626

    By tracking the rate at which hard decisions change between every two consecutive iterations of alternating direction method of multipliers (ADMM) penalized decoding, an efficient early termination (ET) criterion is proposed to improve the convergence rate of the ADMM penalized decoder for low-density parity-check (LDPC) codes. Compared to the existing ET criterion for ADMM penalized decoding, the proposed method significantly reduces the average number of iterations at low signal-to-noise ratios with negligible performance degradation.
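
    A minimal sketch of this type of criterion (the threshold value, function names, and the surrounding decoder loop are assumptions, not the paper's exact rule): track the fraction of hard decisions that changed between two consecutive iterations and stop when it is low enough, or when the hard decisions already form a codeword.

      import numpy as np

      def changed_fraction(x_prev, x_curr):
          """Fraction of hard decisions that changed between two consecutive iterations."""
          return np.mean(x_prev != x_curr)

      def early_terminate(x_prev, x_curr, H, change_thresh=0.0):
          """Stop when the hard decisions already form a codeword of H, or when the
          change rate between consecutive iterations drops to change_thresh or below."""
          if not np.any(H @ x_curr % 2):           # valid codeword: stop in any case
              return True
          return changed_fraction(x_prev, x_curr) <= change_thresh

    Inside an ADMM penalized decoding loop, x_prev and x_curr would be the hard decisions taken at iterations t-1 and t.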

  • An Efficient Weighted Bit-Flipping Algorithm for Decoding LDPC Codes Based on Log-Likelihood Ratio of Bit Error Probability

    Tso-Cho CHEN, Erl-Huei LU, Chia-Jung LI, Kuo-Tsang HUANG
    PAPER - Fundamental Theories for Communications
    Publicized: 2017/05/29  Vol: E100-B No:12  Page(s): 2095-2103

    In this paper, a weighted multiple bit-flipping (WMBF) algorithm for decoding low-density parity-check (LDPC) codes is proposed first. Then an improved WMBF algorithm, which we call the efficient weighted bit-flipping (EWBF) algorithm, is developed. The EWBF algorithm can dynamically choose either multiple-bit flipping or single-bit flipping in each iteration according to the log-likelihood ratios of the error probabilities of the received bits. Thus, it can efficiently increase the convergence speed of decoding and prevent the decoding process from falling into loop traps. Compared with the parallel weighted bit-flipping (PWBF) algorithm, the EWBF algorithm achieves significantly lower computational complexity without performance degradation when Euclidean geometry (EG) LDPC codes are decoded. Furthermore, the flipping criterion does not require any parameter adjustment.
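
    As background, the sketch below implements the classical single-flip weighted bit-flipping (WBF) rule that this line of work builds on; the paper's EWBF additionally switches between single and multiple flips using an LLR-based criterion, which is not reproduced here. The parity-check matrix and received vector are illustrative assumptions.

      import numpy as np

      def wbf_decode(H, y, max_iter=100):
          """Classical single-flip weighted bit-flipping (WBF) decoding.

          H : (m, n) parity-check matrix; y : received BPSK values (bit 0 -> +1, bit 1 -> -1).
          """
          m, n = H.shape
          z = (y < 0).astype(int)                                   # initial hard decisions
          # reliability of each check: smallest |y| among its participating bits
          w = np.array([np.min(np.abs(y)[H[c] == 1]) for c in range(m)])
          for _ in range(max_iter):
              s = H @ z % 2
              if not s.any():
                  return z, True
              # flipping metric: failed checks (s=1) vote +w, satisfied checks vote -w
              E = H.T @ ((2 * s - 1) * w)
              z[np.argmax(E)] ^= 1                                  # flip the most suspect bit
              # (a multiple-flip variant would flip every bit whose metric exceeds a threshold)
          return z, False

      # toy run with an assumed small parity-check matrix; bit 1 is received in error
      H = np.array([[1, 1, 0, 1, 0, 0],
                    [0, 1, 1, 0, 1, 0],
                    [1, 0, 1, 0, 0, 1]])
      y = np.array([0.9, -0.2, 1.1, 0.8, 0.7, 1.0])
      print(wbf_decode(H, y))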

  • Linear Programming Decoding of Binary Linear Codes for Symbol-Pair Read Channel

    Shunsuke HORII, Toshiyasu MATSUSHIMA, Shigeichi HIRASAWA
    PAPER - Coding Theory and Techniques
    Vol: E99-A No:12  Page(s): 2170-2178

    In this study, we develop a new algorithm for decoding binary linear codes for the symbol-pair read channel. The symbol-pair read channel was recently introduced by Cassuto and Blaum to model channels whose write resolution is higher than their read resolution. The proposed decoding algorithm is based on linear programming (LP). For LDPC codes, the proposed algorithm runs in time polynomial in the codeword length. It is proved that the proposed LP decoder has the maximum-likelihood (ML) certificate property, i.e., the output of the decoder is guaranteed to be the ML codeword when it is integral. We also introduce the fractional pair distance d_fp of the code, which is a lower bound on the minimum pair distance. It is proved that the proposed LP decoder corrects up to ⌈d_fp/2⌉ - 1 errors.
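
    For readers unfamiliar with LP decoding, the sketch below shows the standard Feldman-style LP relaxation over the fundamental polytope for a memoryless channel, including the integrality check behind the ML-certificate idea; the paper's formulation for the symbol-pair read channel is different and is not reproduced here. The small parity-check matrix and LLR values are assumptions, and scipy is used only for convenience.

      import itertools
      import numpy as np
      from scipy.optimize import linprog

      def lp_decode(H, gamma):
          """Feldman-style LP decoding over the fundamental polytope.

          H     : (m, n) parity-check matrix
          gamma : channel log-likelihood ratios (positive favours bit 0); minimising
                  gamma @ x over the relaxation is the LP decoding problem
          Returns (x, is_integral); an integral solution is certified to be the ML codeword.
          """
          m, n = H.shape
          A_ub, b_ub = [], []
          for c in range(m):
              N = np.flatnonzero(H[c])
              # for every odd-sized subset S of the check's neighbourhood:
              #   sum_{i in S} x_i - sum_{i in N\S} x_i <= |S| - 1
              # (enumeration is exponential in the check degree, fine for low-density codes)
              for size in range(1, len(N) + 1, 2):
                  for S in itertools.combinations(N, size):
                      row = np.zeros(n)
                      row[list(N)] = -1.0
                      row[list(S)] = 1.0
                      A_ub.append(row)
                      b_ub.append(size - 1.0)
          res = linprog(gamma, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                        bounds=[(0.0, 1.0)] * n, method="highs")
          x = res.x
          return x, bool(np.all(np.minimum(x, 1.0 - x) < 1e-6))

      H = np.array([[1, 1, 0, 1, 0, 0],
                    [0, 1, 1, 0, 1, 0],
                    [1, 0, 1, 0, 0, 1]])
      gamma = np.array([2.1, -0.4, 1.8, 1.5, 1.2, 2.0])   # assumed LLRs, one unreliable bit
      print(lp_decode(H, gamma))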

  • A 5.83pJ/bit/iteration High-Parallel Performance-Aware LDPC Decoder IP Core Design for WiMAX in 65nm CMOS

    Xiongxin ZHAO, Zhixiang CHEN, Xiao PENG, Dajiang ZHOU, Satoshi GOTO
    PAPER - High-Level Synthesis and System-Level Design
    Vol: E96-A No:12  Page(s): 2623-2632

    In this paper, we propose a synthesizable LDPC decoder IP core for the WiMAX system with high parallelism and enhanced error-correcting performance. By taking advantage of both layered scheduling and a fully-parallel architecture, the decoder fully supports the multi-mode decoding specified in WiMAX with parallelism much higher than that of commonly used partial-parallel layered LDPC decoder architectures. The 6-bit quantized messages are split in a bit-serial style, and 2-bit-wide serial processing lines work concurrently, so that only 3 cycles are required to decode one layer. As a result, 12∼24 cycles are enough to process one iteration for all the code rates specified in WiMAX. Compared to our previous bit-serial decoder, it doubles the parallelism and solves the message saturation problem of bit-serial arithmetic, with a minor increase in gate count. Power synthesis results show that the proposed decoder achieves 5.83pJ/bit/iteration energy efficiency, a 46.8% improvement over state-of-the-art work. Furthermore, an advanced dynamic quantization (ADQ) technique is proposed to enhance the error-correcting performance of the layered decoder architecture. With about 2% area overhead, 6-bit ADQ achieves error-correcting performance close to that of 7-bit fixed quantization, with an improved error floor.

  • Hybrid Message-Passing Algorithm and Architecture for Decoding Cyclic Non-binary LDPC Codes

    Yichao LU, Gang HE, Guifen TIAN, Satoshi GOTO
    PAPER - High-Level Synthesis and System-Level Design
    Vol: E96-A No:12  Page(s): 2652-2659

    Recently, non-binary low-density parity-check (NB-LDPC) codes have started to show their superiority in achieving significant coding gains at moderate codeword lengths. However, their overwhelming decoding complexity keeps NB-LDPC codes from being widely employed in modern communication devices. This paper proposes a hybrid message-passing decoding algorithm with very low computational complexity that achieves error performance competitive with the conventional Min-max algorithm. Simulation results on a (255,174) cyclic code show that the algorithm obtains at least 0.5 dB of coding gain over other state-of-the-art low-complexity NB-LDPC decoding algorithms. A partial-parallel decoder architecture for cyclic NB-LDPC codes is also developed based on this algorithm. Optimization schemes are employed to remove hard-decision symbols from the RAMs and to store only part of the reliability messages. In addition, the variable node units are redesigned specifically for the proposed algorithm. Synthesis results demonstrate that about 24.3% of the gates and 12% of the memory can be saved compared with previous works.

  • Design and Implementation of Long High-Rate QC-LDPC Codes and Its Applications to Optical Transmission Systems

    Norifumi KAMIYA, Yoichi HASHIMOTO, Masahiro SHIGIHARA
    PAPER - Fundamental Theories for Communications
    Vol: E96-B No:6  Page(s): 1402-1411

    In this paper, we present a novel class of long quasi-cyclic low-density parity-check (QC-LDPC) codes. Each code in this class has a structure formed by concatenating single-parity-check codes with shorter QC-LDPC codes, which allows for efficient, high-throughput encoder/decoder implementations. Using a code in this class, we design a forward error correction (FEC) scheme for optical transmission systems and present its high-throughput encoder/decoder architecture. To demonstrate its feasibility, we implement the architecture on a field-programmable gate array (FPGA) platform. We show, by both FPGA-based simulations and measurements of an optical transmission system, that the FEC scheme achieves excellent error performance and that the structural constraint, which makes an efficient high-throughput implementation feasible, causes no significant performance degradation.

  • Iterative Decoding for the Davey-MacKay Construction over IDS-AWGN Channel

    Xiaopeng JIAO, Jianjun MU, Rong SUN
    LETTER - Coding Theory
    Vol: E96-A No:5  Page(s): 1006-1009

    Turbo equalization is an iterative equalization and decoding technique that can achieve impressive performance gains in communication systems. In this letter, we investigate turbo equalization for decoding the Davey-MacKay (DM) construction over the IDS-AWGN channel, i.e., an insertion, deletion, and substitution (IDS) channel cascaded with an additive white Gaussian noise (AWGN) channel. The inner decoder of the DM construction can be seen as a maximum a posteriori (MAP) detector; when turbo equalization is used, it receives the beliefs generated by the outer LDPC decoder. Two decoding schemes with different inner decoders, namely a hard-input inner decoder and a soft-input inner decoder, are investigated. Simulation results show that both decoders obtain significant performance gains with respect to the insertion/deletion probability at different SNR values.

  • A Numerical Evaluation of Entanglement Sharing Protocols Using Quantum LDPC CSS Codes

    Masakazu YOSHIDA, Manabu HAGIWARA, Takayuki MIYADERA, Hideki IMAI
    PAPER - Information Theory
    Vol: E95-A No:9  Page(s): 1561-1569

    Entangled states play crucial roles in quantum information theory and its applied technologies. In various protocols, such as quantum teleportation and quantum key distribution, a good entangled state shared by a pair of distant players is indispensable. In this paper, we numerically examine entanglement sharing protocols using quantum LDPC CSS codes. The sum-product decoding method enables us to detect uncorrectable errors, and thus two protocols are considered: the Detection and Resending (DR) protocol and the Non-Detection (ND) protocol. In the DR protocol, the players abort and repeat the protocol if they detect uncorrectable errors, whereas in the ND protocol they do not abort. We show that the DR protocol yields a smaller error rate than the ND protocol. In addition, it is shown that rather high reliability can be achieved by the DR protocol with quantum LDPC CSS codes.

  • Weight Distributions of Non-binary LDPC Codes

    Kenta KASAI, Charly POULLIAT, David DECLERCQ, Kohichi SAKANIWA
    PAPER - Coding Theory
    Vol: E94-A No:4  Page(s): 1106-1115

    In this paper, we study the average symbol- and bit-weight distributions for ensembles of non-binary low-density parity-check codes defined over GF(2^p). Moreover, we derive the asymptotic exponential growth rate of the weight distributions in the limit of large code length. Interestingly, we show that the normalized typical minimum distance does not monotonically increase with the size of the field.
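
    As a point of reference (standard notation, not necessarily the paper's own), the asymptotic exponential growth rate of an ensemble-average weight distribution is commonly defined as

      r(\omega) = \lim_{n \to \infty} \frac{1}{n} \log \overline{A}_n(\lfloor \omega n \rfloor), \qquad 0 \le \omega \le 1,

    where \overline{A}_n(w) is the average number of codewords of weight w in the length-n ensemble and \omega = w/n is the normalized weight. The normalized typical minimum distance mentioned above is, loosely, the first positive \omega at which r(\omega) crosses zero.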

  • Weight Distributions of Multi-Edge Type LDPC Codes

    Kenta KASAI, Tomoharu AWANO, David DECLERCQ, Charly POULLIAT, Kohichi SAKANIWA
    PAPER - Coding Theory
    Vol: E93-A No:11  Page(s): 1942-1948

    Multi-edge type LDPC codes, introduced by Richardson and Urbanke, form a general class of structured LDPC codes. In this paper, we derive the average weight distributions of multi-edge type LDPC code ensembles. Furthermore, we investigate the asymptotic exponential growth rate of the average weight distributions and its connection to the stability condition of density evolution.

  • Continuous BP Decoding Algorithm for a Low-Density Parity-Check Coded Hybrid ARQ System

    Sangjoon PARK, Sooyong CHOI, Seung-Hoon HWANG
    LETTER - Fundamental Theories for Communications
    Vol: E93-B No:4  Page(s): 993-996

    A continuous belief propagation (BP) decoding algorithm for a hybrid automatic repeat request (ARQ) system is proposed in this paper. The proposed algorithm reuses the extrinsic information generated in the last iteration of the previous transmission, so that decoding progresses continuously across retransmissions. This allows the continuous BP decoding algorithm to accelerate decoding convergence toward codeword determination, especially when the number of retransmissions is large or the currently combined packet has punctured nodes. Simulation results verify the effectiveness of the proposed continuous BP decoding algorithm.

  • A Hybrid ARQ Scheme Based on Rate-Compatible Low-Density Parity-Check Codes by Shortening and Extending

    Toshihiko OKAMURA
    PAPER - Coding Theory
    Vol: E92-A No:11  Page(s): 2883-2890

    Incremental redundancy hybrid ARQ (IR-HARQ) based on rate-compatible punctured low-density parity-check (LDPC) codes can achieve high throughput over a wide range of SNRs. One drawback of such IR-HARQ schemes is the high computational complexity of decoding the early, high-rate transmissions. To overcome this problem, this paper presents a HARQ scheme based on rate-compatible LDPC codes constructed by shortening and extending. In the scheme, a high-rate mother code is transmitted first, and parity bits of a shortened code are transmitted for early retransmission requests. With a low-complexity decoder for the high-rate mother code, this shortened-code approach results in low decoding complexity, but the shortened codes to be decoded become shorter and more numerous as retransmissions repeat. To prevent the resulting degradation of performance and complexity, extending is applied to the shortened codes after a predetermined number of retransmissions. A multi-edge type code design is employed to construct irregular LDPC codes that meet the requirements of the HARQ scheme. Simulation results show that the HARQ scheme achieves lower decoding complexity than a conventional IR-HARQ scheme while maintaining good throughput over a wide range of SNRs.

  • Adaptive Decoding Algorithms for Low-Density Parity-Check Codes over the Binary Erasure Channel

    Gou HOSOYA, Hideki YAGI, Manabu KOBAYASHI, Shigeichi HIRASAWA
    PAPER - Coding Theory
    Vol: E92-A No:10  Page(s): 2418-2430

    Two decoding procedures combined with a belief-propagation (BP) decoding algorithm for low-density parity-check codes over the binary erasure channel are presented. These algorithms continue the decoding procedure after the BP decoding algorithm terminates. We derive a condition under which our decoding algorithms can correct an erased bit that is uncorrectable by the BP decoding algorithm. Simulation results show that the performance of our decoding algorithms is enhanced compared with that of the BP decoding algorithm, with little increase in decoding complexity.
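
    For context, the snippet below is the standard BP (peeling) decoder over the binary erasure channel that the proposed procedures extend: it repeatedly resolves checks containing exactly one erased bit and stops when no such check remains, which is exactly the point where additional processing of the kind described above would take over. The small parity-check matrix and erasure pattern are assumptions.

      import numpy as np

      def bec_peeling_decode(H, y):
          """Standard BP (peeling) decoding over the binary erasure channel.

          y uses 0/1 for known bits and -1 for erasures. Returns (estimate, success).
          """
          x = np.array(y, dtype=int)
          progress = True
          while progress and np.any(x == -1):
              progress = False
              for c in range(H.shape[0]):
                  idx = np.flatnonzero(H[c])
                  erased = idx[x[idx] == -1]
                  if len(erased) == 1:                     # a check with a single erasure
                      known = idx[x[idx] != -1]
                      x[erased[0]] = x[known].sum() % 2    # solve its parity equation
                      progress = True
          return x, not np.any(x == -1)

      H = np.array([[1, 1, 0, 1, 0, 0],
                    [0, 1, 1, 0, 1, 0],
                    [1, 0, 1, 0, 0, 1]])
      print(bec_peeling_decode(H, [0, -1, -1, 0, 1, 1]))   # two erasures, both recovered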

  • Complexity-Reducing Algorithm for Serial Scheduled Min-Sum Decoding of LDPC Codes

    Hironori UCHIKAWA, Kohsuke HARADA
    PAPER - Coding Theory
    Vol: E92-A No:10  Page(s): 2411-2417

    We propose a complexity-reducing algorithm for serial scheduled min-sum decoding that reduces the number of check nodes processed during an iteration. The check nodes to skip are chosen based on the reliability of the incoming messages, namely the syndrome and the log-likelihood-ratio (LLR) values. The proposed algorithm is evaluated by computer simulations and shown to reduce decoding complexity by about 20% compared with conventional serial scheduled min-sum decoding, at the cost of a small, fractional-decibel degradation in error-correction performance.
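
    A rough sketch of the idea (the skip rule, threshold, and toy code below are assumptions, not the authors' exact criterion): a serial (layered) min-sum decoder that skips a check node when its current hard decisions already satisfy the parity check and the least reliable incoming message is sufficiently large.

      import numpy as np

      def layered_minsum_decode(H, llr, max_iter=20, skip_thresh=3.0):
          """Serial (layered) min-sum decoding with a simple check-skipping rule:
          a check node is skipped when its hard decisions satisfy parity and the
          smallest incoming |LLR| exceeds skip_thresh."""
          m, n = H.shape
          L = llr.astype(float)                    # running posterior LLRs
          R = np.zeros((m, n))                     # check-to-variable messages
          for _ in range(max_iter):
              for c in range(m):
                  idx = np.flatnonzero(H[c])
                  Q = L[idx] - R[c, idx]           # variable-to-check messages
                  if (L[idx] < 0).sum() % 2 == 0 and np.abs(Q).min() > skip_thresh:
                      continue                     # reliable and satisfied: skip this check
                  sgn = np.sign(Q + 1e-12)
                  mags = np.abs(Q)
                  m1, m2 = np.partition(mags, 1)[:2]          # two smallest magnitudes
                  new = np.where(mags == m1, m2, m1) * sgn.prod() * sgn
                  R[c, idx] = new
                  L[idx] = Q + new                 # immediate (layered) update
              x_hat = (L < 0).astype(int)
              if not np.any(H @ x_hat % 2):
                  return x_hat, True
          return x_hat, False

      H = np.array([[1, 1, 0, 1, 1, 0, 0],
                    [1, 0, 1, 1, 0, 1, 0],
                    [0, 1, 1, 1, 0, 0, 1]])
      llr = np.array([2.5, -0.3, 1.9, 2.2, 1.4, 2.8, 1.1])   # assumed LLRs, bit 1 in error
      print(layered_minsum_decode(H, llr))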
