Akira SHIOZAKI Masashi KISHIMOTO Genmon MARUOKA
This letter proposes extended single parity check product codes and presents their empirical performance on a Gaussian channel under the belief propagation (BP) decoding algorithm. The simulation results show that the codes can achieve close-to-capacity performance at high coding rates. The code of length 9603 and rate 0.96 is only 0.77 dB away from the Shannon limit at a BER of 10^-5.
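The encoding of a basic (non-extended) single parity check product code can be sketched as follows: the information bits form a k-by-k array, each row and each column receives a parity bit, and a final bit checks the parities themselves. This is a minimal illustration of the code family, not the extended construction of the letter; the function name is hypothetical.

```python
import numpy as np

def spc_product_encode(info_bits, k):
    """Encode a k*k information array into a (k+1)*(k+1) single parity
    check product codeword: one parity bit per row, one per column,
    plus a parity-on-parity bit in the corner."""
    a = np.asarray(info_bits, dtype=int).reshape(k, k)
    code = np.zeros((k + 1, k + 1), dtype=int)
    code[:k, :k] = a
    code[:k, k] = a.sum(axis=1) % 2   # row parities
    code[k, :k] = a.sum(axis=0) % 2   # column parities
    code[k, k] = a.sum() % 2          # consistent with both row and column parity
    return code

cw = spc_product_encode([1, 0, 1, 1], k=2)
assert all(row.sum() % 2 == 0 for row in cw)    # every row has even parity
assert all(col.sum() % 2 == 0 for col in cw.T)  # every column as well
```

Each row and column being a single parity check code is what makes the bipartite graph of the product code well suited to BP decoding.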
Dae-Ki HONG Hyun-Seo OH Bub-Joo KANG
In this letter, a simple product code is proposed for constant-amplitude biorthogonal multicode (CABM) modulation. In CABM modulation, vertical redundant bits are used for constant-amplitude coding. The proposed product code is constructed by adding horizontal redundant bits. The hardware complexity of the encoder-decoder pair is very low. Simulation results show that the bit error rate performance of the system with the proposed coding scheme is improved compared with conventional CABM demodulation.
Hitoshi TOKUSHIGE Marc FOSSORIER Tadao KASAMI
This letter deals with an iterative decoding algorithm (IDA) for product codes. In the IDA, a soft-input soft-output iterative bounded-distance, encoding-based decoding algorithm is used for the component codes. Simulation results over an AWGN channel with BPSK modulation are presented and show the effectiveness of the IDA.
Morteza HIVADI Morteza ESMAEILI
The stopping distance and stopping redundancy of binary linear product block codes are studied. The relationship between stopping sets in a few parity-check matrices of a given product code C and those in the parity-check matrices of the component codes is determined. It is shown that the stopping distance of a particular parity-check matrix of C, denoted Hp, is equal to the product of the stopping distances of the associated constituent parity-check matrices. Upper bounds on the stopping redundancy of C are derived. For each minimum distance d=2^r, r ≥ 1, a sequence of [n,k,d] optimal stopping redundancy binary codes is given such that k/n tends to 1 as n tends to infinity.
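A stopping set is a nonempty set S of columns of a parity-check matrix H such that no row of the submatrix restricted to S has weight exactly 1; the stopping distance is the size of the smallest such set. The product rule quoted in the abstract can be checked by brute force on a tiny example. Here Hp is assumed to be the natural parity-check matrix built from the row and column checks of a 3x3 product of two single parity check codes; the function below is an illustrative exhaustive search, practical only for very short codes.

```python
from itertools import combinations
import numpy as np

def stopping_distance(H):
    """Smallest nonempty set S of columns of H such that no row of
    H[:, S] has weight exactly 1 (exhaustive search)."""
    H = np.asarray(H)
    n = H.shape[1]
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            if not np.any(H[:, S].sum(axis=1) == 1):
                return size
    return None

# [3,2,2] single parity check code: stopping distance 2
H_spc = [[1, 1, 1]]
assert stopping_distance(H_spc) == 2

# Row/column parity-check matrix of the 3x3 SPC x SPC product code
# (9 codeword positions, 6 checks)
Hp = []
for r in range(3):                       # row parity checks
    row = [0] * 9
    for c in range(3):
        row[3 * r + c] = 1
    Hp.append(row)
for c in range(3):                       # column parity checks
    col = [0] * 9
    for r in range(3):
        col[3 * r + c] = 1
    Hp.append(col)
assert stopping_distance(Hp) == 4        # 2 * 2, matching the product rule
```

The smallest stopping set of Hp is a 2x2 rectangle of positions, mirroring the fact that the minimum-weight codewords of a product code are products of minimum-weight component codewords.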
The history of forward error correction in optical communications is reviewed. The various types of FEC are classified into three generations. The first-generation FEC was the first to succeed in submarine systems, when the use of RS(255, 239) became widespread as ITU-T G.975, and as G.709 for terrestrial systems. As WDM systems matured, a quest began for a stronger second-generation FEC. Several types of concatenated code were proposed for this and were installed in commercial systems. The advent of third-generation FEC opened up new vistas for the next generation of optical communication systems. Thanks to soft decision decoding and block turbo codes, a net coding gain of 10.1 dB has been demonstrated experimentally. This has brought a number of benefits to existing systems. Each generation of FEC is compared in terms of the ultimate coding gain, and the Shannon limit is discussed for both hard and soft decision decoding. Several functionalities employing the FEC framing are introduced, such as overall wrapping by the FEC frame, which enables the asynchronous multiplexing of different clients' data. Fast polarization scrambling with FEC was effective in mitigating polarization mode dispersion, and the error monitor function proved useful for the adaptive equalization of both chromatic dispersion and PMD.
Sooyoung KIM Jae Moung KIM Sung Pal LEE
Rate compatible (RC) codes are used in adaptive coding schemes and hybrid ARQ schemes in order to adapt to varying channel conditions. This can improve overall service quality or system throughput. Conventional RC codes have usually been designed on the basis of convolutional codes. This letter proposes an efficient RC code based on block codes. We use a high-dimensional product code and divide it into an information block and a number of parity blocks. We form RC product codes using various combinations of these blocks. Because the RC product codes can be decoded iteratively, they act as block turbo codes and can be used efficiently in hybrid ARQ schemes.
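The idea of forming rates from combinations of blocks can be sketched for a two-dimensional product of two identical (n, k) codes: the codeword array splits into a k*k information block, a row-parity block, and a column-parity block (with checks-on-checks). Sending more parity blocks lowers the rate. This is a hypothetical illustration of the principle, not the authors' exact high-dimensional construction; the function name and block grouping are assumptions.

```python
def rc_rates(k, n):
    """Rates obtained by transmitting the k*k information block plus
    0, 1, or 2 of the parity blocks of an (n,k) x (n,k) product code
    (checks-on-checks counted with the second parity block)."""
    info = k * k
    parity_sent = [0,                               # information only (uncoded)
                   k * (n - k),                     # + row-parity block
                   k * (n - k) + (n - k) * n]       # + column parities and checks-on-checks
    return [info / (info + p) for p in parity_sent]

# With BCH(63, 57) components the family spans three rates:
print(rc_rates(57, 63))
```

In a hybrid ARQ setting, each retransmission request would be answered with the next parity block, so the receiver's accumulated code steps down through this rate family.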
Zongwang LI Youyun XU Wentao SONG
This paper presents an iterative algorithm for decoding product codes based on syndrome decoding of the component codes. This algorithm is devised to achieve an effective trade-off between error performance and decoding complexity. A simplified list decoding algorithm for linear block codes, which uses a modified syndrome decoding method, is devised to deliver soft outputs for iterative decoding of product codes. By adjusting the size of the list, the decoder can achieve a proper trade-off between decoding complexity and performance. Compared to other iterative decoding algorithms for product codes, the proposed algorithm has lower complexity while offering at least the same performance, as demonstrated by analysis and simulation. The proposed algorithm has been simulated for BPSK and 16-QAM modulations over both additive white Gaussian noise (AWGN) and Rayleigh fading channels. This paper also presents an efficient scheme for applying product codes and their punctured versions. This scheme can be implemented with variable packet sizes and channel data blocks.
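Plain syndrome decoding of a component code, the building block the paper starts from, can be sketched with the (7,4) Hamming code, whose parity-check matrix has column i equal to the binary representation of i, so the syndrome directly indexes the error position. This shows only single-error syndrome decoding; the paper's modified, list-producing variant is not reproduced here.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column i is the
# binary expansion of i, so the syndrome equals the error position.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def syndrome_decode(r):
    """Correct at most one bit error in a received 7-bit word."""
    r = np.array(r) % 2
    s = H @ r % 2
    pos = int(s[0] + 2 * s[1] + 4 * s[2])  # read syndrome as a binary position
    if pos:
        r[pos - 1] ^= 1                    # flip the flagged bit
    return r

# A single error at position 3 is located and corrected:
assert list(syndrome_decode([0, 0, 1, 0, 0, 0, 0])) == [0] * 7
```

A list decoder in this spirit would retain several candidate codewords per component word instead of one, and derive soft outputs from the metric differences between list entries.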
We examine a concatenated code which consists of a rate 1/2, 4-state turbo code (the inner code) and a single-parity-check product code (the outer code), and discuss a decoding structure called the double concatenated decoding scheme. Our Monte Carlo simulation trials show the advantage of the concatenated codes over turbo codes alone. Specifically, with an interleaver of 4096 bits, the Eb/No required to obtain a BER of 10^-6 is about 1.45 dB for the concatenated code, whereas it is more than 2.5 dB for the turbo code alone, an improvement of about 1 dB. This improvement in Eb/No was also obtained for interleavers of 8192 and 2048 bits. Therefore, concatenated codes using the double concatenated decoding scheme can mitigate the BER flattening observed in the decoding of turbo codes.
Nadine CHAPALAIN Nathalie Le HENO Damien CASTELAIN Ramesh Mahendra PYNDIAH
In this paper, the iterative decoding of BCH product codes, also called Block Turbo Codes (BTC), is evaluated for the HIPERLAN/2 OFDM system. Simulations show that expurgated BCH codes should be chosen as constituent codes in order to outperform the specified convolutional code. We also show that the bit-by-bit frequency interleaver has a significant impact on the behaviour of the turbo decoding process, and that increasing its size, together with time diversity, leads to good performance compared with the convolutional code.
Toshiyuki SHOHON Yoshihiro SOUTOME Haruo OGIWARA
A simple method for computing the soft values used in iterative soft-decision decoding is proposed. For the product code composed of BCH(63, 57) codes and that composed of BCH(63, 45) codes, the computation time with the proposed method is 1/15 to 1/6 of that with a method based on the Chase algorithm. The bit error rate (BER) performance with the proposed method is at most 0.8 dB inferior to that with the Chase-based method at BER = 10^-5.
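The Chase-based baseline referred to above flips all patterns on the p least reliable hard-decision bits, hard-decodes each test pattern, and keeps the candidate codeword closest to the received soft values. A generic sketch over the (7,4) Hamming code follows (this is the standard Chase-2 procedure, not the letter's faster proposed method; the parameter p and the BPSK mapping 0 -> +1, 1 -> -1 are assumptions for illustration).

```python
import numpy as np
from itertools import product

H = np.array([[1, 0, 1, 0, 1, 0, 1],   # (7,4) Hamming parity-check matrix:
              [0, 1, 1, 0, 0, 1, 1],   # column i is the binary expansion
              [0, 0, 0, 1, 1, 1, 1]])  # of i, so the syndrome locates errors

def hard_decode(v):
    """Single-error syndrome decoder used inside the Chase loop."""
    v = v.copy()
    s = H @ v % 2
    pos = int(s[0] + 2 * s[1] + 4 * s[2])
    if pos:
        v[pos - 1] ^= 1
    return v

def chase_decode(r, p=3):
    """Chase-2: hard-decode all 2**p flips of the p least reliable
    positions and keep the codeword nearest the soft input r."""
    r = np.asarray(r, dtype=float)
    hard = (r < 0).astype(int)               # BPSK hard decisions
    lrp = np.argsort(np.abs(r))[:p]          # p least reliable positions
    best, best_metric = None, np.inf
    for flips in product([0, 1], repeat=p):
        test = hard.copy()
        test[lrp] = (test[lrp] + flips) % 2
        cand = hard_decode(test)
        metric = np.sum((1 - 2 * cand - r) ** 2)  # Euclidean distance to r
        if metric < best_metric:
            best, best_metric = cand, metric
    return best

# One unreliable, wrongly signed sample is still decoded correctly:
r = [0.9, 1.1, 0.8, -0.2, 1.0, 0.7, 1.2]
assert list(chase_decode(r)) == [0] * 7
```

The cost of this baseline is dominated by the 2^p hard-decoding passes per component word, which is what a simplified soft-value computation aims to avoid.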
Katsumi SAKAKIBARA Masao KASAHARA
Two types of multicast error control protocols based on a product code structure, with or without interleaving, are considered. The performance of these protocols is analyzed on burst error channels modeled by Gilbert's two-state Markov chain. The numerical results reveal that interleaving does not always improve the performance of the protocol proposed in Part .
Katsumi SAKAKIBARA Masao KASAHARA
A multicast error control protocol proposed by Metzner is generalized, and the performance of the proposed protocol on random error channels (binary symmetric channels) is analyzed. The proposed protocol adopts an encoding procedure based on a product code structure, which enables each destined user terminal to decode the received frames with the Reddy-Robinson algorithm. As a result, the performance degradation due to re-broadcasting replicas of previously transmitted frames can be circumvented. The numerical results of the analysis and the simulation indicate that the proposed protocol yields higher throughput and less throughput degradation as the number of destined terminals increases.