Turbo codes suffer from high decoding latency, which hinders their use in many communication systems. Parallel decodable turbo codes (PDTCs) are suitable for parallel decoding and hence have low latency. In this article, we analyze the worst-case minimum distance of parallel decodable turbo codes with both the S-random interleaver and the memory-collision-free Row-Column S-random interleaver. The effect of the minimum distance on code performance is determined through computer simulations.
This study presents a fast simulation method for turbo codes over slow Rayleigh fading channels. The reduction in simulation time is achieved by applying importance sampling (IS). The conventional IS method for turbo codes over Rayleigh fading channels modifies only the additive white Gaussian noise (AWGN) sequences. The proposed IS method biases not only the AWGN but also the channel gains of the Rayleigh fading channel. The computer runtime of the proposed method is about 1/5 of that of the conventional IS method when evaluating a frame error rate of 10⁻⁶. Compared with the Monte Carlo simulation method, the proposed method needs only 1/100 of the simulation runtime for the same estimator accuracy.
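To make the biasing idea concrete, the following is a minimal Python sketch of an importance-sampling loop that biases both the Gaussian noise and the Rayleigh fading gains and compensates with likelihood-ratio weights. The uncoded BPSK error check and the biasing parameters (noise_shift, gain_scale) are illustrative assumptions; this is not the paper's actual estimator for turbo-coded frames.

import numpy as np

rng = np.random.default_rng(0)

def is_frame_error_rate(snr_db, n_frames=10000, frame_len=16,
                        noise_shift=0.8, gain_scale=0.5):
    """Importance-sampling estimate of the frame error rate of uncoded BPSK
    over a flat Rayleigh fading channel (a stand-in for the turbo-coded
    system).  Both the AWGN and the fading gains are drawn from biased
    densities, and each frame is weighted by the likelihood ratio between
    the true and the biased densities."""
    es_n0 = 10 ** (snr_db / 10)
    sigma = np.sqrt(1.0 / (2.0 * es_n0))        # noise std per real dimension
    true_scale = 1.0 / np.sqrt(2.0)             # Rayleigh scale, E[h^2] = 1
    bias_scale = gain_scale * true_scale        # biased scale: deep fades more often
    estimate = 0.0
    for _ in range(n_frames):
        bits = rng.integers(0, 2, frame_len)
        x = 1.0 - 2.0 * bits                    # BPSK mapping: 0 -> +1, 1 -> -1
        h = rng.rayleigh(bias_scale, frame_len)                      # biased fading
        n = rng.normal(-noise_shift * sigma * x, sigma, frame_len)   # biased AWGN
        y = h * x + n
        frame_error = np.any(y * x < 0)         # crude frame-error indicator
        # Per-sample likelihood ratios p(h)/q(h) and p(n)/q(n)
        w_h = (bias_scale**2 / true_scale**2) * np.exp(
            h**2 * (1.0 / (2.0 * bias_scale**2) - 1.0 / (2.0 * true_scale**2)))
        w_n = np.exp((-n**2 + (n + noise_shift * sigma * x)**2) / (2.0 * sigma**2))
        estimate += frame_error * np.prod(w_h * w_n)
    return estimate / n_frames

print(is_frame_error_rate(snr_db=10.0))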
Over a correlated flat fading channel, multiple-symbol differential detection can enhance the performance of coded differential phase shift keying (DPSK) systems, but at exponential complexity. For iterative decoding schemes, soft-input soft-output (SISO) multiple-symbol differential sphere decoding (MSDSD) can offer suboptimal performance with complexity that is quadratic in the detection length. To further reduce the complexity, this paper proposes a Forward/Backward MSDSD (FB-MSDSD) for coded DPSK systems. The key idea is that the detection interval is split into two subintervals, which are processed in the forward and backward directions, respectively. Simulation results show that the proposed scheme achieves almost the same performance as the SISO-MSDSD scheme with the same detection length, at lower complexity.
This study presents a fast simulation method for turbo codes over an additive white class A noise (AWAN) channel. The reduction in estimation time is achieved by applying importance sampling (IS), one of the variance-reduction simulation methods. To adapt to the AWAN channel, we propose a design method for the simulation probability density function (PDF) used in IS. The proposed simulation PDF is related to the Bhattacharyya bound and evaluates a wider region of the signal space than the conventional method. Since the mean-translation method, a conventional design method for the simulation PDF used in IS, is optimized for an additive white Gaussian noise channel, it cannot reduce the time needed to evaluate the error performance of turbo codes over the AWAN channel. To evaluate a BER of 10⁻⁸, the proposed method reduces the simulation time to 1/10⁴ of that of the traditional Monte Carlo simulation method at the same estimator accuracy.
To reduce the iterative decoding delay of convolutional turbo codes, this paper presents a concurrent decoding algorithm for the hardware implementation of turbo convolutional decoders. Unlike a conventional turbo decoder, a hardware decoder based on the proposed algorithm can update the a priori information of each component code bit by bit, as soon as it is generated by the other component code. The two component codes of a turbo code can thus be decoded concurrently using a single MAP decoder, reducing the decoding latency by approximately half while maintaining the bit error rate performance and a hardware complexity comparable to those of a conventional turbo decoder.
Kentaro KOBAYASHI Takaya YAMAZATO Masaaki KATAYAMA
We propose an iterative channel decoding scheme for two or more correlated sources. The correlated sources are separately turbo encoded without knowledge of the correlation and transmitted over noisy channels. The proposed decoder exploits the correlation of the multiple sources in an iterative soft-decision decoding manner for joint detection of the transmitted data. Simulation results show that the performance achieved for more than two sources is also close to the Shannon and Slepian-Wolf limits and that a large additional SNR gain is obtained compared with the two-source case. We also verify through simulation that no significant penalty results from estimating the source correlation in the decoding process and that a code with a low error floor achieves good performance for a large number of correlated sources.
In this letter, we propose a two-bit representation method for turbo decoder extrinsic information based on bit error count minimization and parameter reset. We show that the performance of the proposed system approaches that of the full-precision decoder to within 0.17 dB and 0.48 dB at a 1% packet error rate for packet lengths of 500 and 10,000 information bits, respectively. The parameter-reset idea we introduce can be used not only in turbo decoders but also in many other iterative algorithms.
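As a rough illustration of what a two-bit extrinsic representation involves, the sketch below quantizes extrinsic LLRs to one sign bit plus one magnitude bit and reconstructs them with two fixed magnitude levels. The threshold and reconstruction values here are arbitrary placeholders; the letter's actual levels come from bit-error-count minimization, which is not reproduced.

import numpy as np

def quantize_extrinsic_2bit(llr, threshold=1.0):
    """Map each extrinsic LLR to a 2-bit code: one sign bit and one
    magnitude bit (|LLR| above/below the threshold)."""
    sign_bit = (llr < 0).astype(np.uint8)
    mag_bit = (np.abs(llr) >= threshold).astype(np.uint8)
    return (sign_bit << 1) | mag_bit

def dequantize_extrinsic_2bit(code, low=0.5, high=2.0):
    """Reconstruct LLRs from the 2-bit codes with two fixed magnitude
    levels (placeholder values)."""
    mag = np.where(code & 1, high, low)
    sign = np.where(code >> 1, -1.0, 1.0)
    return sign * mag

llr = np.array([-3.2, 0.4, 1.7, -0.1])
codes = quantize_extrinsic_2bit(llr)
print(codes, dequantize_extrinsic_2bit(codes))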
A switching type-II hybrid ARQ scheme with rate-compatible punctured turbo (RCPT) codes is proposed in this letter. The proposed scheme combines three retransmission schemes by minimizing a cost function that trades off throughput against delay time. The performance of the proposed algorithm is evaluated by computer simulations. Compared with conventional hybrid ARQ algorithms, the proposed algorithm offers almost the same throughput with a smaller time delay.
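The switching rule can be pictured with a minimal sketch: before each retransmission, the scheme picks whichever of three candidate retransmission modes minimizes a weighted combination of normalized delay and throughput loss. The candidate modes, their estimated metrics, and the weighting factor are invented placeholders; only the select-by-cost structure reflects the abstract.

def select_retransmission_scheme(candidates, weight=0.5):
    """Pick the retransmission scheme minimizing
    cost = weight * normalized_delay + (1 - weight) * (1 - normalized_throughput).
    `candidates` maps a scheme name to (estimated_throughput, estimated_delay)."""
    max_tp = max(tp for tp, _ in candidates.values())
    max_delay = max(d for _, d in candidates.values())

    def cost(tp, delay):
        return weight * (delay / max_delay) + (1 - weight) * (1 - tp / max_tp)

    return min(candidates, key=lambda name: cost(*candidates[name]))

# Hypothetical per-scheme estimates: (throughput in bits/channel use, delay in slots)
schemes = {
    "chase_combining": (0.45, 2.0),
    "partial_ir":      (0.55, 3.0),
    "full_ir":         (0.60, 4.5),
}
print(select_retransmission_scheme(schemes, weight=0.5))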
Chang-Rae JEONG Hyo-Yol PARK Kwang-Soon KIM Keum-Chan WHANG
In this paper, an efficient partial incremental redundancy (P-IR) scheme is proposed for H-ARQ using block-type low-density parity-check (B-LDPC) codes. The performance of the proposed P-IR scheme is evaluated in an HSDPA system using IEEE 802.16e B-LDPC codes. Simulation results show that the proposed H-ARQ with IEEE 802.16e B-LDPC codes outperforms H-ARQ with 3GPP turbo codes.
Chunlong BAI Bartosz MIELCZAREK Ivan J. FAIR Witold A. KRZYMIEŃ
Wireless communication systems usually employ a concatenated error control coding scheme consisting of an outer error detection code and an inner error correction code. Traditionally, these two codes are decoded separately. When the sub-block structure is used, each data block (input sequence) at the inner encoder consists of several sub-blocks, and each of these sub-blocks is protected by the error detection code. The sub-block structure is used in the Wideband CDMA (WCDMA) system specified by the 3rd Generation Partnership Project (3GPP). In this paper, a sub-block recovery scheme is proposed for this concatenated error control coding scheme to exploit, during decoding of the inner code, the error detection capability introduced by the outer code. We demonstrate that, if the inner code is a turbo code with a highly structured interleaver and iterative sub-optimal decoding is used, the sub-block recovery scheme helps correct a typical error pattern and thereby improves the block error rate performance. We analyze the decoding performance when sub-block recovery is used together with the maximum likelihood (ML) algorithm as well as the log maximum-a-posteriori probability (Log-MAP) and soft-output Viterbi (SOVA) algorithms, and demonstrate the gains introduced by sub-block recovery in the latter two cases through computer simulations.
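A minimal sketch of the general idea, not the paper's exact algorithm: after a decoding pass, each sub-block whose outer error-detection check passes is treated as known, and its LLRs are clamped to a large magnitude before the next inner-decoder iteration. The check callable and the clamping value are assumptions for illustration.

import numpy as np

def recover_subblocks(llrs, subblock_len, passes_check, clamp=50.0):
    """Clamp the LLRs of every sub-block whose outer error-detection check
    passes, so later iterations treat those bits as known.

    llrs          : a-posteriori LLRs of the systematic bits (1-D float array)
    subblock_len  : number of bits per sub-block (detection bits included)
    passes_check  : callable taking the hard-decided bits of one sub-block and
                    returning True if the error-detection code is satisfied
    """
    out = llrs.copy()
    for start in range(0, len(llrs), subblock_len):
        block = out[start:start + subblock_len]
        hard = (block < 0).astype(int)          # 0/1 hard decisions
        if passes_check(hard):
            out[start:start + subblock_len] = np.where(hard == 1, -clamp, clamp)
    return out

# Toy example: a single even-parity check plays the role of the outer code.
even_parity_ok = lambda bits: bits.sum() % 2 == 0
llrs = np.array([4.0, -3.0, 0.2, -1.5, 2.0, 0.9, -0.4, 5.0])
print(recover_subblocks(llrs, subblock_len=4, passes_check=even_parity_ok))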
Sook Min PARK Jaeyoung KWAK Do-Sik YOO Kwyro LEE
A method is presented that can substantially reduce the memory requirements of non-binary turbo decoders through efficient representation of the extrinsic information. For the duo-binary turbo decoder employed by the IEEE 802.16e standard, the extrinsic information memory can be reduced by about 43%, which decreases the total decoder complexity by 18%. We also show that the proposed algorithm can be implemented with a simple hardware architecture.
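One simple source of redundancy in duo-binary extrinsic information is that the four log-domain values per symbol pair only matter up to a common offset, so they can be stored relative to a reference symbol. The sketch below shows this normalization only as background; it is not necessarily the scheme behind the paper's 43% figure.

import numpy as np

def normalize_duobinary_extrinsic(log_ext):
    """Store duo-binary extrinsic information relative to the symbol '00'.

    log_ext : array of shape (N, 4), log-domain extrinsic values for the four
              symbol hypotheses (00, 01, 10, 11) of each symbol pair.
    Returns an (N, 3) array; the value for '00' becomes implicitly zero, so
    only three values per pair need to be stored."""
    return log_ext[:, 1:] - log_ext[:, :1]

def expand_duobinary_extrinsic(stored):
    """Rebuild the (N, 4) representation with the '00' entry set to zero."""
    zeros = np.zeros((stored.shape[0], 1))
    return np.hstack([zeros, stored])

ext = np.log(np.array([[0.1, 0.2, 0.3, 0.4]]))
stored = normalize_duobinary_extrinsic(ext)
print(stored.shape, expand_duobinary_extrinsic(stored).shape)   # (1, 3) (1, 4)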
Soft-decision decoding techniques are applied to asynchronous frequency-hop/spread-spectrum multiple-access (FH/SSMA) networks, where M-ary frequency shift keying (MFSK) is employed to transmit one modulated symbol per hop. Coding schemes using soft-decision decoded binary convolutional codes or turbo codes are considered, both with and without bit-interleaving. The performance of several soft metrics is examined for each coding scheme. It is shown that when multiple-access interference is the main source of errors, the product metric offers the best performance among the soft metrics considered for all coding schemes. Furthermore, the application of soft-decision decoded convolutional codes or turbo codes without bit-interleaving is shown to allow for a much larger number of simultaneously transmitting users than hard-decision decoded Reed-Solomon codes. Finally, it is observed that when soft-decision decoding techniques are employed, synchronous networks attain better performance than asynchronous networks.
We propose an incremental redundancy (IR) hybrid ARQ (HARQ) scheme that uses double-binary turbo codes for error correction. The proposed HARQ scheme provides higher throughput at all Es/N0 values than the binary turbo IR-HARQ scheme. An extra coding gain is also attained by using the proposed HARQ scheme, beyond that of the turbo codes alone.
In this paper, it is shown that the bit erasure probability of turbo codes under iterative decoding in the waterfall region scales nonlinearly with the information block length. This result can be used to efficiently predict the bit erasure probability of finite-length turbo codes over the binary erasure channel.
Ning WEI Zhongpei ZHANG Shaoqian LI
Recently, a versatile user cooperation method called coded cooperation diversity has been introduced, in which the codewords of each mobile are partitioned and transmitted through independent fading channels instead of being simply repeated by a relay. This achieves remarkable gains over a conventional (non-cooperative) system while maintaining the same information rate and transmission power. In this paper we present an adaptive space-time (AST) coded cooperation scheme that is based on decoding the first partition of the codeword at the base station and enables a practical adaptive allocation of resources to match the channel condition. Performance analysis and simulation results show that the proposed scheme greatly improves the error rate performance and system throughput compared with the previous framework.
Naoto KOBAYASHI Daiki KOIZUMI Toshiyasu MATSUSHIMA Shigeichi HIRASAWA
We propose a new fixed-rate error correction system with a feedback channel. In our system, the receiver sends the transmitter a list of positions of unreliable information bits, identified from the log a-posteriori probability ratios output by a soft-output decoder. This is similar to the approach of reliability-based hybrid ARQ schemes. A key feature of our system is the dynamic selection of an appropriate interleaving function based on the feedback information. Computer simulations show that the performance of the system with a feedback channel is improved by this dynamic selection of the interleaving function.
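The feedback content can be illustrated with a small sketch: the receiver ranks information bits by the magnitude of their log a-posteriori probability ratios and reports the positions of the least reliable ones. The list size is a placeholder, and how the transmitter then chooses the interleaving function is not shown.

import numpy as np

def unreliable_positions(app_llrs, num_feedback):
    """Return the positions of the `num_feedback` information bits whose
    log a-posteriori probability ratios have the smallest magnitude,
    i.e. the least reliable decisions."""
    order = np.argsort(np.abs(app_llrs))
    return np.sort(order[:num_feedback])

llrs = np.array([5.1, -0.3, 2.2, 0.05, -4.8, 1.1])
print(unreliable_positions(llrs, num_feedback=2))   # -> positions 1 and 3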
In this paper, we define a stopping set for turbo codes under iterative decoding over the binary erasure channel. Based on the stopping set analysis, we study the block and bit erasure probabilities of turbo codes and the performance degradation of iterative decoding relative to maximum-likelihood decoding. The error floor performance of turbo codes with iterative decoding is dominated by small stopping sets. The performance degradation of iterative decoding is negligible in the error floor region, so the error floor performance is asymptotically dominated by the low-weight codewords.
Hye-Mi CHOI Ji-Hoon KIM In-Cheol PARK
As turbo decoding is a highly memory-intensive algorithm that consumes considerable power, a major issue to be solved in practical implementations is reducing power consumption. This paper presents an efficient reverse calculation method that lowers power consumption by reducing the number of memory accesses required in turbo decoding. The reverse calculation method is proposed for the Max-Log-MAP algorithm and is combined with a scaling technique to obtain a new decoding algorithm, called hybrid Log-MAP, that achieves BER performance similar to that of the Log-MAP algorithm. For the W-CDMA standard, experimental results show that the proposed reverse calculation method eliminates 80% of memory accesses. A hybrid Log-MAP turbo decoder based on the proposed reverse calculation reduces power consumption and memory size by 34.4% and 39.2%, respectively.
We investigate an importance sampling (IS) simulation for estimating the low error probabilities of turbo codes. The simulation time reduction in IS depends on another probability density function (p.d.f.), called the simulation p.d.f. The previous IS simulation method cannot evaluate the error probability in the low-SNR and waterfall regions. We derive the optimal simulation p.d.f., which gives the perfect estimator. A new simulation p.d.f. design, related to the optimal one, is proposed to overcome the problem of the previous IS method. The proposed IS simulation can evaluate all possible error patterns. Finally, computer simulations show that the proposed method can evaluate the error probability in the low-SNR, waterfall, and error floor regions. In evaluating a BER of 10⁻⁷, the simulation time of the proposed method is about 1/350 of that of the Monte Carlo simulation. When the BER is less than 7×10⁻⁸, the proposed method requires a shorter simulation time than the conventional IS method.
The maximum a posteriori (MAP) algorithm is the optimum solution for decoding concatenated codes such as turbo codes. Since the MAP algorithm is computationally complex, more efficient algorithms, such as the Max-Log-MAP algorithm and the soft-output Viterbi algorithm (SOVA), can be used as suboptimum solutions. In particular, the Max-Log-MAP algorithm is widely used due to its near-optimum performance and lower complexity compared with the MAP algorithm. In this paper, we propose an efficient algorithm for decoding concatenated codes by modifying the Max-Log-MAP algorithm. The efficient implementation of the backward recursion and the log-likelihood ratio (LLR) update in the proposed algorithm improves its computational efficiency. Memory is utilized more efficiently if the sliding window algorithm is adopted. Computer simulations and analysis show that the proposed algorithm requires considerably fewer computations than the Max-Log-MAP algorithm while providing the same overall performance.
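For context, the sketch below shows the standard Max-Log-MAP LLR update on a generic trellis, with the max operation replacing the log-sum-exp of full Log-MAP; the paper's specific backward-recursion and LLR-update optimizations are not reproduced.

import numpy as np

def max_log_map_llr(alpha, beta, gamma, transitions):
    """Max-Log-MAP LLR computation on a generic trellis.

    alpha[k, s]       : forward metric of state s at time k   (shape (n+1, S))
    beta[k, s]        : backward metric of state s at time k  (shape (n+1, S))
    gamma[k, s, s']   : branch metric of transition s -> s' at time k (shape (n, S, S))
    transitions       : list of (s, s_next, input_bit) triples describing the
                        (time-invariant) trellis section

    Returns LLR[k] = max over u=1 branches of (alpha + gamma + beta)
                   - max over u=0 branches of the same sum."""
    n = gamma.shape[0]
    llr = np.empty(n)
    for k in range(n):
        best = {0: -np.inf, 1: -np.inf}
        for s, s_next, u in transitions:
            metric = alpha[k, s] + gamma[k, s, s_next] + beta[k + 1, s_next]
            best[u] = max(best[u], metric)
        llr[k] = best[1] - best[0]
    return llr

With the sliding-window variant mentioned in the abstract, the same update runs over short windows of the trellis, so only a window's worth of forward and backward metrics needs to be stored at a time.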