
Keyword Search Result

[Keyword] codes (537 hits)

Results 101-120 of 537

  • Spatially “Mt. Fuji” Coupled LDPC Codes

    Yuta NAKAHARA  Shota SAITO  Toshiyasu MATSUSHIMA  

     
    PAPER-Coding Theory and Techniques

    Vol: E100-A No:12, Page(s): 2594-2606

    A new type of spatially coupled low-density parity-check (SCLDPC) code is proposed. This code has two benefits: (1) it requires fewer iterations than the usual SCLDPC code to correct erasures occurring over the binary erasure channel in the waterfall region, and (2) it has a lower error floor than the usual SCLDPC code. The proposed code is constructed as a coupled chain of underlying LDPC codes whose code lengths increase exponentially as their position approaches the middle of the chain. We call our code the spatially “Mt. Fuji” coupled LDPC (SFCLDPC) code because the graph of the underlying code lengths plotted against position looks like Mt. Fuji. With this structure, when the proposed SFCLDPC code and the original SCLDPC code are constructed with the same code rate and the same code length, L (the number of underlying LDPC codes) of the proposed SFCLDPC code becomes smaller and M (the code lengths of the underlying LDPC codes) becomes larger than those of the SCLDPC code. These properties of L and M enable the above reductions in the number of iterations and in the bit error rate in the error-floor region, which are confirmed by density evolution and computer simulations.
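
    As a rough, self-contained illustration of the “Mt. Fuji” length profile described above (the growth factor and the normalization are assumptions for illustration, not taken from the paper), the following Python sketch generates per-position code lengths that grow exponentially toward the middle of the chain:

```python
# Toy sketch of a "Mt. Fuji"-shaped code-length profile (illustrative only).
# L positions in the coupled chain; lengths grow exponentially toward the middle.

def fuji_length_profile(L, total_length, growth=2.0):
    """Return per-position code lengths that peak at the chain's middle."""
    mid = (L - 1) / 2.0
    # Raw profile: exponentially decaying with distance from the middle position.
    raw = [growth ** (-abs(i - mid)) for i in range(L)]
    scale = total_length / sum(raw)
    return [max(1, round(scale * r)) for r in raw]

if __name__ == "__main__":
    profile = fuji_length_profile(L=9, total_length=9000)
    print(profile)       # small at the chain ends, largest in the middle
    print(sum(profile))  # approximately the target total length
```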

  • DNA Codes with Constant GC-Content Constructed from Hadamard Matrices

    Young-Sik KIM  Hosung PARK  Sang-Hyo KIM  

     
    PAPER-Coding Theory

    Vol: E100-A No:11, Page(s): 2408-2415

    To construct good DNA codes under biologically motivated constraints, it is important that they have a large minimum Hamming distance and a constant GC-content. It is also desirable to maximize the number of codewords for a given code length, minimum Hamming distance, and GC-content. Most previous constructions of DNA codes used quaternary constant-weight codes directly, because the alphabet of DNA strands is quaternary. In this paper, we propose new coding-theoretic constructions of DNA codes based on the binary Hadamard matrix obtained from a binary sequence with ideal autocorrelation. The proposed DNA codes have at least as many codewords as existing DNA codes constructed from quaternary constant-weight codes. In addition, we show numerically that for codes of length 8 or 16, the proposed DNA code sets have the largest number of codewords with respect to the minimum reverse-complement Hamming distance among all previously known results.
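
    The two constraints discussed above are easy to check programmatically. The following sketch (an illustrative check, not the paper's Hadamard-based construction; the toy codewords are hypothetical) verifies constant GC-content and computes the minimum reverse-complement Hamming distance of a candidate DNA code:

```python
# Illustrative check of the two DNA-code constraints discussed above:
# constant GC-content and a minimum reverse-complement Hamming distance.
from itertools import combinations

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def gc_content(word):
    return sum(1 for c in word if c in "GC")

def reverse_complement(word):
    return "".join(COMPLEMENT[c] for c in reversed(word))

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def min_rc_hamming_distance(code):
    """Minimum of d(u, v) and d(u, reverse_complement(v)) over distinct pairs."""
    return min(min(hamming(u, v), hamming(u, reverse_complement(v)))
               for u, v in combinations(code, 2))

if __name__ == "__main__":
    code = ["ACGT", "CAGT", "GTCA", "TGCA"]          # toy example, not from the paper
    assert len({gc_content(w) for w in code}) == 1   # constant GC-content holds
    print(min_rc_hamming_distance(code))
```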

  • On Locality of Some Ternary Linear Codes of Dimension 6

    Ruipan YANG  Ruihu LI  Luobin GUO  Qiang FU  

     
    LETTER-Coding Theory

    Vol: E100-A No:10, Page(s): 2172-2175

    A locally repairable code (LRC) can recover any failed codeword symbol by accessing a small number of other symbols, which makes the repair process more efficient. In a distributed storage system using locally repairable codes, any node failure can be rebuilt by accessing a fixed, small set of other nodes, so LRCs are a promising option for such systems. In this paper, we first propose methods for constructing generator matrices of codes with small locality. By analyzing the parameters, we construct generator matrices of the best-known ternary linear codes of dimension 6 using techniques such as shortening, puncturing, and expansion. By analyzing the linear dependence of the column vectors of these generator matrices, we determine the locality of the codes they generate and find many codes with small locality.

  • Re-Polarization Processing in Extended Polar Codes

    Yu-Ming HUANG  Hsie-Chia CHANG  Hsiang-Pang LI  

     
    PAPER-Fundamental Theories for Communications

    Publicized: 2017/03/13, Vol: E100-B No:10, Page(s): 1765-1777

    In this paper, extended polar codes based on a re-polarization technique are proposed. The presented schemes extend a conventional polar code of length N to length N+q, in contrast to known length-compatible schemes such as puncturing and shortening, which reduce the length from N to N-q. For certain lengths, the waterfall-region performance of the extended polar code is superior to that of other length-compatible polar codes. It provides better reliability and reduces the management overhead in several storage devices and communication systems. In essence, extended polar codes are created by re-polarizing the q least reliable nonfrozen bit-channels with the help of q additional frozen bit-channels, and it is proved that this re-polarization enhances the reliability of these bits. Moreover, the extended schemes can not only be modified to improve decoding performance but also be generalized to an m-stage scheme that improves throughput significantly: with parallel operation, the throughput is improved by roughly 2^(m-1) times when q is small. Compared to a shortened polar code of length 1536, the encoding and decoding complexities of an extended polar code are only 50% and 60.5%, respectively.

  • Fast Parameter Estimation for Polyphase P Codes Modulated Radar Signals

    Qi ZHANG  Pei WANG  Jun ZHU  Bin TANG  

     
    LETTER-Digital Signal Processing

    Vol: E100-A No:10, Page(s): 2162-2166

    A fast parameter estimation method with a coarse estimation stage and a fine estimation stage is proposed for polyphase P coded signals. For a received signal with N sampling points, the proposed method outperforms the latest time-frequency rate estimation method when the signal-to-noise ratio (SNR) is larger than 2 dB, while having a lower computational complexity of O(N log₂ N) compared with O(N²) for the latter.

  • Two Classes of Optimal Constant Composition Codes from Zero Difference Balanced Functions

    Bing LIU  Xia LI  Feng CHENG  

     
    LETTER-Coding Theory

    Vol: E100-A No:10, Page(s): 2183-2186

    Constant composition codes (CCCs) are a special class of constant-weight codes and include permutation codes as a subclass. The study and construction of CCCs whose parameters meet certain bounds has been an interesting research subject in coding theory. A bridge from zero difference balanced (ZDB) functions to CCCs meeting the Luo-Fu-Vinck-Chen bound was established by Ding (IEEE Trans. Information Theory 54(12) (2008) 5766-5770), providing a new approach for obtaining optimal CCCs. The objective of this letter is to construct two classes of ZDB functions whose parameters are not covered in the literature, and then to obtain from these new ZDB functions two classes of optimal CCCs meeting the Luo-Fu-Vinck-Chen bound.

  • Multipermutation Codes Correcting a Predetermined Number of Adjacent Deletions

    Peng ZHAO  Jianjun MU  Xiaopeng JIAO  

     
    LETTER-Coding Theory

    Vol: E100-A No:10, Page(s): 2176-2179

    In this letter, three constructions of multipermutation codes that correct a predetermined number of adjacent deletions are investigated, using an interleaving technique and single-deletion permutation codes. Decoding methods for the proposed codes are provided in the proofs and verified with examples, and the rates of the resulting multipermutation codes are compared.

  • Reduced-Complexity Belief Propagation Decoding for Polar Codes

    Jung-Hyun KIM  Inseon KIM  Gangsan KIM  Hong-Yeop SONG  

     
    LETTER-Coding Theory

    Vol: E100-A No:9, Page(s): 2052-2055

    We propose three effective approximate belief propagation decoders for polar codes based on Maclaurin's series, a piecewise linear function, and a stepwise linear function. The proposed decoders perform better than existing approximate belief propagation polar decoders, namely the min-sum decoder and the normalized min-sum decoder, and achieve almost the same performance as the original belief propagation decoder. Moreover, unlike the normalized min-sum decoder, the offset min-sum decoder, and their variants, the proposed decoders achieve this performance without any optimization process tied to the code parameters or channel condition.
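
    The decoders above replace the exact check-node (boxplus) update with cheaper approximations. The sketch below compares the exact update with the standard min-sum and normalized min-sum baselines mentioned in the abstract; the letter's Maclaurin-series and piecewise-linear variants would slot in as further alternatives and are not reproduced here:

```python
# Illustrative comparison of check-node (boxplus) update rules for LLRs.
# The exact rule and the min-sum / normalized min-sum baselines are standard;
# the letter's own approximations would replace boxplus_exact in a decoder.
import math

def boxplus_exact(a, b):
    """Exact LLR combination: 2 * atanh(tanh(a/2) * tanh(b/2))."""
    return 2.0 * math.atanh(math.tanh(a / 2.0) * math.tanh(b / 2.0))

def boxplus_min_sum(a, b):
    """Min-sum approximation: sign(a) * sign(b) * min(|a|, |b|)."""
    return math.copysign(1.0, a) * math.copysign(1.0, b) * min(abs(a), abs(b))

def boxplus_normalized_min_sum(a, b, alpha=0.8):
    """Normalized min-sum: min-sum scaled by a correction factor alpha."""
    return alpha * boxplus_min_sum(a, b)

if __name__ == "__main__":
    for a, b in [(1.5, -0.7), (3.0, 2.0), (0.2, 0.4)]:
        print(boxplus_exact(a, b), boxplus_min_sum(a, b),
              boxplus_normalized_min_sum(a, b))
```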

  • On Binary Cyclic Locally Repairable Codes with Locality 2

    Yi RAO  Ruihu LI  

     
    LETTER-Coding Theory

    Vol: E100-A No:7, Page(s): 1588-1591

    Locally repairable codes have recently been applied in distributed storage systems because of their excellent local erasure-correction capability. A locally repairable code is a code with locality r, where each code symbol can be recovered by accessing at most r other code symbols. In this paper, we study the existence and construction of binary cyclic codes with locality 2, and summarize the best binary cyclic LRCs with length 7≤n≤87 and locality 2.

  • On the Single-Parity Locally Repairable Codes

    Yanbo LU  Jie HAO  Shu-Tao XIA  

     
    LETTER-Coding Theory

    Vol: E100-A No:6, Page(s): 1342-1345

    Locally repairable codes (LRCs) have attracted much interest recently due to their applications in distributed storage systems. In an [n,k,d] linear code, a code symbol is said to have locality r if it can be repaired by accessing at most r other code symbols. An (n,k,r) LRC with locality r for the information symbols has minimum distance d≤n-k-⌈k/r⌉+2. In this letter, we study single-parity LRCs, in which every repair group contains exactly one parity symbol. We first give a new characterization of single-parity LRCs based on the standard form of their generator matrices, and derive necessary conditions on the structure of the generator matrices of optimal single-parity LRCs meeting the Singleton-like bound. We then construct all optimal binary single-parity LRCs meeting the bound d≤n-k-⌈k/r⌉+2.
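
    The Singleton-like bound quoted above is simple to evaluate; the sketch below (with sample parameters chosen for illustration, not taken from the letter) computes d ≤ n-k-⌈k/r⌉+2:

```python
# Evaluate the Singleton-like bound d <= n - k - ceil(k/r) + 2 for LRCs.
from math import ceil

def singleton_like_bound(n, k, r):
    """Upper bound on the minimum distance of an (n, k, r) locally repairable code."""
    return n - k - ceil(k / r) + 2

if __name__ == "__main__":
    # Sample parameter sets (illustrative, not from the letter).
    for n, k, r in [(10, 6, 3), (12, 8, 4), (15, 10, 2)]:
        print(n, k, r, "->", singleton_like_bound(n, k, r))
```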

  • Some Constructions for Fractional Repetition Codes with Locality 2

    Mi-Young NAM  Jung-Hyun KIM  Hong-Yeop SONG  

     
    PAPER-Coding Theory

    Vol: E100-A No:4, Page(s): 936-943

    In this paper, we examine the locality property of the original Fractional Repetition (FR) codes and propose two constructions of FR codes with better locality. To this end, we first derive the capacity of FR codes with locality 2, that is, the maximum size of the file that can be stored. Construction 1 generates an FR code with repetition degree 2 and locality 2; this code is optimal in the sense of achieving the derived capacity. Construction 2 generates an FR code with repetition degree 3 and locality 2 based on 4-regular graphs with girth g; this code is also optimal in the same sense.

  • Iterative Channel Estimation and Symbol Level Reed-Solomon Decoding Receivers for OFDM Systems

    Olayinka O. OGUNDILE  Daniel J. VERSFELD  

     
    PAPER-Fundamental Theories for Communications

    Publicized: 2016/10/17, Vol: E100-B No:4, Page(s): 500-509

    Iterative channel estimation and decoding receivers have evolved over the years, most notably with Turbo and LDPC codes. Nevertheless, few works have examined the performance of symbol-level Reed-Solomon (RS) codes in iterative receiver structures, and the iterative channel estimation and symbol-level RS decoding receiver structures found in the literature concentrate on M-QAM systems over flat Rayleigh fading channels. In this paper, attention is focused on the performance of RS codes in iterative channel estimation and decoding receiver structures for Orthogonal Frequency Division Multiplexing (OFDM) systems over frequency-selective Rayleigh fading channels. Firstly, the paper extends the Koetter and Vardy (KV) RS iterative receiver structure from flat Rayleigh fading channels to frequency-selective Rayleigh fading channels. In addition, the paper develops a symbol-level RS iterative receiver structure for OFDM systems over frequency-selective Rayleigh fading channels based on the Parity-check matrix Transformation Algorithm (PTA). The performance of the RS-KV and RS-PTA iterative receiver structures for OFDM systems is documented through computer simulation. The simulation results verify that both iterative receiver structures are suitable for real-time RS OFDM wireless applications, and that the developed RS-PTA iterative receiver structure is a low-complexity, high-performance alternative to the RS-KV iterative receiver structure.

  • Self-Dual Cyclic Codes over $\mathbb{Z}_4+u\mathbb{Z}_4$

    Rong LUO  Udaya PARAMPALLI  

     
    LETTER-Coding Theory

    Vol: E100-A No:4, Page(s): 969-974

    In this paper we study the structure of self-dual cyclic codes over the ring $\Lambda=\mathbb{Z}_4+u\mathbb{Z}_4$. The ring Λ is a local Frobenius ring but not a chain ring. We characterize self-dual cyclic codes of odd length n over Λ. The results can be used to construct some optimal binary, quaternary cyclic and self-dual codes.

  • Index ARQ Protocol for Reliable Contents Distribution over Broadcast Channels

    Takahiro OSHIMA  Tadashi WADAYAMA  

     
    PAPER-Coding Theory

    Vol: E100-A No:3, Page(s): 832-838

    In this paper, we propose a broadcast ARQ protocol based on the concept of index coding. In the proposed scenario, a server transmits a finite sequence of packets to multiple receivers over a broadcast packet-erasure channel until all receivers have successfully received all packets. In the retransmission phase, the server produces a coded packet as the retransmitted packet based on side information sent from the receivers via feedback channels. A notable feature of the proposed protocol is its low decoding complexity at the receiver side: only a small number of addition operations are needed to recover an intended packet, which may be preferable for reducing the power consumption of receivers. The throughput of the proposed protocol is close to the ideal FEC throughput when the erasure probability is less than 0.1, implying that the protocol provides nearly optimal throughput in that regime.
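
    A minimal sketch of the index-coding idea behind the protocol (hypothetical packet contents, not the paper's exact retransmission rule): the server broadcasts an XOR of packets that different receivers are missing, and each receiver cancels the packets it already holds to recover its missing one:

```python
# Toy illustration of an index-coded retransmission over a broadcast channel.
# Two receivers each miss a different packet; one XOR-coded packet serves both.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

if __name__ == "__main__":
    p1 = b"\x01\x02\x03\x04"   # hypothetical packet contents
    p2 = b"\x0a\x0b\x0c\x0d"

    coded = xor_bytes(p1, p2)  # server broadcasts p1 XOR p2 once

    # Receiver A holds p2 but missed p1: a single XOR recovers it.
    recovered_p1 = xor_bytes(coded, p2)
    # Receiver B holds p1 but missed p2.
    recovered_p2 = xor_bytes(coded, p1)

    assert recovered_p1 == p1 and recovered_p2 == p2
    print("both receivers recovered their missing packet from one coded packet")
```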

  • An Error Correction Method for Neighborhood-Level Errors in NAND Flash Memories

    Shohei KOTAKI  Masato KITAKAMI  

     
    PAPER-Coding Theory

    Vol: E100-A No:2, Page(s): 653-662

    Rapid process scaling and the introduction of the multilevel cell (MLC) concept have lowered the cost of NAND Flash memories but also degraded their reliability. The memories therefore depend on strong error-correcting codes (ECCs), which has enabled their use in a wide range of storage applications, including solid-state drives (SSDs). Meanwhile, excessive error-correcting capability requires excessive decoding complexity and check bits. In NAND Flash memories, cell errors to neighboring voltage levels are more probable than errors to distant levels. Several ECCs reflecting this characteristic have been proposed, including limited-magnitude ECCs, which correct only errors of a certain limited magnitude, and low-density parity-check (LDPC) codes. However, because most of these ECCs need multiple bits per cell for encoding, they cannot be used with multipage programming, the high-speed programming method currently employed in the memories. Also, binary ECCs with Gray codes are no longer optimal when multilevel voltage shifts (MVSs) occur. In this paper, an error correction method reflecting this error characteristic is presented. The method detects errors with a binary ECC in the conventional manner, but a nonbinary value, i.e., all the bits in a cell, is subjected to error correction, so that the cell is corrected to the most probable neighboring value. The bit error rate (BER) improvement depends on the probability of each error magnitude: in the case of 2 bits/cell, if only errors of magnitude 1 and 2 can occur and the latter accounts for 5% of cell errors, the acceptable BER is improved by 4%, which corresponds to extending endurance by 2.4%. The method needs about 15% longer average latency, 19% longer maximum latency, and 15% lower throughput. However, by using the conventional method until the memories reach their lifetime number of program/erase cycles and the proposed method after that, the BER improvement can be used to extend endurance without latency or throughput degradation before the switch of methods.
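
    A toy sketch of the neighborhood-correction step described above (the cell model, Gray mapping, and candidate set are illustrative assumptions, not the paper's scheme): once a binary ECC flags a cell as erroneous, the whole cell value is corrected to the candidate level closest to the read level, since small voltage shifts are the most probable:

```python
# Toy sketch of "neighborhood-level" correction for an MLC cell (2 bits/cell).
# Assumes a binary ECC has already flagged the cell as erroneous and produced a
# set of candidate corrected levels; the cell is then snapped to the candidate
# closest to the read level, because small voltage shifts are most likely.

GRAY_MAP = {0: (1, 1), 1: (1, 0), 2: (0, 0), 3: (0, 1)}   # level -> (MSB, LSB), illustrative

def correct_cell(read_level, candidate_levels):
    """Pick the candidate level nearest to the read level (most probable shift)."""
    return min(candidate_levels, key=lambda lvl: abs(lvl - read_level))

if __name__ == "__main__":
    # The cell was read at level 2, but the ECC says its bits are inconsistent;
    # levels 0 and 1 are the candidates consistent with the ECC (hypothetical).
    fixed = correct_cell(read_level=2, candidate_levels=[0, 1])
    print(fixed, GRAY_MAP[fixed])   # level 1 wins: the magnitude-1 shift is preferred
```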

  • Further Results on the Minimum and Stopping Distances of Full-Length RS-LDPC Codes

    Haiyang LIU  Hao ZHANG  Lianrong MA  

     
    LETTER-Coding Theory

    Vol: E100-A No:2, Page(s): 738-742

    Based on the codewords of the [q,2,q-1] extended Reed-Solomon (RS) code over the finite field Fq, we can construct a regular binary γq×q² matrix H(γ,q), where q is a power of 2 and γ≤q. The matrix H(γ,q) defines a regular low-density parity-check (LDPC) code C(γ,q), called a full-length RS-LDPC code. Using some analytical methods, we completely determine the values of s(H(4,q)), s(H(5,q)), and d(C(5,q)) in this letter, where s(H(γ,q)) and d(C(γ,q)) are the stopping distance of H(γ,q) and the minimum distance of C(γ,q), respectively.

  • Band Splitting Permutations for Spatially Coupled LDPC Codes Achieving Asymptotically Optimal Burst Erasure Immunity

    Hiroki MORI  Tadashi WADAYAMA  

     
    PAPER-Coding Theory

    Vol: E100-A No:2, Page(s): 663-669

    It is well known that spatially coupled (SC) codes with erasure-BP decoding have powerful error-correcting capability over memoryless erasure channels. However, the decoding performance of SC-codes significantly degrades when they are used over burst erasure channels. In this paper, we propose band splitting permutations (BSP) suitable for (l,r,L) SC-codes. The BSP splits a diagonal band in a base matrix into multiple bands in order to enhance the span of the stopping sets in the base matrix. As theoretical performance guarantees, lower and upper bounds on the maximal correctable burst length of the permuted (l,r,L) SC-codes are presented. These bounds indicate that the maximal correctable burst ratio of the permuted SC-codes is given by λmax≃1/k, where k=r/l. This implies the asymptotic optimality of the permuted SC-codes in terms of burst erasure correction.

  • Fast Reconstruction for Degraded Reads and Recovery Process in Primary Array Storage Systems

    Baegjae SUNG  Chanik PARK  

     
    PAPER-Data Engineering, Web Information Systems

    Publicized: 2016/11/11, Vol: E100-D No:2, Page(s): 294-303

    RAID has been widely deployed in disk array storage systems to manage both performance and reliability. RAID performs two performance-critical operations during disk failures: degraded reads/writes and the recovery process. Before the recovery process is complete, reads and writes are degraded because data must be reconstructed from redundancy. The performance of degraded reads/writes is critical to meeting customer service level agreements (SLAs), and the recovery process considerably affects the reliability of a storage system. Both operations require fast data reconstruction. Among erasure codes for fast reconstruction, Local Reconstruction Codes (LRC) are known to offer the best (or optimal) trade-off between storage overhead, fault tolerance, and the number of disks involved in reconstruction. LRC was originally designed for fast reconstruction in distributed cloud storage systems, where network traffic is the major bottleneck during reconstruction, so it focuses on reducing the number of disks involved in data reconstruction and thus the network traffic. However, we observe that when LRC is applied to primary array storage systems, the major reconstruction bottleneck is uneven disk utilization: underutilized disks can no longer receive I/O requests because of the bottleneck at overloaded disks. The uneven disk utilization in LRC is due to its dedicated group partitioning policy, adopted to achieve the Maximally Recoverable property. In this paper, we present Distributed Reconstruction Codes (DRC), which support fast reconstruction in primary array storage systems. DRC is designed with a group shuffling policy to solve the problem of uneven disk utilization. Experiments on real-world workloads show that DRC with global parity rotation (DRC-G) improves degraded performance by as much as 72% compared to RAID-6 and by as much as 35% compared to LRC under the same reliability. In addition, our study shows that DRC-G reduces the recovery completion time by as much as 52% compared to LRC.

  • A Bit-Write-Reducing and Error-Correcting Code Generation Method by Clustering ECC Codewords for Non-Volatile Memories

    Tatsuro KOJO  Masashi TAWADA  Masao YANAGISAWA  Nozomu TOGAWA  

     
    PAPER

    Vol: E99-A No:12, Page(s): 2398-2411

    Non-volatile memories are attracting attention as a promising alternative in memory design, but data stored in them may still be corrupted by crosstalk and radiation. The data can be restored by using error-correcting codes, which require extra bits to correct bit errors. Furthermore, non-volatile memories consume ten to a hundred times more energy than normal memories when writing bits, so when they are configured with error-correcting codes it is essential to reduce the number of written bits. In this paper, we propose a method to generate a bit-write-reducing code with error-correcting ability. We first pick an error-correcting code that can correct t-bit errors, cluster its codewords, and generate a cluster graph satisfying the S-bit flip conditions. We then assign a data value to each cluster; in other words, we generate a one-to-many mapping from each data value to the codewords in its cluster. We prove that, if the cluster graph is a complete graph, every data value in a memory cell can be rewritten into any other data value by flipping at most S bits while keeping the t-bit error-correcting ability. We further propose an efficient method to cluster error-correcting codewords. Experimental results show that the bit-write-reducing and error-correcting codes generated by the proposed method efficiently reduce energy consumption. To the best of our knowledge, this is the first theoretically near-optimal bit-write-reducing code with error-correcting ability based on efficient coding theories.
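
    A toy illustration of the write-cost idea (the clusters below are hand-picked and are not produced by the paper's clustering algorithm): each data value maps to a cluster of codewords, and each write chooses the codeword in the target cluster that is closest in Hamming distance to the current memory contents, so fewer bits flip than with a fixed one-to-one mapping:

```python
# Toy illustration of a one-to-many (data -> cluster of codewords) write rule
# that reduces bit flips. The clusters below are hand-picked for illustration
# and are NOT produced by the paper's clustering algorithm.

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

# Hypothetical clusters: each 1-bit datum maps to two 4-bit codewords.
CLUSTERS = {
    0: ["0000", "1110"],
    1: ["0001", "1111"],
}

def write(current_word, data):
    """Choose the codeword in the target cluster needing the fewest bit flips."""
    return min(CLUSTERS[data], key=lambda cw: hamming(cw, current_word))

if __name__ == "__main__":
    stored = "1110"                 # currently stored codeword (represents datum 0)
    new_word = write(stored, 1)     # writing datum 1
    # "1111" is chosen: only 1 bit flips, versus 4 flips for a fixed map to "0001".
    print(new_word, hamming(stored, new_word))
```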

  • Probabilistic Analysis of the Network Reliability Problem on Random Graph Ensembles

    Akiyuki YANO  Tadashi WADAYAMA  

     
    PAPER-Networks and Network Coding

    Vol: E99-A No:12, Page(s): 2218-2225

    In the field of computer science, the network reliability problem of evaluating the network failure probability has been extensively investigated. For a given undirected graph G, the network failure probability is the probability that edge failures (i.e., edge erasures) make G disconnected, where edge failures are assumed to occur independently with the same probability. The main contributions of the present paper are upper and lower bounds on the expected network failure probability. We assume a simple random graph ensemble that is closely related to the Erdős-Rényi random graph ensemble, and the upper and lower bounds exhibit the typical behavior of the network failure probability. The proof is based on the fact that the cut-set space of G is a linear space over F2 spanned by the incidence matrix of G. The present study shows a close relationship between the ensemble analysis of the expected network failure probability and the ensemble analysis of the error detection probability of LDGM codes with column weight 2.
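
    The network failure probability analyzed above can be estimated for a small graph by direct Monte Carlo simulation; the sketch below (illustrative only, unrelated to the paper's ensemble bounds) erases each edge independently with probability p and checks whether the surviving graph stays connected:

```python
# Monte Carlo estimate of the network failure probability of a small graph:
# each edge fails independently with probability p; "failure" means the graph
# becomes disconnected. Illustrative only; the paper derives analytic bounds.
import random

def is_connected(n, edges):
    """Iterative depth-first search connectivity check on vertices 0..n-1."""
    if n == 0:
        return True
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def failure_probability(n, edges, p, trials=100_000, seed=0):
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        surviving = [e for e in edges if rng.random() >= p]
        failures += not is_connected(n, surviving)
    return failures / trials

if __name__ == "__main__":
    # A ring on 4 vertices; it disconnects once 2 or more edges fail.
    ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
    print(failure_probability(4, ring, p=0.1))   # roughly 0.05 for p = 0.1
```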
