The search functionality is under construction.

Keyword Search Result

[Keyword] probabilistic decoding (2 hits)

1-2 of 2 hits
  • Performance Evaluation of Finite Sparse Signals for Compressed Sensing Frameworks

    Jin-Taek SEONG  

     
    LETTER-Fundamentals of Information Systems

  Publicized:
    2017/11/06
      Vol:
    E101-D No:2
      Page(s):
    531-534

    In this paper, we develop a recovery algorithm for sparse signals in a compressed sensing (CS) framework over finite fields. A basic CS framework for discrete rather than continuous signals is established, from the linear measurement step through reconstruction. Given a predetermined prior distribution of the sparse signal, we reconstruct it using a message-passing algorithm and evaluate its performance through simulation. We compare our simulation results with theoretical bounds obtained from probabilistic analysis.
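    The abstract describes a measurement model y = Ax (mod q) with a sparse x over a finite field. As a minimal, self-contained illustration of that model, the sketch below recovers a k-sparse vector by exhaustive search over supports. This stands in for the paper's message-passing reconstruction, whose details are not given in the abstract; all sizes, names, and the prime q are illustrative assumptions.

    ```python
    import itertools
    import numpy as np

    def measure(A, x, q):
        """Linear measurement over GF(q): y = A x (mod q)."""
        return A.dot(x) % q

    def brute_force_recover(A, y, q, k):
        """Recover a k-sparse x consistent with y = A x (mod q) by
        exhaustive search over supports and nonzero values.
        (Illustration only; the paper uses message passing instead.)"""
        n = A.shape[1]
        for support in itertools.combinations(range(n), k):
            for vals in itertools.product(range(1, q), repeat=k):
                x = np.zeros(n, dtype=int)
                x[list(support)] = vals
                if np.array_equal(measure(A, x, q), y):
                    return x
        return None

    # Tiny example over GF(5): m=4 measurements, n=6 unknowns, sparsity k=2
    rng = np.random.default_rng(0)
    q, m, n, k = 5, 4, 6, 2
    A = rng.integers(0, q, size=(m, n))
    x_true = np.zeros(n, dtype=int)
    x_true[[1, 4]] = [3, 2]
    y = measure(A, x_true, q)
    x_hat = brute_force_recover(A, y, q, k)
    ```

    The exhaustive search is exponential in n and k; the point of the message-passing approach in the paper is precisely to avoid this cost by exploiting the prior distribution of the signal.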

  • A Comparison between "Most-Reliable-Basis Reprocessing" Strategies

    Antoine VALEMBOIS  Marc FOSSORIER  

     
    PAPER-Coding Theory

      Vol:
    E85-A No:7
      Page(s):
    1727-1741

    In this semi-tutorial paper, reliability-based decoding approaches that reprocess the most reliable information set are investigated. The paper homogenizes and compares earlier, separate studies, improving overall transparency and completing each one with techniques provided by the others. A few sensible improvements are also suggested. The main goal, however, remains to integrate and compare recent works based on a similar general approach, which until now have unfortunately been developed in parallel with little effort at comparison. Their respective advantages and disadvantages, especially in terms of average or maximum complexity, are elaborated. We focus on suboptimum decoding, although some of the works we refer to were developed for maximum-likelihood decoding (MLD). No quantitative error-performance analysis is provided, but we are in a position to benefit from some qualitative considerations and to compare different strategies in terms of higher or lower expected error performance at the same complexity. Simulations show, however, that all considered approaches perform very closely to one another, which was not obvious at first sight. The simplest strategy also proves the fastest in CPU time, but we indicate ways to implement the others so that they come very close in this respect as well. Beyond relying on the same intuitive principle, the studied algorithms are thus also unified from the point of view of their error performance and computational cost.
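    The common principle the abstract refers to can be sketched as generic order-i reprocessing in the style of ordered-statistics decoding: sort positions by reliability, put the generator matrix in systematic form on the most reliable basis (MRB), then re-encode low-weight flip patterns of the hard decisions and keep the closest codeword. This is a textbook-style sketch, not any specific strategy compared in the paper; the [7,4] Hamming code and all names are illustrative.

    ```python
    import itertools
    import numpy as np

    def gf2_systematize(G):
        """Reduce G (k x n over GF(2)) so its first k columns form an
        identity, swapping columns when a pivot is missing.
        Returns the reduced matrix and the column permutation used."""
        G = G.copy() % 2
        k, n = G.shape
        perm = list(range(n))
        for i in range(k):
            pivot = next(j for j in range(i, n)
                         if any(G[r, j] for r in range(i, k)))
            G[:, [i, pivot]] = G[:, [pivot, i]]
            perm[i], perm[pivot] = perm[pivot], perm[i]
            row = next(r for r in range(i, k) if G[r, i])
            G[[i, row]] = G[[row, i]]
            for r in range(k):
                if r != i and G[r, i]:
                    G[r] ^= G[i]
        return G, perm

    def osd_decode(G, r, order=1):
        """Order-`order` most-reliable-basis reprocessing (OSD-style sketch).
        r: received real values (BPSK mapping: bit b -> 1 - 2b)."""
        k, n = G.shape
        rel = np.argsort(-np.abs(r))           # positions, most reliable first
        Gp, perm = gf2_systematize(G[:, rel])  # basis on most reliable columns
        order_map = rel[perm]                  # original index per permuted column
        hard = (r[order_map[:k]] < 0).astype(int)  # hard decisions on the MRB
        best, best_metric = None, np.inf
        for w in range(order + 1):             # reprocess flip patterns of weight <= order
            for flips in itertools.combinations(range(k), w):
                msg = hard.copy()
                msg[list(flips)] ^= 1
                cw_perm = msg.dot(Gp) % 2      # re-encode the candidate MRB word
                cw = np.zeros(n, dtype=int)
                cw[order_map] = cw_perm
                metric = np.sum((r - (1 - 2 * cw)) ** 2)
                if metric < best_metric:
                    best, best_metric = cw, metric
        return best

    # Example: [7,4] Hamming code, one error on the least reliable position
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])
    msg = np.array([1, 0, 1, 1])
    cw = msg.dot(G) % 2                        # transmitted codeword
    r = np.array([-0.9, 0.8, -1.1, -0.7, 1.2, -0.6, -0.2])  # bit 6 received in error
    decoded = osd_decode(G, r, order=1)
    ```

    The hard decision on position 6 is wrong, but because that position is the least reliable it falls outside the MRB, and re-encoding from the reliable basis recovers the transmitted codeword. The complexity trade-offs the paper compares concern how such flip patterns are ordered and pruned.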