In this semi-tutorial paper, reliability-based decoding approaches that reprocess the most reliable information set are investigated. The paper homogenizes and compares previously separate studies, improving their overall transparency and completing each one with techniques borrowed from the others. A few sensible improvements are also suggested. The main goal, however, remains to integrate and compare recent works built on a similar general approach, which have so far been pursued in parallel with little effort at comparison. Their respective advantages and disadvantages, especially in terms of average or maximum complexity, are elaborated. We focus on suboptimum decoding, although some of the works to which we refer were developed for maximum-likelihood decoding (MLD). No quantitative error-performance analysis is provided, but qualitative considerations allow us to compare the strategies in terms of expected error performance at the same complexity. Simulations show, however, that all of the considered approaches perform very close to one another, which was not obvious at first sight. The simplest strategy also proves the fastest in terms of CPU time, but we indicate ways to implement the others so that they come very close in this respect as well. Besides relying on the same intuitive principle, the studied algorithms are thus also unified in terms of error performance and computational cost.
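As a rough illustration of the common principle behind these strategies, the following is a minimal Python sketch of order-1 most-reliable-basis (MRB) reprocessing, in the spirit of ordered-statistics decoding. It implements none of the paper's specific algorithms; the function name mrb_reprocess, the BPSK sign convention, and the assumption of a full-rank binary generator matrix are ours.

import numpy as np

def mrb_reprocess(G, y, order=1):
    """Sketch of most-reliable-basis reprocessing for a binary (n, k)
    linear code with generator matrix G (assumed full rank) and soft
    BPSK values y (sign gives the hard bit, magnitude the reliability).
    Returns the best candidate codeword in the original bit order."""
    k, n = G.shape
    hard = (y < 0).astype(int)         # hard decisions: negative -> bit 1
    perm = np.argsort(-np.abs(y))      # most reliable positions first

    # Gaussian elimination on the permuted generator matrix: the first k
    # linearly independent permuted columns form the most reliable basis.
    Gp = G[:, perm] % 2
    basis, row = [], 0
    for col in range(n):
        if row == k:
            break
        pivots = np.nonzero(Gp[row:, col])[0]
        if pivots.size == 0:
            continue                   # dependent column, skip it
        Gp[[row, row + pivots[0]]] = Gp[[row + pivots[0], row]]
        for r in range(k):
            if r != row and Gp[r, col]:
                Gp[r] = (Gp[r] + Gp[row]) % 2
        basis.append(col)
        row += 1

    info = hard[perm][basis]           # hard decisions on the MRB
    # Reprocessing: order-0 re-encodes the MRB as is; order-1 also tries
    # flipping each single basis bit, keeping the best candidate.
    patterns = [np.zeros(k, dtype=int)]
    if order >= 1:
        patterns += list(np.eye(k, dtype=int))
    best, best_metric = None, np.inf
    for e in patterns:
        cand = ((info ^ e) @ Gp) % 2   # candidate codeword, permuted order
        # Discrepancy metric: total reliability of the positions where the
        # candidate disagrees with the hard decisions (smaller is better).
        metric = np.abs(y[perm])[cand != hard[perm]].sum()
        if metric < best_metric:
            best, best_metric = cand, metric
    out = np.empty(n, dtype=int)
    out[perm] = best                   # undo the reliability permutation
    return out

# Example: decode a noisy soft-output word with the (7, 4) Hamming code.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 0, 1]])
y = np.array([0.9, 1.2, -0.3, 0.8, 1.1, -1.0, -0.7])
print(mrb_reprocess(G, y, order=1))

Higher reprocessing orders enumerate larger sets of basis-bit flips; roughly speaking, the strategies compared in the paper differ in how such candidates are scheduled and when reprocessing stops.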
Antoine VALEMBOIS, Marc FOSSORIER, "A Comparison between "Most-Reliable-Basis Reprocessing" Strategies" in IEICE TRANSACTIONS on Fundamentals,
vol. E85-A, no. 7, pp. 1727-1741, July 2002.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/e85-a_7_1727/_p
@ARTICLE{e85-a_7_1727,
author={Antoine VALEMBOIS and Marc FOSSORIER},
journal={IEICE TRANSACTIONS on Fundamentals},
title={A Comparison between "Most-Reliable-Basis Reprocessing" Strategies},
year={2002},
volume={E85-A},
number={7},
pages={1727-1741},
month={July},}
TY - JOUR
TI - A Comparison between "Most-Reliable-Basis Reprocessing" Strategies
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 1727
EP - 1741
AU - Antoine VALEMBOIS
AU - Marc FOSSORIER
PY - 2002
JO - IEICE TRANSACTIONS on Fundamentals
VL - E85-A
IS - 7
Y1 - 2002/07//
ER -