
Author Search Result

[Author] Satoshi TAKABE (2 hits)

  • Proximal Decoding for LDPC Codes

    Tadashi WADAYAMA, Satoshi TAKABE

     
    PAPER-Coding Theory and Techniques

    Publicized: 2022/09/01
    Vol: E106-A No:3
    Page(s): 359-367

    This paper presents a novel optimization-based decoding algorithm for LDPC codes. The proposed decoding algorithm is based on a proximal gradient method for solving an approximate maximum a posteriori (MAP) decoding problem. The key idea of the proposed algorithm is to use a code-constraint polynomial, which penalizes vectors far from a codeword, as a regularizer in the approximate MAP objective function. A code proximal operator is naturally derived from the code-constraint polynomial. The proposed algorithm, called proximal decoding, can be described by a simple recursive formula consisting of a gradient descent step on the negative log-likelihood function corresponding to the channel conditional probability density function and a code proximal operation with respect to the code-constraint polynomial. Proximal decoding is experimentally shown to be applicable to several non-trivial channel models, such as LDPC-coded massive MIMO channels, correlated Gaussian noise channels, and nonlinear vector channels. In particular, in MIMO channels, proximal decoding outperforms known massive MIMO detection algorithms, such as an MMSE detector with belief propagation decoding. The simple optimization-based formulation of proximal decoding opens a way to develop novel signal processing algorithms involving LDPC codes.
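
    A minimal sketch of the recursion the abstract describes, not the paper's exact algorithm: it assumes an AWGN channel, bipolar (+1/-1) symbols, and an illustrative code-constraint polynomial (a bipolar penalty plus a per-check product penalty), and approximates the code proximal operation by a gradient step on that penalty. The function name proximal_decode_sketch and the parameters gamma, rho, and iters are hypothetical, introduced only for illustration.

    import numpy as np

    def proximal_decode_sketch(y, H, sigma2, gamma=0.05, rho=0.05, iters=100):
        """Illustrative proximal-decoding-style iteration (sketch, not the paper's algorithm)."""
        x = np.copy(y)                                  # real-valued estimate of the bipolar codeword
        checks = [np.nonzero(row)[0] for row in H]      # variable indices of each parity check
        for _ in range(iters):
            # Gradient step on the AWGN negative log-likelihood ||y - x||^2 / (2*sigma2).
            x = x - gamma * (x - y) / sigma2
            # Approximate code proximal step: gradient of an illustrative penalty that
            # pushes x toward +/-1 values whose product over every check equals +1.
            g = 4.0 * x * (x ** 2 - 1.0)                # d/dx_j of sum_j (x_j^2 - 1)^2
            for idx in checks:
                p = np.prod(x[idx])
                for k, j in enumerate(idx):
                    others = np.prod(np.delete(x[idx], k))
                    g[j] += 2.0 * (p - 1.0) * others    # d/dx_j of (prod_{i in check} x_i - 1)^2
            x = x - rho * g
        return (x < 0).astype(int)                      # BPSK demapping: +1 -> 0, -1 -> 1

    With a parity-check matrix H and a received vector y, estimated bits would be obtained as proximal_decode_sketch(y, H, sigma2=0.5); the step sizes would need tuning per code and noise level.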

  • Convergence Acceleration via Chebyshev Step: Plausible Interpretation of Deep-Unfolded Gradient Descent

    Satoshi TAKABE, Tadashi WADAYAMA

     
    PAPER-Numerical Analysis and Optimization

    Publicized: 2022/01/25
    Vol: E105-A No:8
    Page(s): 1110-1120

    Deep unfolding is a promising deep-learning technique whose network architecture is based on expanding the recursive structure of existing iterative algorithms. Although deep unfolding realizes convergence acceleration, its theoretical aspects have not yet been fully revealed. This study presents a theoretical analysis of the convergence acceleration in deep-unfolded gradient descent (DUGD), whose trainable parameters are step sizes. We propose a plausible interpretation of the step-size parameters learned in DUGD by introducing the principle of Chebyshev steps derived from Chebyshev polynomials. Using Chebyshev steps in gradient descent (GD) enables us to bound the spectral radius of the matrix governing the convergence speed of GD, leading to a tight upper bound on the convergence rate. Numerical results show that the Chebyshev steps explain the learned step-size parameters in DUGD well.
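
    The Chebyshev steps referred to in the abstract can be sketched as step sizes taken from the reciprocals of the roots of a Chebyshev polynomial rescaled to the eigenvalue interval [lam_min, lam_max] of the matrix governing GD convergence. The sketch below, assuming a quadratic objective f(x) = 0.5 x^T A x - b^T x with a symmetric positive definite A, compares such a schedule against a constant step 1/lam_max; chebyshev_steps and gd_quadratic are illustrative names, and the toy problem is made up for demonstration.

    import numpy as np

    def chebyshev_steps(lam_min, lam_max, T):
        """Reciprocals of the roots of the degree-T Chebyshev polynomial rescaled to [lam_min, lam_max]."""
        t = np.arange(T)
        roots = (lam_max + lam_min) / 2.0 \
              + (lam_max - lam_min) / 2.0 * np.cos((2 * t + 1) * np.pi / (2 * T))
        return 1.0 / roots

    def gd_quadratic(A, b, steps):
        """Gradient descent on f(x) = 0.5 x^T A x - b^T x with a prescribed step-size schedule."""
        x = np.zeros_like(b)
        for gamma in steps:
            x = x - gamma * (A @ x - b)
        return x

    # Toy comparison on a random symmetric positive definite system.
    rng = np.random.default_rng(0)
    M = rng.standard_normal((50, 50))
    A = M @ M.T / 50 + 0.1 * np.eye(50)
    b = rng.standard_normal(50)
    x_star = np.linalg.solve(A, b)
    lam = np.linalg.eigvalsh(A)              # eigenvalues in ascending order
    T = 30
    err_cheb = np.linalg.norm(gd_quadratic(A, b, chebyshev_steps(lam[0], lam[-1], T)) - x_star)
    err_const = np.linalg.norm(gd_quadratic(A, b, np.full(T, 1.0 / lam[-1])) - x_star)
    print(f"error after {T} steps  Chebyshev: {err_cheb:.3e}  constant 1/lam_max: {err_const:.3e}")

    In exact arithmetic the final iterate does not depend on the order in which the Chebyshev steps are applied, although the ordering can affect the size of intermediate iterates in finite precision.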