
Author Search Results

[Author] Masao YAMAGISHI (6 hits)

  • Exploiting Group Sparsity in Nonlinear Acoustic Echo Cancellation by Adaptive Proximal Forward-Backward Splitting

    Hiroki KURODA  Shunsuke ONO  Masao YAMAGISHI  Isao YAMADA  

     
    PAPER

Vol: E96-A  No: 10  Page(s): 1918-1927

In this paper, we propose the use of group sparsity in the adaptive learning of second-order Volterra filters for the nonlinear acoustic echo cancellation problem. Group sparsity means sparsity across groups: a vector is separated into groups, and most groups contain only approximately zero-valued entries. First, we provide theoretical evidence that second-order Volterra systems tend to exhibit group sparsity under natural assumptions. Next, we propose an algorithm that applies the adaptive proximal forward-backward splitting method to a carefully designed cost function in order to exploit the group sparsity effectively. The designed cost function is the sum of the weighted group l1 norm, which promotes group sparsity, and a weighted sum of squared distances to the data-fidelity sets used in adaptive filtering algorithms. Finally, numerical examples show that the proposed method outperforms a sparsity-aware algorithm in both system mismatch and echo return loss enhancement.
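
    A minimal sketch of the group soft-thresholding step, i.e., the proximity operator of the weighted group l1 norm, which a proximal forward-backward splitting scheme of this kind applies at each iteration. The grouping and weights below are illustrative placeholders, not the ones derived in the paper:

```python
import numpy as np

def prox_weighted_group_l1(x, groups, weights, gamma):
    """Proximity operator of gamma * sum_g w_g * ||x_g||_2
    (blockwise soft-thresholding): each group is either shrunk
    toward zero or annihilated entirely, promoting group sparsity."""
    y = x.copy()
    for g, w in zip(groups, weights):
        norm = np.linalg.norm(x[g])
        if norm <= gamma * w:
            y[g] = 0.0                                # whole group set to zero
        else:
            y[g] = (1.0 - gamma * w / norm) * x[g]    # shrink the group
    return y

# usage on a hypothetical Volterra coefficient vector split into 3 groups
x = np.random.randn(12)
groups = [slice(0, 4), slice(4, 8), slice(8, 12)]
print(prox_weighted_group_l1(x, groups, weights=[1.0, 1.0, 1.0], gamma=0.5))
```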

  • A Deep Monotone Approximation Operator Based on the Best Quadratic Lower Bound of Convex Functions

    Masao YAMAGISHI  Isao YAMADA  

     
    PAPER

Vol: E91-A  No: 8  Page(s): 1858-1866

This paper presents a closed-form solution to the problem of constructing the best lower bound of a convex function under certain conditions. The function is assumed (I) to be bounded below by -ρ, and (II) to be differentiable with a Lipschitz continuous derivative with Lipschitz constant L. To construct the lower bound, it is also assumed that we can use the values ρ and L together with the values of the function and its derivative at one specified point. Using the proposed lower bound, we derive a computationally efficient deep monotone approximation operator to the level set of the function. This operator realizes a better approximation than the subgradient projection, which has been utilized as a monotone approximation operator to level sets of differentiable as well as nonsmooth convex functions. Therefore, the proposed operator can improve many signal processing algorithms that are essentially based on the subgradient projection.
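
    For reference, a minimal sketch of the baseline operator that the paper improves upon: the subgradient projection onto the zero level set {y : f(y) <= 0}. The paper's deep monotone approximation operator, derived from its best quadratic lower bound, is not reproduced here:

```python
import numpy as np

def subgradient_projection(x, f_x, grad_x):
    """Subgradient projection onto the level set {y : f(y) <= 0},
    given f(x) and a (sub)gradient of f at x. This is the classical
    monotone approximation operator that the paper's deep operator
    is designed to outperform."""
    if f_x <= 0:
        return x                                   # already inside the level set
    # step onto the zero set of the linear lower bound f(x) + <grad, y - x>
    return x - (f_x / (grad_x @ grad_x)) * grad_x
```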

  • A Robust Canonical Polyadic Tensor Decomposition via Structured Low-Rank Matrix Approximation

    Riku AKEMA  Masao YAMAGISHI  Isao YAMADA  

     
    PAPER-Digital Signal Processing

Publicized: 2021/06/23  Vol: E105-A  No: 1  Page(s): 11-24

The Canonical Polyadic Decomposition (CPD) is the tensor analog of the Singular Value Decomposition (SVD) of a matrix and has many data science applications, including signal processing and machine learning. The Alternating Least Squares (ALS) algorithm has been used extensively for the CPD. Although the ALS algorithm is simple, it is sensitive to noise in the data tensor. In this paper, we propose a novel strategy to realize noise suppression for the CPD. The proposed strategy consists of two steps: (Step 1) denoising the given tensor and (Step 2) solving the exact CPD of the denoised tensor. Step 1 is realized by solving a structured low-rank approximation with the Douglas-Rachford splitting algorithm, and Step 2 by solving the simultaneous diagonalization of a matrix tuple constructed from the denoised tensor with the DODO method. Numerical experiments show that the proposed algorithm works well even in typical cases where the ALS algorithm suffers from the so-called bottleneck/swamp effect.
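
    As context, a minimal NumPy sketch of one sweep of the ALS baseline mentioned above (not the proposed denoise-then-exact-CPD strategy); the factor names and the synthetic test tensor are illustrative:

```python
import numpy as np

def als_sweep(T, A, B, C):
    """One sweep of Alternating Least Squares for a rank-R CPD of a
    3-way tensor T ~ sum_r a_r o b_r o c_r: update each factor by
    least squares with the other two held fixed."""
    def update(axes, X, Y):
        M = np.einsum(axes, T, X, Y)      # contract T with the fixed factors
        G = (X.T @ X) * (Y.T @ Y)         # Gram matrix (Hadamard product)
        return np.linalg.solve(G, M.T).T  # solve F G = M for the free factor
    A = update('ijk,jr,kr->ir', B, C)
    B = update('ijk,ir,kr->jr', A, C)
    C = update('ijk,ir,jr->kr', A, B)
    return A, B, C

# usage on a small noise-free rank-2 tensor, warm-started near the truth
rng = np.random.default_rng(0)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = A0 + 0.1, B0.copy(), C0.copy()
for _ in range(50):
    A, B, C = als_sweep(T, A, B, C)
print(np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)))
```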

  • Approximate Simultaneous Diagonalization of Matrices via Structured Low-Rank Approximation

    Riku AKEMA  Masao YAMAGISHI  Isao YAMADA  

     
    PAPER-Digital Signal Processing

Publicized: 2020/10/15  Vol: E104-A  No: 4  Page(s): 680-690

Approximate Simultaneous Diagonalization (ASD) is the problem of finding a common similarity transformation which approximately diagonalizes a given square-matrix tuple. Many data science problems have been reduced to ASD through ingenious modelling. For ASD, the so-called Jacobi-like methods have been used extensively. However, these methods are not guaranteed to suppress the magnitude of the off-diagonal entries of the transformed tuple even if the given tuple has an exact common diagonalizer, i.e., even if it is simultaneously diagonalizable. In this paper, to establish an alternative powerful strategy for ASD, we present a novel two-step strategy, called the Approximate-Then-Diagonalize-Simultaneously (ATDS) algorithm. The ATDS algorithm decomposes ASD into (Step 1) finding a simultaneously diagonalizable tuple near the given one; and (Step 2) finding a common similarity transformation which exactly diagonalizes the tuple obtained in Step 1. Step 1 is realized by solving a Structured Low-Rank Approximation (SLRA) with Cadzow's algorithm. In Step 2, by exploiting the idea in the constructive proof of the conditions for exact simultaneous diagonalizability, we obtain an exact common diagonalizer of the tuple from Step 1 as a solution to the original ASD. Unlike the Jacobi-like methods, the ATDS algorithm is guaranteed to find an exact common diagonalizer whenever the given tuple happens to be simultaneously diagonalizable. Numerical experiments show that the ATDS algorithm achieves better performance than the Jacobi-like methods.
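
    A minimal sketch of the alternating-projection idea behind Cadzow's algorithm, shown for the familiar Hankel low-rank case. The paper's Step 1 applies the same idea to a different structure, one encoding simultaneous diagonalizability, so this is only an illustration:

```python
import numpy as np

def cadzow(signal, rank, window, iters=100):
    """Cadzow iteration for structured low-rank approximation (SLRA):
    alternate a truncated-SVD projection onto matrices of rank <= rank
    with a projection back onto the structure (here Hankel, enforced by
    anti-diagonal averaging)."""
    s = np.asarray(signal, dtype=float)
    m = len(s) - window + 1
    for _ in range(iters):
        H = np.array([s[i:i + m] for i in range(window)])   # Hankel lift: H[i, j] = s[i + j]
        U, sv, Vt = np.linalg.svd(H, full_matrices=False)
        H = (U[:, :rank] * sv[:rank]) @ Vt[:rank]           # rank projection
        # structure projection: average each anti-diagonal back into a signal
        s = np.array([H[::-1].diagonal(k).mean() for k in range(-window + 1, m)])
    return s

# usage: denoise a two-sinusoid signal (rank-4 Hankel structure)
t = np.arange(64)
noisy = np.sin(0.3 * t) + 0.5 * np.sin(0.8 * t) \
    + 0.1 * np.random.default_rng(1).standard_normal(64)
print(cadzow(noisy, rank=4, window=20)[:5])
```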

  • A Unified Design of Generalized Moreau Enhancement Matrix for Sparsity Aware LiGME Models

    Yang CHEN  Masao YAMAGISHI  Isao YAMADA  

     
    PAPER-Digital Signal Processing

Publicized: 2023/02/14  Vol: E106-A  No: 8  Page(s): 1025-1036

In this paper, we propose a unified algebraic design of the generalized Moreau enhancement matrix (GME matrix) for the Linearly involved Generalized-Moreau-Enhanced (LiGME) model. The LiGME model has been established as a framework for constructing linearly involved nonconvex regularizers for sparsity-aware (or low-rank-aware) estimation, where the design of the GME matrix is the key to guaranteeing the overall convexity of the model. The proposed design is applicable to general linear operators involved in the regularizer of the LiGME model and does not require any eigendecomposition or iterative computation. We also present an application of the LiGME model with the proposed GME matrix to a group-sparsity-aware least squares estimation problem. Numerical experiments demonstrate the effectiveness of the proposed GME matrix in the LiGME model.
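
    A small numerical sanity check, assuming the standard LiGME setting with least-squares data fidelity (1/2)||y - Ax||^2, a regularizer composed with a linear operator L, GME matrix B, and regularization weight mu: overall convexity is tied to positive semidefiniteness of A^T A - mu L^T B^T B L. The sketch merely tests a candidate B by eigendecomposition, which is precisely the computation the paper's algebraic design avoids:

```python
import numpy as np

def ligme_convexity_gap(A, L, B, mu):
    """Smallest eigenvalue of A^T A - mu * L^T B^T B L.
    In LiGME-type models, overall convexity of the cost is tied to this
    matrix being positive semidefinite; a return value >= 0 passes the
    check. The paper constructs B so that the condition holds without
    this kind of eigendecomposition."""
    S = A.T @ A - mu * (L.T @ B.T @ B @ L)
    return np.linalg.eigvalsh(S).min()

# usage with illustrative sizes; a small B keeps the model convex
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 6))
L, B = np.eye(6), 0.1 * np.eye(6)
print(ligme_convexity_gap(A, L, B, mu=1.0))
```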

  • Nonlinear Acoustic Echo Cancellation by Exact-Online Adaptive Alternating Minimization

    Hiroki KURODA  Masao YAMAGISHI  Isao YAMADA  

     
    PAPER-Digital Signal Processing

Vol: E99-A  No: 11  Page(s): 2027-2036

For nonlinear acoustic echo cancellation, we present an algorithm that estimates the threshold of the clipping effect and the room impulse response vector by suppressing their time-varying cost function. A common way to suppress a time-varying cost function of a pair of parameters is to alternatingly minimize the function with respect to each parameter while keeping the other fixed, which we refer to as adaptive alternating minimization. However, since the cost function for the threshold is nonconvex, conventional methods approximate the exact minimizations by gradient descent updates, which in some cases seriously degrades the estimation accuracy. In this paper, by exploiting the fact that the cost function for the threshold is piecewise quadratic, we propose to minimize it exactly in closed form while suppressing the cost function for the impulse response vector in an online manner, which we call exact-online adaptive alternating minimization. The proposed method is expected to approximate the adaptive alternating minimization strategy more efficiently than the conventional methods. Numerical experiments demonstrate the efficacy of the proposed method.
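
    A toy scalar illustration of the "exact" step, assuming a hard-clipping model clip(u, -t, t) and, as a simplification, the echo-path contribution absorbed into the targets d_n. On each interval between consecutive sorted |u_n| the set of clipped samples is fixed, so the cost is quadratic in t and each piece's minimizer has a closed form; the global minimizer is found by comparing pieces. This is not the paper's full cost function, only the piecewise-quadratic mechanism:

```python
import numpy as np

def exact_clip_threshold(u, d):
    """Exactly minimize J(t) = sum_n (d_n - clip(u_n, -t, t))^2 over t >= 0.
    Between consecutive sorted |u_n| the set of clipped samples is fixed,
    so J is quadratic there; we take each piece's clamped stationary point
    as a candidate and keep the best one."""
    u, d = np.asarray(u, float), np.asarray(d, float)
    J = lambda t: np.sum((d - np.clip(u, -t, t)) ** 2)
    knots = np.concatenate(([0.0], np.sort(np.abs(u))))
    best_t = knots[-1]                      # for t >= max|u| nothing is clipped
    for lo, hi in zip(knots[:-1], knots[1:]):
        clipped = np.abs(u) >= hi           # samples clipped on this piece
        m = clipped.sum()
        # stationary point of sum_clipped (d_n - sign(u_n) t)^2, clamped to the piece
        t = np.clip(np.sum(np.sign(u[clipped]) * d[clipped]) / m, lo, hi) if m else hi
        if J(t) < J(best_t):
            best_t = t
    return best_t

# usage on synthetic data with a true threshold of 0.8
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
d = np.clip(u, -0.8, 0.8) + 0.01 * rng.standard_normal(200)
print(exact_clip_threshold(u, d))
```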