Author Search Result

[Author] Masahiro YUKAWA (10 hits)

Showing 1-10 of 10 hits
  • Learning Sparse Graph with Minimax Concave Penalty under Gaussian Markov Random Fields

    Tatsuya KOYAKUMARU  Masahiro YUKAWA  Eduardo PAVEZ  Antonio ORTEGA  

     
    PAPER-Graphs and Networks
    Publicized: 2022/07/01 | Vol: E106-A No:1 | Page(s): 23-34

    This paper presents a convex-analytic framework for learning sparse graphs from data. While our problem formulation is inspired by an extension of the graphical lasso using the so-called combinatorial graph Laplacian framework, a key difference is the use of a nonconvex alternative to the l1 norm to attain graphs with better interpretability. Specifically, we use the weakly convex minimax concave penalty (the difference between the l1 norm and the Huber function), which is known to yield sparse solutions with lower estimation bias than l1 for regression problems. In our framework, the graph Laplacian is replaced in the optimization by a linear transform of the vector corresponding to its upper triangular part. Via a reformulation relying on Moreau's decomposition, we show that overall convexity is guaranteed by introducing a quadratic function into the cost function. The problem can be solved efficiently by the primal-dual splitting method, for which admissible conditions ensuring provable convergence are presented. Numerical examples show that the proposed method significantly outperforms existing graph learning methods with reasonable computation time.
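
    As a side note, the minimax concave (MC) penalty mentioned above is the difference between the l1 norm and the Huber function; a minimal scalar sketch under one common parameterization (the paper's exact scaling and edge weighting are not reproduced here) is shown below.

```python
import numpy as np

# A scalar sketch of the weakly convex minimax concave (MC) penalty as the
# difference between the absolute value and the Huber function. The particular
# parameterization (a single knee parameter beta) is an assumption; the paper's
# scaling may differ.
def huber(x, beta):
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= beta, x ** 2 / (2.0 * beta), np.abs(x) - beta / 2.0)

def mc_penalty(x, beta):
    """|x| - Huber_beta(x): behaves like |x| near zero and saturates at beta / 2."""
    return np.abs(x) - huber(x, beta)
```

    Unlike the l1 norm, this penalty stops growing once |x| exceeds beta, which is what reduces the estimation bias on large entries.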

  • An Efficient Distributed Power Control for Infeasible Downlink Scenarios--Global-Local Fixed-Point-Approximation Technique

    Noriyuki TAKAHASHI  Masahiro YUKAWA  Isao YAMADA  

     
    PAPER
    Vol: E89-A No:8 | Page(s): 2107-2118

    In this paper, we present an efficient downlink power control scheme for wireless networks based on two key ideas: (i) the global-local fixed-point-approximation technique (GLOFPAT) and (ii) the bottleneck removal criterion (BRC). The proposed scheme copes with all scenarios, including the infeasible case where no power allocation can provide all multiple-accessing users with the target quality of service (QoS). In the feasible case, the GLOFPAT efficiently computes the desired power allocation, which corresponds to the allocation achieved by conventional algorithms. In the infeasible case, the GLOFPAT offers valuable information for detecting bottleneck users, which deteriorate the overall QoS and are removed based on the BRC. The GLOFPAT is a mathematically sound distributed algorithm that approximates the desired power allocation as the unique fixed point of an isotone mapping. This fixed point of the global mapping is computed iteratively through fixed-point approximations of multiple distributed local mappings, which can be computed in parallel by the respective base stations. For proper detection of bottleneck users, a complete analysis of the GLOFPAT is presented with the aid of Tarski's fixed-point theorem. Extensive simulations demonstrate that the proposed scheme converges faster than the conventional algorithm and successfully increases the number of happy users receiving the target QoS.
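
    For intuition, the fixed-point view above can be illustrated with a generic synchronous power-control iteration; the sketch below is not the GLOFPAT algorithm itself, and the link-gain matrix, target SINRs, and noise levels are placeholders.

```python
import numpy as np

# Hypothetical illustration of power control as a fixed-point iteration of an
# isotone mapping (Yates-type standard interference function). This is NOT the
# distributed GLOFPAT of the paper; G, gamma, and noise are assumed placeholders.
def fixed_point_power_control(G, gamma, noise, n_iter=100):
    """G[i, j]: link gain from transmitter j to user i; gamma[i]: target SINR."""
    p = np.zeros(len(gamma))                                # start from zero power
    for _ in range(n_iter):
        interference = G @ p - np.diag(G) * p + noise       # sum_{j != i} G[i, j] * p[j] + noise[i]
        p = gamma * interference / np.diag(G)               # isotone update p <- I(p)
    return p

# Toy example with three users: if the targets are feasible, p converges to the
# unique fixed point; if not, the powers keep growing, signalling infeasibility.
G = np.array([[1.0, 0.1, 0.2],
              [0.1, 1.0, 0.1],
              [0.2, 0.1, 1.0]])
p_star = fixed_point_power_control(G, gamma=np.ones(3), noise=np.full(3, 0.01))
```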

  • Multi-Domain Adaptive Learning Based on Feasibility Splitting and Adaptive Projected Subgradient Method

    Masahiro YUKAWA  Konstantinos SLAVAKIS  Isao YAMADA  

     
    PAPER-Digital Signal Processing
    Vol: E93-A No:2 | Page(s): 456-466

    We propose multi-domain adaptive learning, which enables us to find a point meeting possibly time-varying specifications simultaneously in multiple domains, e.g., space, time, and frequency. The novel concept is based on the idea of feasibility splitting, i.e., dealing with feasibility in each individual domain. We show that the adaptive projected subgradient method (Yamada, 2003) realizes multi-domain adaptive learning by employing (i) a projected gradient operator with respect to a ‘fixed’ proximity function reflecting the time-invariant specifications and (ii) a subgradient projection with respect to ‘time-varying’ objective functions reflecting the time-varying specifications. The resulting algorithm is suitable for real-time implementation because it requires no more than metric projections onto closed convex sets, each of which accommodates the specification in its respective domain. A convergence analysis and numerical examples are presented.
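
    As a rough sketch of the second ingredient, one subgradient projection step onto the zero-level set of a convex function is given below; this is only the generic building block of the adaptive projected subgradient method, not the paper's multi-domain algorithm, and the hyperslab example is an assumed specification.

```python
import numpy as np

# One subgradient projection step onto the level set {w : theta(w) <= 0} of a
# convex function theta, the basic building block of the adaptive projected
# subgradient method. theta is assumed to return (value, subgradient) at w,
# and the relaxation parameter lam lies in (0, 2).
def subgradient_projection(w, theta, lam=1.0):
    val, grad = theta(w)
    if val <= 0.0 or not np.any(grad):       # already feasible, or no usable subgradient
        return w
    return w - lam * (val / np.dot(grad, grad)) * grad

# Example specification: a hyperslab |u.w - d| <= eps encoded as
# theta(w) = max(0, |u.w - d| - eps), as commonly used in set-theoretic filtering.
def make_hyperslab(u, d, eps):
    def theta(w):
        r = np.dot(u, w) - d
        val = max(0.0, abs(r) - eps)
        grad = np.sign(r) * u if val > 0.0 else np.zeros_like(u)
        return val, grad
    return theta
```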

  • Image Restoration with Multiple DirLOTs

    Natsuki AIZAWA  Shogo MURAMATSU  Masahiro YUKAWA  

     
    PAPER
    Vol: E96-A No:10 | Page(s): 1954-1961

    A directional lapped orthogonal transform (DirLOT) is an orthonormal transform whose basis is allowed to be anisotropic while retaining the symmetric, real-valued, and compact-support properties. Owing to its directional property, the DirLOT is superior to existing separable transforms such as the DCT and DWT in expressing diagonal edges and textures. The goal of this paper is to further enhance the ability of the DirLOT. To achieve this goal, we propose a novel image restoration technique using multiple DirLOTs. This paper generalizes the image denoising technique in [1] and expands the application of multiple DirLOTs by introducing a linear degradation operator P. The idea is to use multiple DirLOTs to construct a redundant dictionary. More precisely, the redundant dictionary is constructed as a union of symmetric orthonormal discrete wavelet transforms generated by DirLOTs. To select atoms fitting a target image from the dictionary, we formulate the image restoration problem as an l1-regularized least-squares problem, which can be solved efficiently by the iterative shrinkage/thresholding algorithm (ISTA). The proposed technique is beneficial in expressing multiple directions of edges and textures. Simulation results show that the proposed technique significantly outperforms the non-subsampled Haar wavelet transform in deblurring, super-resolution, and inpainting.
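
    Since the abstract names ISTA explicitly, a minimal sketch of ISTA for the l1-regularized least-squares problem is given below; the degradation operator P and the redundant DirLOT-based dictionary D are represented by a single placeholder matrix A, which is an assumption made only for this illustration.

```python
import numpy as np

# Minimal ISTA sketch for  minimize_x  0.5 * ||A @ x - y||^2 + lam * ||x||_1,
# where A would play the role of P @ D (degradation times redundant dictionary).
def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=200):
    step = 1.0 / np.linalg.norm(A, 2) ** 2                # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                          # gradient of the quadratic data term
        x = soft_threshold(x - step * grad, step * lam)   # shrinkage/thresholding step
    return x
```

    The restored image would then be obtained by applying the dictionary to the sparse coefficient vector x, so that only the atoms matching the image's edge directions are activated.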

  • A Fast Stochastic Gradient Algorithm: Maximal Use of Sparsification Benefits under Computational Constraints

    Masahiro YUKAWA  Wolfgang UTSCHICK  

     
    PAPER-Digital Signal Processing
    Vol: E93-A No:2 | Page(s): 467-475

    In this paper, we propose a novel stochastic gradient algorithm for efficient adaptive filtering. The basic idea is to sparsify the initial error vector and maximize the benefits of the sparsification under computational constraints. To this end, we formulate the algorithm-design task as a constrained optimization problem and derive its (non-trivial) closed-form solution. The computational constraints are formed by exploiting the fact that the energy of the sparsified error vector concentrates in its first few components. Numerical examples demonstrate that the proposed algorithm achieves convergence as fast as the computationally expensive method based on the optimization without the computational constraints.

  • Efficient Blind MAI Suppression in DS/CDMA Systems by Embedded Constraint Parallel Projection Techniques

    Masahiro YUKAWA  Renato L.G. CAVALCANTE  Isao YAMADA  

     
    PAPER-Digital Signal Processing
    Vol: E88-A No:8 | Page(s): 2062-2071

    This paper presents two novel blind set-theoretic adaptive filtering algorithms for suppressing multiple access interference (MAI), which is one of the central burdens in DS/CDMA systems. We naturally formulate the problem of MAI suppression as the asymptotic minimization of a sequence of cost functions under a linear constraint defined by the desired user's signature. The proposed algorithms embed the constraint into the direction of update, and thus the adaptive filter moves toward the optimal filter without stepping away from the constraint set. In addition, by using parallel processors, the proposed algorithms attain excellent performance with linear computational complexity. A geometric interpretation clarifies the advantage of the proposed methods over existing methods. Simulation results demonstrate that the proposed algorithms achieve (i) a much higher speed of convergence with better bit error rate performance than other blind methods and (ii) a much higher speed of convergence than the non-blind NLMS algorithm (indeed, the convergence speed of the proposed algorithms is comparable to that of the non-blind RLS algorithm).
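
    The constraint-embedding idea mentioned above can be sketched generically: any update direction is projected onto the subspace orthogonal to the desired user's signature, so a filter that satisfies the linear constraint keeps satisfying it. The snippet below shows only this idea, not the paper's parallel-projection algorithms; the constraint value 1 and the toy signature are assumptions.

```python
import numpy as np

# Embedding the linear constraint s^T w = 1 (s: desired user's signature) into
# the update direction: project any proposed direction d onto {x : s^T x = 0}
# before applying it, so the filter never leaves the constraint set.
def constrained_update(w, d, s):
    Q = np.eye(len(s)) - np.outer(s, s) / np.dot(s, s)   # orthogonal projector onto s's complement
    return w + Q @ d

s = np.array([1.0, -1.0, 1.0, 1.0])     # toy signature
w = s / np.dot(s, s)                    # initial filter satisfying s^T w = 1
w = constrained_update(w, d=np.random.randn(4), s=s)
assert np.isclose(s @ w, 1.0)           # constraint is preserved for any direction d
```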

  • An Efficient Adaptive Filtering Scheme Based on Combining Multiple Metrics

    Osamu TODA  Masahiro YUKAWA  Shigenobu SASAKI  Hisakazu KIKUCHI  

     
    PAPER-Digital Signal Processing
    Vol: E97-A No:3 | Page(s): 800-808

    We propose a novel adaptive filtering scheme named metric-combining normalized least mean square (MC-NLMS). The proposed scheme is based on iterative metric projections with a metric designed by convexly combining multiple metric matrices in an adaptive manner, thereby taking advantage of metrics that rely on multiple pieces of information. We compare the improved PNLMS (IPNLMS) algorithm with the natural proportionate NLMS (NPNLMS) algorithm, which is a special case of MC-NLMS, and show that the performance of NPNLMS is controllable through the combination coefficient, as opposed to IPNLMS. We also present an application to an acoustic echo cancellation problem and show the efficacy of the proposed scheme.
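
    For a rough picture of the scheme, the sketch below combines several metric matrices convexly and applies a relaxed metric projection onto the current data hyperplane; the specific metric matrices, combination coefficients, and update form are assumptions for illustration, and the paper's MC-NLMS additionally adapts the combination online.

```python
import numpy as np

# One NLMS-type step under a convexly combined metric M = sum_k lam[k] * G_k:
# a relaxed metric projection of w onto the data hyperplane {w : u^T w = d}.
def metric_nlms_step(w, u, d, metrics, lam, mu=0.5, delta=1e-8):
    M = sum(l * G for l, G in zip(lam, metrics))          # combined metric matrix
    Minv_u = np.linalg.solve(M, u)
    err = d - u @ w                                       # a priori error
    return w + mu * err * Minv_u / (u @ Minv_u + delta)   # metric projection, relaxed by mu
```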

  • Online Model-Selection and Learning for Nonlinear Estimation Based on Multikernel Adaptive Filtering

    Osamu TODA  Masahiro YUKAWA  

     
    PAPER-Digital Signal Processing
    Vol: E100-A No:1 | Page(s): 236-250

    We study the use of Gaussian kernels with a wide range of scales for nonlinear function estimation. The estimation task can then be split into two sub-tasks: (i) model selection and (ii) learning (parameter estimation) under the selected model. We propose a fully adaptive, all-in-one scheme that jointly carries out the two sub-tasks based on the multikernel adaptive filtering framework. The task is cast as an asymptotic minimization problem of an instantaneous fidelity function penalized by two types of block l1-norm regularizers. These regularizers enhance the sparsity of the solution in two different block structures, leading to efficient model selection and dictionary refinement. The adaptive generalized forward-backward splitting method is derived to deal with the asymptotic minimization problem. Numerical examples show that the scheme achieves model selection and learning simultaneously, and demonstrate its striking advantages over the multiple kernel learning (MKL) method called SimpleMKL.
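
    For reference, the multikernel model underlying the scheme can be sketched as a sum over Gaussian kernels of several scales, each with its own coefficient block; the dictionary, scales, and coefficients below are placeholders, whereas the paper learns them online under block-l1 regularization.

```python
import numpy as np

# Evaluating a multikernel expansion: Gaussian kernels at several scales share
# a dictionary of centers, and each scale has its own coefficient block.
def gaussian_kernel(x, c, sigma):
    return np.exp(-np.sum((x - c) ** 2) / (2.0 * sigma ** 2))

def multikernel_estimate(x, centers, coeffs, sigmas):
    """coeffs[m][n]: weight of the kernel with scale sigmas[m] and center centers[n]."""
    return sum(coeffs[m][n] * gaussian_kernel(x, c, sigmas[m])
               for m in range(len(sigmas))
               for n, c in enumerate(centers))
```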

  • Kernel Weights for Equalizing Kernel-Wise Convergence Rates of Multikernel Adaptive Filtering

    Kwangjin JEONG  Masahiro YUKAWA  

     
    PAPER-Algorithms and Data Structures
    Publicized: 2020/12/11 | Vol: E104-A No:6 | Page(s): 927-939

    Multikernel adaptive filtering is an attractive nonlinear approach to online estimation/tracking tasks. Despite its potential advantages over its single-kernel counterpart, the use of inappropriately weighted kernels may result in a negligible performance gain. In this paper, we propose an efficient recursive kernel weighting technique for multikernel adaptive filtering that activates all the kernels. The proposed weights equalize the convergence rates of all the corresponding partial coefficient errors, and are implemented via a metric design based on the weighting matrix. Numerical examples on synthetic and multiple real datasets show that the proposed technique exhibits better performance than manually tuned kernel weights and significantly outperforms the online multiple kernel regression algorithm.

  • Efficient Adaptive Stereo Echo Canceling Schemes Based on Simultaneous Use of Multiple State Data

    Masahiro YUKAWA  Isao YAMADA  

     
    PAPER-Speech/Acoustic Signal Processing
    Vol: E87-A No:8 | Page(s): 1949-1957

    In this paper, we propose two adaptive filtering schemes for stereophonic acoustic echo cancellation (SAEC), both based on the adaptive projected subgradient method (Yamada et al., 2003). To overcome the so-called non-uniqueness problem, the schemes utilize a preprocessing technique that generates two different states of the input signals. The first scheme simultaneously uses data from the two input states for fast convergence, whereas the other selects data based on a simple min-max criterion for stability. Besides this difference, the proposed schemes commonly enjoy (i) robustness against noise, achieved by introducing stochastic property sets, and (ii) linear computational complexity, since they are free from solving systems of linear equations. Numerical examples demonstrate that, even in noisy situations, the proposed schemes achieve (i) much faster and more stable convergence in the learning process and (ii) lower mis-identification of echo paths and higher Echo Return Loss Enhancement (ERLE) around the steady state, compared with the conventional technique.