Author Search Result

[Author] Osamu TODA (2 hits)

  • An Efficient Adaptive Filtering Scheme Based on Combining Multiple Metrics

    Osamu TODA  Masahiro YUKAWA  Shigenobu SASAKI  Hisakazu KIKUCHI  

    PAPER-Digital Signal Processing

    Vol: E97-A, No: 3, Page(s): 800-808

    We propose a novel adaptive filtering scheme named metric-combining normalized least mean square (MC-NLMS). The scheme is based on iterative metric projections with a metric designed by convexly combining multiple metric matrices in an adaptive manner, thereby taking advantage of metrics that draw on multiple pieces of information. We compare the improved PNLMS (IPNLMS) algorithm with the natural proportionate NLMS (NPNLMS) algorithm, a special case of MC-NLMS, and show that, unlike IPNLMS, the performance of NPNLMS can be controlled through the combination coefficient. We also present an application to an acoustic echo cancellation problem and demonstrate the efficacy of the proposed scheme.
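
    As a rough sketch of the kind of update the abstract describes, the Python fragment below combines several diagonal metric matrices convexly and applies one NLMS-type metric-projection step. The function name, the restriction to diagonal metrics, and the fixed combination weights lam are illustrative assumptions; the paper's adaptive rule for the combination coefficients is not reproduced here.

        import numpy as np

        def mc_nlms_step(w, x, d, metrics, lam, mu=0.5, eps=1e-8):
            # One NLMS-type update under a convex combination of diagonal metrics.
            # w: filter coefficients (N,); x: input regressor (N,); d: desired sample.
            # metrics: candidate diagonal metric matrices, each a 1-D array of shape (N,).
            # lam: convex combination weights (nonnegative, summing to one); assumed
            # fixed here, whereas the paper adapts them online.
            g = sum(l * m for l, m in zip(lam, metrics))  # combined diagonal metric G
            e = d - w @ x                                 # a priori estimation error
            gx = g * x                                    # G x (diagonal G)
            w = w + mu * e * gx / (x @ gx + eps)          # metric-projection step
            return w, e

    With a single all-ones metric and lam = [1.0], the step reduces to standard NLMS, while a proportionate gain vector in metrics yields a PNLMS-style update, consistent with the abstract's remark that NPNLMS is a special case of MC-NLMS.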

  • Online Model-Selection and Learning for Nonlinear Estimation Based on Multikernel Adaptive Filtering

    Osamu TODA  Masahiro YUKAWA  

    PAPER-Digital Signal Processing

    Vol: E100-A, No: 1, Page(s): 236-250

    We study the use of Gaussian kernels with a wide range of scales for nonlinear function estimation. The estimation task can then be split into two sub-tasks: (i) model selection and (ii) learning (parameter estimation) under the selected model. We propose a fully adaptive, all-in-one scheme that carries out the two sub-tasks jointly within the multikernel adaptive filtering framework. The task is cast as an asymptotic minimization of an instantaneous fidelity function penalized by two types of block ℓ1-norm regularizers. These regularizers enhance the sparsity of the solution in two different block structures, leading to efficient model selection and dictionary refinement. An adaptive generalized forward-backward splitting method is derived to solve the asymptotic minimization problem. Numerical examples show that the scheme achieves model selection and learning simultaneously, and demonstrate its striking advantages over the multiple kernel learning (MKL) method known as SimpleMKL.
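
    To make the block-sparsity mechanism concrete, the sketch below evaluates a multiscale Gaussian dictionary and applies the proximal operator of a block ℓ1-norm (group soft-thresholding), which can zero out entire coefficient blocks, e.g., all coefficients of one kernel scale. The function names and the grouping by scale are illustrative assumptions; this is only the proximal building block, not the paper's adaptive generalized forward-backward splitting algorithm.

        import numpy as np

        def gaussian_blocks(x, centers, scales):
            # Kernel evaluations of input x against all dictionary centers,
            # one block per Gaussian scale (centers: (M, D), x: (D,)).
            return [np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * s ** 2))
                    for s in scales]

        def block_soft_threshold(blocks, tau):
            # Proximal operator of a block l1-norm applied to coefficient blocks
            # aligned with the kernel features above: each block shrinks toward
            # zero and vanishes entirely once its Euclidean norm drops below tau,
            # which is what produces block-wise sparsity (pruning whole scales).
            out = []
            for h in blocks:
                n = np.linalg.norm(h)
                out.append(np.zeros_like(h) if n <= tau else (1.0 - tau / n) * h)
            return out

    Grouping the same coefficients by dictionary center instead of by scale would target dictionary refinement, the second block structure the abstract mentions.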