
Keyword Search Result

[Keyword] convergence rate (8 hits)

  • Data-Reuse Extended NLMS Algorithm Based on Optimized Time-Varying Step-Size for System Identification Open Access

    Hakan BERCAG  Osman KUKRER  Aykut HOCANIN  

     
    LETTER-Analog Signal Processing
    Publicized: 2024/01/11
    Vol: E107-A No:8
    Page(s): 1369-1373

    A new extended normalized least-mean-square (ENLMS) algorithm is proposed. A novel non-linear time-varying step-size (NLTVSS) formula is derived. Owing to the NLTVSS, the convergence rate of ENLMS increases as the number of data reuses L grows. ENLMS does not involve matrix inversion and thus avoids numerical instability issues.
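
    For orientation, a minimal sketch of a conventional NLMS update with a caller-supplied time-varying step size. The paper's NLTVSS formula and its data-reuse extension are not reproduced here; mu_fn and the other names are illustrative assumptions.

      import numpy as np

      def nlms_time_varying(x, d, num_taps, mu_fn, eps=1e-8):
          # x: input signal, d: desired signal, mu_fn: n -> step size in (0, 2)
          w = np.zeros(num_taps)
          e_hist = np.zeros(len(x))
          for n in range(num_taps, len(x)):
              u = x[n - num_taps + 1:n + 1][::-1]    # regressor, newest sample first
              e = d[n] - w @ u                       # a priori error
              w += mu_fn(n) * e * u / (u @ u + eps)  # normalized correction
              e_hist[n] = e
          return w, e_hist

      # system-identification toy run; the decaying schedule merely stands in
      # for the paper's NLTVSS formula
      rng = np.random.default_rng(0)
      x = rng.standard_normal(4000)
      h = rng.standard_normal(32)                    # unknown system to identify
      d = np.convolve(x, h)[:len(x)] + 1e-3 * rng.standard_normal(len(x))
      w, e = nlms_time_varying(x, d, num_taps=32,
                               mu_fn=lambda n: 1.0 / (1.0 + 1e-3 * n))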

  • Shrinkage Widely Linear Recursive Least Square Algorithms for Beamforming

    Huaming QIAN  Ke LIU  Wei WANG  

     
    PAPER-Antennas and Propagation
    Vol: E99-B No:7
    Page(s): 1532-1540

    Shrinkage widely linear recursive least squares (SWL-RLS) and an improved version, called structured shrinkage widely linear recursive least squares (SSWL-RLS), are proposed in this paper. By exploiting the relationship between the noise-free a posteriori and a priori error signals, the optimal forgetting factor can be obtained at each snapshot. In the implementation, although the a priori error signal is known, the noise-free a priori error is still required; it can be estimated with a known formula. Simulation results illustrate that the proposed algorithms converge faster and track better than the augmented RLS (A-RLS), augmented least mean square (A-LMS), and SWL-LMS algorithms.
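
    For context, a compact widely linear RLS on the augmented snapshot [x; conj(x)] with a fixed forgetting factor. The shrinkage-based, optimal time-varying forgetting factor that defines SWL-RLS/SSWL-RLS is not reproduced; this is only the standard widely linear baseline, with illustrative names.

      import numpy as np

      def wl_rls(x, d, lam=0.99, delta=100.0):
          # x: (N, M) complex snapshots, d: (N,) desired signal, lam: forgetting factor
          N, M = x.shape
          w = np.zeros(2 * M, dtype=complex)         # widely linear weight vector
          P = delta * np.eye(2 * M, dtype=complex)   # inverse-correlation estimate
          for n in range(N):
              xa = np.concatenate([x[n], np.conj(x[n])])  # augmented snapshot
              k = P @ xa / (lam + np.conj(xa) @ P @ xa)   # gain vector
              e = d[n] - np.conj(w) @ xa                  # a priori error
              w = w + k * np.conj(e)
              P = (P - np.outer(k, np.conj(xa) @ P)) / lam
          return w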

  • Image Inpainting Based on Adaptive Total Variation Model

    Zhaolin LU  Jiansheng QIAN  Leida LI  

     
    LETTER-Image
    Vol: E94-A No:7
    Page(s): 1608-1612

    In this letter, a novel adaptive total variation (ATV) model is proposed for image inpainting. The classical TV model is a partial differential equation (PDE)-based technique. While the TV model preserves image edges well, it has drawbacks such as a staircase effect in the inpainted image and a slow convergence rate. By analyzing the diffusion mechanism of the TV model and introducing a new edge-detection operator named difference curvature, we propose a novel ATV inpainting model. The proposed ATV model diffuses image information smoothly and quickly; that is, it not only eliminates the staircase effect but also accelerates convergence. Experimental results demonstrate the effectiveness of the proposed scheme.
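
    For context, a bare-bones gradient-descent loop for the classical TV model that the letter improves upon. The difference-curvature operator and the adaptive weighting that define the ATV model are not reproduced here; parameters are illustrative.

      import numpy as np

      def tv_inpaint(img, mask, n_iter=500, dt=0.1, eps=1e-6):
          # img: 2-D float image, mask: True where pixels are missing
          u = img.copy()
          for _ in range(n_iter):
              ux = np.gradient(u, axis=1)
              uy = np.gradient(u, axis=0)
              mag = np.sqrt(ux ** 2 + uy ** 2 + eps)   # regularized |grad u|
              # curvature term div(grad u / |grad u|) drives the diffusion
              kappa = (np.gradient(ux / mag, axis=1)
                       + np.gradient(uy / mag, axis=0))
              u[mask] += dt * kappa[mask]              # update only inside the hole
          return u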

  • A Variable Step-Size Adaptive Cross-Spectral Algorithm for Acoustic Echo Cancellation

    Xiaojian LU  Benoit CHAMPAGNE  

     
    PAPER-Digital Signal Processing
    Vol: E86-A No:11
    Page(s): 2812-2821

    The adaptive cross-spectral (ACS) technique recently introduced by Okuno et al. provides an attractive solution to acoustic echo cancellation (AEC), as it does not require double-talk (DT) detection. In this paper, we first introduce a generalized ACS (GACS) technique in which a step-size parameter controls the magnitude of the incremental correction applied to the coefficient vector of the adaptive filter. Based on a study of the effects of the step size on GACS convergence behaviour, a new variable step-size ACS (VSS-ACS) algorithm is proposed, in which the step size is driven dynamically by a special finite state machine. Furthermore, the proposed algorithm has a new adaptation scheme that improves the initial convergence rate when the network connection is created. Experimental results show that the new VSS-ACS algorithm outperforms the original ACS, achieving higher acoustic echo attenuation during DT periods and a faster convergence rate.
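
    As a toy illustration of state-driven step-size control only (not the ACS technique itself, which operates on cross-spectra, nor the paper's finite state machine): a two-state controller that slows adaptation when the error energy suggests double-talk. All names and thresholds are assumptions.

      import numpy as np

      def vss_aec(x, mic, num_taps, mu_fast=0.8, mu_slow=0.05, ratio=4.0):
          # x: far-end signal, mic: microphone signal (echo + near-end speech)
          w = np.zeros(num_taps)
          e_avg, out = 1e-6, np.zeros(len(x))
          for n in range(num_taps, len(x)):
              u = x[n - num_taps + 1:n + 1][::-1]
              e = mic[n] - w @ u                       # residual echo (a priori error)
              # crude two-state machine: an error-energy jump flags double-talk
              state = "DOUBLE_TALK" if e * e > ratio * e_avg else "CONVERGE"
              e_avg = 0.99 * e_avg + 0.01 * e * e
              mu = mu_slow if state == "DOUBLE_TALK" else mu_fast
              w += mu * e * u / (u @ u + 1e-8)         # NLMS-style correction
              out[n] = e
          return w, out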

  • Novel First Order Optimization Classification Framework

    Peter GECZY  Shiro USUI  

     
    PAPER-Numerical Analysis and Optimization
    Vol: E83-A No:11
    Page(s): 2312-2319

    Numerous scientific and engineering fields extensively utilize optimization techniques for finding appropriate parameter values of models. Various optimization methods are available for practical use. Optimization algorithms are classified primarily by their rates of convergence. Unfortunately, it is often the case in practice that a particular optimization method with a specified convergence rate performs substantially differently on diverse optimization tasks. The theoretical classification by convergence rate then loses its relevance in the context of practical optimization. It is therefore desirable to formulate a novel classification framework that is relevant both to the theoretical concept of convergence rates and to practical optimization. This article introduces such a classification framework. The proposed framework enables specification of optimization techniques and optimization tasks, and makes explicit its inherent relationship to convergence rates. The framework is applied to categorizing the tasks of optimizing polynomials and the problem of training multilayer perceptron neural networks.
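
    A small numerical illustration of why theoretical rate classes can mislead in practice (the example and all names are ours, not the article's): on the degenerate polynomial f(x) = (x - 3)**4, Newton's method, nominally quadratic, converges only linearly because f' has a triple root, and gradient descent crawls sublinearly on the flat minimum.

      import numpy as np

      def empirical_order(errs):
          # estimate p in e_{k+1} ~ C * e_k**p from successive error triples
          e = np.asarray(errs)
          return np.log(e[2:] / e[1:-1]) / np.log(e[1:-1] / e[:-2])

      f_grad = lambda x: 4 * (x - 3) ** 3            # f(x) = (x - 3)**4, minimum at 3

      e_gd, x = [], 0.0
      for _ in range(20):                            # gradient descent
          x -= 0.01 * f_grad(x)
          e_gd.append(abs(x - 3))

      e_nt, x = [], 0.0
      for _ in range(20):                            # Newton's method on f'(x) = 0
          x -= f_grad(x) / (12 * (x - 3) ** 2)       # f''(x) = 12 (x - 3)**2
          e_nt.append(abs(x - 3))

      print(empirical_order(e_gd)[-3:])              # ~1: sublinear regime, not fast
      print(empirical_order(e_nt)[-3:])              # ~1, not 2: triple root of f'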

  • Absolute Exponential Stability of Neural Networks with Asymmetric Connection Matrices

    Xue-Bin LIANG  Toru YAMAGUCHI  

     
    LETTER-Neural Networks
    Vol: E80-A No:8
    Page(s): 1531-1534

    In this letter, the absolute exponential stability of neural networks with asymmetric connection matrices is established by a new proof approach, generalizing the existing result on the absolute stability of neural networks. It is demonstrated that the network time constant is inversely proportional to the global exponential rate at which the network trajectories converge to the unique equilibrium. A numerical simulation example is also given to illustrate the analysis results.
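
    In the standard form of such results, the claimed relationship reads as follows (a generic statement of the kind of bound involved; the letter's precise constants and conditions are not reproduced, and tau denotes the network time constant):

      \| x(t) - x^{*} \| \le C \, \| x(0) - x^{*} \| \, e^{-\beta t / \tau}, \qquad \beta > 0,

    so the exponential convergence rate \beta / \tau scales as 1/\tau: halving the time constant doubles the rate.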

  • On the Absolute Exponential Stability of Neural Networks with Globally Lipschitz Continuous Activation Functions

    Xue-Bin LIANG  Toru YAMAGUCHI  

     
    LETTER-Bio-Cybernetics and Neurocomputing
    Vol: E80-D No:6
    Page(s): 687-690

    In this letter, we obtain an absolute exponential stability result for neural networks with globally Lipschitz continuous, increasing, and bounded activation functions under a sufficient condition that unifies several relevant sufficient conditions for absolute stability in the literature. The obtained result generalizes the existing ones on the absolute stability of neural networks. Moreover, it is demonstrated, by a mathematically rigorous proof, that the network time constant is inversely proportional to the global exponential rate at which the network trajectories converge to the unique equilibrium. A numerical simulation example is also presented to illustrate the analysis results.
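
    The activation class in question can be stated generically as follows (the letter's precise sufficient condition on the connection matrix is not reproduced here): each activation g_i is increasing, bounded, and satisfies

      | g_i(u) - g_i(v) | \le L_i \, | u - v | \quad \text{for all } u, v \in \mathbb{R}, \quad L_i > 0 .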

  • Equation for Brief Evaluation of the Convergence Rate of the Normalized LMS Algorithm

    Kensaku FUJII  Juro OHGA  

     
    LETTER
    Vol: E76-A No:12
    Page(s): 2048-2051

    This paper presents an equation for briefly evaluating the length of the white-noise sequence to be sent as a training signal. The equation is formulated from the formula describing the convergence property, which was derived from the IIR-filter expression of the NLMS algorithm. The result reveals that the length is directly proportional to I/[K(2-K)], where K is the step gain and I is the number of adaptive filter taps.
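
    As a quick sanity check of the stated proportionality (the constant c below is hypothetical, not the paper's exact coefficient):

      def training_length(I, K, c=1.0):
          # length ~ c * I / (K * (2 - K)); c is a hypothetical constant
          return c * I / (K * (2 - K))

      print(training_length(I=256, K=1.0))   # 256.0  -- K = 1 minimizes the length
      print(training_length(I=512, K=1.0))   # 512.0  -- doubling the taps doubles it
      print(training_length(I=256, K=0.1))   # ~1347  -- a small step gain is costly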