Hakan BERCAG, Osman KUKRER, Aykut HOCANIN
A new extended normalized least-mean-square (ENLMS) algorithm is proposed, together with a novel non-linear time-varying step-size (NLTVSS) formula. Thanks to the NLTVSS, the convergence rate of ENLMS increases as the number of data reuses L grows. ENLMS involves no matrix inversion and thus avoids numerical instability issues.
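As a rough sketch of the data-reuse mechanism (the actual NLTVSS formula is not reproduced here; the decaying step-size below is a placeholder assumption), an NLMS update repeated L times per sample might look as follows:

```python
import numpy as np

def data_reuse_nlms(x, d, num_taps, L=4, eps=1e-8):
    """Sketch of NLMS with L data-reuse iterations per sample.

    The step-size rule below is a simple placeholder, NOT the
    paper's NLTVSS formula.
    """
    w = np.zeros(num_taps)
    errors = np.zeros(len(x))
    for n in range(num_taps, len(x)):
        u = x[n - num_taps:n][::-1]           # tap-input vector
        for k in range(L):                    # reuse the same data L times
            e = d[n] - w @ u                  # a priori error
            mu = 1.0 / (1.0 + k)              # placeholder decaying step-size
            w += mu * e * u / (u @ u + eps)   # normalized update, no matrix inverse
        errors[n] = d[n] - w @ u
    return w, errors
```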
Shrinkage widely linear recursive least squares (SWL-RLS) and its improved version, structured shrinkage widely linear recursive least squares (SSWL-RLS), are proposed in this paper. By using the relationship between the noise-free a posteriori and a priori error signals, the optimal forgetting factor can be obtained at each snapshot. In the implementation of the algorithms, although the a priori error signal is known, we still need the noise-free a priori error, which can be estimated with a known formula. Simulation results illustrate that the proposed algorithms have faster convergence and better tracking capability than the augmented RLS (A-RLS), augmented least mean square (A-LMS) and SWL-LMS algorithms.
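For orientation, a minimal sketch of a conventional complex RLS recursion with a per-snapshot forgetting-factor hook; the shrinkage rule and the widely linear (augmented) signal model of SWL-RLS are not reproduced, so `lambda_fn` is a placeholder:

```python
import numpy as np

def rls_variable_lambda(x, d, num_taps, lambda_fn=lambda n, e: 0.99, delta=1e2):
    """Sketch of an RLS recursion with a time-varying forgetting factor.

    lambda_fn is a placeholder hook; the paper instead derives the optimal
    forgetting factor from the noise-free a priori/posteriori errors.
    """
    w = np.zeros(num_taps, dtype=complex)
    P = delta * np.eye(num_taps, dtype=complex)   # inverse correlation matrix
    for n in range(num_taps, len(x)):
        u = x[n - num_taps:n][::-1]
        e = d[n] - np.conj(w) @ u                 # a priori error
        lam = lambda_fn(n, e)                     # per-snapshot forgetting factor
        k = P @ u / (lam + np.conj(u) @ P @ u)    # gain vector
        w += k * np.conj(e)
        P = (P - np.outer(k, np.conj(u) @ P)) / lam
    return w
```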
Zhaolin LU, Jiansheng QIAN, Leida LI
In this letter, a novel adaptive total variation (ATV) model is proposed for image inpainting. The classical TV model is a partial differential equation (PDE)-based technique. While the TV model preserves image edges well, it has some drawbacks, such as the staircase effect in the inpainted image and a slow convergence rate. By analyzing the diffusion mechanism of the TV model and introducing a new edge detection operator called difference curvature, we propose a novel ATV inpainting model. The proposed ATV model diffuses the image information smoothly and quickly; that is, it not only eliminates the staircase effect but also accelerates convergence. Experimental results demonstrate the effectiveness of the proposed scheme.
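For context, a minimal sketch of explicit diffusion on the classical TV model; the difference-curvature weighting of the proposed ATV model is omitted, so this illustrates only the baseline the letter improves on:

```python
import numpy as np

def tv_inpaint(img, mask, n_iter=500, dt=0.1, eps=1e-6):
    """Sketch of classical TV inpainting by explicit gradient descent.

    mask: True where pixels are missing. The adaptive (difference-
    curvature) weight of the ATV model is omitted; this is plain TV.
    """
    u = img.astype(float).copy()
    for _ in range(n_iter):
        ux = np.gradient(u, axis=1)
        uy = np.gradient(u, axis=0)
        mag = np.sqrt(ux**2 + uy**2) + eps
        # divergence of the unit gradient field (mean-curvature term)
        div = np.gradient(ux / mag, axis=1) + np.gradient(uy / mag, axis=0)
        u[mask] += dt * div[mask]            # diffuse only inside the hole
    return u
```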
The adaptive cross-spectral (ACS) technique recently introduced by Okuno et al. provides an attractive solution to acoustic echo cancellation (AEC), as it does not require double-talk (DT) detection. In this paper, we first introduce a generalized ACS (GACS) technique in which a step-size parameter controls the magnitude of the incremental correction applied to the coefficient vector of the adaptive filter. Based on a study of the effects of the step-size on GACS convergence behaviour, a new variable step-size ACS (VSS-ACS) algorithm is proposed, in which the step-size value is controlled dynamically by a dedicated finite state machine. Furthermore, the proposed algorithm has a new adaptation scheme that improves the initial convergence rate when the network connection is established. Experimental results show that the new VSS-ACS algorithm outperforms the original ACS, achieving higher acoustic echo attenuation during DT periods and a faster convergence rate.
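The ACS update itself is not reproduced here; as a purely hypothetical illustration of FSM-driven step-size control, a two-state machine that switches between fast and cautious step-sizes based on error-energy trackers:

```python
class StepSizeFSM:
    """Hypothetical two-state step-size controller (NOT the paper's
    machine): FAST while adaptation looks safe, SLOW when the
    short-term error energy jumps above its long-term trend,
    e.g. suspected double-talk."""

    def __init__(self, mu_fast=0.8, mu_slow=0.05, ratio=4.0):
        self.mu_fast, self.mu_slow, self.ratio = mu_fast, mu_slow, ratio
        self.short = self.long = 1e-8
        self.state = "FAST"

    def step(self, e):
        self.short = 0.90 * self.short + 0.10 * e * e   # fast energy tracker
        self.long = 0.999 * self.long + 0.001 * e * e   # slow energy tracker
        self.state = "SLOW" if self.short > self.ratio * self.long else "FAST"
        return self.mu_slow if self.state == "SLOW" else self.mu_fast

# usage inside a generic adaptive-filter loop:
#   mu = fsm.step(error); w += mu * correction
```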
Numerous scientific and engineering fields make extensive use of optimization techniques for finding appropriate parameter values of models, and a wide range of optimization methods is available for practical use. Optimization algorithms are classified primarily by their rates of convergence. Unfortunately, in practice a particular optimization method with a specified convergence rate often performs very differently on different optimization tasks; the theoretical classification by convergence rate then loses its relevance to practical optimization. It is therefore desirable to formulate a novel classification framework that is relevant both to the theoretical concept of convergence rates and to practical optimization. This article introduces such a classification framework. The proposed framework enables the specification of optimization techniques and optimization tasks, and makes its inherent relationship to convergence rates explicit. The framework is applied to categorizing the tasks of optimizing polynomials and of training multilayer perceptron neural networks.
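As a toy illustration of relating theoretical and empirical convergence rates (not the article's framework), one can classify a gradient-descent run on a polynomial by its asymptotic error ratio:

```python
import numpy as np

def run_gd(grad, x0=1.0, step=0.1, n=200):
    """Gradient descent on a 1-D function, recording |x_k| as the
    error to the optimum at 0."""
    x, errs = x0, []
    for _ in range(n):
        x -= step * grad(x)
        errs.append(abs(x))
    return errs

def classify(errs):
    """Crude empirical class: an asymptotic error ratio well below 1
    suggests linear (geometric) convergence; a ratio near 1, sublinear."""
    r = errs[-1] / errs[-2]
    return f"linear, ratio ~ {r:.3f}" if r < 0.99 else "sublinear (ratio -> 1)"

print("x^2 :", classify(run_gd(lambda x: 2 * x)))      # geometric decay
print("x^4 :", classify(run_gd(lambda x: 4 * x**3)))   # slows down near 0
```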
In this letter, the absolute exponential stability of neural networks with asymmetric connection matrices is established by a new proof approach, generalizing the existing result on the absolute stability of neural networks. It is demonstrated that the network time constant is inversely proportional to the global exponential convergence rate of the network trajectories to the unique equilibrium. A numerical simulation example is also given to illustrate the analysis results.
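In symbols, the statement concerns bounds of the following form, where the model and constants are generic placeholders rather than the letter's exact ones:

```latex
% Illustrative additive network with time constant \tau (not the letter's exact model):
\[
  \tau\,\dot{x}(t) = -x(t) + W\,g\bigl(x(t)\bigr) + u .
\]
% Absolute exponential stability asserts, for every admissible activation g,
% a bound of the form
\[
  \lVert x(t)-x^{*}\rVert \le c\,\lVert x(0)-x^{*}\rVert\, e^{-(\beta/\tau)\,t},
\]
% so the exponential convergence rate \beta/\tau scales as 1/\tau.
```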
In this letter, we obtain an absolute exponential stability result for neural networks with globally Lipschitz continuous, increasing and bounded activation functions under a sufficient condition that unifies several relevant sufficient conditions for absolute stability in the literature. The obtained result generalizes the existing ones on the absolute stability of neural networks. Moreover, it is demonstrated by a mathematically rigorous proof that the network time constant is inversely proportional to the global exponential convergence rate of the network trajectories to the unique equilibrium. A numerical simulation example is also presented to illustrate the analysis results.
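A minimal simulation sketch of this kind of experiment, with an illustrative two-neuron network and tanh activation (weights, inputs, and initial states are assumptions, not taken from the letter):

```python
import numpy as np

W = np.array([[0.2, -0.3], [0.1, 0.1]])   # illustrative small-gain weights
u = np.array([0.5, -0.2])
dt = 1e-3

def trajectory(tau, x0, T):
    """Euler integration of tau * dx/dt = -x + W tanh(x) + u."""
    x, xs = x0.copy(), []
    for _ in range(int(T / dt)):
        x = x + (dt / tau) * (-x + W @ np.tanh(x) + u)
        xs.append(x.copy())
    return np.array(xs)

x_star = trajectory(1.0, np.zeros(2), T=60.0)[-1]       # equilibrium (tau-independent)
for tau in (1.0, 2.0):
    xs = trajectory(tau, np.array([2.0, -1.5]), T=20.0)
    d = np.linalg.norm(xs - x_star, axis=1)
    t = np.arange(len(d)) * dt
    k = len(d) // 2                                     # fit the exponential tail
    rate = -np.polyfit(t[k:], np.log(d[k:]), 1)[0]
    print(f"tau={tau}: empirical exponential rate ~ {rate:.3f}")  # ~ scales as 1/tau
```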
This paper presents an equation for quickly evaluating the length of the white-noise sequence to be sent as a training signal. The equation is formulated from the formula describing the convergence property, which was derived from the IIR-filter expression of the NLMS algorithm. The result reveals that the required length is directly proportional to I/[K(2-K)], where K is the step gain and I is the number of adaptive filter taps.
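A quick numerical check of the stated proportionality (the proportionality constant is omitted, so the values below are relative lengths only):

```python
def relative_training_length(I, K):
    """Training length up to the paper's proportionality constant:
    N  ~  I / (K * (2 - K)),  for step gain 0 < K < 2 and I taps."""
    return I / (K * (2 - K))

# K(2-K) peaks at K = 1, so K = 1 gives the shortest training sequence
for K in (0.1, 0.5, 1.0, 1.5):
    print(f"K={K}: N proportional to {relative_training_length(64, K):.1f}")
```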