
Open Access
Dynamic Limited Variable Step-Size Algorithm Based on the MSD Variation Cost Function

Yufei HAN, Jiaye XIE, Yibo LI


Summary:

The steady-state and convergence performances are important indicators for evaluating adaptive algorithms, and both are directly affected by the step-size. Many variable step-size adaptive algorithms have been proposed to improve performance. However, the existing algorithms still suffer from problems such as insufficient theoretical analysis, imbalanced performance, and parameters that cannot be obtained in practice, which greatly degrade their actual performance. In this paper, we therefore further explore the inherent relationship between these key performances and the step-size. The variation of the mean square deviation (MSD) is adopted as the cost function. Based on theoretical analysis and derivation, a novel variable step-size algorithm with a dynamic limited function (DLF) is proposed. A thorough theoretical analysis of the weight deviation and the convergence stability is also conducted, and the proposed algorithm is compared with several typical algorithms in different environments. Both the theoretical analysis and the experimental results verify that the proposed algorithm achieves superior performance.

Publication
IEICE TRANSACTIONS on Fundamentals Vol.E107-A No.6 pp.919-922
Publication Date
2024/06/01
Publicized
2023/09/11
Online ISSN
1745-1337
DOI
10.1587/transfun.2023EAL2069
Type of Manuscript
LETTER
Category
Digital Signal Processing

1.  Introduction

With the continuous development of signal processing, adaptive filtering algorithms have been widely applied in many fields, such as echo cancellation, active noise control, and system identification [1]. Among adaptive algorithms, the Least Mean Square (LMS) algorithm is flexible and simple [2], which has led to its wide use. The convergence and steady-state performances are the main indicators for evaluating the LMS algorithm, and both, together with stability, are governed by the step-size. The choice of step-size involves a compromise: a smaller step-size improves the steady-state performance but slows the convergence rate [3], while a larger step-size accelerates convergence but degrades the steady-state performance and can even cause misadjustment or divergence.

To better balance the convergence and steady-state performances, many variable step-size LMS algorithms have been proposed [3]-[11]. The functional controlled variable step-size (FCVSS) algorithm was proposed in [4]; it uses a function that follows the ideal trajectory of the step-size to improve the convergence and steady-state performances. In [5], a method that varies the step-size through a modified error function (MEFVSS) was proposed, establishing a link between a control function and the error signal. However, such algorithms, which select a specific function to control the step-size, are not derived from theoretical formulas; they lack theoretical depth and have many limitations.

In [6], Bhotto and Antoniou further refined the a priori and a posteriori errors. The variable step-size strategy (VSS) based on the shrinkage method was proposed in [7]. The VSS algorithm [7] is derived from basic theoretical formulas and has a certain theoretical depth. Subsequent algorithms based on the a priori and a posteriori errors have been widely recognized for their good performance [8]. However, their expressions share a serious problem: in unknown environments, the required prior information is difficult to obtain, so the actual performance of these algorithms is hard to guarantee. In addition, the adjustment process of the variable step-size is highly susceptible to interference, and the step-size value is prone to unexpected fluctuations. Such fluctuations may degrade performance and even cause divergence or misalignment, yet few studies have addressed the stability of variable step-size algorithms.

To solve these problems, a novel variable step-size algorithm is proposed in this letter. We adopt the MSD variation as the cost function and, through basic theoretical analysis and derivation, obtain a reasonable expression for the step-size. The proposed algorithm makes the MSD undergo the maximum convergence variation at each iteration. Moreover, it uses statistics of the input and error signals to estimate the system noise. At the same time, the DLF is combined with the proposed step-size expression; it applies different constraint parameters at different stages to suppress fluctuations while preserving a fast convergence speed. The proposed algorithm also dispenses with a large number of parameters. Theoretical and experimental results show that the proposed algorithm achieves good convergence, steady-state, and stability performances.

2.  LMS Adaptive Algorithm

A typical LMS adaptive algorithm consists of an error sequence, an input vector, an output sequence, a weight vector, and a step-size [1], [2]. The LMS algorithm updates the weight vector continuously so that it approaches the optimal weight as closely as possible. The overall calculation is as follows:

\[\begin{align} & e(n)=\mathbf{X}^T(n)\mathbf{W}_{\mathrm{opt}}+v(n)-y(n) \tag{1} \\ & \mathbf{W}(n)=\mathbf{W}(n-1)+\mu e(n)\mathbf{X}(n) \tag{2} \\ & y(n)=\mathbf{X}^T(n)\mathbf{W}(n-1) \tag{3} \end{align}\]

where \(\mathbf{X}(n)\) is the input vector and \(y(n)\) is the output sequence. \(e(n)\) denotes the error. \(\mathbf{W}_{\mathrm{opt}}\) is the target weight vector and \(\mathbf{W}(n)\) is the weight vector at iteration \(n\). \(v(n)\) is the system noise, assumed to be zero-mean and independent of the input vector \(\mathbf{X}(n)\), whose mean is also zero. \(\mu\) is the step-size, the key parameter for adjusting the weight vector \(\mathbf{W}(n)\).
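As a concrete illustration, the following minimal Python (NumPy) sketch implements Eqs. (1)-(3) for system identification with a fixed step-size; the function and variable names are ours, not from the letter.

```python
import numpy as np

def lms_identify(x, d, taps=16, mu=0.01):
    """Minimal fixed step-size LMS of Eqs. (1)-(3).

    x : input sequence; d : desired sequence d(n) = X^T(n) W_opt + v(n).
    Returns the final weights and the error sequence.
    """
    w = np.zeros(taps)                       # W(0)
    e = np.zeros(len(x))
    for n in range(taps - 1, len(x)):
        xn = x[n - taps + 1:n + 1][::-1]     # X(n) = [x(n), ..., x(n-taps+1)]
        y = xn @ w                           # Eq. (3): filter output
        e[n] = d[n] - y                      # Eq. (1): error signal
        w = w + mu * e[n] * xn               # Eq. (2): weight update
    return w, e
```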

3.  Proposed Algorithm

We assume that the system is stationary and that \(\mathbf{X}(n)\) and \(e(n)\) are uncorrelated. First, the noise-free a priori error and the a priori error are defined as follows [6]:

\[\begin{align} & e_{a,f}(n)=\mathbf{X}^T(n)\mathbf{W}_{\mathrm{opt}}-\mathbf{X}^T(n)\mathbf{W}(n-1) \tag{4} \\ & e_a(n)=e_{a,f}(n)+v(n) \tag{5} \end{align}\]

where \(e_a(n)\) is the a priori error and \(e_{a,f}(n)\) denotes the a priori error without system noise. Subtracting both sides of Eq. (2) from \(\mathbf{W}_{\mathrm{opt}}\) gives:

\[\begin{equation*} \mathbf{W}_{\mathrm{opt}}-\mathbf{W}(n)=\mathbf{W}_{\mathrm{opt}}-\mathbf{W}(n-1)-\mu e(n)\mathbf{X}(n). \tag{6} \end{equation*}\]

Taking the squared Euclidean norm of both sides of Eq. (6) and then the expectation, we obtain:

\[\begin{align} & E\left[\left\|\mathbf{W}_{\mathrm{opt}}-\mathbf{W}(n)\right\|^2\right]-E\left[\left\|\mathbf{W}_{\mathrm{opt}}-\mathbf{W}(n-1)\right\|^2\right] \nonumber\\ &\quad =-2\mu E\left[e(n)e_{a,f}(n)\right]+\mu^2E\left[e^2(n)\mathbf{X}^T(n)\mathbf{X}(n)\right]. \tag{7} \end{align}\]

The left-hand side of Eq. (7) is the MSD variation, where \(E[\cdot]\) denotes expectation. The MSD variation is at the core of the LMS algorithm: it measures how much the distance between the weight vector and the optimum changes per iteration. When the MSD variation is negative with a large absolute value, the weight vector moves closer to the optimal value. If we can find the step-size that makes the MSD undergo the maximum convergence variation in each iteration, the overall performance benefits.

Thus, differentiating Eq. (7) with respect to the step-size \(\mu\) and setting the derivative to zero gives:

\[\begin{align} & \frac{\partial\left\{E\left[\left\|\mathbf{W}_{\mathrm{opt}}-\mathbf{W}(n)\right\|^2\right]-E\left[\left\|\mathbf{W}_{\mathrm{opt}}-\mathbf{W}(n-1)\right\|^2\right]\right\}}{\partial\mu} \nonumber\\ &\quad =-2E\left[e(n)e_{a,f}(n)\right]+2\mu E\left[e^2(n)\mathbf{X}^T(n)\mathbf{X}(n)\right]=0. \tag{8} \end{align}\]

Solving Eq. (8) yields the optimal step-size that produces the fastest convergence of the weights. Separating the expectations under the independence assumptions above, the variable step-size can be written as:

\[\begin{equation*} \mu(n)=\frac{E\left[e_{a,f}(n)\right]}{E\left[e(n)\right]}\cdot \frac{1}{E\left[\mathbf{X}^T(n)\mathbf{X}(n)\right]}. \tag{9} \end{equation*}\]

To avoid a zero denominator, a regularization factor \(\alpha\) is added to Eq. (9). The corrected variable step-size expression is:

\[\begin{equation*} \mu(n)=\frac{E\left[e_{a,f}(n)\right]}{E\left[e(n)\right]+\alpha}\cdot \frac{1}{E\left[\mathbf{X}^T(n)\mathbf{X}(n)\right]+\alpha}. \tag{10} \end{equation*}\]

The term \(E[e(n)]\) in Eq. (10) is estimated recursively from the smoothed error power:

\[\begin{align} & E\left[e^2(n)\right]=\beta E\left[e^2(n-1)\right]+(1-\beta)e^2(n) \tag{11} \\ & E[e(n)]=\left\{E\left[e^2(n)\right]\right\}^{0.5} \tag{12} \end{align}\]

where \(\beta\) is a smoothing factor with a value close to 1. The term \(E\left[e_{a,f}(n)\right]\) in Eq. (10) is often computed by the following shrinkage method [9]:

\[\begin{equation*} E\left[e_{a,f}(n)\right]=\mathrm{sgn}[e(n)]\max[\left|e(n)\right| -t,0] \tag{13} \end{equation*}\]

where \(t=\sqrt{\sigma_v^2}\) is a threshold parameter and \(\sigma_v^2\) is the variance of the system noise. \(\mathrm{sgn}[\cdot]\) and \(\max[\cdot]\) denote the sign and maximum functions, respectively.

Since the system noise statistics are difficult to obtain in unknown environments, the proposed algorithm instead estimates them from statistics of the input and error signals. The full calculation is as follows [10]:

\[\begin{align} & \hat{x}^2(n)=\beta\hat{x}^2(n-1)+(1-\beta)\left\|\mathbf{X}(n)\right\|_2^2 \tag{14} \\ & \hat{\mathbf{P}}(n)=\beta\hat{\mathbf{P}}(n-1)+(1-\beta)e(n)\mathbf{X}(n) \tag{15} \\ & E\left[e_{a,f}(n)\right]=\sqrt{E\left[e^2(n)\right]-\frac{\left\|\hat{\mathbf{P}}(n)\right\|_2^2}{\hat{x}^2(n)}} \tag{16} \end{align}\]

where \(\left\|\cdot\right\|_2\) denotes the \(l_2\) norm of a vector. This method solves the problem of the unobtainable system noise and broadens the range of applications.
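A compact sketch of how Eqs. (10)-(16) fit together is given below, again in Python (NumPy). It is our reconstruction under the letter's assumptions; the class name, the small guard added to the radicand of Eq. (16), and the defaults \(\beta=0.9\), \(\alpha=0.1\) (the values used in Sect. 6) are ours.

```python
import numpy as np

class ProposedStepSize:
    """Variable step-size of Eq. (10) with the estimators of Eqs. (11)-(16)."""

    def __init__(self, taps, beta=0.9, alpha=0.1):
        self.beta, self.alpha = beta, alpha
        self.e2 = 0.0                # smoothed error power E[e^2(n)], Eq. (11)
        self.x2 = 0.0                # smoothed input power, Eq. (14)
        self.p = np.zeros(taps)      # smoothed error-input cross term, Eq. (15)

    def update(self, e_n, x_vec):
        b = self.beta
        self.e2 = b * self.e2 + (1 - b) * e_n ** 2           # Eq. (11)
        self.x2 = b * self.x2 + (1 - b) * (x_vec @ x_vec)    # Eq. (14)
        self.p = b * self.p + (1 - b) * e_n * x_vec          # Eq. (15)
        # Eq. (16); max(..., 0) guards against a negative radicand caused
        # by estimation noise (our addition, not in the letter)
        e_af = np.sqrt(max(self.e2 - (self.p @ self.p) / (self.x2 + 1e-12), 0.0))
        e_mag = np.sqrt(self.e2)                             # Eq. (12)
        return e_af / (e_mag + self.alpha) / (self.x2 + self.alpha)  # Eq. (10)
```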

The proposed variable step-size algorithm is derived from basic theoretical formulas and therefore has a certain theoretical depth. At the initial stage, the distance between the actual weight and the optimum weight is relatively large, so \(e_{a,f}(n)\) is also large and close to the error signal. According to Eq. (10), this results in a relatively large step-size at the beginning of the iterations. As the proposed algorithm approaches the steady-state, the weight vector approaches the optimum, the value of \(e_{a,f}(n)\) approaches zero, and the error \(e(n)\) consists mainly of the system noise \(v(n)\). Thus, the step-size given by Eq. (10) becomes relatively small in the steady-state.

This trend of the step-size is consistent with the variation that yields better convergence and steady-state performances. It also means that the proposed strategy helps the iterative weight vector approach the optimal value as closely as possible. In addition, the overall calculation is simple and requires relatively few parameters.

4.  Dynamic Limited Function

The step-size is critical to the convergence, steady-state, and stability performances of an adaptive LMS algorithm. Although variable step-size algorithms can adjust the step-size effectively to improve performance, the step-size is vulnerable to interference. It may undergo unexpected fluctuations that reduce the robustness, convergence, and steady-state performances, and such fluctuations may even lead to divergence or misalignment.

To solve this problem, limiting functions have been proposed to suppress unexpected fluctuations. However, a conventional limiting function also restricts the range over which the step-size can change, which directly slows convergence at the initial stage. To ensure both stability and convergence speed, the dynamic limited function is proposed in this letter. The proposed method adopts two different limiting parameters at different stages:

\[\begin{equation*} L[\mu(n)]=-0.5+\frac{1}{1+\exp\left(-C\mu(n)\right)} \tag{17} \end{equation*}\]

where \(L[\mu(n)]\) is the limited step-size value, \(\exp(\cdot)\) is the exponential function, and \(C\) is a scaling factor that determines the degree of limitation.

At the initial stage, a large value of \(C\) is adopted to weaken the constraint of the limiting function, which preserves the MSD convergence speed of the proposed algorithm. At the steady-state stage, a small value of \(C\) is used to strengthen the constraint, which preserves stability. This DLF strategy thus minimizes the impact on the convergence speed while ensuring the stability of the step-size; it is a coordination between robustness and MSD variation. In addition, the proposed function preserves the sign of its argument, so it does not affect the update direction of the variable step-size and adapts well to different environments.
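A short sketch of Eq. (17) with the two-stage choice of \(C\) from Sect. 6 (\(C=3.5\) for the first tenth of the iterations, \(C=1.2\) afterwards) follows; the function name and argument layout are ours.

```python
import numpy as np

def dlf(mu, n, n_total, c_initial=3.5, c_steady=1.2):
    """Dynamic limited function, Eq. (17), with stage-dependent C."""
    c = c_initial if n < n_total / 10 else c_steady
    # Shifted sigmoid: odd around 0, output bounded in (-0.5, 0.5),
    # and sign-preserving, so the update direction is unchanged.
    return -0.5 + 1.0 / (1.0 + np.exp(-c * mu))
```

For a small argument, \(L[\mu(n)]\approx C\mu(n)/4\), so a larger \(C\) attenuates a small step-size less, which is consistent with the initial-stage choice.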

5.  Performance Analysis

In this section, the convergence stability and the weight deviation of the proposed algorithm are analyzed. First, consider the weight deviation. We assume that the system is stationary and that \(\mathbf{X}(n)\) and \(e(n)\) are uncorrelated. Substituting Eq. (9) into Eq. (7), the MSD variation becomes \(-\frac{E\left[e_{a,f}^2(n)\right]}{E\left[\mathbf{X}^T(n)\mathbf{X}(n)\right]}\). Since its numerator and denominator are both non-negative, the MSD variation is always less than or equal to 0. At the initial stage, the distance between the actual weight and the optimum weight is relatively large, so \(e_{a,f}(n)\) is also large; the MSD variation is then negative with a large absolute value. This indicates that the weight vector always moves in the direction of convergence and converges quickly. When the proposed algorithm reaches the steady-state, \(e_{a,f}(n)\) approaches zero, and so does the MSD variation, which shows that the weight of the proposed algorithm is very close to the optimal value in the steady-state. Therefore, the proposed algorithm balances the convergence and robustness performances well and maintains good performance at all stages.
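For reference, the substitution can also be carried out without separating the expectations: plugging the optimum of Eq. (8), \(\mu(n)=E\left[e(n)e_{a,f}(n)\right]/E\left[e^2(n)\mathbf{X}^T(n)\mathbf{X}(n)\right]\), into Eq. (7) gives

\[\begin{equation*} \Delta\mathrm{MSD}(n)=-\frac{\left(E\left[e(n)e_{a,f}(n)\right]\right)^2}{E\left[e^2(n)\mathbf{X}^T(n)\mathbf{X}(n)\right]}\le 0, \end{equation*}\]

which is non-positive by inspection and, under the letter's independence approximations, is consistent with the form quoted above.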

Next, the convergence stability of the proposed algorithm is analyzed. For an adaptive LMS algorithm to converge, its step-size must be less than \(\frac{2}{\lambda_{\max}}\) [11], where \(\lambda_{\max}\) is the maximum eigenvalue of the autocorrelation matrix of the input vector.

We now analyze the range of the step-size of the proposed algorithm. The first factor of Eq. (9) is the ratio of the noise-free a priori error to the error signal. From their relationship in Eq. (16), the value of \(e_{a,f}(n)\) is smaller than that of \(e(n)\), so the first factor of Eq. (9) is less than 1. The second factor of Eq. (9) is approximately \(\frac{1}{\lambda_{\max}}\). In other words, the value of Eq. (9) is smaller than \(\frac{1}{\lambda_{\max}}\). In addition, the proposed algorithm applies the limiting function to the step-size; since Eq. (17) gives the final step-size, the actual step-size is smaller still than the value of Eq. (9). Thus, the proposed variable step-size algorithm satisfies the convergence and stability conditions.
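One way to make the \(\frac{1}{\lambda_{\max}}\) step precise (our reading, not spelled out in the letter): since \(E\left[\mathbf{X}^T(n)\mathbf{X}(n)\right]=\mathrm{tr}(\mathbf{R})=\sum_i\lambda_i\ge\lambda_{\max}\) for the input autocorrelation matrix \(\mathbf{R}\),

\[\begin{equation*} \mu(n)=\underbrace{\frac{E\left[e_{a,f}(n)\right]}{E\left[e(n)\right]+\alpha}}_{<1}\cdot\frac{1}{E\left[\mathbf{X}^T(n)\mathbf{X}(n)\right]+\alpha}<\frac{1}{\lambda_{\max}}<\frac{2}{\lambda_{\max}}, \end{equation*}\]

so the bound of [11] is satisfied with margin even before Eq. (17) is applied.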

6.  Experimental Results

In this section, the proposed algorithm with DLF is tested against the proposed algorithm without DLF, the FCVSS [4], the MEFVSS [5], and the VSS [7] algorithms in different environments. The simulation environment adopts case 1 of the MEFVSS [5]. The input vector \(\mathbf{X}(n)\) has zero mean and unit variance. System noise is added to obtain a signal-to-noise ratio (SNR) of 30 dB.

The maximum number of iterations is 4000 and the system tap-length is 16. Initially, all coefficients of the optimal weight vector are 0 except the fifth, which is set to 1. At the halfway point (iteration 2000), the optimal weight vector changes suddenly: all coefficients become 1 except the fifth, which is set to 0. This operation tests the ability to cope with changing environments. The parameters of the compared algorithms strictly follow their original publications. For the proposed algorithm, \(\beta=0.9\) and \(\alpha=0.1\) are used; the parameter \(C\) is 3.5 while the iteration count is below one tenth of the total and 1.2 otherwise.
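The following Python sketch reconstructs this setup using the ProposedStepSize and dlf sketches above. It is our illustration of the experiment, not the authors' code; details such as the regressor construction and the noise scaling are our assumptions.

```python
import numpy as np

def run_trial(n_iter=4000, taps=16, snr_db=30, seed=0):
    """One trial of the Sect. 6 setup (our reconstruction)."""
    rng = np.random.default_rng(seed)
    w_opt = np.zeros(taps)
    w_opt[4] = 1.0                               # fifth coefficient is 1
    x = rng.standard_normal(n_iter + taps)       # zero-mean, unit-variance input
    noise_std = 10.0 ** (-snr_db / 20)           # 30-dB SNR w.r.t. the initial
                                                 # unit-power system output
    w = np.zeros(taps)
    step = ProposedStepSize(taps)                # sketch from Sect. 3
    msd = np.zeros(n_iter)
    for n in range(n_iter):
        if n == n_iter // 2:                     # sudden system change
            w_opt = 1.0 - w_opt
        xn = x[n:n + taps]                       # regressor (our convention)
        e = xn @ w_opt + noise_std * rng.standard_normal() - xn @ w
        mu = dlf(step.update(e, xn), n, n_iter)  # Eqs. (10)-(17)
        w = w + mu * e * xn
        msd[n] = np.sum((w_opt - w) ** 2)
    return msd

# Average over 100 independent runs, as for Figs. 1 and 2
msd_db = 10 * np.log10(np.mean([run_trial(seed=s) for s in range(100)], axis=0))
```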

Figures 1 and 2 show the MSD and step-size variation curves of each algorithm, averaged over 100 independent experiments. In Fig. 1, the VSS [7] algorithm shows a good convergence speed, but its steady-state performance is the worst (\(-31\) dB). The MEFVSS [5] algorithm converges in more than 1000 iterations and its steady-state performance is moderate (\(-37\) dB). The steady-state performance of the FCVSS [4] algorithm is good (\(-39\) dB), but its convergence speed is relatively slow. The proposed algorithm without DLF converges well, but it is vulnerable to interference and its MSD value is \(-38\) dB. The proposed algorithm with DLF reaches \(-40\) dB in fewer than 400 iterations both before and after the system change; although the environment changes, its performance remains stable. It reaches the steady-state at about 600 iterations with a steady-state MSD of \(-44\) dB, the lowest among these algorithms. In addition, the MSD curve of the proposed algorithm is smoother than the others over iterations 500 to 2000 and 2500 to 4000. In Fig. 2, the step-size fluctuation of the proposed algorithm with DLF is relatively small, which indicates that it suppresses fluctuations in a changing environment. Moreover, its step-size converges at about 200 iterations, again showing a fast convergence speed.

Fig. 1  MSD performance curve.

Fig. 2  Step-size variation curve.

7.  Conclusions

A novel variable step-size algorithm that takes the MSD variation as the cost function has been proposed. The proposed algorithm makes the MSD undergo the maximum convergence variation at each iteration. A DLF that uses different constraint parameters at different stages has also been proposed and combined with the step-size adjustment strategy; it further suppresses fluctuations while preserving the convergence speed. Theoretical analyses of the weight deviation and the convergence stability of the proposed algorithm have been conducted, and various simulations have been carried out. Both the theoretical and experimental results show that the proposed algorithm adapts well to unknown and changing environments.

References

[1] S. Haykin, Adaptive Filter Theory, Pearson, 2015.

[2] A.H. Sayed, Adaptive Filters, Wiley, 2008.

[3] D.G. Tiglea, R. Candido, and M.T.M. Silva, “A variable step size adaptive algorithm with simple parameter selection,” IEEE Signal Process. Lett., vol.29, pp.1774-1778, Aug. 2022.

[4] M. Li, L. Li, and H. Tai, “Variable step size LMS algorithm based on function control,” Circuits Syst. Signal Process., vol.32, no.6, pp.3121-3130, April 2013.

[5] T. Fan and Y. Lin, “A variable step-size strategy based on error function for sparse system identification,” Circuits Syst. Signal Process., vol.36, pp.1301-1310, May 2016.

[6] Z.A. Bhotto and A. Antoniou, “A family of shrinkage adaptive-filtering algorithms,” IEEE Trans. Signal Process., vol.61, no.7, pp.1689-1697, April 2013.

[7] K. Yin and H. Zhao, “A variable step-size shrinkage set-membership affine projection algorithm for noisy input,” Circuits Syst. Signal Process., vol.38, pp.455-469, May 2019.

[8] L. Shi, H. Zhao, and Y. Zakharov, “Performance analysis of shrinkage linear complex-valued LMS algorithm,” IEEE Signal Process. Lett., vol.26, no.8, pp.1202-1206, Aug. 2019.

[9] P. Wen, J. Zhang, Y. Yu, and H. Zhao, “A novel variable step-size normalized subband adaptive filter based on mixed error cost function,” Signal Processing, vol.138, pp.48-52, Sept. 2017.

[10] M. Jiang, Y. Gao, Z. Cai, J. Xu, and S. Ou, “Combined regularization factor for affine projection algorithm using variable mixing factor,” IEEE Access, vol.10, pp.120630-120639, Nov. 2022.

[11] O.B.S. Muhammad, A.P. Syed, and Z. Azzedine, “A variable step-size incremental LMS solution for low SNR applications,” Signal Processing, vol.178, p.107730, Jan. 2021.

Authors

Yufei HAN
  Nanjing Institute of Technology
Jiaye XIE
  Nanjing Institute of Technology
Yibo LI
  Nanjing Institute of Technology
