1. Introduction
Adaptive notch filtering is well known as a useful technique for detecting, removing, enhancing and retrieving sinusoidal signals of unknown frequency in additive broad-band noise. It is widely applied in digital communications, control, radar, sonar, biomedical engineering, active noise control and so on [1]-[3]. Adaptive notch digital filters can mainly be classified into two classes: direct-form filters with constrained poles and zeros [4]-[10] and lattice-based filters [11]-[14]. The former usually yield biased frequency estimates [6], [8], [10], whereas the latter yield unbiased frequency estimates in the single-sinusoid case, regardless of the filter bandwidth [11], [12]. It is noted that a simplified iterative algorithm was used in [11], [14], an ordinary differential equation approach was employed in [12], and an affine combination of Regalia’s simplified lattice algorithm and the lattice gradient algorithm was utilized in [13], where the notch frequency was estimated directly. To the best of our knowledge, the notch digital filters based on all-pass transfer functions in [15], [16] can be considered a generalization of various IIR notch digital filters.
In this paper, a state-space approach to adaptive second-order notch digital filters is explored. To begin with, a simplified iterative algorithm is derived from the gradient-descent method to minimize the mean-squared output of the filter. The stability and parameter-estimation bias of the adaptive state-space notch digital filter are then analyzed by utilizing a first-order linear dynamical system. As a result, it is clarified that the proposed simplified iterative algorithm yields unbiased parameter estimates. A numerical example is included to confirm the validity and effectiveness of the adaptive state-space notch digital filter and the parameter-estimation bias analysis for the simplified iterative algorithm. This paper is a partially modified version of our conference paper [17].
2. Adaptive State-Space Notch Digital Filters
2.1 Description of All-Pass-Based Notch Digital Filters
Consider an all-pass-based second-order IIR notch digital filter with notch frequency \(\omega_o\) and bandwidth \(B_o\), described by [15], [16]
\[\begin{equation*} \begin{array}{@{}r@{~}c@{~}l@{}} H_o(z)&=&\displaystyle\frac{1}{2}\left[1+\frac{\rho^2+a(1+\rho^2) z^{-1}+z^{-2}}{1+a(1+\rho^2) z^{-1}+\rho^2 z^{-2}}\right] \\ &=&\displaystyle\frac{1+\rho^2}{2}\,\frac{1+2a z^{-1}+z^{-2}}{1+a(1+\rho^2) z^{-1}+\rho^2 z^{-2}} \end{array} \tag{1} \end{equation*}\]
where the pole radius \(\rho>0\) is less than but close to unity and
\[a=-\cos\omega_o,\hspace{3mm}0\leq\omega_o\leq\pi,\hspace{3mm}\rho^2=\frac{1-\tan(B_o/2)}{1+\tan(B_o/2)}.\]
The digital filter \(H_o(z)\) above can be approximated by
\[\begin{equation*} H(z)=\rho\,\frac{1+2a z^{-1}+z^{-2}}{1+a(1+\rho^2) z^{-1}+\rho^2 z^{-2}} \tag{2} \end{equation*}\]
when \(\rho\simeq 1\), since \((1+\rho^2)/2\simeq\rho\) by the arithmetic-geometric mean inequality. The magnitude response of the IIR notch digital filter in (2) with \(a=-\cos(0.3\pi)\) is depicted in Fig. 1.
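As a quick numerical illustration of (2), the sketch below evaluates the frequency response directly on the unit circle; \(\rho=0.96\) is an assumed example value, and the helper name `H` is ours. The gain vanishes at the notch frequency while staying near unity at DC.

```python
import numpy as np

# Sketch: evaluate the notch transfer function H(z) of (2) on the unit circle.
# rho = 0.96 and omega_o = 0.3*pi are assumed example values.
rho = 0.96
omega_o = 0.3 * np.pi
a = -np.cos(omega_o)

def H(z):
    """Rational transfer function (2) evaluated at a complex point z."""
    zi = 1.0 / z
    num = 1 + 2 * a * zi + zi**2
    den = 1 + a * (1 + rho**2) * zi + rho**2 * zi**2
    return rho * num / den

gain_at_notch = abs(H(np.exp(1j * omega_o)))   # zero in exact arithmetic
gain_at_dc = abs(H(1.0))                       # equals 2*rho/(1+rho^2), close to 1
```

The DC gain \(2\rho/(1+\rho^2)\approx 0.9992\) for \(\rho=0.96\) confirms that the passband is essentially flat.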
The IIR notch digital filter in (2) can be realized by the following state-space model:
\[\begin{array}{@{}r@{~}c@{~}l@{}}{\boldsymbol{x}}(k+1) & = & {\boldsymbol{A}} {\boldsymbol{x}}(k) + {\boldsymbol{b}}u(k) \\ y(k) & = & {\boldsymbol{c}} {\boldsymbol{x}}(k) + d u(k) \tag{3a}\end{array} \]
where \({\boldsymbol{x}}(k)=[x_1(k), x_2(k)]^T\) is a \(2 \times 1\) state-variable vector, \(u(k)\) is a scalar input, \(y(k)\) is a scalar output, and real constant matrices \({\boldsymbol{A}}, {\boldsymbol{b}}\), \({\boldsymbol{c}}\) and \(d\) are defined by
\[\begin{array}{@{}r@{~}c@{~}l@{}} {\boldsymbol{A}}& = &\left[ \begin{array}{cc} -a & -\rho^2 \\ 1-a^2 & -\rho^2 a \end{array} \right], \hspace{6mm} {\boldsymbol{b}}= \displaystyle\rho\left[ \begin{array}{c} 1 \\[1mm] a \end{array} \right] \\ {\boldsymbol{c}} & = & \left[ \begin{array}{cc} 0 &~ 1-\rho^2 \end{array} \right], \hspace{6mm}d=\displaystyle\rho. \tag{3b}\end{array} \]
The transfer function of the state-space model in (3a) can be expressed as
\[\begin{equation*} H(z)=\frac{Y(z)}{U(z)}={\boldsymbol{c}}(z{\boldsymbol{I}}_2 - {\boldsymbol{A}})^{-1}{\boldsymbol{b}} + d \tag{4} \end{equation*}\]
where \(U(z)\) and \(Y(z)\) denote the \(z\)-transform of input \(u(k)\) and output \(y(k)\), respectively.
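The claim that (3) realizes (2) is easy to check numerically; the following sketch (with assumed example values \(\rho=0.96\), \(a=-\cos(0.3\pi)\)) compares \({\boldsymbol{c}}(z{\boldsymbol{I}}_2-{\boldsymbol{A}})^{-1}{\boldsymbol{b}}+d\) of (4) with the rational form of (2) at a few points on the unit circle.

```python
import numpy as np

# Sketch: verify that the state-space model (3) realizes H(z) in (2).
# rho and a are assumed example values (rho = 0.96, a = -cos(0.3*pi)).
rho = 0.96
a = -np.cos(0.3 * np.pi)

A = np.array([[-a, -rho**2],
              [1 - a**2, -rho**2 * a]])
b = rho * np.array([1.0, a])
c = np.array([0.0, 1 - rho**2])
d = rho

def H_ss(z):
    """Transfer function (4): c (zI - A)^{-1} b + d."""
    return c @ np.linalg.solve(z * np.eye(2) - A, b) + d

def H_tf(z):
    """Rational transfer function (2)."""
    zi = 1.0 / z
    return rho * (1 + 2*a*zi + zi**2) / (1 + a*(1 + rho**2)*zi + rho**2*zi**2)

test_points = np.exp(1j * np.array([0.1, 0.7, 1.9, 2.8]))
max_err = max(abs(H_ss(z) - H_tf(z)) for z in test_points)   # ~ 0
```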
The transfer function from input \(u(k)\) to state-variable vector \(\boldsymbol{x}(k)\) can be written as
\[\begin{array}{@{}r@{~}c@{~}l@{}} \left[\begin{array}{c} F_1(z) \\ F_2(z) \end{array} \right] &=& (z{\boldsymbol{I}}_2-{\boldsymbol{A}})^{-1} {\boldsymbol{b}} \\ &=& \displaystyle\frac{\rho}{z^2+ a(1+\rho^2) z+\rho^2} \begin{bmatrix} z \\[1mm] 1+a z \end{bmatrix} \tag{5a}\end{array}\]
which leads to
\[\begin{array}{@{}r@{~}c@{~}l@{}} X_1(z)&=&F_1(z) U(z) \\ &=&\displaystyle\frac{\rho\,z^{-1}}{1+a(1+\rho^2) z^{-1}+\rho^2 z^{-2}}\,U(z)\tag{5b} \end{array}\]
where \(X_1(z)\) denotes the \(z\)-transform of the first entry \(x_1(k)\) in the state-variable vector \(\boldsymbol{x}(k)=[x_1(k), x_2(k)]^T\).
From (2) and (4), it follows that
\[\begin{array}{@{}r@{~}c@{~}l@{}} \displaystyle\frac{\partial Y(z)}{\partial a} &=&\displaystyle\frac{\partial H(z)}{\partial a}\,U(z) \\ &=&\displaystyle\frac{(1-\rho^2)(1-z^{-2})}{1+a(1+\rho^2) z^{-1}+\rho^2 z^{-2}}\,F_1(z) U(z) \\ &=&\Delta H(z)\cdot 2 X_1(z) \tag{6a}\end{array} \]
where \(\Delta H(z)\) is a bandpass digital filter given by
\[\begin{array}{@{}r@{~}c@{~}l@{}} \Delta H(z)&=&\displaystyle 1-\frac{1+\rho^2}{2\rho}H(z) \\ &=&\displaystyle\frac{1}{\,2\,}\cdot\frac{(1-\rho^2)(1-z^{-2})}{1+a(1+\rho^2) z^{-1}+\rho^2 z^{-2}}. \tag{6b}\end{array} \]
The magnitude response of the bandpass digital filter \(\Delta H(z)\) with \(a=-\cos(0.3\pi)\) in (6b) is depicted in Fig. 2. This reveals that \(2 x_1(k)\) can be viewed as an approximation of derivative \(\partial y(k)/\partial a\).
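The identity in (6b) can also be confirmed numerically; in the sketch below (assumed example values \(\rho=0.96\), \(a=-\cos(0.3\pi)\)) the two expressions for \(\Delta H(z)\) agree on the unit circle, and \(|\Delta H|\) equals one at the notch frequency, where \(H\) vanishes.

```python
import numpy as np

# Sketch: check the two forms of the bandpass response Delta H(z) in (6b).
# rho = 0.96 and omega_o = 0.3*pi are assumed example values.
rho = 0.96
omega_o = 0.3 * np.pi
a = -np.cos(omega_o)

def H(z):
    zi = 1.0 / z
    return rho * (1 + 2*a*zi + zi**2) / (1 + a*(1 + rho**2)*zi + rho**2*zi**2)

def dH_closed(z):
    """Closed form of Delta H(z), second line of (6b)."""
    zi = 1.0 / z
    return 0.5 * (1 - rho**2) * (1 - zi**2) / (1 + a*(1 + rho**2)*zi + rho**2*zi**2)

def dH_from_H(z):
    """First line of (6b): 1 - (1 + rho^2)/(2 rho) H(z)."""
    return 1.0 - (1 + rho**2) / (2 * rho) * H(z)

w = np.linspace(0.01, np.pi - 0.01, 500)
z = np.exp(1j * w)
agreement = np.max(np.abs(dH_closed(z) - dH_from_H(z)))   # ~ 0
peak_gain = abs(dH_from_H(np.exp(1j * omega_o)))          # = 1, since H = 0 there
```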
2.2 Construction of Adaptive Notch Digital Filters
Let the input signal to an adaptive state-space notch digital filter be given by
\[\begin{equation*} u(k)=A\cos(\omega_o k+\theta)+v(k) \tag{7} \end{equation*}\]
where \(A\) and \(\theta\) denote the amplitude and phase of the sinusoid, respectively, \(\omega_o\) is the frequency of the sinusoid, and \(v(k)\) is zero-mean white Gaussian noise with variance \(\sigma_v^2\).
Referring to (3), an adaptive state-space notch digital filter can be described by
\[\begin{array}{@{}r@{~}c@{~}l@{}} {\boldsymbol{x}}(k+1) & = & {\boldsymbol{A}}(k){\boldsymbol{x}}(k) + {\boldsymbol{b}}(k)u(k) \\ y(k) & = & {\boldsymbol{c}}{\boldsymbol{x}}(k) + d u(k) \tag{8a}\end{array} \]
where
\[\begin{array}{@{}r@{~}c@{~}l@{}}{\boldsymbol{A}}(k) & \!=\! &\left[\!\! \begin{array}{cc} -a(k) & -\rho^2 \\ 1-a(k)^2 & -\rho^2 a(k) \end{array}\!\!\right],\ {\boldsymbol{b}}(k)\!=\!\displaystyle\rho\left[\!\! \begin{array}{c} 1 \\ a(k) \end{array} \!\!\right] \\ {\boldsymbol{c}} &\!=\!& \left[ \begin{array}{cc} 0 & ~ 1-\rho^2 \end{array} \right],\ d=\rho. \tag{8b}\end{array}\]
To tune the time-varying parameter \(a(k)\), we now consider a simplified iterative algorithm (SIA) which is deduced from (6a) and the magnitude response illustrated in Fig. 2, that is,
\[\begin{equation*} a(k+1)=a(k)-\mu y(k) x_1(k) \tag{9} \end{equation*}\]
where \(\mu\) is the step-size parameter that controls the convergence, and \(x_1(k)\), the first entry of the state-variable vector \(\boldsymbol{x}(k)=[x_1(k), x_2(k)]^T\) in (8a), is half of an approximation to the derivative \(\partial y(k)/\partial a\) in (6a).
The block diagram of an adaptive state-space notch digital filter described by (8) and (9) is drawn in Fig. 3.
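A compact simulation of the adaptive filter (8) with the SIA update (9) can be written as follows; the step size, pole radius, and signal parameters are assumed example values in the spirit of the numerical example of Sect. 4.

```python
import numpy as np

# Sketch: adaptive state-space notch filter (8) with the SIA update (9).
# mu, rho, and the input parameters of (7) are assumed example values.
rng = np.random.default_rng(0)
rho, mu = 0.96, 2e-4
omega_o, amp, theta = 0.3 * np.pi, 1.0, np.pi / 4
sigma_v = np.sqrt(0.05)

a = -np.cos(0.2 * np.pi)          # initial estimate a(0)
x = np.zeros(2)                   # state-variable vector x(k)
for k in range(3000):
    u = amp * np.cos(omega_o * k + theta) + sigma_v * rng.standard_normal()
    y = (1 - rho**2) * x[1] + rho * u            # output equation of (8a)
    x1 = x[0]                                    # x_1(k), used in (9)
    A = np.array([[-a, -rho**2],
                  [1 - a**2, -rho**2 * a]])      # A(k) of (8b)
    x = A @ x + rho * np.array([1.0, a]) * u     # state update of (8a)
    a = a - mu * y * x1                          # SIA update (9)

a_true = -np.cos(omega_o)         # a(k) should approach a_true = -cos(0.3*pi)
```

The notch frequency is then recovered indirectly as \(\omega_o(k)=\arccos(-a(k))\).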
3. Stability and Parameter-Estimation Bias Analysis
In this section, the stability and parameter-estimation bias of time-varying parameter \(a(k)\) at steady state are analyzed for the adaptive state-space notch digital filter described by (8) and (9). When the simplified iterative algorithm sufficiently approaches its steady state, the time-varying parameter \(a(k)\) will be close to its true value \(a_0=-\cos\omega_o\).
Applying the Taylor series expansion, the transfer function \(F_1(z)\) in (5b) is approximated in the vicinity of \(a_0\) as
\[\begin{equation*} \begin{array}{@{}r@{~}c@{~}l@{}} F_1(e^{j\omega_o})&=&\displaystyle\frac{\rho\, e^{-j\omega_o}}{1+a(k)(1+\rho^2) e^{-j\omega_o}+\rho^2 e^{-j2\omega_o}} \\ &=&\displaystyle\frac{\rho}{(\rho^2-1)a_0+(1+\rho^2)\delta_a(k)+(\rho^2-1)e^{-j\omega_o}} \\ &\simeq&\displaystyle\frac{\rho}{\rho^2-1}\cdot\frac{1}{a_0+e^{-j\omega_o}} \\ &&\hspace{1mm}-\,\displaystyle 2\left(\frac{\rho}{\rho^2-1}\right)^2\left(\frac{1}{a_0+e^{-j\omega_o}}\right)^2\delta_a(k) \end{array} \tag{10} \end{equation*}\]
since \(1+\rho^2\simeq 2\rho\) when \(\rho\simeq 1\), where \(\delta_a(k)=a(k)-a_0\). The magnitude and phase of \(F_1(e^{j\omega_o})\) in the case \(\delta_a(k)=0\) are given by
\[\begin{equation*} \begin{array}{@{}r@{~}c@{~}l@{}} B&=&\displaystyle\left|\frac{\rho}{\rho^2-1}\cdot\frac{1}{a_0+e^{-j\omega_o}}\right| =\displaystyle\frac{\rho}{1-\rho^2}\cdot\frac{1}{\sin\omega_o} \\ \phi&=&-\displaystyle\frac{\pi}{\,2\,}. \end{array} \tag{11} \end{equation*}\]
Hence, state-variable \(x_1(k)\) can be expressed from (5b), (7), (10) and (11) as
\[\begin{equation*} \begin{array}{@{}r@{~}c@{~}l@{}} x_1(k)&=&\displaystyle AB\sin(\omega_o k+\theta) \\[2mm]&& +\,\displaystyle 2AB^2\delta_a(k)\cos(\omega_o k+\theta)+v_F(k) \end{array} \tag{12} \end{equation*}\]
where \(v_F(k)\) denotes the noise term in \(x_1(k)\) due to the input noise \(v(k)\) in (7). Applying the Taylor series expansion, the transfer function \(H(z)\) in (2) is approximated in the vicinity of \(a_0\) as
\[\begin{equation*} \begin{array}{@{}r@{~}c@{~}l@{}} H(e^{j\omega_o})&=&\displaystyle\rho\,\frac{1+2a(k)e^{-j\omega_o}+e^{-j2\omega_o}}{1+a(k)(1+\rho^2) e^{-j\omega_o}+\rho^2 e^{-j2\omega_o}} \\ &=&\displaystyle\frac{2\rho\,\delta_a(k)}{(\rho^2-1)a_0+(1+\rho^2)\delta_a(k)+(\rho^2-1)e^{-j\omega_o}} \\ &\simeq&2\,\displaystyle\frac{\rho}{\rho^2-1}\cdot\frac{1}{a_0+e^{-j\omega_o}}\,\delta_a(k). \end{array} \tag{13} \end{equation*}\]
Hence, the output \(y(k)\) can be derived from (4), (7), (11) and (13) as
\[\begin{equation*} \begin{array}{@{}r@{~}c@{~}l@{}} y(k)&=&2AB\delta_a(k)\sin(\omega_o k+\theta)+v_H(k) \end{array} \tag{14} \end{equation*}\]
where \(v_H(k)\) stands for a noise term in \(y(k)\) due to input noise \(v(k)\) in (7).
Taking the expectation of (9) and subtracting \(a_0\) from both sides, a difference equation for the expectation of the parameter error \(\delta_a(k)\) is written as
\[\begin{equation*} E[\delta_a(k+1)]=E[\delta_a(k)]-\mu E[y(k)x_1(k)]. \tag{15} \end{equation*}\]
Substituting (12) and (14) into (15) yields
\[\begin{equation*} \begin{array}{@{}r@{~}c@{~}l@{}} E[\delta_a(k+1)]&\simeq&\displaystyle\left(1-\mu A^2B^2\right)E[\delta_a(k)] \\ &&~~-\,\mu E[v_H(k) v_F(k)]. \end{array} \tag{16} \end{equation*}\]
A condition for the first-order linear dynamical system in (16) to be stable is then given by
\[\begin{equation*} \left|1-\mu A^2B^2\right| < 1 \tag{17} \end{equation*}\]
or equivalently,
\[\begin{equation*} 0 < \mu < 2\left(\frac{1-\rho^2}{\rho}\right)^2\frac{\sin^2\omega_o}{A^2} \tag{18} \end{equation*}\]
which is obtained by substituting \(B\) in (11) into (17). It is noted that (18) is quite different from, and much simpler than, (31) in [14]. The stable regions of the linear dynamical system in (16) are drawn for \(\omega_o=0.3\pi, 0.5\pi\) and \(0.8\pi\) in Fig. 4, where \(A=1\).
Fig. 4 The stable region of a first-order linear dynamical system with \(A=1\) in (18) for \(\omega_o=0.3\pi, 0.5\pi\) and \(0.8\pi\).
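For concreteness, the bound (18) can be evaluated directly; the sketch below (with \(\rho=0.96\) assumed, as in the numerical example, and \(A=1\) as in Fig. 4) prints the admissible step-size range for the three frequencies of Fig. 4.

```python
import numpy as np

# Sketch: upper bound on the step size mu from (18), for A = 1.
# rho = 0.96 is an assumed example value.
rho, amp = 0.96, 1.0

def mu_max(omega_o):
    """Right-hand side of (18)."""
    return 2.0 * ((1 - rho**2) / rho)**2 * np.sin(omega_o)**2 / amp**2

for w in (0.3 * np.pi, 0.5 * np.pi, 0.8 * np.pi):
    print(f"omega_o = {w / np.pi:.1f}*pi : 0 < mu < {mu_max(w):.2e}")
```

The bound is largest at \(\omega_o=\pi/2\), where \(\sin\omega_o=1\), and shrinks as the notch frequency approaches \(0\) or \(\pi\).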
In the case where \(v(k)\) is white noise, \(E[v_H(k)v_F(k)]\) can be obtained from the complex integration along the unit circle in the \(z\)-plane as
\[\begin{array}{@{}r@{~}c@{~}l@{}} E[v_H(k)v_F(k)]&=&\displaystyle\frac{\sigma^2_v}{2\pi j}\oint H(z)F_1(z^{-1})\frac{dz}{z} \\ &=&\displaystyle\frac{\sigma^2_v}{2\pi j}\oint \rho\,\frac{z^2+2 a_0 z+1}{(z-z_1)(z-z_2)} \\[2mm]&&\hspace{8mm} \cdot\displaystyle\frac{\rho}{(1-z_1 z)(1-z_2 z)}\,dz \\ &=&\sigma_v^2\rho^2\displaystyle[\mbox{Res}(z_1)+\mbox{Res}(z_2)] \tag{19a}\end{array}\]
where \(z_1\) and \(z_2\) denote the roots of \(z^2+a_0(1+\rho^2) z+\rho^2=0\). Since \(z_i^2+2 a_0 z_i+1=(1-\rho^2)(a_0 z_i+1)\) holds for \(i=1, 2\), the residues can be computed as
\[\begin{array}{@{}r@{~}c@{~}l@{}} \mbox{Res}(z_1)&=&\displaystyle\frac{a_0 z_1+1}{(z_1-z_2)(1-z_1^2)} \\ \mbox{Res}(z_2)&=&-\displaystyle\frac{a_0 z_2+1}{(z_1-z_2)(1-z_2^2)}. \end{array} \tag{19b}\]
Noting that \(z_1+z_2=-a_0(1+\rho^2)\) and \(z_1 z_2=\rho^2\), it follows from (19) that
\[\begin{equation*} \mbox{Res}(z_1)+\mbox{Res}(z_2)=0,\hspace{4mm}E[v_H(k)v_F(k)]=0. \tag{20} \end{equation*}\]
Hence, assuming that the first-order linear dynamical system in (16) is stable, as \(k\rightarrow \infty\) from (16) and (20) we obtain
\[\begin{equation*} E[\delta_a(\infty)]=-\frac{1}{A^2 B^2}E[v_H(\infty)v_F(\infty)]=0. \tag{21} \end{equation*}\]
This shows that the parameter estimate is unbiased at steady state, irrespective of the magnitude of \(\sigma_v^2\) and \(\rho\).
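The conclusion (20) can also be verified numerically: the cross term equals \(\sigma_v^2\) times the average of \(H(e^{j\omega})F_1(e^{-j\omega})\) over one period, which the sketch below (assumed example values \(\rho=0.96\), \(a_0=-\cos(0.3\pi)\)) approximates by a Riemann sum.

```python
import numpy as np

# Sketch: numerical check of (20), E[v_H(k) v_F(k)] = 0.  The contour
# integral in (19a) becomes (1/2pi) * integral of H(e^{jw}) F1(e^{-jw}) dw,
# approximated by an N-point Riemann sum.  rho and a0 are assumed values.
rho = 0.96
a0 = -np.cos(0.3 * np.pi)

def H(z):
    zi = 1.0 / z
    return rho * (1 + 2*a0*zi + zi**2) / (1 + a0*(1 + rho**2)*zi + rho**2*zi**2)

def F1(z):
    zi = 1.0 / z
    return rho * zi / (1 + a0*(1 + rho**2)*zi + rho**2*zi**2)

N = 4096
z = np.exp(2j * np.pi * np.arange(N) / N)
cross = np.mean(H(z) * F1(1 / z))   # approximates E[v_H v_F] / sigma_v^2
```

Because all poles lie well inside the unit circle, the Riemann sum is accurate to machine precision, and `cross` is numerically zero regardless of \(a_0\) and \(\rho\).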
4. A Numerical Example
The amplitude and phase of the sinusoid are set to \(A=1\) and \(\theta=\pi/4\) in (7), respectively. The mean and variance of the white noise \(v(k)\) are set to \(E[v(k)]=0\) and \(\sigma_v^2=0.05\) in (7) (except for Fig. 11), respectively. Moreover, the initial state-variable vector is chosen to be \({\boldsymbol{x}}(0)=\bf0\) in (8a).
When the simplified iterative algorithm (SIA) in (9) is applied to this example, the convergence characteristics of the estimate \(a(k)\) are shown in Figs. 5-7, where \(\rho=0.96\) is chosen in (8b) as a representative example. From Figs. 5-7, it is observed that the estimate \(a(k)\) at steady state is close to the true value: \(a_0=-\cos(0.3\pi)\) in Fig. 5, \(a_0=-\cos(0.5\pi)\) in Fig. 6 and \(a_0=-\cos(0.8\pi)\) in Fig. 7, for all the values of step-size \(\mu\) and initial estimate \(a(0)\).
Fig. 5 The convergence characteristics of \(a(k)\) for the SIA in (9) with fixed step-size \(\mu\) in the case where \(a_0=-\cos (0.3\pi)\).
Fig. 6 The convergence characteristics of \(a(k)\) for the SIA in (9) with fixed step-size \(\mu\) in the case where \(a_0=-\cos (0.5\pi)\).
Fig. 7 The convergence characteristics of \(a(k)\) for the SIA in (9) with fixed step-size \(\mu\) in the case where \(a_0=-\cos (0.8\pi)\).
Moreover, the MSEs of the convergence characteristics of the estimate \(a(k)\) for the SIA in (9) with step-size \(\mu=0.0003\), \(0.0002\) and \(0.0001\) are plotted in Fig. 8. These plots are obtained by the ensemble average of 100 independent runs, and the MSEs at time \(k\) are computed from
\[\begin{equation*} \mbox{MSE}=\displaystyle\frac{1}{100}\sum_{i=1}^{100} \bigl(a(k)_i-a_0\bigr)^2 \tag{22} \end{equation*}\]
where \(a(k)_i\) stands for the estimate \(a(k)\) obtained from the \(i\)th independent run at time \(k\). Fig. 8 shows that the SIA in (9) converges to the true value \(a_0=-\cos(0.3\pi)\) with high parameter-estimation accuracy for all the values of step-size \(\mu\).
Fig. 8 MSE plots obtained by the ensemble average of 100 independent runs for the SIA in (9) with fixed step-size \(\mu\).
Regarding the parameter-estimation bias at steady state, the experimental results are compared with the theory based on (21) in Figs. 9-12, where \(E[a(k)]\) is the ensemble average of 100 independent runs at time \(k\) and the time average of \(E[a(k)]-a_0\) at steady state is defined by
\[\begin{equation*} \langle E[a(k)]-a_0\rangle=\frac{1}{N-M}\sum_{k=M+1}^N \left(E[a(k)]-a_0\right) \tag{23} \end{equation*}\]
with \(a_0=-\cos\omega_o\), \(M=2000\) and \(N=3000\) on condition that the stationary random process fulfills ergodicity [18].
Fig. 9 Comparison between theoretical and experimental biases at steady state versus pole radius \(\rho\) for \(\omega_o=0.3\pi\) and \(a(0)=-\cos(0.2\pi)\) where \(\mu\in[0.00008,0.00019]\).
Fig. 11 Comparison between theoretical and experimental biases at steady state versus variance \(\sigma_v^2\) for \(\omega_o=0.3\pi\) and \(\rho=0.96\) where \(\mu\in[0.00007,0.0001]\).
Fig. 12 Comparison between theoretical and experimental biases at steady state versus step-size \(\mu\) for \(\sigma_v^2=0.05\), \(\rho=0.96\), \(\omega_o=0.3\pi\) and \(a(0)=-\cos(0.2\pi)\).
From Figs. 9-11, it is observed that the experimental results are close to the theory based on (21) for all the discrete values of pole radius \(\rho\) over the range \(0.9\le\rho\le 0.98\) in Fig. 9, notch frequency \(\omega_o\) over the range \(0.1\pi\le\omega_o\le 0.9\pi\) in Fig. 10 and variance \(\sigma_v^2\) over the range \(0.01\le\sigma_v^2\le 0.1\) in Fig. 11, respectively. In other words, the experimental results are distributed around the unbiased value (i.e., zero) predicted by (21), owing to the noise \(v(k)\) in (7), in all the cases. Moreover, it is seen from Fig. 12 that the experimental results remain almost constant, regardless of step-size \(\mu\), and are very close to the unbiased value (i.e., zero) based on (21) for all the values of step-size \(\mu\), where \(\sigma_v^2=0.05\) and \(a(0)=-\cos(0.2\pi)\).
5. Comparison with the Conventional Methods
5.1 Comparison with the Methods in [11], [13]
Consider an IIR notch digital filter with the following transfer function [11], [13, Eq. (2)]:
\[\begin{equation*} H(z)=\frac{1+\sin\theta_2}{2}\,\displaystyle\frac{1+2\sin\theta_1 z^{-1}+z^{-2}}{1+\sin\theta_1(1+\sin\theta_2) z^{-1}+\sin\theta_2 z^{-2}} \tag{24} \end{equation*}\]
where \(\theta_1\) is the notch-frequency parameter with \(\theta_1=\omega_o-\pi/2\) and \(0\leq\omega_o\leq\pi\), and \(\theta_2\) is the notch-bandwidth parameter.
Referring to (24), an adaptive second-order IIR notch digital filter can be realized by a state-space model
\[\begin{array}{@{}r@{~}c@{~}l@{}} {\boldsymbol{\xi}}(k+1)&=&{\boldsymbol{F}}(k){\boldsymbol{\xi}}(k)+{\boldsymbol{g}}(k)u(k) \\ y(k) & = & {\boldsymbol{h}}{\boldsymbol{\xi}}(k) + d_o u(k) \tag{25a}\end{array} \]
where \({\boldsymbol{\xi}}(k)=[\xi_1(k), \xi_2(k)]^T\) is a \(2\times 1\) state-variable vector, \(u(k)\) is a scalar input given by (7), \(y(k)\) is a scalar output, and coefficient matrices \({\boldsymbol{F}}(k)\), \({\boldsymbol{g}}(k)\), \({\boldsymbol{h}}\) and \(d_o\) are defined by
\[\begin{array}{@{}r@{~}c@{~}l@{}} {\boldsymbol{F}}(k)& = &\left[ \begin{array}{rr} -\sin\theta_1(k) & -\cos\theta_1(k)\sin\theta_2 \\ \cos\theta_1(k) & -\sin\theta_1(k)\sin\theta_2 \end{array} \right] \\ {\boldsymbol{g}}(k) & = & \left[ \begin{array}{r} \cos\theta_1(k)\cos\theta_2 \\ \sin\theta_1(k)\cos\theta_2 \end{array} \right] \\ {\boldsymbol{h}}&=&\displaystyle\frac{1}{\,2\,}\left[ \begin{array}{cc} 0 & \cos\theta_2 \end{array} \right], \hspace{3mm}d_o=\frac{1+\sin\theta_2}{2}. \tag{25b}\end{array} \]
Together with (25), the simplified lattice algorithm (SLA) is given by [11], [13, Eq.(10)]
\[\begin{equation*} \theta_1(k+1)=\theta_1(k)-\mu y(k)\,\xi_1(k) \tag{26} \end{equation*}\]
where \(\mu\) is the step-size parameter and \(\xi_1(k)\) is the first entry of state-variable vector \({\boldsymbol{\xi}}(k)\) in (25a).
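A minimal simulation of the lattice filter (25) with the SLA update (26) is sketched below; \(\sin\theta_2=\rho^2\) with \(\rho=0.96\) is assumed as in Sect. 5.1, and, to keep the run short, the initial estimate is taken closer to the true frequency than in Fig. 13.

```python
import numpy as np

# Sketch: lattice-based notch filter (25) with the SLA update (26).
# sin(theta2) = 0.96^2; the remaining settings are assumed example values.
rng = np.random.default_rng(1)
s2 = 0.96**2                       # sin(theta_2)
c2 = np.sqrt(1 - s2**2)            # cos(theta_2)
mu = 1e-3
omega_o, amp, theta = 0.2 * np.pi, 1.0, np.pi / 4
sigma_v = np.sqrt(0.05)

t1 = -0.2 * np.pi                  # theta_1(0), i.e. omega_o(0) = 0.3*pi
xi = np.zeros(2)                   # state-variable vector xi(k)
for k in range(5000):
    u = amp * np.cos(omega_o * k + theta) + sigma_v * rng.standard_normal()
    y = 0.5 * c2 * xi[1] + 0.5 * (1 + s2) * u    # output equation of (25a)
    xi1 = xi[0]                                  # xi_1(k), used in (26)
    s1, c1 = np.sin(t1), np.cos(t1)
    F = np.array([[-s1, -c1 * s2],
                  [c1, -s1 * s2]])               # F(k) of (25b)
    g = c2 * np.array([c1, s1])                  # g(k) of (25b)
    xi = F @ xi + g * u                          # state update of (25a)
    t1 = t1 - mu * y * xi1                       # SLA update (26)

omega_est = t1 + np.pi / 2         # estimated notch frequency
```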
Noting that
\[\begin{equation*} \begin{array}{@{}r@{~}c@{~}l@{}} \displaystyle\frac{\partial Y(z)}{\partial \theta_1} &=&\displaystyle\frac{\partial H(z)}{\partial \theta_1}U(z) \\ &=&[1-H(z)]\displaystyle\frac{1+\sin\theta_2}{\cos\theta_2}Z[\xi_1(k)] \\ &\propto& [1-H(z)]Z[\xi_1(k)] \end{array} \tag{27} \end{equation*}\]
where \(Z[\xi_1(k)]\) denotes the \(z\)-transform of \(\xi_1(k)\), the signal proportional to the derivative in (27) can be generated from
\[\begin{array}{@{}r@{~}c@{~}l@{}}{\boldsymbol{w}}(k+1) & = & {\boldsymbol{F}}(k){\boldsymbol{w}}(k) + \hat{{\boldsymbol{g}}}(k)\xi_1(k) \\\Delta y(k) & = & \hat{{\boldsymbol{h}}}{\boldsymbol{w}}(k) + \hat{d}_o\xi_1(k) \tag{28a}\end{array}\]
where \({\boldsymbol{w}}(k)=[w_1(k), w_2(k)]^T\) is a \(2\times 1\) state-variable vector, \({\boldsymbol{F}}(k)\) is the same as in (25b) and
\[\begin{array}{@{}r@{~}c@{~}l@{}} \hat{{\boldsymbol{g}}}(k) & = & \left[ \begin{array}{r} -\cos\theta_1(k) \\ -\sin\theta_1(k) \end{array} \right],\hspace{4mm}\hat{d}_o=\displaystyle\frac{1-\sin\theta_2}{2} \\ \hat{{\boldsymbol{h}}}&=&\displaystyle\frac{(1-\sin\theta_2)(1+\sin\theta_2)}{2}\left[ \begin{array}{cc} 0 & 1 \end{array} \right]. \tag{28b}\end{array} \]
Together with (25) and (28), the lattice gradient algorithm (LGA) is given by [19], [13, Eq.(9)]
\[\begin{equation*} \theta_1(k+1)=\theta_1(k)-\mu y(k)\,\Delta y(k). \tag{29} \end{equation*}\]
Moreover, together with (25) and (28), the affine combination lattice algorithm (ACLA) is given by [13, Eqs.(24)-(25)]
\[\begin{equation*} \theta_1(k+1)=\theta_1(k)-\mu y(k)[\kappa\xi_1(k)+(1-\kappa)\Delta y(k)] \tag{30} \end{equation*}\]
where \(\kappa\) is the weight parameter such that \(\kappa>1\).
It is noted that the LGA and the ACLA each require two second-order linear dynamical systems, whereas this paper’s method and the SLA each require only one. The former are therefore more expensive and incur more computational complexity than the latter.
In Fig. 13, the convergence characteristics of the estimate \(\omega_o(k)=\arccos(-a(k))\) for the SIA in (9) are compared with those of the estimate \(\omega_o(k)=\theta_1(k)+\pi/2\) for the SLA in (26), where \(\omega_o=0.2\pi\), \(\theta_1(0)=0.4\pi\) (\(\omega_o(0)=0.9\pi\)), \(\rho=0.96\) (\(\sin\theta_2=\rho^2\)) and \(\sigma_v^2=0.05\). Fig. 13 shows that they have similar convergence characteristics, with only a small difference between the trajectories, because the SIA in (9) estimates the notch frequency \(\omega_o\) indirectly through \(a(k)=-\cos\omega_o(k)\). Hence, this paper’s method can be viewed as an alternative to the SLA for efficiently estimating the notch frequency.
Fig. 13 Comparison between this paper’s method and the SLA in (26) for the convergence characteristics where \(\rho=0.96\) and \(\sigma_v^2=0.05\).
The SLA in (26) is compared with the ACLA in (30) by numerical simulations in Fig. 14, where \(\omega_o=0.3\pi\), \(\theta_1(0)=-0.4\pi\) and \(\sigma_v^2=0.05\). Figure 14 shows that, for a given value of \(\mu\), the ACLA, which requires more computation, outperforms the SLA in terms of convergence speed. Figure 14 also reveals that the selection of \(\mu\) is more important than that of \(\kappa\) for achieving faster convergence in the estimation of the notch frequency \(\omega_o\). In other words, by choosing the step-size \(\mu\) appropriately, the SLA and this paper’s method can attain fast convergence of the estimate \(\omega_o(k)=\theta_1(k)+\pi/2\) with less computational complexity.
5.2 Comparison with the Lattice Algorithm in [12]
Consider an IIR notch digital filter with the following transfer function [12, Eq. (2)]:
\[\begin{equation*} H_L(z)=\frac{Y(z)}{U(z)}=\frac{1+2p z^{-1}+z^{-2}}{1+p(1+\alpha) z^{-1}+\alpha z^{-2}} \tag{31} \end{equation*}\]
where \(p\) is an adaptive parameter that should converge to \(-\cos\omega_o\). The output of the all-pole section can be written as [12, Eq.(5)]
\[\begin{equation*} x(k)=\frac{1}{1+q(1+\alpha) z^{-1}+\alpha z^{-2}}u(k) \tag{32} \end{equation*}\]
where \(q\) corresponds to \(p\) in (31). The output of the notch filter is described as [12, Eq.(6)]
\[\begin{equation*} y(k)=x(k)+2px(k-1)+x(k-2). \tag{33} \end{equation*}\]
Assuming that \(x(k)\) is a sequence independent of \(p\), the lattice algorithm (LA) can be summarized as follows: [12, p.407]
\[\begin{equation*} \begin{array}{@{}r@{~}c@{~}l@{}} D(k)&=&\lambda D(k-1)+(1-\lambda)2x(k-1)^2 \\ C(k)&=&\lambda C(k-1)+(1-\lambda)x(k-1)[x(k)+x(k-2)] \\ p(k)&=&-C(k)/D(k) \\ \overline{p}(k)&=&\left\{ \begin{array}{cl} p(k), & ~\mbox{if}~~-1\leq p(k) \leq 1 \\[1mm] 1, & ~\mbox{if}~~p(k) > 1 \\[1mm] -1, & ~\mbox{if}~~p(k) < -1 \end{array} \right. \\ \hat{p}(k)&=&\gamma\overline{p}(k-1)+(1-\gamma)\overline{p}(k) \end{array} \tag{34} \end{equation*}\]
where \(\hat{p}(k)\) is the estimate of parameter \(p\) at time \(k\), \(\lambda\) is the forgetting factor satisfying \(0<\lambda<1\), and \(\gamma\) is the smoothing factor such that \(0<\gamma<1\).
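The LA in (34), together with the all-pole section (32) and the clipping of \(p(k)\), can be sketched as follows; \(\alpha=\rho^2\) with \(\rho=0.96\) and the initial values \(C(0)=0.05\), \(D(0)=0.1\) follow Sect. 5.2, while \(\lambda\) and \(\gamma\) are assumed example values, since the text does not specify them.

```python
import numpy as np

# Sketch: lattice algorithm (LA) of (32)-(34).  alpha = 0.96^2, C(0) = 0.05
# and D(0) = 0.1 follow Sect. 5.2; lam (lambda) and gamma are assumed values.
rng = np.random.default_rng(2)
alpha = 0.96**2
lam, gamma = 0.98, 0.5
omega_o, amp, theta = 0.3 * np.pi, 1.0, np.pi / 4
sigma_v = np.sqrt(0.05)

C, D = 0.05, 0.1
p_bar_prev = 0.0
p_hat = 0.0                        # smoothed estimate of p, used as q in (32)
x1 = x2 = 0.0                      # x(k-1) and x(k-2) of the all-pole section
for k in range(500):
    u = amp * np.cos(omega_o * k + theta) + sigma_v * rng.standard_normal()
    x0 = u - p_hat * (1 + alpha) * x1 - alpha * x2   # all-pole output (32)
    D = lam * D + (1 - lam) * 2 * x1**2
    C = lam * C + (1 - lam) * x1 * (x0 + x2)
    p = -C / D
    p_bar = min(1.0, max(-1.0, p))                   # clip p(k) to [-1, 1]
    p_hat = gamma * p_bar_prev + (1 - gamma) * p_bar # smoothing step of (34)
    p_bar_prev = p_bar
    x2, x1 = x1, x0

omega_est = np.arccos(-p_hat)      # estimated notch frequency
```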
In Fig. 15, the convergence characteristics of the estimate \(\omega_o(k)=\arccos(-a(k))\) for the SIA in (9) are compared with those of the estimate \(\omega_o(k)=\arccos(-\hat{p}(k))\) for the LA in (34), where \(\omega_o=0.3\pi\), \(\rho=0.96\) (\(\alpha=\rho^2\)), \(\sigma_v^2=0.05\), \(C(0)=0.05\) and \(D(0)=0.1\). The data related to Fig. 15 are summarized in Table 1, where \(M[\omega_o(k)]\) and \(D[\omega_o(k)]\) stand for the time average and variance of the estimate \(\omega_o(k)\) between \(k=201\) and \(500\) at steady state, respectively, and \(\Delta M[\omega_o(k)]\) is defined by
\[\begin{equation*} \Delta M[\omega_o(k)]=M[\omega_o(k)]-\omega_o~~\mbox{for}~~\omega_o=0.3\pi. \tag{35} \end{equation*}\]
Fig. 15 Comparison between this paper’s method and the LA in (34) for the convergence characteristics where \(\omega_o=0.3\pi\), \(\rho=0.96\) and \(\sigma_v^2=0.05\).
From Fig. 15, it can be observed that the LA converges to a neighborhood of the true value faster than the SIA. On the other hand, Table 1 suggests that the convergence accuracy of the SIA at steady state is better than that of the LA. It should also be noted that the LA in (34) requires more computation than the SIA in (9).
6. Conclusion
This paper has explored a state-space approach for adaptive second-order notch digital filters. A simplified iterative algorithm has been derived from the gradient-descent method to minimize the mean-squared output of the filter. Stability and parameter-estimation bias of the adaptive state-space notch digital filter have then been analyzed by using a first-order linear dynamical system. As a consequence, it has been shown that the parameter estimate is unbiased, regardless of the magnitude of noise variance and pole radius. Moreover, the simulation results of a numerical example have illustrated the validity and effectiveness of the adaptive state-space notch digital filter and parameter-estimation bias analysis for the simplified iterative algorithm. Finally, this paper’s method has been compared with the conventional methods by means of numerical simulations in which it has been shown that this paper’s method can be viewed as an alternative to the SLA for the efficient estimation of notch frequency.
References
[1] J.R. Glover, “Adaptive noise canceling applied to sinusoidal interferences,” IEEE Trans. Acoust., Speech, Signal Process., vol.ASSP-25, no.6, pp.484-491, 1977.
[2] D.V.B. Rao and S.Y. Kung, “Adaptive notch filtering for the retrieval of sinusoids in noise,” IEEE Trans. Acoust., Speech, Signal Process., vol.ASSP-32, no.4, pp.791-802, 1984.
[3] K. Martin and M.T. Sun, “Adaptive filters suitable for real-time spectral analysis,” IEEE Trans. Circuits Syst., vol.CAS-33, no.2, pp.218-229, Feb. 1986.
[4] A. Nehorai, “A minimal parameter adaptive notch filter with constrained poles and zeros,” IEEE Trans. Acoust., Speech, Signal Process., vol.33, no.4, pp.983-996, Aug. 1985.
[5] T.S. Ng, “Some aspects of an adaptive digital notch filter with constrained poles and zeros,” IEEE Trans. Acoust., Speech, Signal Process., vol.35, no.2, pp.158-161, Feb. 1987.
[6] P. Stoica and A. Nehorai, “Performance analysis of an adaptive notch filter with constrained poles and zeros,” IEEE Trans. Acoust., Speech, Signal Process., vol.36, no.6, pp.911-919, June 1988.
[7] S.-C. Pei and C.-C. Tseng, “Adaptive IIR notch filter based on least mean p-power error criterion,” IEEE Trans. Circuits Syst. II, Analog Digit. Signal Process., vol.40, no.8, pp.525-529, Aug. 1993.
[8] Y. Xiao, Y. Takeshita, and K. Shida, “Steady-state analysis of a plain gradient algorithm for a second-order adaptive IIR notch filter with constrained poles and zeros,” IEEE Trans. Circuits Syst. II, Analog Digit. Signal Process., vol.48, no.7, pp.733-740, July 2001.
[9] Y.C. Lim, Y.X. Zou, and N. Zheng, “A piloted adaptive notch filter,” IEEE Trans. Signal Process., vol.53, no.4, pp.1310-1323, April 2005.
[10] Y. Hinamoto and S. Nishimura, “A state-space approach and its estimation bias analysis for adaptive notch digital filters with constrained poles and zeros,” IEICE Trans. Fundamentals, vol.E106-A, no.3, pp.582-589, March 2023.
[11] P.A. Regalia, “An improved lattice-based adaptive IIR notch filter,” IEEE Trans. Signal Process., vol.39, no.9, pp.2124-2128, Sept. 1991.
[12] N.I. Cho and S.U. Lee, “On the adaptive lattice notch filter for the detection of sinusoids,” IEEE Trans. Circuits Syst. II, Analog Digit. Signal Process., vol.40, no.7, pp.405-416, July 1993.
[13] S. Nakamura, S. Koshita, M. Abe, and M. Kawamata, “A new adaptive notch filtering algorithm based on normalized lattice structure with improved mean update term,” IEICE Trans. Fundamentals, vol.E98-A, no.7, pp.1482-1493, July 2015.
[14] A. Mvuma and S. Nishimura, “Steady-state analysis of a simplified lattice-based adaptive IIR notch filter,” IEICE Trans. Fundamentals, vol.E83-A, no.6, pp.965-972, June 2000.
[15] P.A. Regalia, S.K. Mitra, and P.P. Vaidyanathan, “The digital all-pass filter: A versatile signal processing building block,” Proc. IEEE, vol.76, no.1, pp.19-37, Jan. 1988.
[16] Y.V. Joshi and S.C.D. Roy, “Design of IIR multiple notch filters based on all-pass filters,” IEEE Trans. Circuits Syst. II, Analog Digit. Signal Process., vol.46, no.2, pp.134-138, Feb. 1999.
[17] Y. Hinamoto and S. Nishimura, “A state-space approach for adaptive notch digital filters with unbiased parameter-estimation,” Proc. 38th International Technical Conference on Circuits/Systems, Computers, and Communications 2023 (ITC-CSCC 2023), Jeju, Republic of Korea, pp.375-380, June 2023.
[18] W.B. Davenport and W.L. Root, An Introduction to the Theory of Random Signals and Noise, McGraw-Hill, New York, 1958.
[19] P.A. Regalia, Adaptive IIR Filtering in Signal Processing and Control, Marcel Dekker, 1995.