
Adaptive Output Feedback Leader-Following in Networks of Linear Systems Using Switching Logic

Sungryul LEE


Summary

This study explores adaptive output feedback leader-following in networks of linear systems using switching logic. A local state observer is employed to estimate the true state of each agent within the network. The proposed protocol is based on the estimated states obtained from neighboring agents and employs a switching logic that tunes its adaptive gain using only local neighboring information. The proposed leader-following protocol is fully distributed because it has a distributed adaptive gain and relies only on local information from its neighbors. Consequently, compared to conventional adaptive protocols, the proposed design method offers the advantages of a very simple adaptive law and low-dimensional dynamics.

Publication
IEICE TRANSACTIONS on Fundamentals Vol.E107-A No.9 pp.1565-1569
Publication Date
2024/09/01
Publicized
2024/05/13
Online ISSN
1745-1337
DOI
10.1587/transfun.2024EAL2019
Type of Manuscript
LETTER
Category
Systems and Control

1.  Introduction

Recently, a great deal of research attention has been paid to the synchronization problem of complex networks because of its broad applicability in fields such as formation control, cooperative control, robotics, and sensor networks [1]. Extensive research has demonstrated that the solvability of the synchronization problem is determined by the algebraic connectivity of the Laplacian matrix [2]-[7]. Nevertheless, a significant drawback of existing synchronization protocols lies in their dependence on global information associated with the Laplacian matrix. To remove this obstacle, recent research has focused on adaptive synchronization approaches that rely exclusively on locally acquired information from neighboring agents.

Early works [8] and [9] on the adaptive synchronization approach were developed for directed networks of first- or second-order systems. Importantly, they are considered fully distributed because the coupling gain of each agent is adjusted using solely local neighboring information. These fully distributed results for networks of low-order systems were extended to undirected networks of high-order nonlinear systems in [10], directed networks of high-order linear systems in [11] and [12], and directed networks of complex nonlinear systems in [13]. Remarkably, both [14] and [15] extended previous adaptive state feedback synchronization schemes to output feedback approaches for the adaptive synchronization problem. It is worth noting that the protocols discussed in [14] and [15] exhibit high-order dynamics because they incorporate both local and distributed observers. Furthermore, the structures of the protocols introduced in [13], [14], and [15] are very complicated since their design is closely tied to the construction of the Lyapunov function. Unlike the adaptive methods described above, adaptive edge-based protocols are proposed for the synchronization of nonlinear networks in [16], [17], and [18]. However, their design approaches are limited to state feedback and are only applicable to undirected networks.

Inspired by the limitations of previous adaptive approaches, we introduce a novel local observer-based leader-following protocol employing switching logic. The presented leader-following protocol employs a low-dimensional observer and a simple update law based on logic-based switching. These advantages not only diminish the computational burden but also facilitate the implementation of the protocol. In contrast to previous Lyapunov-based adaptive protocols, the proposed method allows for an independent design of the switching logic, the state observer, and the leader-following protocol. Therefore, the presented methodology can be derived by integrating the presented switching logic with existing non-adaptive protocols. As a result, it can be considered a general solution to the adaptive leader-following problem.

2.  Problem Statement

Consider a network of general linear systems as follows.

\[\begin{eqnarray*} &&\!\!\!\!\! {{\dot x}_i}(t) = A{x_i}(t) + B{u_i}(t), \tag{1} \\ &&\!\!\!\!\! {y_i}(t) = C{x_i}(t),i = 0, \cdots ,N,\nonumber \end{eqnarray*}\]

where \({x_i} \in {R^n},{y_i} \in {R^p},{u_i} \in {R^m}\) are the state, the measured output, and the control input of the \({i}\)th agent, respectively, and \(A,B,C\) are constant matrices with appropriate dimensions. \({x_0}(t)\) denotes the leader’s state and \({x_i}(t),i = 1, \cdots ,N\) denote the followers’ states. Since the dynamics of the leader is not influenced by the followers, we assume that \({u_0}(t) = 0\).

Assumption 1: \((A,B,C)\) is stabilizable and detectable.

Lemma 1: [7] Under Assumption 1, there always exist unique matrices \(P = P^T > 0\) and \(Q = Q^T > 0\) such that

\[\begin{eqnarray*} &&\!\!\!\!\! {A^T}P + PA - PB{B^T}P = - {I_n}, \tag{2} \\ &&\!\!\!\!\! AQ + Q{A^T} - Q{C^T}CQ = - {I_n}, \tag{3} \end{eqnarray*}\]

where \({I_n} \in {R^{n \times n}}\) denotes the identity matrix.
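
For implementation, the Riccati-type equations (2) and (3) can be solved with a standard algebraic Riccati equation solver: (2) is the control ARE with unit weights, and (3) is its dual (filter) ARE obtained by the substitution \(A \to A^T\), \(B \to C^T\). The following minimal Python sketch (assuming NumPy and SciPy are available; the double-integrator matrices at the end are only a placeholder example) illustrates this.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lemma1_matrices(A, B, C):
    """Numerically solve (2) and (3) of Lemma 1.

    (2) A^T P + P A - P B B^T P = -I_n : control ARE with unit weights.
    (3) A Q + Q A^T - Q C^T C Q = -I_n : dual (filter) ARE, i.e. (2) with
        the substitution A -> A^T, B -> C^T.
    """
    n = A.shape[0]
    P = solve_continuous_are(A, B, np.eye(n), np.eye(B.shape[1]))
    Q = solve_continuous_are(A.T, C.T, np.eye(n), np.eye(C.shape[0]))
    return P, Q

# Placeholder example: a double integrator (stabilizable and detectable).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
P, Q = lemma1_matrices(A, B, C)
print(P, Q, sep="\n")
```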

The network topology of the system (1) is defined by a directed graph \(G = (V,E,A)\) where \(V = \{0, \cdots ,N\}\) denotes the node set, \(E \subset V \times V\) denotes the edge set, and \({\rm A} = ({a_{ij}}) \in {R^{(N+1) \times (N+1)}}\) denotes the adjacency matrix with \({a_{ii}} = 0\), \({a_{ij}} = 1\) if \((j,i) \in E\), and \({a_{ij}} = 0\) if \((j,i) \notin E\). \(L = ({l_{ij}}) \in {R^{(N+1) \times (N+1)}}\) denotes the Laplacian matrix of \(G\) with \({l_{ii}} = \sum\nolimits_{j = 0}^N {{a_{ij}}}\) and \({l_{ij}} = - {a_{ij}}\) for \(i \ne j\).

Assumption 2: The directed graph \(G\) has a directed spanning tree whose root node is the leader.

Lemma 2: [15] Under Assumption 2, the Laplacian matrix \(L\) can be decomposed as

\[\begin{eqnarray*} L = \left( {\begin{array}{*{20}{c}} 0&{{0_{1 \times N}}}\\ {{L_2}}&{{L_1}} \end{array}} \right) \tag{4} \end{eqnarray*}\]

where \({L_1} \in {R^{N \times N}}\) is a nonsingular M-matrix, \({L_2} \in {R^{N \times 1}}\) is some matrix, and \(0_{1 \times N}\in {R^{1 \times N}}\) is a zero matrix. Furthermore, there exists a matrix \(U = diag\left( {{u_1},{u_2}, \cdots ,{u_N}} \right)>0\) such that \(\hat L = UL_{1} + {L_{1}^T}U>0\).
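
For illustration, the sketch below builds the Laplacian of a small directed leader-follower graph (a hypothetical chain \(0 \to 1 \to 2 \to 3\), not the graph of Fig. 1), extracts \(L_1\) and \(L_2\) as in (4), and checks Lemma 2 numerically. The candidate \(U = diag(q)\) with \(q = (L_1^T)^{-1}{1_N}\) is an assumption here, being one choice often used for nonsingular M-matrices; the code verifies the positive definiteness of \(\hat L\) directly rather than relying on that choice.

```python
import numpy as np

# Adjacency matrix of a hypothetical directed chain 0 -> 1 -> 2 -> 3
# (node 0 is the leader); a_ij = 1 iff the edge (j, i) exists.
N = 3
Adj = np.zeros((N + 1, N + 1))
Adj[1, 0] = Adj[2, 1] = Adj[3, 2] = 1.0

# Laplacian: l_ii = sum_j a_ij, l_ij = -a_ij for i != j.
L = np.diag(Adj.sum(axis=1)) - Adj

# Decomposition (4): the first row and column correspond to the leader.
L1 = L[1:, 1:]          # N x N nonsingular M-matrix
L2 = L[1:, :1]          # N x 1

# Candidate U = diag(q) with q = (L1^T)^{-1} 1_N (an assumed choice).
q = np.linalg.solve(L1.T, np.ones(N))
U = np.diag(q)
L_hat = U @ L1 + L1.T @ U

# Numerical check of Lemma 2.
print("q > 0:", np.all(q > 0))
print("min eig of L_hat:", np.linalg.eigvalsh(L_hat).min())  # should be > 0
```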

The goal of this paper is to design a fully distributed output feedback protocol to solve the leader-following problem defined as follows.

Definition 1: The leader-following problem of the network (1) is to design \({u_i} (t)\) such that \[\begin{eqnarray*} \mathop {\lim }\limits_{t \to \infty } \left\| {{x_i}(t) - {x_0}(t)} \right\| = 0, i = 1, \cdots ,N, \tag{5} \end{eqnarray*}\]

3.  Main Results

Before introducing a switching logic-based leader-following protocol, we first examine an observer-based protocol with a distributed constant gain for the network (1).

\[\begin{eqnarray*} &&\!\!\!\!\! {{\dot {\hat x}}_i}(t) = A{\hat x_i}(t) + B{u_i}(t) + Q{C^T}\left( {{y_i}(t) - C{{\hat x}_i}(t)} \right),\nonumber\\ &&\!\!\!\!\! i = 0, \cdots ,N \tag{6} \\ &&\!\!\!\!\! {u_i}(t) = -{\sigma_i}{B^T}P{z_i}(t),\,i =1, \cdots ,N \tag{7} \end{eqnarray*}\]

where \({\sigma_i} > 0\) is a distributed constant gain, \(P\) and \(Q\) are the solution matrices of (2) and (3), respectively, and \({z_i}(t)\) is the relative estimated state with respect to the neighboring agents, defined as follows.

\[\begin{eqnarray*} {z_i}(t) = \sum\limits_{j = 0}^N {{a_{ij}}\left( {{{\hat x}_i}(t) - {{\hat x}_j}(t)} \right)} \tag{8} \end{eqnarray*}\]

Equation (6) is a local state observer used to estimate the full state of the leader and followers. The protocol (7) is fully distributed in the sense that it has a distributed static gain \({\sigma_i}\) and uses only estimated states obtained from its neighbors.
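
To make the local structure of (6)-(8) concrete, the following sketch performs one Euler integration step of the observer and the static protocol for a single agent, using only its own output and the estimates received from its neighbors. The function name, the argument layout, and the fixed step dt are illustrative assumptions, not part of the original design.

```python
import numpy as np

def protocol_step(i, x_hat, y, u, Adj, A, B, C, P, Q, sigma_i, dt):
    """One Euler step of the local observer (6) and static protocol (7), (8)
    for agent i; only y[i] and the neighbors' estimates x_hat[j] are used."""
    # Relative estimated state (8): only neighbors with a_ij = 1 contribute.
    z_i = sum(Adj[i, j] * (x_hat[i] - x_hat[j]) for j in range(len(x_hat)))
    # Static protocol (7) with constant distributed gain sigma_i.
    u_i = -sigma_i * (B.T @ P @ z_i)
    # Local Luenberger observer (6) with output-injection gain Q C^T.
    x_hat_dot = A @ x_hat[i] + B @ u[i] + Q @ C.T @ (y[i] - C @ x_hat[i])
    return u_i, x_hat[i] + dt * x_hat_dot
```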

Lemma 3: Under Assumptions 1 and 2, there always exists a constant \(\sigma_i^* > 0\) such that, for any \({\sigma_i} > \sigma_i^*\), the protocols (6) and (7) solve the leader-following problem of (1).

Proof: First, let \(\varepsilon (t) = {\left( {e_0^T(t), \cdots ,e_N^T(t)} \right)^T}\) with \({e_i}(t) = {x_i}(t) - {\hat x_i}(t),i = 0, \cdots ,N\) be the estimation error of (6). Then, we have

\[\begin{eqnarray*} \dot \varepsilon(t) = {I_{N+1}} \otimes \left( {A - Q{C^T}C} \right)\varepsilon(t) \tag{9} \end{eqnarray*}\]

where \(\otimes\) denotes the Kronecker product. Consider the Lyapunov function \(V_1(t) = {\varepsilon^T}(t)\left( {{I_{N+1}} \otimes {Q^{ - 1}}} \right)\varepsilon(t)\). Then, using (3) and defining \({\bar \varepsilon}(t) = \left( {{I_{N+1}} \otimes {Q^{ - 1}}} \right){\varepsilon}(t)\), we have

\[\begin{eqnarray*} {\dot V}_1(t) &=& {\varepsilon^T}\left( {{I_{N+1}} \otimes ({Q^{ - 1}}A + {A^T}{Q^{ - 1}} - 2{C^T}C)} \right)\varepsilon\nonumber\\ &=& {{\bar \varepsilon}^T}\left( {{I_{N+1}} \otimes (AQ + Q{A^T} - 2Q{C^T}CQ)} \right)\bar \varepsilon\nonumber\\ &\le& - {\left\| {\bar \varepsilon(t)} \right\|^2} \tag{10} \end{eqnarray*}\]

which means that the estimation error dynamics (9) is exponentially stable. Second, considering (1) and (7), we have

\[\begin{eqnarray*} \dot x(t) = ({I_N} \otimes A)x(t) - (\Sigma \otimes B{B^T}P)z(t) \tag{11} \end{eqnarray*}\]

where \(x(t) = {\left( {x_1^T, \cdots ,x_N^T} \right)^T}\), \(z(t) = {\left( {z_1^T, \cdots ,z_N^T} \right)^T}\), and \(\Sigma = diag\left( {{\sigma_1}, \cdots ,{\sigma_N}} \right)\). Consider the state transformation

\[\begin{eqnarray*} \eta (t) = ({L_1} \otimes {I_n}) \left( {x(t) - {1_N} \otimes {x_0}(t)} \right) \tag{12} \end{eqnarray*}\]

where \({1_N} = {(1, \cdots ,1)^T} \in {R^N}\). Since \({L_1}\) is nonsingular, (12) implies that (5) holds if and only if \(\eta (t) \to 0\) as \(t \to \infty\). Thus, to prove (5), it suffices to show that \(\eta (t)\) converges to zero asymptotically. Using (11) and (12), it follows that

\[\begin{eqnarray*} \dot \eta (t) = ({I_N} \otimes A)\eta (t) - ({L_1}\Sigma \otimes B{B^T}P)z(t) \tag{13} \end{eqnarray*}\]

From (8), we have

\[\begin{eqnarray*} z(t) = ({L_1} \otimes {I_n})\hat x(t) + {L_2} \otimes {\hat x_0}(t) \tag{14} \end{eqnarray*}\]

Since \(L{1_{N + 1}} = 0\), it follows that \({L_2} + {L_1}{1_N} = 0\). Then, we have

\[\begin{eqnarray*} z(t) &=& ({L_1} \otimes {I_n})\left( {\hat x(t) - {1_N} \otimes {{\hat x}_0}(t)} \right) \tag{15} \end{eqnarray*}\]

where \(\hat x(t) = {\left( {\begin{array}{*{20}{c}}{\hat x_1^T}& \cdots &{\hat x_N^T}\end{array}} \right)^T}\). Let \(e(t) = x(t) - \hat x(t)\) and \(\zeta (t) = ({L_1} \otimes {I_n})(e(t) - {1_N} \otimes {e_0}(t))\). Then, it follows that

\[\begin{eqnarray*} z(t) = \eta (t) - \zeta (t) \tag{16} \end{eqnarray*}\]

Substituting (16) into (13), we have

\[\begin{eqnarray*} \dot \eta (t)&=& ({I_N} \otimes A)\eta (t) - (L_1\Sigma \otimes B{B^T}P)\eta (t) \tag{17} \\ & &+(L_1 \Sigma\otimes B{B^T}P)\zeta(t)\nonumber \end{eqnarray*}\]

(10) means that \(\zeta (t) \to 0\) as \(t \to \infty\). Therefore, the stability of (17) reduces to that of (18) below.

\[\begin{eqnarray*} \dot \eta (t) = ({I_N} \otimes A)\eta (t) - (L_1\Sigma \otimes B{B^T}P)\eta (t) \tag{18} \end{eqnarray*}\]

To examine the stability of (18), define the Lyapunov function \(V_2(t) = {\eta ^T}(t)\left( {U\Sigma \otimes P} \right)\eta (t)\). Then, we have

\[\begin{eqnarray*} \dot V_2(t) &=& {\eta ^T}(t)\left( {U\Sigma \otimes (PA + {A^T}P)} \right)\eta (t) \tag{19} \\ &&-{\eta ^T}(t)(\Sigma \hat L\Sigma \otimes PB{B^T}P)\eta (t)\nonumber \end{eqnarray*}\]

Using Lemma 2, we have

\[\begin{eqnarray*} {\eta ^T}(\Sigma \hat L\Sigma \otimes PB{B^T}P)\eta \ge {c_0}{\eta^T}(U{\Sigma ^2} \otimes PB{B^T}P)\eta \tag{20} \end{eqnarray*}\]

where \({c_0} = {\lambda _m}(\hat L)/{\lambda _M}(U)\), \({\lambda _m}(\hat L) > 0\) is the smallest eigenvalue of \(\hat L\), and \({\lambda _M}(U) > 0\) is the largest eigenvalue of \(U\). Using (2) and (20), we have

\[\begin{eqnarray*} \dot V_2 &\le& \sum\limits_{i = 1}^N {{u_i}{\sigma_i}{\eta _i}^T\left( {(PA + {A^T}P) - {c_0}{\sigma_i}PB{B^T}P} \right)} {\eta _i}, \nonumber\\ &=& \sum\limits_{i = 1}^N {{u_i}{\sigma_i}{\eta _i}^T\left( { - {I_n} - ({c_0}{\sigma_i}- 1)PB{B^T}P} \right)} {\eta _i} \tag{21} \end{eqnarray*}\]

Let \(\sigma_i^* = 1/{c_0}\). If \({\sigma_i} \ge \sigma_i^*\), we have

\[\begin{eqnarray*} \dot V_2(t) \le - \sum\limits_{i = 1}^N {{u_i}{\sigma_i}{{\left\| {{\eta _i}(t)} \right\|}^2}} \tag{22} \end{eqnarray*}\]

From (22), the system (18) is exponentially stable. Together with the exponential decay of \(\zeta(t)\), this implies that \(\eta(t)\) in (17) converges to zero, and hence (5) is satisfied.

Remark 1: Equation (22) shows that, to solve the leader-following problem, the design of the static gain \(\sigma_i\) depends on the matrices \(U\) and \(\hat L\) derived from the Laplacian matrix \(L_1\). Despite the fully distributed structure of the proposed protocol, its static gain is determined by global information on the Laplacian matrix. This reliance on global information is eliminated by the adaptive protocol presented in Theorem 1 below.
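
To make the point of Remark 1 explicit, the short sketch below computes \(c_0 = {\lambda _m}(\hat L)/{\lambda _M}(U)\) and the threshold \(\sigma_i^* = 1/c_0\) of Lemma 3 for the hypothetical chain graph used in the earlier sketch (not the graph of Fig. 1); both quantities require the globally defined matrices \(U\) and \(\hat L\).

```python
import numpy as np

# L1 sub-block of the hypothetical chain graph 0 -> 1 -> 2 -> 3 used in the
# earlier sketch (not the graph of Fig. 1).
L1 = np.array([[ 1.,  0.,  0.],
               [-1.,  1.,  0.],
               [ 0., -1.,  1.]])
q = np.linalg.solve(L1.T, np.ones(3))
U = np.diag(q)
L_hat = U @ L1 + L1.T @ U

# c_0 = lambda_min(L_hat) / lambda_max(U) and the Lemma 3 threshold 1 / c_0;
# both depend on the global quantities U and L_hat.
c0 = np.linalg.eigvalsh(L_hat).min() / q.max()
sigma_star = 1.0 / c0
print(f"c0 = {c0:.3f}, sigma* = {sigma_star:.3f}")
```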

Next, we propose the observer-based protocol with a distributed adaptive gain as follows.

\[\begin{eqnarray*} &&\!\!\!\!\! {{\dot {\hat x}}_i}(t) = A{\hat x_i}(t) + B{u_i}(t) + Q{C^T}\left( {{y_i}(t) - C{{\hat x}_i}(t)} \right), \nonumber\\ &&\!\!\!\!\!\hskip5.5mm i = 0, \cdots ,N \tag{23} \\ &&\!\!\!\!\! {u_i}(t) = - {\sigma _i}(t){B^T}P{z_i}(t) ,i = 1, \cdots ,N \tag{24} \end{eqnarray*}\]

where \({\sigma_i}(t) > 0\) denotes an adaptive gain. The switching logic to tune \({\sigma_i}(t)\) is proposed as follows.

\[\begin{eqnarray*} &&\!\!\!\!\! {\sigma_i}(t) = \varepsilon _i^j,\,\,if\,\,\varepsilon _i^j \le {\delta_i}(t) < \varepsilon _i^{j + 1},{\sigma_i}(0) = \varepsilon _i^0,\nonumber\\ &&\!\!\!\!\! \varepsilon _i^j = \varepsilon _i^0a_i^j,j = 0, \cdots ,\infty, \tag{25} \\ &&\!\!\!\!\! {{\dot \delta }_i}(t) = {\left\| {{z_i}(t)} \right\|^2},{\delta_i}(0) = \delta_i^0,\nonumber \end{eqnarray*}\]

where \({\delta_i}(t)\) is a monitoring variable that accumulates \({\left\| {{z_i}(t)} \right\|^2}\) and thus reflects the synchronization error between the leader and followers, \(\varepsilon _i^j\) is a strictly increasing sequence, and \(\varepsilon _i^0 > 0,{a_i} > 1,\delta_i^0 > 0\) are initial constants. According to (25), switching occurs when \({\delta_i}(t)\) reaches \(\varepsilon _i^j\), at which point \({\sigma _i}(t)\) is set to \(\varepsilon _i^j\); \({\sigma _i}(t)\) is then kept constant while \(\varepsilon _i^j \le {\delta_i}(t) < \varepsilon _i^{j + 1}\). Thus, it is clear from (25) that \({\delta_i}(t) > 0\) is monotonically non-decreasing and \({\sigma_i}(t)\) increases in a piecewise constant manner.
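
A minimal per-agent realization of the switching logic (25) is sketched below, assuming Euler integration of \(\delta_i\) with a fixed step dt; the helper name and the dummy test signal are illustrative only.

```python
import numpy as np

def update_switching_gain(delta_i, z_i, eps0, a, dt):
    """One step of the switching logic (25) for agent i.

    delta_i : current value of the monotone variable delta_i(t)
    z_i     : current relative estimated state z_i(t) from (8)
    eps0, a : initial level eps_i^0 > 0 and growth factor a_i > 1
    Returns the updated delta_i and the piecewise-constant gain sigma_i.
    """
    # d/dt delta_i = ||z_i||^2, integrated by one Euler step.
    delta_i = delta_i + dt * float(np.dot(z_i, z_i))
    # sigma_i = eps_i^j with eps_i^j <= delta_i < eps_i^{j+1}; before the
    # first switching (delta_i < eps_i^0) the gain stays at eps_i^0.
    if delta_i < eps0:
        j = 0
    else:
        j = int(np.floor(np.log(delta_i / eps0) / np.log(a)))
    sigma_i = eps0 * a ** j
    return delta_i, sigma_i

# Example with the parameters used in Sect. 4: eps0 = 0.2, a = 2, delta(0) = 0,
# and a dummy constant signal in place of z_i(t).
delta, dt = 0.0, 1e-3
for _ in range(1000):
    delta, sigma = update_switching_gain(delta, np.array([1.0, -0.5]), 0.2, 2.0, dt)
print(delta, sigma)
```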

Theorem 1: If Assumptions 1 and 2 are satisfied, the proposed protocols (23), (24), and (25) solve the leader-following problem of (1). Moreover, \({\sigma_i}(t)\) is bounded for all \(t \ge 0\) and there exists \(\sigma_i^s > 0\) such that \({\sigma_i}(t) \to \sigma_i^s\) as \(t \to \infty\).

Proof: First, let \({t_k}\) be the \(k\)th switching instant determined by the switching logic (25). Because \({\sigma_i}(t)\) is piecewise constant, we write \({\sigma_i}(t) = {\sigma_{ik}}\) on the interval \([{t_k},{t_{k + 1}})\). In the same manner as in deriving (17), we have, for all \(t \in [{t_k},{t_{k + 1}})\),

\[\begin{eqnarray*} \dot \eta (t) &=& ({I_N} \otimes A)\eta (t) - (L_1{\Sigma_k} \otimes B{B^T}P)\eta (t) \tag{26} \\ & & + (L_1{\Sigma_k}\otimes B{B^T}P)\zeta(t)\nonumber \end{eqnarray*}\]

where \({\Sigma_k} = diag\left( {{\sigma_{1k}}, \cdots ,{\sigma_{Nk}}} \right)\). Taking (10) into account, there exist \(\zeta_{k}^0 > 0\) and \(\beta > 0\) such that for all \(t \in [{t_k},{t_{k + 1}})\),

\[\begin{eqnarray*} {\left\| {\zeta(t)} \right\|^2} \le {\zeta_{k}^0}{e^{ - \beta (t - {t_k})}} \tag{27} \end{eqnarray*}\]

Thus, in order to prove the stability of (26), it is sufficient to prove the stability of (28) below.

\[\begin{eqnarray*} \dot \eta (t) = ({I_N} \otimes A)\eta (t) - (L_1{\Sigma_k} \otimes B{B^T}P)\eta (t) \tag{28} \end{eqnarray*}\]

Define the Lyapunov function \({V_{3k}}(t) = {\eta^T}(t)\left( {U{\Sigma_k} \otimes P} \right)\eta (t)\). Proceeding as in the derivation of (21), we have

\[\begin{eqnarray*} {\dot V_{3k}} \le \sum\limits_{i = 1}^N {{u_i}{\sigma_{ik}}{\eta_i}^T\left( { - {I_n} - ({c_0}{\sigma_{ik}} - 1)PB{B^T}P} \right)}{\eta_i} \tag{29} \end{eqnarray*}\]

By considering (29) and letting \(\sigma_{ik}^* = 1/{c_0}\), there exist positive constants \({\eta_{k}^0}\) and \(\alpha\) such that for \(t \in [{t_k},{t_{k + 1}})\) and \({\sigma_{ik}} \ge \sigma_{ik}^*\),

\[\begin{eqnarray*} {\left\| {\eta (t)} \right\|^2} \le {\eta_{k}^0}{e^{ - \alpha (t - {t_k})}} \tag{30} \end{eqnarray*}\]

Second, we prove by contradiction that only a finite number of switchings occur. To the contrary, assume that an infinite number of switchings take place. From (25), this means that \({\sigma_i}(t) \to \infty\) as \(t \to \infty\). Therefore, there is a switching instant \({t_{\bar k}}\) such that \({\sigma_i}(t) > \sigma_{ik}^*\) for all \(t \ge {t_{\bar k}}\). Let \(\Delta {\delta_i} = {\delta_i}({t_{k + 1}}) - {\delta_i}({t_k})\). Since \(z(t) = \eta(t) - \zeta(t)\) by (16), using (25) we have, for all \(k \ge \bar k\),

\[\begin{eqnarray*} \Delta {\delta_i} &=& \int_0^{{t_{k + 1}}} {\left\| {{z_i}(\tau )} \right\|^2}d\tau - \int_0^{{t_k}} {\left\| {{z_i}(\tau )} \right\|^2}d\tau \tag{31} \\ &=& \int_{{t_k}}^{{t_{k + 1}}} {\left\| {{z_i}(\tau )} \right\|^2}d\tau \le \int_{{t_k}}^{{t_{k + 1}}} {\left\| {z(\tau )} \right\|^2}d\tau \nonumber\\ &\le& 2\int_{{t_k}}^{{t_{k + 1}}} {\left\| {\eta (\tau )} \right\|^2}d\tau + 2\int_{{t_k}}^{{t_{k + 1}}} {\left\| {\zeta(\tau )} \right\|^2}d\tau\nonumber \end{eqnarray*}\]

Considering (27) and (30), for all \(k \ge \bar k\), (31) becomes

\[\begin{eqnarray*} \Delta {\delta_i} &\le& \frac{{2\eta_{k}^0}}{\alpha }\left( {1 - {e^{ - \alpha ({t_{k + 1}} - {t_k})}}} \right) + \frac{{2\zeta_{k}^0}}{\beta }\left( {1 - {e^{ - \beta ({t_{k + 1}} - {t_k})}}} \right)\nonumber\\ \tag{32} \end{eqnarray*}\]

From (32), it is easy to show that there exists \(\bar \delta > 0\) such that \(\Delta {\delta_i} \le \bar \delta\) for all \(k \ge \bar k\). Let \(\Delta {\varepsilon _{i}^{j}} = {\varepsilon _{i}^{j + 1}} - {\varepsilon _{i}^j}\). Keeping (25) in mind, it is clear that

\[\begin{eqnarray*} \Delta {\varepsilon _{i}^j} = {\varepsilon _{i}^0}(a_i - 1){a_{i}^j} \tag{33} \end{eqnarray*}\]

From (33), it is clear that \(\Delta \varepsilon _i^j \to \infty\) as \(j \to \infty\), which implies that there is a finite index \(j\) satisfying \(\Delta \varepsilon _i^j > \bar \delta\). Since the increase of \({\delta_i}(t)\) between consecutive switchings cannot exceed \(\bar \delta\), no further switching can occur beyond this level. This contradicts the supposition of an infinite number of switchings. Hence, only a finite number of switchings occur. Since \({\sigma_i}(t)\) is bounded and non-decreasing, it is clear that \({\sigma_i}(t) \to \sigma_i^s\) as \(t \to \infty\). Lastly, we prove that \(\eta (t) \to 0\) as \(t \to \infty\). Let \({t_s}\) be the last switching instant. Then, there exists \(\sigma_i^s\) such that \({\sigma_i}(t) = \sigma_i^s\) for all \(t \ge {t_s}\). Thus, for all \(t \ge {t_s}\), we can rewrite (28) as

\[\begin{eqnarray*} \dot \eta (t) &=& ({I_N} \otimes A)\eta(t) - (L_1{\Sigma ^s} \otimes B{B^T}P)\eta (t) \tag{34} \\ &=& ({I_N} \otimes A)\eta (t) - (L_1{\Sigma^*} \otimes B{B^T}P)\eta (t)\nonumber\\ & &+ (L_1({\Sigma^*} - {\Sigma^s}) \otimes B{B^T}P)\eta (t)\nonumber\\ &=& \hat A\eta(t) + \hat B\eta (t)\nonumber \end{eqnarray*}\]

where \(\hat A = ({I_N} \otimes A) - (L_1{\Sigma^*} \otimes B{B^T}P), \hat B = L_1({\Sigma^*} - {\Sigma^s}) \otimes B{B^T}P, {\Sigma^s} = diag(\sigma _1^s, \cdots ,\sigma_N^s), {\Sigma^*} = diag(\sigma_1^*, \cdots ,\sigma_N^*)\). In order to analyze the stability of (34), define the Lyapunov function \({V_s}(t) = {\eta ^T}(t)\left( {U{\Sigma^*} \otimes P} \right)\eta (t)\). Then, we have

\[\begin{eqnarray*} {\dot V_s}(t) \le - \alpha {V_s}(t) + {c_1}{\left\| {\eta (t)} \right\|^2} \le {c_1}{\left\| {\eta (t)} \right\|^2} \tag{35} \end{eqnarray*}\]

where \({c_1} = 2{\left\| {\left( {U{\Sigma^*} \otimes P} \right)\hat B} \right\|^2}\). Integrating both sides of (35) from \({t_s}\) to \(t\), it follows that

\[\begin{eqnarray*} {V_s}(t) &\le& {V_s}({t_s}) + {c_1}\int_{{t_s}}^t {{{\left\| {\eta (\tau )} \right\|}^2}d\tau } \tag{36} \\ &\le& {V_s}({t_s}) + 2{c_1}\int_{{t_s}}^t {{{\left\| {\zeta (\tau )} \right\|}^2}d\tau } + 2{c_1}\int_{{t_s}}^t {{{\left\| {z(\tau )} \right\|}^2}d\tau } \nonumber \end{eqnarray*}\]

From (25) and the boundedness of \({\sigma_i}(t)\), it is clear that \({\delta_i}(t)\) is bounded and hence \(z(t)\) is square integrable. Moreover, (27) implies the square integrability of \(\zeta (t)\). Since \(\eta(t) = z(t) + \zeta(t)\) by (16), \(\eta(t)\) is also square integrable, and (36) shows that \(\eta (t)\) is bounded. Furthermore, considering (34), we can see that \(\dot \eta (t)\) is bounded. Consequently, the asymptotic convergence of \(\eta (t)\) to zero is guaranteed by Barbalat’s lemma.

Remark 2: Lemma 3 shows that if the static gain \(\sigma_i\) is sufficiently large, the leader-following problem can be solved. Motivated by this fact, the proposed switching logic (25) increases the adaptive gain \({\sigma_i}(t)\) in a piecewise constant manner until the synchronization error between the leader and followers converges to zero. Since the adaptive gain \({\sigma_i}(t)\) is determined by the sequence \(\varepsilon _i^j\) as shown in (25), \(\varepsilon _i^j\) must be a strictly increasing sequence. It is worth noting that the presented method employs a simpler update law and a lower-dimensional observer than the previous Lyapunov-based methods [14] and [15]. In addition, compared to the state feedback adaptive protocols of [17] and [18], our approach is applicable to directed networks and relies only on output feedback.

4.  Numerical Example

Consider a network of the form (1) with the following matrices.

\[\begin{eqnarray*} A = \left( {\begin{array}{*{20}{c}} {0.15}&2.5\\ { -3}&{ - 0.2} \end{array}} \right),B = \left( {\begin{array}{*{20}{c}} 1\\ 1 \end{array}} \right),C = \left( {\begin{array}{*{20}{c}} 1&1 \end{array}} \right) \tag{37} \end{eqnarray*}\]

Figure 1 illustrates the directed graph representing the communication network of (37). It is easy to confirm that Assumptions 1 and 2 hold for the system (37). From (2) and (3), we obtain the matrices \(P\) and \(Q\) as follows.

\[\begin{eqnarray*} P = \left( {\begin{array}{*{20}{c}} {1.29}&{-0.03}\\ {-0.03}&{0.76} \end{array}} \right),Q = \left( {\begin{array}{*{20}{c}} {0.85}&{-0.16}\\ {-0.16}&{1.34} \end{array}} \right) \tag{38} \end{eqnarray*}\]
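
The values in (38) can be reproduced, up to rounding, with a standard algebraic Riccati equation solver; a quick check (assuming SciPy is available) is sketched below.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.15, 2.5], [-3.0, -0.2]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])

# (2): A^T P + P A - P B B^T P = -I_2  (control ARE with unit weights)
P = solve_continuous_are(A, B, np.eye(2), np.eye(1))
# (3): A Q + Q A^T - Q C^T C Q = -I_2  (dual/filter ARE)
Q = solve_continuous_are(A.T, C.T, np.eye(2), np.eye(1))

print(np.round(P, 2))   # approximately [[1.29, -0.03], [-0.03, 0.76]]
print(np.round(Q, 2))   # approximately [[0.85, -0.16], [-0.16, 1.34]]
```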

Fig. 1  Network graph.

For simulation, we choose \({\varepsilon _{i}^0} = 0.2,\) \({a_i} = 2,\) \({\delta_{i}^0} = 0,\) \( i = 1, \cdots ,4,\) \( {x_0}(0) = {\left( { - 2,1} \right)^T},\) \({x_1}(0) = {\left( { - 1,2} \right)^T},\) \({x_2}(0) = {\left( {0, - 2} \right)^T},\) \({x_3}(0) = {\left( {1, - 1} \right)^T},\) \({x_4}(0) = {\left( {2,0} \right)^T}, \) \({{\hat x}_i}(0) = {\left( {0,0} \right)^T},\) \(i = 0, \cdots ,4\). Figure 2 demonstrates that the states of all followers converge to the state of the leader asymptotically. Importantly, as proved in Theorem 1, Fig. 3 illustrates that both \({\delta _i}(t)\) and \({\sigma _i}(t)\) are non-decreasing and bounded.
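
For reference, the sketch below simulates the complete example by Euler integration with the parameters listed above. Since the exact topology of Fig. 1 is not reproduced here, a hypothetical directed chain \(0 \to 1 \to 2 \to 3 \to 4\) is assumed, and the Riccati solutions are computed numerically instead of using the rounded values in (38); the printed tracking errors and gains should behave qualitatively like Figs. 2 and 3.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# System (37) and the Riccati solutions of (2) and (3).
A = np.array([[0.15, 2.5], [-3.0, -0.2]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])
P = solve_continuous_are(A, B, np.eye(2), np.eye(1))
Q = solve_continuous_are(A.T, C.T, np.eye(2), np.eye(1))

# Hypothetical directed topology 0 -> 1 -> 2 -> 3 -> 4 (not the exact graph
# of Fig. 1); a_ij = 1 iff the edge (j, i) exists.
N = 4
Adj = np.zeros((N + 1, N + 1))
for i in range(1, N + 1):
    Adj[i, i - 1] = 1.0

# Initial conditions and switching-logic parameters from Sect. 4.
x = np.array([[-2.0, 1.0], [-1.0, 2.0], [0.0, -2.0], [1.0, -1.0], [2.0, 0.0]])
x_hat = np.zeros((N + 1, 2))
eps0, a = 0.2, 2.0
delta = np.zeros(N + 1)          # delta_i(0) = 0, index 0 unused
sigma = np.full(N + 1, eps0)     # sigma_i(0) = eps_i^0, index 0 unused

dt, T = 1e-3, 20.0
for _ in range(int(T / dt)):
    y = x @ C.T                                   # measured outputs (1)
    u = np.zeros((N + 1, 1))                      # u_0 = 0 for the leader
    z = np.zeros((N + 1, 2))
    for i in range(1, N + 1):
        # Relative estimated state (8) and adaptive protocol (24).
        z[i] = sum(Adj[i, j] * (x_hat[i] - x_hat[j]) for j in range(N + 1))
        u[i] = -sigma[i] * (B.T @ P @ z[i])
        # Switching logic (25): integrate delta_i and pick the active level.
        delta[i] += dt * float(z[i] @ z[i])
        j_idx = 0 if delta[i] < eps0 else int(np.floor(np.log(delta[i] / eps0) / np.log(a)))
        sigma[i] = eps0 * a ** j_idx
    # Local observers (23) and plant dynamics (1), one Euler step each.
    x_hat += dt * (x_hat @ A.T + u @ B.T + (y - x_hat @ C.T) @ (Q @ C.T).T)
    x += dt * (x @ A.T + u @ B.T)

print("final tracking errors:", np.linalg.norm(x[1:] - x[0], axis=1))
print("final adaptive gains :", sigma[1:])
```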

Fig. 2  The graphs of \({x_{i}(t)}\).

Fig. 3  The graphs of \({\sigma_{i}(t)}\) and \({\delta_{i}(t)}\).

References

[1] R. Olfati-Saber, and R. Murray, “Consensus problems in networks of agents with switching topology and time-delays,” IEEE Trans. Autom. Control, vol.49, no.9, pp.1520-1533, 2004.

[2] J. Seo, H. Shim, and J. Back, “Consensus of high-order linear systems using dynamic output feedback compensator: Low gain approach,” Automatica, vol.45, no.11, pp.2659-2664, 2009.

[3] P. Wieland, R. Sepulchre, and F. Allgöwer, “An internal model principle is necessary and sufficient for linear output synchronization,” Automatica, vol.47, no.5, pp.1068-1074, 2011.

[4] H. Kim, H. Shim, and J.H. Seo, “Output consensus of heterogeneous uncertain linear multi-agent systems,” IEEE Trans. Autom. Control, vol.56, no.1, pp.200-206, 2011.

[5] Z. Li, Z. Duan, G. Chen, and L. Huang, “Consensus of multiagent systems and synchronization of complex networks: A unified viewpoint,” IEEE Trans. Circuits Syst. I, Reg. Papers, vol.57, no.1, pp.213-224, 2010.

[6] H.L. Trentelman, K. Takaba, and N. Monshizadeh, “Robust synchronization of uncertain linear multi-agent systems,” IEEE Trans. Autom. Control, vol.58, no.6, pp.1511-1523, 2013.

[7] H. Zhang, F.L. Lewis, and A. Das, “Optimal design for synchronization of cooperative systems: State feedback, observer and output feedback,” IEEE Trans. Autom. Control, vol.56, no.8, pp.1948-1952, Aug. 2011.

[8] Z. Qu, C. Li, and F. Lewis, “Cooperative control with distributed gain adaptation and connectivity estimation for directed networks,” International Journal of Robust and Nonlinear Control, vol.24, pp.450-476, 2014.

[9] J. Mei, W. Ren, and J. Chen, “Distributed consensus of second-order multi-agent systems with heterogeneous unknown inertias and control gains under a directed graph,” IEEE Trans. Autom. Control, vol.61, no.8, pp.2019-2034, 2015.

[10] Z. Li, W. Ren, X. Liu, and M. Fu, “Consensus of multi-agent systems with general linear and Lipschitz nonlinear dynamics using distributed adaptive protocols,” IEEE Trans. Autom. Control, vol.58, no.7, pp.1786-1791, July 2013.

[11] Z. Li, G. Wen, Z. Duan, and W. Ren, “Designing fully distributed consensus protocols for linear multi-agent systems with directed graphs,” IEEE Trans. Autom. Control, vol.60, no.4, pp.1152-1157, April 2015.

[12] Y. Lv, Z. Li, Z. Duan, and G. Feng, “Novel distributed robust adaptive consensus protocols for linear multi-agent systems with directed graphs and external disturbances,” International Journal of Control, vol.90, no.2, pp.137-147, 2016.

[13] Q. Song, G. Wen, W. Yu, D. Meng, and W. Lu, “Fully distributed synchronization of complex networks with adaptive coupling strengths,” IEEE Trans. Cybern., vol.52, no.11, pp.11581-11593, 2022.

[14] Y. Lv, Z. Li, Z. Duan, and J. Chen, “Distributed adaptive output feedback consensus protocols for linear systems on directed graphs with a leader of bounded input,” Automatica, vol.74, pp.308-314, 2016.

[15] Y. Lv, Z. Li, and Z. Duan, “Distributed adaptive consensus protocols for linear multi-agent systems over directed graphs with relative output information,” IET Control Theory and Applications, vol.12, no.5, pp.613-620, 2018.

[16] P. DeLellis, M. diBernardo, and F. Garofalo, “Novel decentralized adaptive strategies for the synchronization of complex networks,” Automatica, vol.45, no.5, pp.1312-1318, 2009.

[17] W. Yu, P. DeLellis, G. Chen, M. di Bernardo, and J. Kurths, “Distributed adaptive control of synchronization in complex networks,” IEEE Trans. Autom. Control, vol.57, no.8, pp.2153-2158, 2012.

[18] L. Wang, Z. Sun, and Y. Cao, “Adaptive synchronization of complex networks with general distributed update laws for coupling weights,” Journal of the Franklin Institute, vol.356, pp.7444-7465, 2019.

Authors

Sungryul LEE
  Kunsan National University
