
IEICE TRANSACTIONS on Fundamentals

Open Access
Output Feedback Ultimate Boundedness Control with Decentralized Event-Triggering

Koichi KITAMURA, Koichi KOBAYASHI, Yuh YAMASHITA


Summary:

In cyber-physical systems (CPSs), in which physical and information components interact, there are many sensors that are connected through a communication network. In such cases, the reduction of communication costs is important. Event-triggered control, in which the control input is updated only when the measured value changes significantly, is well known as one of the control methods for CPSs. In this paper, we propose a design method for output feedback controllers with decentralized event-triggering mechanisms, where the notion of uniformly ultimate boundedness is utilized as a control specification. Using this notion, we can guarantee that the state stays within a certain set containing the origin after a certain time, which depends on the initial state. As a result, the number of times that the event occurs can be decreased. First, the design problem is formulated. Next, this problem is reduced to a BMI (bilinear matrix inequality) optimization problem, which can be solved by solving multiple LMI (linear matrix inequality) optimization problems. Finally, the effectiveness of the proposed method is demonstrated by a numerical example.

Publication
IEICE TRANSACTIONS on Fundamentals Vol.E107-A No.5 pp.770-778
Publication Date
2024/05/01
Publicized
2023/11/10
Online ISSN
1745-1337
DOI
10.1587/transfun.2023MAP0005
Type of Manuscript
Special Section PAPER (Special Section on Mathematical Systems Science and its Applications)

1.  Introduction

A cyber-physical system (CPS) is composed of physical and cyber layers; it is a system in which physical and information components are deeply connected through communication networks [1]-[3]. Several systems, such as smart grids, healthcare systems, distributed robotic systems, and automobile systems, can be regarded as CPSs (see, e.g., [4]-[7]). In large-scale CPSs, there are many sensors and actuators at the boundary of the physical and cyber layers. In particular, in a sensor network, multiple sensors are located in a distributed way. From the viewpoint of control theory for CPSs, it is important to develop control methods over sensor networks.

Event-triggered control is well known as one of the typical control methods in CPSs [8]-[10]. In event-triggered control, communications occur only when a measured signal changes significantly (i.e., an event occurs). One typical event-triggering condition evaluates the difference between the measured state and the state that was most recently sent to the controller. In event-triggered control over a sensor network, the event-triggering mechanisms are also decentralized; that is, an event-triggering condition is implemented in each sensor unit. Such a control method, in which each sensor has its own event-triggering condition, is known as decentralized event-triggered control (see, e.g., [11]-[14]).

In this paper, a design method of output feedback controllers with decentralized event-triggering mechanisms is proposed, where the notion of uniformly ultimate boundedness [15]-[19] is utilized as a control specification. Uniformly ultimate boundedness is a specification that the state stays within a certain set containing the origin after a certain time depending on the initial state. Hence, as a control specification, it is weaker than asymptotic stability. However, by introducing uniformly ultimate boundedness, the number of communications from the sensors to the controller can be reduced in the neighborhood of the origin. This is an advantage of utilizing uniformly ultimate boundedness.

In the design of event-triggered state-feedback controllers based on uniformly ultimate boundedness, some results using an LMI (linear matrix inequality) technique have been obtained in [16], [18], [19]. An LMI feasibility/optimization problem is a convex programming problem that can be solved efficiently. As a standard tool of control theory, LMI techniques have been widely used in stability analysis, stabilization, robust control, and so on [20]. They have also been used in the design of event-triggering mechanisms [21].

To the best of our knowledge, an LMI-based controller design method based on uniformly ultimate boundedness has not been proposed in the framework of output feedback event-triggered control. Based on asymptotic stability, the authors have proposed a design method of output feedback controllers with decentralized event-triggering conditions [22]. In [22], the design problem is reduced to an LMI feasibility problem. In this paper, we extend the method in [22] to the case of uniformly ultimate boundedness.

First, we formulate the design problem of output feedback controllers in decentralized event-triggered control based on uniformly ultimate boundedness. In this problem formulation, we consider simultaneously finding both the output feedback controller and the ellipsoid used in uniformly ultimate boundedness. Next, using LMI techniques, the design problem is rewritten. As a result, the design problem is reduced to a BMI (bilinear matrix inequality) optimization problem. Since a BMI optimization problem is a non-convex optimization problem, it is generally hard to solve. However, the BMI constraint conditions derived in the proposed method become LMI constraint conditions by fixing two scalars. In addition, these two scalars are chosen from bounded intervals. Hence, using, e.g., a grid search method, the BMI optimization problem derived in this paper can be rewritten as multiple LMI optimization problems. Finally, we present a numerical example to verify the effectiveness of the proposed method.

Notation: Denote by \({\mathcal R}\) the set of real numbers. Denote by \(I_n\) and \(0_{m \times n}\) the \(n \times n\) identity matrix and the \(m \times n\) zero matrix, respectively. For simplicity of notation, the symbols \(0\) and \(I\) are sometimes used instead of \(0_{m \times n}\) and \(I_n\), respectively. Denote by \(M \succ 0\) (\(M \succeq 0\)) that the matrix \(M\) is positive-(semi)definite. For the scalar \(a \in {\mathcal R}\), denote by \(\lceil a \rceil\) the ceiling function of \(a\). Denote by \(1_n\) the \(n\)-dimensional vector whose elements are all one. For the vector \(x\), denote by \(x_i\) the \(i\)-th element of \(x\). For the matrix \(M\), denote by \(M^{\top}\) the transpose of \(M\) and by \({\rm tr} (M)\) the trace of \(M\). For scalars \(a_1, a_2, \dots, a_n\), denote by \(\text{diag}(a_1,a_2,\dots,a_n)\) the diagonal matrix. For matrices/vectors \(A_1, A_2, \dots, A_n\), denote by \(\text{block-diag}(A_1,A_2,\dots,A_n)\) the block diagonal matrix. For the matrix \(P\succ 0\) and a scalar \(\gamma\), we define the ellipsoid \(\mathcal{E}(P,\gamma):=\{x\in \mathcal{R}^n \mid x^{\top}Px \leq \gamma \}\). The symmetric matrix \(\begin{bmatrix} A & B^{\top} \\ B & C \end{bmatrix}\) is denoted by \(\begin{bmatrix} A & \ast \\ B & C \end{bmatrix}\). For the matrix \(M\), denote by \({\rm Row}_i(M)\) the \(i\)-th row of \(M\).

2.  Problem Formulation

Suppose that the following discrete-time linear system is given as a plant:

\[\begin{equation*} \begin{cases} x(k+1) = A x(k) + B u(k), \\ y(k) = C x(k), \end{cases} \tag{1} \end{equation*}\]

where \(x(k) \in {\mathcal R}^n\) is the state of the plant, \(u(k) \in {\mathcal R}^m\) is the control input, \(y(k) \in {\mathcal R}^r\) is the measured output, and \(k \in \{0,1,2,\dots \}\) is the discrete time. The coefficient matrices \(A \in {\mathcal R}^{n\times n}\), \(B\in{\mathcal R}^{n\times m}\), and \(C\in{\mathcal R}^{r\times n}\) are given in advance. Suppose also that the number of sensors is \(r\) (the dimension of the measured output), and that sensor \(i \in \{ 1,2,\dots,r \}\) measures the \(i\)-th element of the measured output. Figure 1 illustrates the networked control system studied in this paper. We suppose a sensor network, in which the controller collects the measured output from the \(r\) sensors through a communication network. Each sensor has an event-triggering mechanism that determines whether the measured value is sent. We suppose that the controller is directly connected to the plant.

Fig. 1  Networked control system studied in this paper.

The structure of a controller is given in the form of the following output-feedback controller with a direct term:

\[\begin{equation*} \begin{cases} \hat{x}(k+1) = A_K \hat{x}(k) + B_K \hat{y}(k), \\ u(k) = C_K \hat{x}(k) + D_K \hat{y}(k), \end{cases} \tag{2} \end{equation*}\]

where \(\hat{x}(k) \in {\mathcal R}^n\) is the state of the controller, and \(A_K \in {\mathcal R}^{n\times n}\), \(B_K \in{\mathcal R}^{n \times r}\), \(C_K\in{\mathcal R}^{m \times n}\), and \(D_K\in{\mathcal R}^{m \times r}\) are coefficient matrices including design parameters. The vector \(\hat{y}(k) \in {\mathcal R}^r\) is the measured output of the plant, which is managed in the controller, and is defined by

\[\begin{equation*} \hat{y}(k) := \begin{cases} y(k) & \mbox{if $y(k)$ is updated,} \\ \hat{y}(k-1) & \mbox{if $y(k)$ is not updated.} \end{cases} \tag{3} \end{equation*}\]

The event-triggering mechanism in each sensor determines if the following condition with \(y_i\) and \(\hat{y}_i\) holds:

\[\begin{equation*} ( \hat{y}_i(k-1)-y_i(k) )^2 > a_i^2 y_i^2(k) + b_i, \tag{4} \end{equation*}\]

where \(a_i > 0\) and \(b_i > 0\) are scalar parameters that are given in advance. It is said that the event occurs if condition (4) is satisfied. When the event occurs, the measured output in the controller is updated. By the parameter \(a_i\), we can evaluate the difference between the error \(( \hat{y}_i(k-1)-y_i(k) )^2\) and \(y_i^2(k)\) relatively. By the parameter \(b_i\), we can evaluate this difference absolutely. If we set \(b_i=0\), then updates of the measured output occur frequently in the neighborhood of the origin, because (4) may be satisfied even if \(( \hat{y}_i(k-1)-y_i(k) )^2\) is sufficiently small. By setting \(b_i\) appropriately, we can overcome this technical issue. Using (4), the update rule (3) of \(\hat{y}(k)\) is given by

\[\begin{equation*} \hat{y}(k) := \begin{cases} y(k) & \text{if (4) holds for some $i$,} \\ \hat{y}(k-1) & \mbox{otherwise,} \end{cases} \tag{5} \end{equation*}\]

which implies that all measured outputs are aggregated in the controller if the triggering condition of at least one sensor is satisfied. From (5), the following inequality is always satisfied:

\[\begin{equation*} (\hat{y}_i(k)-y_i(k))^2 \le a_i^2 y_i^2(k)+b_i, i \in \{1,2,\dots,r\}. \tag{6} \end{equation*}\]
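The triggering logic of (4) and the synchronous update rule (5) can be sketched in a few lines of Python (an illustrative sketch, not the authors' implementation; the parameter values used in testing are hypothetical):

```python
# Decentralized event check, Eq. (4): sensor i fires when the deviation of
# the last transmitted value from the current output exceeds its threshold.
def event_occurs(y, y_hat_prev, a, b):
    return any((yh - yi) ** 2 > ai ** 2 * yi ** 2 + bi
               for yi, yh, ai, bi in zip(y, y_hat_prev, a, b))

# Synchronous update rule, Eq. (5): if any sensor fires, all outputs are sent.
def update(y, y_hat_prev, a, b):
    return list(y) if event_occurs(y, y_hat_prev, a, b) else list(y_hat_prev)
```

After `update` is applied, inequality (6) holds for every sensor by construction.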

Next, we define the state of the closed-loop system by \(\bar{x}(k) := [x^{\top}(k) \hat{x}^{\top}(k)]^{\top}\). Then, we define uniformly ultimate boundedness proposed in [15] as follows.

Definition 1: Consider the closed-loop system composed of the linear system (1) and the output feedback controller (2), (5). Then, it is said that the closed-loop system is uniformly ultimately bounded (UUB) in a convex and compact set \({\mathcal S}\) containing the origin in its interior if the following condition holds: for every initial condition \(\bar{x}(0)=\bar{x}_0\), there exists \(T(\bar{x}_0) \in \{0,1,2,...\}\) such that the condition \(\bar{x}(k) \in {\mathcal S}\) holds for any \(k\) satisfying \(k \geq T(\bar{x}_0)\).

Based on the above setting, we formulate the design problem of an output-feedback controller with decentralized event-triggering conditions as follows.

Problem 1: For the system (1), suppose that the parameters \(a_i > 0\) and \(b_i > 0\) in the decentralized event-triggering condition (4) are given. Then, find coefficient matrices \(A_K\), \(B_K\), \(C_K\), and \(D_K\) in the controller (2) and a positive definite matrix \(P \in {\mathcal R}^{2n \times 2n}\) such that the closed-loop system is UUB in a certain ellipsoid \(\mathcal{E}(P,1)\).

Remark 1: The update rule (5) of \(\hat{y}(k)\) is synchronous. We may consider the asynchronous update rule as follows:

\[\hat{y}_i(k) := \begin{cases} y_i(k) & \text{if (4) holds for sensor $i$,} \\ \hat{y}_i(k-1) & \mbox{otherwise.} \end{cases}\]

The proposed method can be directly applied to this case.

3.  Main Result

In this section, we propose a solution method for Problem 1 based on LMI techniques. Using these techniques, Problem 1 is reduced to a BMI optimization problem that can be solved through multiple LMI optimization problems.

Defining the error variable \(e(k) := \hat{y}(k) - y(k)\), we can replace (6) with

\[\begin{equation*} e_i^2(k) \le a_i^2 y_i^2(k)+b_i, \tag{7} \end{equation*}\]

the closed-loop system is derived as

\[\begin{equation*} \bar{x}(k+1)=\bar{A} \bar{x}(k) + \bar{B} e(k), \tag{8} \end{equation*}\]

where

\[\begin{aligned} \bar{A} &= \begin{bmatrix} A+B D_K C & B C_K \\ B_K C & A_K \end{bmatrix}, \\ \bar{B} &= \begin{bmatrix} B D_K \\ B_K \end{bmatrix}. \end{aligned}\]
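As a sketch (assuming numpy; the matrices are whatever plant and controller are at hand), \(\bar{A}\) and \(\bar{B}\) can be assembled as follows:

```python
# A sketch (illustrative dimensions) assembling the closed-loop matrices of
# Eq. (8) from the plant (A, B, C) and the controller (A_K, B_K, C_K, D_K).
import numpy as np

def closed_loop(A, B, C, A_K, B_K, C_K, D_K):
    Abar = np.block([[A + B @ D_K @ C, B @ C_K],
                     [B_K @ C,         A_K]])
    Bbar = np.block([[B @ D_K],
                     [B_K]])
    return Abar, Bbar
```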

To design a controller such that the closed-loop system is UUB, we need to consider two cases, i.e., i) \(\bar{x}(k)\notin \mathcal{E}(P,1)\) and ii) \(\bar{x}(k) \in \mathcal{E}(P,1)\).

3.1  Condition that Must be Satisfied in \(\bar{x}(k) \notin \mathcal{E}(P,1)\)

First, consider the case of \(\bar{x}(k) \notin \mathcal{E}(P,1)\). In this case, the state \(\bar{x}\) must reach the ellipsoid \({\mathcal E}(P,1)\). We introduce a quadratic Lyapunov function as follows:

\[\begin{equation*} V(k) = \bar{x}^{\top}(k) P \bar{x}(k), \tag{9} \end{equation*}\]

where \(P=P^{\top} \in {\mathcal R}^{2n\times 2n}\) is a positive-definite matrix. Here, the control specification that the state \(\bar{x}\) reaches the ellipsoid \({\mathcal E}(P,1)\) is replaced with the control specification that \(V(k)\) monotonically decreases. Hence, we consider the problem of finding an output feedback controller such that the following condition is satisfied:

\[\begin{equation*} V(k+1)-V(k) < -\beta V(k), \tag{10} \end{equation*}\]

where \(\beta \in [0,1)\) is a given parameter that tunes the convergence rate.

Then, we can obtain the following lemma in the case i) \(\bar{x}(k)\notin \mathcal{E}(P,1)\).

Lemma 1: Assume that (7) holds. A sufficient condition that (10) holds is given by the following condition:

\[\begin{equation*} P_1- \left( \sum_{i=1}^{r} \tau_iP_{2,i} + \tau_{r+1}P_3 \right) \succ 0, \tag{11} \end{equation*}\]

where \(P_1\), \(P_{2,i}\), and \(P_3\) are given as follows:

\[\begin{aligned} P_1 &= \begin{bmatrix} \bar{\beta}P-\bar{A}^{\top}P\bar{A} & \ast & \ast \\ -\bar{B}^{\top}P\bar{A} & -\bar{B}^{\top}P\bar{B} & \ast \\ 0 & 0 & 0 \end{bmatrix} , \\ P_{2,i} &= \mbox{block-diag}(a_i^2 {\rm Row}_i(C)^{\top} {\rm Row}_i(C),0,-E_i^r,b_i),\\ P_3 &= \mbox{block-diag}(P,0,-1), \end{aligned}\]

where \(\bar{\beta}:=1-\beta\), and \(\tau_1, \tau_2, \dots, \tau_{r+1}\) are positive scalars that are freely determined. The matrix \(E_i^r \in {\mathcal R}^{r \times r}\) denotes the \(r \times r\) diagonal matrix in which only the \((i,i)\)-th element is \(1\) and other elements are \(0\).

Proof: Substituting (8) and \(V(k) = \bar{x}^{\top}(k) P \bar{x}(k)\) into (10), we can obtain the following inequality:

\[\begin{aligned} & (\bar{A} \bar{x}(k) + \bar{B} e(k))^{\top} P (\bar{A} \bar{x}(k) + \bar{B} e(k)) - \bar{x}^{\top}(k) P \bar{x}(k)\\ & < -\beta \bar{x}^{\top}(k) P \bar{x}(k). \end{aligned}\]

From this inequality, we can obtain

\[\begin{equation*} \begin{bmatrix} \bar{x}(k) \\ e(k) \\ 1 \end{bmatrix}^{\top}P_1 \begin{bmatrix} \bar{x}(k) \\ e(k) \\ 1 \end{bmatrix} >0. \tag{12} \end{equation*}\]

Noting that the following expression holds:

\[y_i^2(k) = \bar{x}^{\top}(k) \begin{bmatrix} {\rm Row}_i(C)^{\top} \\ 0_{n \times 1} \end{bmatrix} \begin{bmatrix} {\rm Row}_i(C) & 0_{1 \times n} \end{bmatrix} \bar{x}(k),\]

we can rewrite (7) as

\[\begin{equation*} \begin{bmatrix} \bar{x}(k) \\ e(k) \\ 1 \end{bmatrix}^{\top}P_{2,i} \begin{bmatrix} \bar{x}(k) \\ e(k) \\ 1 \end{bmatrix} \geq 0,\ i \in \{ 1,2,\dots, r \}. \tag{13} \end{equation*}\]

The condition \(\bar{x}(k) \notin \mathcal{E}(P, 1)\), which implies \(\bar{x}^{\top}(k) P \bar{x}(k) > 1\), can be rewritten as

\[\begin{equation*} \begin{bmatrix} \bar{x}(k) \\ e(k) \\ 1 \end{bmatrix}^{\top}P_{3} \begin{bmatrix} \bar{x}(k) \\ e(k) \\ 1 \end{bmatrix} > 0. \tag{14} \end{equation*}\]

Finally, the \(\mathcal{S}\)-procedure [20] is applied to (12), (13), and (14). Thus, we can obtain (11). \(\tag*{◻}\)
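The \(\mathcal{S}\)-procedure step can be illustrated numerically: if positive multipliers make \(P_1 - \sum_i \tau_i P_{2,i} - \tau_{r+1} P_3\) positive definite, then every vector satisfying the constraint quadratic forms (13) and (14) also satisfies (12). The following sketch (random illustrative matrices, not the paper's data) builds such a combination by construction and checks the implication on sampled vectors:

```python
# Illustrative only: random symmetric matrices stand in for P_{2,i} and P_3;
# P1 is built so that an (11)-style matrix inequality holds by construction.
import numpy as np

rng = np.random.default_rng(0)
dim = 4

def sym(M):
    return (M + M.T) / 2

P2 = sym(rng.standard_normal((dim, dim)))   # plays the role of P_{2,i}
P3 = sym(rng.standard_normal((dim, dim)))   # plays the role of P_3
tau = (0.7, 1.3)                            # positive multipliers
P1 = tau[0] * P2 + tau[1] * P3 + 1e-6 * np.eye(dim)  # P1 - sum(tau * P) > 0

checked = 0
for _ in range(2000):
    z = rng.standard_normal(dim)
    if z @ P2 @ z >= 0 and z @ P3 @ z > 0:  # premises, cf. (13) and (14)
        assert z @ P1 @ z > 0               # conclusion, cf. (12)
        checked += 1
assert checked > 0                           # some samples met the premises
```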

3.2  Condition that Must be Satisfied in \(\bar{x}(k) \in \mathcal{E}(P,1)\)

Next, consider the case ii) \(\bar{x}(k) \in \mathcal{E}(P,1)\). To guarantee that the closed-loop system is UUB, it is sufficient that the following condition holds: if \(\bar{x}(k) \in \mathcal{E}(P,1)\) is satisfied, then \(\bar{x}(k+1) \in \mathcal{E}(P,1)\) is also satisfied. From this condition, we can derive the following lemma.

Lemma 2: Assume that (7) holds. A sufficient condition that \(\bar{x}(k+1)\in\mathcal{E}(P,1)\) holds whenever \(\bar{x}(k)\in\mathcal{E}(P,1)\) holds is given by the following condition:

\[\begin{equation*} P_4-\left(\sum_{i=1}^r \kappa_iP_{2,i}+\kappa_{r+1}P_5\right) \succ 0, \tag{15} \end{equation*}\]

where

\[P_4 = \begin{bmatrix} -\bar{A}^{\top}P\bar{A} & \ast & \ast \\ -\bar{B}^{\top}P\bar{A} & - \bar{B}^{\top}P\bar{B} & \ast \\ 0 & 0 & 1 \end{bmatrix} ,\]

\(P_5 = -P_3\), and \(\kappa_1, \kappa_2, \dots, \kappa_{r+1}\) are positive scalars that are freely determined.

Proof: The condition \(\bar{x}(k+1)\in\mathcal{E}(P,1)\) can be rewritten as the following quadratic form:

\[\begin{equation*} \begin{bmatrix} \bar{x}(k) \\ e(k) \\ 1 \end{bmatrix}^{\top}P_4 \begin{bmatrix} \bar{x}(k) \\ e(k) \\ 1 \end{bmatrix} \ge0. \tag{16} \end{equation*}\]

In a similar way, the condition \(\bar{x}(k)\in\mathcal{E}(P,1)\) can be rewritten as the following quadratic form:

\[\begin{equation*} \begin{bmatrix} \bar{x}(k) \\ e(k) \\ 1 \end{bmatrix}^{\top}P_{5} \begin{bmatrix} \bar{x}(k) \\ e(k) \\ 1 \end{bmatrix} >0. \tag{17} \end{equation*}\]

Finally, the \(\mathcal{S}\)-procedure is applied to (13), (16), and (17). Thus, we can obtain (15). \(\tag*{◻}\)

3.3  Reduction to an LMI Optimization Problem

Consider reducing Problem 1 to an LMI optimization problem.

From the viewpoint of control performance, it is desirable that the volume of the ellipsoid \({\mathcal{E}}(P,1)\) is small. We therefore consider minimizing \({\rm tr}(P^{-1})\); by minimizing \({\rm tr}(P^{-1})\), the volume of \({\mathcal{E}}(P,1)\) is expected to become smaller. Based on the result in [18], Lemma 1, Lemma 2, and the minimization of \({\rm tr}(P^{-1})\), we can obtain the following theorem as a solution to Problem 1.

Theorem 1: Problem 1 is reduced to the following BMI (bilinear matrix inequality) optimization problem:

Problem 2:

\[\begin{align} \mbox{find}\quad & \tau_{r+1} \in (0, \bar{\beta}), \kappa_{r+1} \in (0, 1), X \succ 0, Y \succ 0, \nonumber \\ & \Gamma \succ 0, \Lambda \succ 0, Q \succeq 0, W_1, W_2, W_3, D_K \nonumber \\ \mbox{minimize}\quad & {\rm tr}(2Y+Q) \tag{18} \\ \mbox{subject to}\quad & \begin{bmatrix} (\bar{\beta}-\tau_{r+1}) \Phi_1 & \ast & \ast & \ast \\ 0 & \Phi_2 & \ast & \ast \\ \Phi_3 & \Phi_4 & \Phi_1 & \ast \\ \Phi_5 & \Phi_6 & 0 & \Phi_7 \end{bmatrix} \succ 0, \tag{19} \\ & \begin{bmatrix} \kappa_{r+1} \Phi_1 & \ast & \ast & \ast \\ 0 & \Phi_2^{\prime} & \ast & \ast \\ \Phi_3 & \Phi_4 & \Phi_1 & \ast \\ \Phi_5 & \Phi_6 & 0 & \Phi_7^{\prime} \end{bmatrix} \succ 0, \tag{20} \\ & \begin{bmatrix} Q & I & 0 \\ I & X & I \\ 0 & I & Y \end{bmatrix} \succeq 0, \tag{21} \end{align}\]

where

\[\begin{aligned} \Phi_1&= \begin{bmatrix} X & I \\ I & Y \end{bmatrix}, \\ \Phi_2&= \mbox{block-diag}(\Gamma,\tau_{r+1}), \\ \Phi_3&= \begin{bmatrix} XA+W_1C & W_3 \\ A + B D_K C & AY-BW_2 \end{bmatrix}, \\ \Phi_4&= \begin{bmatrix} W_1 & 0 \\ B D_K & 0 \end{bmatrix}, \\ \Phi_5&= \begin{bmatrix} M & MY \\ 0 & 0 \end{bmatrix}, \\ \Phi_6&= \mbox{block-diag}(0, 1_r), \\ \Phi_7&= \mbox{block-diag}(2I_r-\Gamma, \Sigma_b^{-1}(2I_r-\Gamma)), \\ \Phi_2^{\prime}&= \mbox{block-diag}(\Lambda,1-\kappa_{r+1}), \nonumber \\ \Phi_7^{\prime}&= \mbox{block-diag}(2I_r-\Lambda, \Sigma_b^{-1}(2I_r-\Lambda)), \nonumber \\ M &= \begin{bmatrix} {\rm diag}(a_1, a_2, \dots, a_r) & 0_{r \times (n-r)} \end{bmatrix}, \\ \Sigma_b &={\rm diag}(b_1, b_2, \dots, b_r). \end{aligned}\]

Two matrices \(X, Y \in {\mathcal R}^{n\times n}\) and two diagonal matrices \(\Gamma = {\rm diag}(\tau_1, \tau_2, \dots, \tau_r) \in {\mathcal R}^{r \times r}\) and \(\Lambda = {\rm diag}(\kappa_1, \kappa_2, \dots, \kappa_r) \in {\mathcal R}^{r \times r}\) are positive-definite matrices consisting of decision variables. The matrices \(W_1 \in {\mathcal R}^{n\times r}\), \(W_2 \in {\mathcal R}^{m \times n}\), \(W_3 \in {\mathcal R}^{n \times n}\), and \(D_K\) are unconstrained matrices consisting of decision variables.

Using the solution for Problem 2, the matrices \(A_K\), \(B_K\), \(C_K\) in the controller (2) are derived as

\[\begin{aligned} A_K =& \ (X-Y^{-1})^{-1} (XA + XBD_KC - XBC_K \\ & \ + XB_KC -Y^{-1} B_K C - W_3 Y^{-1}), \\ B_K = & \ (X-Y^{-1})^{-1} (W_1 - XBD_K), \\ C_K = & \ W_2 Y^{-1} + D_KC, \end{aligned}\]

respectively.
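As a consistency check of these recovery formulas, the following sketch (random illustrative data, assuming numpy) forms \(W_1\), \(W_2\), \(W_3\) from hypothetical controller matrices with \(Z = X - Y^{-1}\), as in the proof below, and then recovers \(A_K\), \(B_K\), and \(C_K\):

```python
# Illustrative round-trip check: W1, W2, W3 are formed from given controller
# matrices (their definitions appear in the proof), then the controller is
# recovered with the formulas above. X, Y are chosen so Z = X - Y^{-1} > 0.
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 4, 2, 3
A = rng.standard_normal((n, n));  B = rng.standard_normal((n, m))
C = rng.standard_normal((r, n))
A_K = rng.standard_normal((n, n)); B_K = rng.standard_normal((n, r))
C_K = rng.standard_normal((m, n)); D_K = rng.standard_normal((m, r))

X = 3.0 * np.eye(n); Y = 2.0 * np.eye(n)   # then Z = 2.5 I is invertible
Yinv = np.linalg.inv(Y); Z = X - Yinv

W1 = X @ B @ D_K + Z @ B_K
W2 = C_K @ Y - D_K @ C @ Y
W3 = X@A@Y + X@B@D_K@C@Y - X@B@C_K@Y + Z@B_K@C@Y - Z@A_K@Y

Zinv = np.linalg.inv(Z)
A_K_rec = Zinv @ (X@A + X@B@D_K@C - X@B@C_K + X@B_K@C - Yinv@B_K@C - W3@Yinv)
B_K_rec = Zinv @ (W1 - X @ B @ D_K)
C_K_rec = W2 @ Yinv + D_K @ C

assert np.allclose(A_K_rec, A_K)
assert np.allclose(B_K_rec, B_K)
assert np.allclose(C_K_rec, C_K)
```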

Proof: First, without loss of generality, the matrix \(P\) can be replaced with

\[\begin{equation*} P= \begin{bmatrix} X & Z \\ Z & Z \end{bmatrix}, \tag{22} \end{equation*}\]

where \(Z \in {\mathcal R}^{n \times n}\) is a positive definite matrix (see [23]). It is shown that \(X-Z \succ 0\) holds by applying the Schur complement [20] to \(P \succ 0\). We define the matrix \(Y\) by \(Y := (X-Z)^{-1} \succ 0\). The conditions (11) and (15) can be rewritten as

\[\begin{align} & \Theta_1-\Theta_2^{\top}\Theta_3^{-1}\Theta_2 \succ 0, \tag{23} \\ & \Theta_1^{\prime}-\Theta_2^{\top}(\Theta_3^{\prime})^{-1}\Theta_2 \succ 0, \tag{24} \end{align}\]

respectively, where

\[\begin{aligned} \Theta_1 &= \mbox{block-diag}((\bar{\beta}-\tau_{r+1})P, \Gamma, \tau_{r+1}),\\ \Theta_2 &= \begin{bmatrix} P\bar{A} & P\bar{B} & 0 \\ \bar{C} & 0 & 0 \\ 0 & 0 & 1_r \end{bmatrix}, \\ \bar{C} &= \begin{bmatrix} M & 0_{r \times n} \end{bmatrix}, \\ \Theta_3 &= \mbox{block-diag}(P, \Gamma^{-1}, \Sigma_b^{-1}\Gamma^{-1}), \\ \Theta_1^{\prime} &= \mbox{block-diag}(\kappa_{r+1}P, \Lambda, 1-\kappa_{r+1}), \\ \Theta_3^{\prime} &= \mbox{block-diag}(P, \Lambda^{-1}, \Sigma_b^{-1}\Lambda^{-1}). \end{aligned}\]

Applying the Schur complement to (23) and (24), we can derive the following two conditions:

\[\begin{align} \Theta :=& \ \begin{bmatrix} \Theta_1 & \Theta_2^{\top} \\ \Theta_2 & \Theta_3 \end{bmatrix} \succ 0, \tag{25} \\ \Theta^{\prime} :=& \ \begin{bmatrix} \Theta_1^{\prime} & \Theta_2^{\top} \\ \Theta_2 & \Theta_3^{\prime} \end{bmatrix} \succ 0. \tag{26} \end{align}\]

We define the matrices \(T\), \(\bar{T}_1\), \(\bar{T}_2\), \(\bar{T}\) as follows:

\[\begin{aligned} T &:= \begin{bmatrix} I_n & 0_{n \times n} \\ Y & -Y \end{bmatrix}, \\ \bar{T}_1 &:= \mbox{block-diag}(T, I_r, 1), \\ \bar{T}_2 &:= \mbox{block-diag}(T, I_r, I_r), \\ \bar{T} &:= \mbox{block-diag}(\bar{T}_1, \bar{T}_2). \end{aligned}\]

We multiply \(\Theta\) of (25) and \(\Theta^{\prime}\) of (26) by \(\bar{T}\) from the left and by \(\bar{T}^{\top}\) from the right. Since \(\bar{T}\) is nonsingular, this congruence transformation preserves positive definiteness, and we can derive the following two conditions:

\[\begin{align} \bar{T} \Theta \bar{T}^{\top} &= \begin{bmatrix} \bar{T}_1 \Theta_1 \bar{T}_1^{\top} & \ast \\ \bar{T}_2 \Theta_2 \bar{T}_1^{\top} & \bar{T}_2 \Theta_3 \bar{T}_2^{\top} \end{bmatrix}, \tag{27} \\ \bar{T} \Theta^{\prime} \bar{T}^{\top} &= \begin{bmatrix} \bar{T}_1 \Theta_1^{\prime} \bar{T}_1^{\top} & \ast \\ \bar{T}_2 \Theta_2 \bar{T}_1^{\top} & \bar{T}_2 \Theta_3^{\prime} \bar{T}_2^{\top} \end{bmatrix}, \tag{28} \end{align}\]

where

\[\begin{aligned} \bar{T}_1 \Theta_1 \bar{T}_1^{\top} &= \mbox{block-diag}((\bar{\beta}-\tau_{r+1})TPT^{\top}, \Gamma, \tau_{r+1}), \\ \bar{T}_2 \Theta_2 \bar{T}_1^{\top} &= \begin{bmatrix} TP\bar{A}T^{\top} & TP\bar{B} & 0 \\ \bar{C} T^{\top} & 0 & 0 \\ 0 & 0 & 1_r \end{bmatrix}, \\ \bar{T}_2 \Theta_3 \bar{T}_2^{\top} &= \mbox{block-diag}(TPT^{\top}, \Gamma^{-1}, \Sigma_b^{-1}\Gamma^{-1}), \\ \bar{T}_1 \Theta_1^{\prime} \bar{T}_1^{\top} &= \mbox{block-diag}(\kappa_{r+1}TPT^{\top}, \Lambda, 1-\kappa_{r+1}), \\ \bar{T}_2 \Theta_3^{\prime} \bar{T}_2^{\top} &= \mbox{block-diag}(TPT^{\top}, \Lambda^{-1}, \Sigma_b^{-1}\Lambda^{-1}). \end{aligned}\]

From \(\Gamma \succ 0\) and \(\Lambda \succ 0\), the following two inequalities hold (see, e.g., [18]):

\[\begin{aligned} & (I_r-\Gamma)\Gamma^{-1}(I_r-\Gamma) \succeq 0, \\ & (I_r-\Lambda)\Lambda^{-1}(I_r-\Lambda) \succeq 0, \end{aligned}\]

which can be rewritten as

\[\begin{aligned} & \Gamma^{-1} -2I_r +\Gamma \succeq 0, \\ & \Lambda^{-1} -2I_r +\Lambda \succeq 0, \end{aligned}\]

respectively. Applying these inequalities to (27) and (28), the following conditions can be obtained as sufficient conditions for (27) and (28), respectively:

\[\begin{align} \begin{bmatrix} \bar{T}_1 \Theta_1 \bar{T}_1^{\top} & \ast \\ \bar{T}_2 \Theta_2 \bar{T}_1^{\top} & \bar{T}_2 \tilde{\Theta}_3 \bar{T}_2^{\top} \end{bmatrix} \succ 0, \tag{29} \\ \begin{bmatrix} \bar{T}_1 \Theta_1^{\prime} \bar{T}_1^{\top} & \ast \\ \bar{T}_2 \Theta_2 \bar{T}_1^{\top} & \bar{T}_2 \tilde{\Theta}_3^{\prime} \bar{T}_2^{\top} \end{bmatrix} \succ 0, \tag{30} \end{align}\]

where

\[\begin{aligned} \tilde{\Theta}_3 &= \mbox{block-diag}(P, 2I_r-\Gamma, \Sigma_b^{-1}(2I_r-\Gamma)), \\ \tilde{\Theta}_3^{\prime} &= \mbox{block-diag}(P, 2I_r-\Lambda, \Sigma_b^{-1}(2I_r-\Lambda)). \end{aligned}\]

From the definition of \(T\), we can obtain

\[\begin{aligned} TPT^{\top} &= \begin{bmatrix} X & I \\ I & Y \end{bmatrix}, \\ TP\bar{B} &= \begin{bmatrix} W_1 \\ B D_K \end{bmatrix}, \\ TP\bar{A}T^{\top} &= \begin{bmatrix} XA+W_1C & W_3 \\ A+B D_K C & AY-BW_2 \end{bmatrix}, \\ \bar{C}T^{\top} &= \begin{bmatrix} M & MY \end{bmatrix}, \end{aligned}\]

where

\[\begin{aligned} W_1 = & \ X B D_K + Z B_K, \\ W_2 = & \ C_K Y - D_K C Y, \\ W_3 = & \ XAY + XBD_KCY - XBC_KY + ZB_KCY \\ & \ - ZA_KY. \end{aligned}\]

Applying these relations to (29) and (30), we have (19) and (20). The matrices \(A_K\), \(B_K\), and \(C_K\) in the controller (2) can be obtained from \(W_1\), \(W_2\), and \(W_3\).

Next, consider minimization of \({\rm tr}(P^{-1})\). From (22), we can obtain

\[P^{-1} = \begin{bmatrix} Y & -Y \\ -Y & Y + Z^{-1} \end{bmatrix}.\]

From this expression, we can obtain \({\rm tr}(P^{-1}) = 2 {\rm tr}(Y) + {\rm tr}(Z^{-1})\). We introduce a new matrix \(Q\) such that \(Q - Z^{-1} \succeq 0\) holds. Then, \({\rm tr}(Q) \geq {\rm tr}(Z^{-1})\) holds. Hence, minimization of \({\rm tr}(P^{-1})\) can be rewritten as minimization of \({\rm tr}(2Y+Q)\) under \(Q - Z^{-1} \succeq 0\). From this fact, we can obtain (18). Furthermore, applying the Schur complement to \(Q - Z^{-1} \succeq 0\), we can obtain

\[\begin{bmatrix} Q & I \\ I & Z \end{bmatrix} \succeq 0.\]

Noting that \(Z = X-Y^{-1}\) holds, this expression can be rewritten as

\[\begin{bmatrix} Q & I \\ I & X \end{bmatrix} - \begin{bmatrix} 0 \\ I \end{bmatrix} Y^{-1} \begin{bmatrix} 0 & I \end{bmatrix} \succeq 0.\]

Applying the Schur complement to this expression, we can obtain (21).
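The structural facts used in this part of the proof can be checked numerically. The following sketch (random illustrative data, assuming numpy) verifies that \(P\) of (22) is positive definite when \(Z \succ 0\) and \(X - Z \succ 0\), that \(P^{-1}\) has the stated block form with \(Y = (X-Z)^{-1}\), and that \({\rm tr}(P^{-1}) = 2\,{\rm tr}(Y) + {\rm tr}(Z^{-1})\):

```python
# Illustrative numerical check of the block structure of P and its inverse.
import numpy as np

rng = np.random.default_rng(3)
n = 3
M = rng.standard_normal((n, n))
Z = M @ M.T + np.eye(n)                 # Z > 0
N = rng.standard_normal((n, n))
X = Z + N @ N.T + np.eye(n)             # X - Z > 0
P = np.block([[X, Z], [Z, Z]])          # the structure of Eq. (22)

assert np.all(np.linalg.eigvalsh(P) > 0)  # P > 0 (via the Schur complement)
Y = np.linalg.inv(X - Z)                  # Y := (X - Z)^{-1} > 0
Pinv = np.linalg.inv(P)
Zinv = np.linalg.inv(Z)
assert np.allclose(Pinv, np.block([[Y, -Y], [-Y, Y + Zinv]]))
assert np.isclose(np.trace(Pinv), 2 * np.trace(Y) + np.trace(Zinv))
```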

Finally, consider \(T(\bar{x}_0)\) in Definition 1. From (10), we can obtain \(V(k) < \bar{\beta}^k V(0)\). Then, solving \(\bar{\beta}^k V(0)=1\) for \(k\), \(T(\bar{x}_0)\) can be obtained as

\[T(\bar{x}_0) = \left\lceil - \frac{\log \left( \bar{x}_0^{\top} P \bar{x}_0 \right)}{\log \bar{\beta}} \right\rceil.\]

This completes the proof. \(\tag*{◻}\)
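For illustration, the ceiling formula for \(T(\bar{x}_0)\) can be evaluated as follows (the values of \(P\), \(\bar{x}_0\), and \(\beta\) below are hypothetical, not from the paper):

```python
# A small sketch evaluating T(x0) from the ceiling formula above.
import math
import numpy as np

def reach_time(P, x0, beta):
    """T(x0) = ceil(-log(x0' P x0) / log(1 - beta)).
    Returns 0 when x0 already lies inside the ellipsoid E(P, 1)."""
    V0 = float(x0 @ P @ x0)
    return max(0, math.ceil(-math.log(V0) / math.log(1.0 - beta)))

# Example: P = I, x0 = [3, 4], beta = 0.1, so V(0) = 25 and beta_bar = 0.9.
T = reach_time(np.eye(2), np.array([3.0, 4.0]), 0.1)
```

In this example \(\bar{\beta}^{T} V(0) \le 1 < \bar{\beta}^{T-1} V(0)\), so the bound \(V(k) < \bar{\beta}^k V(0)\) guarantees the state is inside \(\mathcal{E}(P,1)\) from step \(T\) on.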

The BMI constraint conditions (19) and (20) become LMI constraint conditions by fixing the two scalars \(\tau_{r+1}\) and \(\kappa_{r+1}\). Furthermore, \(\tau_{r+1}\) and \(\kappa_{r+1}\) must be appropriately chosen from the bounded intervals \((0,\bar{\beta})\) and \((0,1)\), respectively. Hence, a solution to Problem 1 can be derived by solving multiple LMI optimization problems through a grid search method [18]. The outline of the optimization procedure is as follows. First, enumerate, in an appropriate grid pattern, a finite set of points \({\mathcal G} := \{ (\tau_{r+1}^1, \kappa_{r+1}^1), (\tau_{r+1}^2, \kappa_{r+1}^2), \dots, (\tau_{r+1}^G, \kappa_{r+1}^G) \}\) from \((0,\bar{\beta}) \times (0,1)\) (a finer grid, i.e., a larger \(G\), is better). Next, solve an LMI optimization problem for each element of \({\mathcal G}\). Finally, choose the element of \({\mathcal G}\), and the corresponding controller, for which \({\rm tr}(2Y+Q)\) is minimal.
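The procedure above can be sketched as the following driver (illustrative; `solve_lmi_instance` is a hypothetical stub standing in for an actual SDP solve of (19)-(21) at fixed \((\tau_{r+1}, \kappa_{r+1})\), here replaced by a smooth surrogate objective so the grid logic can be exercised):

```python
# Grid-search driver over (tau_{r+1}, kappa_{r+1}); illustrative sketch only.
import numpy as np

def solve_lmi_instance(tau, kappa):
    # Placeholder: a real implementation would solve the LMI problem obtained
    # from (19)-(21) with an SDP solver and return (tr(2Y+Q), controller).
    return (tau - 0.3) ** 2 + (kappa - 0.6) ** 2, None

def grid_search(beta_bar, grid_points=20):
    taus = np.linspace(0, beta_bar, grid_points + 2)[1:-1]  # open (0, beta_bar)
    kappas = np.linspace(0, 1, grid_points + 2)[1:-1]       # open (0, 1)
    best = min((solve_lmi_instance(t, k)[0], t, k)
               for t in taus for k in kappas)
    return best   # (minimal objective, tau, kappa)
```

In practice, each grid point would return \({\rm tr}(2Y+Q)\) together with the controller matrices, and infeasible grid points would simply be skipped.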

4.  Numerical Example

4.1  Problem Setting and Obtained Controller

As a plant, we consider an eight-state, four-input, four-output system, which is derived based on the decentralized interconnected system in [24]. The coefficient matrices \(A\), \(B\), and \(C\) are shown below. The parameters \(a_i\) and \(b_i\) (\(i=1,2,3,4\)) in the event-triggering condition (4) are given by \(a_i = 0.6\) and \(b_i = 0.8\), respectively. The parameter \(\beta\) in (10) is given by \(\beta=0.1\).

By solving Problem 2 (i.e., by solving multiple LMI optimization problems), we obtain the coefficient matrices \(A_K\), \(B_K\), \(C_K\), and \(D_K\) of the controller (2) shown below.

4.2  Simulation Result and Discussion

Next, we present a numerical simulation. The initial states of the plant and the controller are given by \(x(0) = [10\ 20\ 15\ 20\ 30\ 20\ 10\ 45]^{\top}\) and \(\hat{x}(0) = 0_{8 \times 1}\), respectively. Figure 2 shows the time response of the state in the plant. Figure 3 shows the time response of the Lyapunov function. Figure 4 shows the control input. Figure 5 shows the time response of \(x(k)+\hat{x}(k)\). Figure 6 shows the time response of the event.

\[\begin{align}A &=\begin{bmatrix} 1.0130 & -0.0044 & 0.0057 & 0.0043 & 0.0110 & 0.1989 & 0.0020 & 0.0076 \\ -0.0515 & 1 & -0.0022 & -0.0014 & -0.0058 & -0.0908 & 0.0977 & -0.0040 \\0 & 0 & 1 & 0.0289 & 0 & 0 & 0 & 0.0519 \\ 0 & 0 & 0 & 0.9675 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0.0275 & 0.0031 & 0.8462 & 0 & 0 & 0.0084 \\ -0.0150 & 0.0125 & -0.0006 & -0.0004 & -0.0013 & 0.8420 & 0.0204 & 0.0010 \\-0.0321 & 0.0272 & -0.0018 & -0.0014 & -0.0029 & -0.0558 & 0.8341 & -0.0023 \\ 0 & 0 & 0 & 0.0220 & 0 & 0 & 0 & 0.8205\end{bmatrix},\\B &=\begin{bmatrix} -0.0078 & 0.0024 & 0.0017 & 0.0001 \\0.0583 & -0.0008 & 0.0014 & -0.0001 \\0 & 0.0171 & 0 & 0.0007 \\0 & 0.1426 & 0 & 0 \\0 & 0.0004 & 0 & 0.0012 \\0.0004 & -0.0001 & 0.0153 & -0.0002 \\0.0010 & 0.0002 & 0.0374 & -0.0008 \\0 & 0.0017 & 0 & 0.0225\end{bmatrix},\\C &=\begin{bmatrix}I_4 & 0_{4 \times 4}\end{bmatrix},\\A_K &=\begin{bmatrix} 0.4021 & 0.0673 & -0.0384 & -0.0087 & 0.0050 & -0.1112 & 0.0647 & 0.0002 \\ 0.0322 & -0.0448 & 0.0004 & 0.0017 & -0.0002 & -0.0116 & 0.0061 & 0.0001 \\-0.0246 & -0.0041 & 0.5403 & -0.0415 & -0.1638 & -0.0171 & 0.0033 & 0.0317 \\ 0.0232 & 0.0007 & -0.0517 & 0.0093 & 0.0113 & -0.0003 & 0.0006 & -0.0025 \\ -0.0292 & -0.0027 & -1.0186 & 0.0959 & 0.9120 & -0.0209 & 0.0050 & -0.0618 \\-5.6143 & -0.6988 & 0.4401 & -0.0215 & -0.0489 & -0.4545 & -0.0735 & -0.0153 \\-13.6276 & -1.7071 & 1.3427 & -0.0785 & -0.0992 & -3.3303 & 0.6500 & -0.0244 \\-0.5654 & -0.0575 & -19.0819 & 1.7548 & -1.0977 & -0.3335 & 0.0717 & -0.3759\end{bmatrix},\\B_K &=\begin{bmatrix}-0.0809 & 0.0012 & -0.0056 & -0.0007 \\-0.0082 & -0.0469 & 0.0022 & 0.0050 \\0.0019 & -0.0009 & -0.0855 & 0.0063 \\0.0155 & 0.0005 & 0.0062 & 0.0032 \\-0.0005 & 0.0004 & -0.0024 & -0.0014 \\-0.0016 & 0.0017 & 0.0022 & -0.0114 \\ 0.0017 & -0.0056 & 0.0073 & -0.0269 \\0.0016 & 0.0055 & -0.0197 & -0.0250\end{bmatrix},\\C_K &=\begin{bmatrix}-9.8287 & 15.9198 & 1.6434 & -0.0802 & -0.1271 & -3.6604 & 1.5570 & -0.0311 \\-0.0984 & -0.0011 & 0.2887 & 6.4833 & 
-0.0285 & -0.0141 & 0.0035 & 0.0123 \\364.4753 & 45.8527 & -18.8258 & 0.5099 & 3.5559 & 87.5031 & 4.9988 & 1.6705 \\ 25.1427 & 2.5616 & 848.2085 & -77.5632 & 48.3749 & 14.7800 & -3.1733 & 53.2072\end{bmatrix},\\D_K &=\begin{bmatrix} 0.1592 & -0.0768 & -0.0091 & -0.0693 \\-0.0768 & -0.0022 & -0.0298 & -0.2382 \\-0.0091 & -0.0298 & -0.1616 & 0.7547 \\-0.0693 & -0.2382 & 0.7547 & 1.0794\end{bmatrix}.\end{align}\]

Fig. 2  Time response of the state.

Fig. 3  Time response of the Lyapunov function \(V(k)\).

Fig. 4  Control input.

Fig. 5  Time response of \(x(k)+\hat{x}(k)\).

Fig. 6  Time response of the event. “1” implies the event occurs (i.e., the event-triggering condition is satisfied). “0” implies the event does not occur.

From Fig. 2, we see that the state converges to a neighborhood of the origin. From Fig. 3, we see that the closed-loop system is UUB. From Fig. 4, we see that the control input is updated at every time step, because event-triggering mechanisms are not introduced at the actuators.

From Fig. 5, we see that the state \(\hat{x}\) in the controller is an estimate of \(-x\). In other words, the controller estimates \(-x\) and calculates the control input based on this estimate. Hence, the controller obtained by the proposed method has the structure of conventional controllers consisting of an observer and a state-feedback controller. The reason why \(\hat{x}\) becomes the estimate of \(-x\) is as follows. In the Lyapunov function \(V(k)\) of (9), the matrix \(P\) is given by (22). Then, we can obtain \(V(k) = x^{\top}(k) X x(k) + 2 \hat{x}^{\top}(k) Z x(k) + \hat{x}^{\top}(k) Z \hat{x}(k)\). From this expression, \(\partial V(k) / \partial \hat{x}(k)\) can be derived as \(\partial V(k) / \partial \hat{x}(k) = 2 Z x(k) + 2 Z \hat{x}(k)\), which implies that the \(\hat{x}(k)\) minimizing \(V(k)\) is given by \(\hat{x}(k) = -x(k)\). Thus, to decrease \(V(k)\), \(\hat{x}(k)\) must approach \(-x(k)\).
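This minimization can also be checked numerically. The following sketch uses stand-in positive-definite matrices \(X\) and \(Z\) (the actual values come from the LMI solution (22), which is not reproduced here) and verifies that, for a fixed \(x\), the quadratic \(V\) is minimized over \(\hat{x}\) at \(\hat{x}=-x\):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Stand-in positive-definite X and Z; in the paper these come from
# the LMI solution (22), so the values below are illustrative only.
M = rng.standard_normal((n, n)); X = M @ M.T + n * np.eye(n)
M = rng.standard_normal((n, n)); Z = M @ M.T + n * np.eye(n)

def V(x, xhat):
    # V = x'Xx + 2 xhat'Zx + xhat'Z xhat
    return x @ X @ x + 2 * xhat @ Z @ x + xhat @ Z @ xhat

x = rng.standard_normal(n)
xhat_star = -x  # stationary point of the gradient 2Zx + 2Z*xhat

# Random perturbations never decrease V, since the Hessian 2Z > 0:
# V(x, xhat_star + d) - V(x, xhat_star) = d'Zd >= 0.
for _ in range(100):
    d = rng.standard_normal(n)
    assert V(x, xhat_star + d) >= V(x, xhat_star)
```

At the minimizer, \(V\) reduces to \(x^{\top}(X-Z)x\), which is consistent with the observer-like role of \(\hat{x}\) described above.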

Finally, from Fig. 6, we see that as the state comes closer to the neighborhood of the origin (i.e., as the Lyapunov function \(V(k)\) approaches \(0\)), the occurrence of the event is inhibited. The number of event occurrences is 22. In the cases of \(a_i=0.1, 0.2, 0.3, 0.4, 0.5\), the numbers of event occurrences are \(47, 39, 32, 28, 26\), respectively (for simplicity, \(b_i\) is fixed as \(b_i=0.8\)). We see that as \(a_i\) increases, the number of event occurrences decreases.
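Since the event-triggering condition (4) is not reproduced in this excerpt, the following sketch uses a hypothetical condition of the same relative-plus-absolute-threshold form (transmit when the deviation from the last transmitted value exceeds \(a\,|y(k)| + b\)) to illustrate why larger thresholds suppress events near the origin:

```python
import numpy as np

def count_events(y, a, b):
    """Count transmissions under a hypothetical threshold rule:
    transmit y[k] when |y[k] - y_last| > a*|y[k]| + b.
    (Condition (4) in the paper may differ; this is illustrative.)"""
    y_last = y[0]
    events = 0
    for yk in y[1:]:
        if abs(yk - y_last) > a * abs(yk) + b:
            y_last = yk  # sensor transmits; held value is updated
            events += 1
    return events

# A decaying oscillatory measurement, standing in for a stabilized output.
k = np.arange(200)
y = np.exp(-0.05 * k) * np.cos(0.3 * k)

# Once |y| decays far enough, no deviation can exceed the absolute
# term b, so events stop near the origin -- the trend seen in Fig. 6.
for a in (0.1, 0.3, 0.5):
    print(a, count_events(y, a, 0.05))
```

With \(b=0\) (as required for asymptotic stability in [22]), the absolute term vanishes and arbitrarily small deviations can still trigger events near the origin, which matches the comparison discussed below.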

From the viewpoint of event occurrences, we compare the proposed method with our previously proposed method [22]. In the method in [22], the event-triggering condition is given by (4) with \(a_i=0.6\) and \(b_i=0\). Since asymptotic stability is considered in [22], \(b_i\) must be set to \(b_i=0\). Figure 7 and Fig. 8 show the time responses of the state and the event, respectively. The number of event occurrences is 72. Comparing Fig. 2 with Fig. 7, we see that the state in Fig. 7 converges to the neighborhood of the origin faster than that in Fig. 2. Thus, when asymptotic stability is considered, the convergence of the state is better, but many event occurrences are required. In particular, since \(b_i\) must be set to \(b_i=0\), the event occurs even after the state reaches the neighborhood of the origin (see Fig. 8). As a result, the number of event occurrences increases. In the proposed method based on the notion of uniformly ultimate boundedness, the convergence of the state deteriorates, but the number of event occurrences is inhibited. Depending on the application and the control purpose, either method may be chosen.

Fig. 7  Time response of the state in our previously proposed method [22].

Fig. 8  Time response of the event in our previously proposed method [22]. “1” implies the event occurs (i.e., the event-triggering condition is satisfied). “0” implies the event does not occur.

5.  Conclusion

In this paper, we proposed a new design method for decentralized event-triggered control of discrete-time linear systems over a sensor network. In the proposed method, we adopted uniformly ultimate boundedness as a control specification, which is weaker than asymptotic stability. By introducing it, the number of communications in the neighborhood of the origin can be reduced. The output feedback controller with decentralized event-triggering mechanisms can be derived by solving a set of LMI optimization problems. The proposed method was demonstrated by a numerical example.

At the current stage, we need to determine the parameters \(a_i\) and \(b_i\) in the event-triggering condition (4) by trial and error. One direction for future work is to develop a tuning method for these parameters from the viewpoints of both control and communication performance. In this paper, a simple grid search method was used for solving the BMI optimization problem (Problem 2) in Theorem 1. It is also important to develop a more efficient solution method.
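The grid-search idea can be sketched as follows. This is not the paper's Problem 2 (which involves the controller and triggering parameters); as a stand-in we use the decay-rate condition \(A^{\top}PA - \lambda P \prec 0\), which is bilinear in \((\lambda, P)\) but, once the scalar \(\lambda\) is fixed on a grid, becomes an LMI in \(P\) (here solvable directly as a Lyapunov equation):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Stand-in system; the paper's BMI has a different structure, but the
# grid-search-over-a-scalar strategy is the same.
A = np.array([[0.9, 0.2],
              [0.0, 0.7]])
n = A.shape[0]

def feasible_P(lam):
    """For fixed lam, find P > 0 with A'PA - lam*P < 0, if one exists.
    This particular condition reduces to a Lyapunov equation for the
    scaled matrix A/sqrt(lam); a general LMI would be handed to an
    SDP solver instead."""
    As = A / np.sqrt(lam)
    if max(abs(np.linalg.eigvals(As))) >= 1.0:
        return None  # infeasible for this lam
    # Solves As'*P*As - P = -I, i.e. A'PA - lam*P = -lam*I < 0.
    return solve_discrete_lyapunov(As.T, np.eye(n))

best = None
for lam in np.linspace(0.05, 0.99, 20):  # grid over the bilinear scalar
    P = feasible_P(lam)
    if P is not None:
        best = (lam, P)
        break  # smallest feasible decay rate on the grid
```

Each grid point costs one convex (here, linear-algebraic) subproblem, so the overall cost grows linearly with the grid size; a bisection or branch-and-bound scheme would be one route to the more efficient method mentioned above.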

Acknowledgments

This work was partly supported by JSPS KAKENHI Grant Numbers JP21H04558, JP22K04163, and JP23H01430.

References

[1] E.A. Lee, “Cyber physical systems: Design challenges,” 2008 11th IEEE International Symposium on Object and Component-Oriented Real-Time Distributed Computing, pp.363-369, IEEE, 2008.

[2] E.A. Lee, “The past, present and future of cyber-physical systems: A focus on models,” Sensors, vol.15, no.3, pp.4837-4869, 2015.

[3] Y. Liu, Y. Peng, B. Wang, S. Yao, and Z. Liu, “Review on cyber-physical systems,” IEEE/CAA J. Autom. Sinica, vol.4, no.1, pp.27-40, 2017.

[4] G. Schirner, D. Erdogmus, K. Chowdhury, and T. Padir, “The future of human-in-the-loop cyber-physical systems,” Computer, vol.46, no.1, pp.36-45, 2013.

[5] Y. Zhang, M. Qiu, C.W. Tsai, M.M. Hassan, and A. Alamri, “Health-CPS: Healthcare cyber-physical system assisted by cloud and big data,” IEEE Syst. J., vol.11, no.1, pp.88-95, 2015.

[6] M.H. Cintuglu, O.A. Mohammed, K. Akkaya, and A.S. Uluagac, “A survey on smart grid cyber-physical system testbeds,” IEEE Commun. Surveys Tuts., vol.19, no.1, pp.446-464, 2016.

[7] M. Schranz, G.A. Di Caro, T. Schmickl, W. Elmenreich, F. Arvin, A. Şekercioğlu, and M. Sende, “Swarm intelligence and cyber-physical systems: concepts, challenges and future trends,” Swarm and Evolutionary Computation, vol.60, p.100762, 2021.

[8] W.P. Heemels, K.H. Johansson, and P. Tabuada, “An introduction to event-triggered and self-triggered control,” Proc. 51st IEEE Conference on Decision and Control, pp.3270-3285, IEEE, 2012.

[9] W.H. Heemels, M. Donkers, and A.R. Teel, “Periodic event-triggered control for linear systems,” IEEE Trans. Autom. Control, vol.58, no.4, pp.847-861, 2012.

[10] W. Heemels and M. Donkers, “Model-based periodic event-triggered control for linear systems,” Automatica, vol.49, no.3, pp.698-711, 2013.

[11] M. Mazo and P. Tabuada, “Decentralized event-triggered control over wireless sensor/actuator networks,” IEEE Trans. Autom. Control, vol.56, no.10, pp.2456-2461, 2011.

[12] M. Mazo Jr and M. Cao, “Asynchronous decentralized event-triggered control,” Automatica, vol.50, no.12, pp.3197-3203, 2014.

[13] P. Tallapragada and N. Chopra, “Decentralized event-triggering for control of LTI systems,” 2013 IEEE International Conference on Control Applications, pp.698-703, IEEE, 2013.

[14] K. Nakajima, K. Kobayashi, and Y. Yamashita, “Linear quadratic regulator with decentralized event-triggering,” IEICE Trans. Fundamentals, vol.E100-A, no.2, pp.414-420, Feb. 2017.

[15] F. Blanchini, “Ultimate boundedness control for uncertain discrete-time systems via set-induced Lyapunov functions,” IEEE Trans. Autom. Control, vol.39, no.2, pp.428-433, 1994.

[16] K. Kobayashi, K. Nakajima, and Y. Yamashita, “Uniformly ultimate boundedness control with decentralized event-triggering,” IEICE Trans. Fundamentals, vol.E104-A, no.2, pp.455-461, Feb. 2021.

[17] K. Suzuki, N. Hayashi, and S. Takai, “Uniform ultimate stability for time-varying nonlinear systems with event-triggered control,” 2018 IEEE International Conference on Systems, Man, and Cybernetics, pp.2003-2008, 2018.

[18] W. Wu, S. Reimann, D. Görges, and S. Liu, “Event-triggered control for discrete-time linear systems subject to bounded disturbance,” International Journal of Robust and Nonlinear Control, vol.26, no.9, pp.1902-1918, 2016.

[19] S. Yoshikawa, K. Kobayashi, and Y. Yamashita, “Quantized event-triggered control of discrete-time linear systems with switching triggering conditions,” IEICE Trans. Fundamentals, vol.E101-A, no.2, pp.322-327, Feb. 2018.

[20] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, 1994.

[21] S. Tarbouriech and A. Girard, “LMI-based design of dynamic event-triggering mechanism for linear systems,” 57th IEEE Conference on Decision and Control, pp.121-126, IEEE, 2018.

[22] K. Kitamura, K. Kobayashi, and Y. Yamashita, “LMI-based design of output feedback controllers with decentralized event-triggering,” IEICE Trans. Fundamentals, vol.E105-A, no.5, pp.816-822, May 2022.

[23] I. Masubuchi, A. Ohara, and N. Suda, “LMI-based output feedback controller design-using a convex parametrization of full-order controllers,” 1995 American Control Conference, pp.3473-3477, IEEE, 1995.

[24] F. Leibfritz and W. Lipinski, “Description of the benchmark examples in COMPleib 1.0,” Technical Report, University of Trier, 2003.

Authors

Koichi KITAMURA
  Hokkaido University

received the B.E. degree in 2021 and the M.I.S. degree in 2023 from Hokkaido University. His research interests include event-triggered control.

Koichi KOBAYASHI
  Hokkaido University

received the B.E. degree in 1998 and the M.E. degree in 2000 from Hosei University, and the D.E. degree in 2007 from Tokyo Institute of Technology. From 2000 to 2004, he worked at Nippon Steel Corporation. From 2007 to 2015, he was an Assistant Professor at Japan Advanced Institute of Science and Technology. From 2015 to 2022, he was an Associate Professor at Hokkaido University. Since 2023, he has been a Professor at the Faculty of Information Science and Technology, Hokkaido University. His research interests include discrete event and hybrid systems. He is a member of IEEE, IEEJ, IEICE, ISCIE, and SICE.

Yuh YAMASHITA
  Hokkaido University

received his B.E., M.E., and Ph.D. degrees from Hokkaido University, Japan, in 1984, 1986, and 1993, respectively. In 1988, he joined the faculty of Hokkaido University. From 1996 to 2004, he was an Associate Professor at the Nara Institute of Science and Technology, Japan. Since 2004, he has been a Professor of the Graduate School of Information Science and Technology, at Hokkaido University. His research interests include nonlinear control and nonlinear dynamical systems. He is a member of SICE, ISCIE, IEICE, RSJ, and IEEE.
