1 Introduction

Sparsity is one of the most important notions in modern signal/image processing [1], machine learning [2], communications engineering [3], and high-dimensional statistics [4]. A wide range of applications is surveyed in, for example, [5].

Recently, sparsity-promoting techniques have also been applied to control problems. Ohlsson et al. have proposed in [6] sum-of-norms regularization for trajectory generation to obtain a compact representation of the control inputs. In [7], Bhattacharya and Başar have adapted compressive sensing techniques to state estimation under incomplete measurements. The sparsity notion has also been applied to networked control to reduce the control data size in model predictive control (MPC) [8–10]. MPC is a particularly active research topic for sparsity methods; in [11, 12], Gallieri and Maciejowski have proposed ℓasso-MPC to reduce actuator activity, and in [13], Aguilera et al. have discussed minimizing the number of active actuators subject to closed-loop stability by using the ℓ0 norm. Sparse MPC is further investigated based on self-triggered control in [14].

Motivated by this body of research, the maximum hands-off control has been proposed in [15, 16] for continuous-time systems. This control maximizes the length of the time duration over which the control value is exactly zero. With such control, actuators can be stopped for long periods, during which the control system consumes much less fuel or electric power, emits less exhaust gas such as CO2, and generates less noise. For this reason, the control is also called green control [17]. The problem is formulated as finite-horizon L0-optimal control, which is discontinuous and highly non-convex, and hence difficult to solve in general. In [15, 16], under a simple assumption of normality, the L0-optimal control is proved to be equivalent to the classical L1-optimal (or fuel-optimal) control, which can be described as a convex optimization. The proof of the equivalence theorem relies mainly on the “bang-off-bang” property (i.e., the control takes values ±1 or 0 almost everywhere) of the L1-optimal control. Moreover, based on this equivalence, the value function of the maximum hands-off control is shown to be continuous and convex on the reachable set [18], which can be used to prove the stability of an MPC-based closed-loop system.

In this paper, we investigate hands-off control in discrete time for energy-aware green control. The main difference from the continuous-time hands-off control mentioned above is that the discrete-time maximum hands-off control does not, in many cases, exhibit the “bang-off-bang” property. Instead, we use the restricted isometry property (RIP), e.g., [3], to establish an equivalence theorem between the ℓ0 and ℓ1 problems.

The associated ℓ1-optimal control problem is an ℓ1 optimization problem with linear constraints. It can be equivalently written as a standard linear program, which can be “efficiently” solved by the interior-point method [19]. Interior-point methods are indeed efficient for small- to medium-scale problems solved offline; for real-time control applications, however, they may be too slow. To improve computational efficiency, we adapt in this paper the alternating direction method of multipliers (ADMM) to the control problem. ADMM was first introduced in [20] in 1976, and since then, the algorithm has been widely investigated in both theoretical and practical aspects; see the review [21] and the references therein. ADMM is proved to converge to the exact optimal value under mild conditions, although in some cases this convergence is quite slow. On the other hand, ADMM often converges very quickly to a modestly accurate approximation ([21], Section 3.2). This property is desirable for real-time control applications, since the approximation error can often be eliminated by relying upon the robustness of the feedback control mechanism. In fact, ADMM has been applied to MPC with a quadratic cost function in [22–24]. In particular, an ADMM algorithm for ℓ1-regularized MPC has been proposed in [25], albeit without theoretical stability results.

1.1 Contributions

In this paper, we first analyze discrete-time finite-horizon hands-off control: we give a feasibility condition based on the system controllability and develop an equivalence theorem between the ℓ0- and ℓ1-optimal controls based on the RIP. These results differ from the continuous-time case in [16], where the concept of normality for an optimal control problem was adopted; unfortunately, normality cannot be used in the discrete-time case. RIP is often used to prove equivalence theorems in signal processing, e.g., [1], and we show in this paper that RIP is also useful for discrete-time hands-off control.

To compute the discrete-time hands-off control, we then propose to use ADMM, which is widely applied in signal/image processing [21], and we demonstrate by simulation that ADMM is very effective in feedback control since it requires very few iterations. Finally, we prove a stability theorem for hands-off model predictive control, which has not previously been given in the literature except for the continuous-time case [18].

1.2 Outline

The paper is organized as follows: in Section 2, we formulate the discrete-time maximum hands-off control and prove the feasibility property and the ℓ0-ℓ1 equivalence based on the RIP. In Section 3, we briefly review ADMM and give the ADMM algorithm for maximum hands-off control; the selection of the penalty parameter in the optimization is also discussed. Section 4 proposes MPC with maximum hands-off control and establishes the stability result. We include simulation results in Section 5, which illustrate the advantages of the proposed method. Section 6 offers concluding remarks.

1.3 Notation

We will use the following notation throughout this paper: \({\mathbb {R}}\) denotes the set of real numbers. For positive integers n and m, \({\mathbb {R}}^{n}\) and \({\mathbb {R}}^{m\times n}\) denote the sets of n-dimensional real vectors and m×n real matrices, respectively. We use boldface lowercase letters, e.g., v, for vectors, and uppercase letters, e.g., A, for matrices. For a positive integer n, 0 n denotes the n-dimensional zero vector, that is, \(\boldsymbol {0}_{n} = [0,\ldots,0]^{\top } \in {\mathbb {R}}^{n}\); if the dimension is clear from context, the zero vector is simply denoted by 0. The superscript (·)⊤ denotes the transpose of a vector or a matrix. For a vector \(\boldsymbol {v}=[v_{1},v_{2},\ldots,v_{n}]^{\top }\in {\mathbb {R}}^{n}\), we define the ℓ1 and ℓ2 norms, respectively, by

$$ \|\boldsymbol{v}\|_{1} \triangleq \sum_{k=1}^{n} |v_{k}|,\quad \|\boldsymbol{v}\|_{2} \triangleq \sqrt{\sum_{k=1}^{n} |v_{k}|^{2}}. $$

Also, we define the ℓ0 norm of v as the number of nonzero elements of v and denote it by ∥v∥0. A vector v is called s-sparse if ∥v∥0 ≤ s, and the set of all s-sparse vectors is denoted by \(\Sigma _{s} \triangleq \{\boldsymbol {v}\in {\mathbb {R}}^{N}: \|\boldsymbol {v}\|_{0}\leq s\}\). For a given \(\boldsymbol {v} \in {\mathbb {R}}^{N}\), the ℓ1-distance from v to the set Σ s is defined by

$$ \sigma_{s}(\boldsymbol{v}) \triangleq \min_{\boldsymbol{x}\in\Sigma_{s}}\|\boldsymbol{v}-\boldsymbol{x}\|_{1}. $$
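This distance has a simple closed form: the minimizing x keeps the s largest-magnitude entries of v and zeros the rest, so σ s (v) is the sum of the N−s smallest magnitudes. A minimal numerical sketch (Python is used for illustration throughout this paper; the function name is ours):

```python
import numpy as np

def sigma_s(v: np.ndarray, s: int) -> float:
    """l1-distance from v to the set of s-sparse vectors.

    The nearest s-sparse vector keeps the s largest-magnitude
    entries of v, so the distance is the sum of the rest.
    """
    mags = np.sort(np.abs(v))                # ascending magnitudes
    return float(np.sum(mags[: len(v) - s]))

# Example: v = [3, -1, 0.5, 0], s = 2  ->  sigma_s(v) = 0.5 + 0 = 0.5
print(sigma_s(np.array([3.0, -1.0, 0.5, 0.0]), 2))
```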

We say a set is non-empty if it contains at least one element. For a non-empty set Ω, the indicator operator for Ω is defined by

$$\mathcal{I}_{\Omega}(\boldsymbol{x}) \triangleq \left\{\begin{array}{ll} 0,& \text{~if~} \boldsymbol{x}\in \Omega,\\ \infty, & \text{~otherwise.} \end{array}\right. $$

2 Discrete-time hands-off control

In this article, we consider discrete-time hands-off control for the following linear time-invariant model:

$$ \boldsymbol{x}[\!k+1] = A \boldsymbol{x}[\!k] + \boldsymbol{b}u[\!k], ~k=0,1,\ldots,N-1, $$
(1)

where \(\boldsymbol {x}[\!k]\in {\mathbb {R}}^{n}\) is the state at time k, \(u[\!k]\in {\mathbb {R}}\) is the discrete-time scalar control input, and \(A\in {\mathbb {R}}^{n\times n}\), \(\boldsymbol {b}\in {\mathbb {R}}^{n}\).

The control (sequence) {u[ 0],u[ 1],…,u[ N−1]} is chosen to drive the state x[ k] from a given initial state x[ 0]=ξ to the origin x[ N]=0 in N steps.

We call such a control feasible, and denote by \({\mathcal {U}}_{\boldsymbol {\xi }}\) the set of all feasible controls. By solving the difference equation in (1) with the boundary conditions x[ 0]=ξ and x[ N]=0, we obtain \(A^{N}\boldsymbol{\xi} + \Phi\boldsymbol{u} = \boldsymbol{0}\) with

$$ \Phi \triangleq \left[ A^{N-1}\boldsymbol{b} \quad A^{N-2}\boldsymbol{b} \quad \cdots \quad A\boldsymbol{b} \quad \boldsymbol{b} \right]. $$
(2)

By this, the feasible control set \({\mathcal {U}}_{\boldsymbol {\xi }}\) is represented by

$$ {\mathcal{U}}_{\boldsymbol{\xi}} = \left\{\boldsymbol{u}\in{\mathbb{R}}^{N}: A^{N}\boldsymbol{\xi}+\Phi\boldsymbol{u}=\boldsymbol{0}\right\}. $$
(3)

For the feasible control set \({\mathcal {U}}_{\boldsymbol {\xi }}\), we have the following lemma.

Lemma 1.

Assume that the pair (A,b) is reachable, i.e.,

$$ \text{rank} \left[ \boldsymbol{b} \quad A\boldsymbol{b} \quad \dots \quad A^{n-1}\boldsymbol{b}\right] = n, $$
(4)

and N>n. Then \({\mathcal {U}}_{\boldsymbol {\xi }}\) is non-empty for any \(\boldsymbol {\xi }\in {\mathbb {R}}^{n}\).

Proof.

Since N>n, the matrix Φ in (2) can be written as

$$ \begin{aligned} \Phi & = \left[\Phi_{1} \quad \Phi_{2}\right],\\ \Phi_{1} & \triangleq \left[ A^{N-1}\boldsymbol{b} \quad A^{N-2}\boldsymbol{b} \quad \cdots \quad A^{n}\boldsymbol{b}\right],\\ \Phi_{2} & \triangleq \left[ A^{n-1}\boldsymbol{b} \quad A^{n-2}\boldsymbol{b} \quad \cdots \quad A\boldsymbol{b} \quad \boldsymbol{b}\right]. \end{aligned} $$
(5)

From the reachability assumption in (4), Φ 2 is nonsingular. Then the following vector

$$ \tilde{\boldsymbol{u}} \triangleq \left[\begin{array}{c}\boldsymbol{0}_{N-n} \\ -\Phi_{2}^{-1}A^{N}\boldsymbol{\xi}\end{array}\right], $$
(6)

satisfies \(A^{N}\boldsymbol {\xi }+\Phi \tilde {\boldsymbol {u}}=\boldsymbol {0}\), and hence \(\tilde {\boldsymbol {u}}\in {\mathcal {U}}_{\boldsymbol {\xi }}\).
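The proof is constructive and translates directly into code. A minimal sketch of the construction (2)-(6) in Python/NumPy (function and variable names are ours):

```python
import numpy as np

def feasible_control(A: np.ndarray, b: np.ndarray,
                     xi: np.ndarray, N: int) -> np.ndarray:
    """Construct the sparse feasible control (6), assuming N > n
    and reachability of (A, b) so that Phi_2 is nonsingular."""
    n = A.shape[0]
    assert N > n, "the horizon must satisfy N > n"
    # Phi = [A^{N-1} b, A^{N-2} b, ..., A b, b] as in (2)
    Phi = np.column_stack(
        [np.linalg.matrix_power(A, N - 1 - k) @ b for k in range(N)])
    Phi2 = Phi[:, N - n:]                    # last n columns, as in (5)
    u = np.zeros(N)
    u[N - n:] = -np.linalg.solve(Phi2, np.linalg.matrix_power(A, N) @ xi)
    # u now satisfies A^N xi + Phi u = 0
    return u
```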

For the feasible control set \({\mathcal {U}}_{\boldsymbol {\xi }}\) in (3), we consider the discrete-time maximum hands-off control (or ℓ0-optimal control) defined by

$$ \underset{\boldsymbol{u}\in{{\mathcal{U}}_{\boldsymbol{\xi}}}}{\mathrm{minimize~}}~\|\boldsymbol{u}\|_{0}, $$
(7)

where \(\boldsymbol {u}=\bigl [\!u[\!0],u[\!1],\dots,u[\!N-1]\bigr ]^{\top }\), and ∥u∥0 is the ℓ0 norm of u, that is, the number of nonzero elements of u. Recall that a vector u is called s-sparse if ∥u∥0 ≤ s, and that Σ s denotes the set of all s-sparse vectors, that is,

$$ \Sigma_{s} \triangleq \{\boldsymbol{u}\in{\mathbb{R}}^{N}: \|\boldsymbol{u}\|_{0}\leq s\}. $$

For the ℓ0 optimization in (7), we have the following observation:

Lemma 2.

Assume that the pair (A,b) is reachable and N>n. Then, we have \({\mathcal {U}}_{\boldsymbol {\xi }} \cap \Sigma _{n} \neq \emptyset \).

Proof.

From the proof of Lemma 1, there exists a feasible control \(\tilde {\boldsymbol {u}}\in {\mathcal {U}}_{\boldsymbol {\xi }}\) that satisfies \(\|\tilde {\boldsymbol {u}}\|_{0} \leq n\); see (6). It follows that \(\tilde {\boldsymbol {u}}\in \Sigma _{n}\) and hence \(\tilde {\boldsymbol {u}}\in {\mathcal {U}}_{\boldsymbol {\xi }}\cap \Sigma _{n}\).

This lemma assures that the solution of the ℓ0 optimization is at most n-sparse. However, the optimization problem (7) is combinatorial and imposes a heavy computational burden when n or N is large. This is undesirable for real-time control systems, so we relax the combinatorial optimization problem to a convex one.

For this purpose, we adopt an ℓ1 relaxation of (7), that is, we consider the following ℓ1-optimal control problem:

$$ \begin{aligned} &\underset{\boldsymbol{u}\in{{\mathcal{U}}_{\boldsymbol{\xi}}}}{\mathrm{minimize~}} \|\boldsymbol{u}\|_{1}, \end{aligned} $$
(8)

where \(\|\boldsymbol {u}\|_{1} \triangleq |u[\!0]|+|u[\!1]|+\dots +|u[\!N-1]|\). The resulting optimization can be described as a linear program, and hence solved efficiently by numerical software such as CVX in MATLAB [26, 27]. Moreover, a faster algorithm can be derived via the alternating direction method of multipliers (ADMM) [21]; see Section 3. A sketch of the linear-program formulation is given below.
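Concretely, the standard reformulation introduces slack variables t with |u[ k]| ≤ t[ k] and minimizes the sum of the t[ k] subject to the linear feasibility constraint (3). A minimal sketch using scipy.optimize.linprog (an illustration under our own naming, not the solver used in the paper):

```python
import numpy as np
from scipy.optimize import linprog

def l1_optimal_control(A, b, xi, N):
    """Solve (8) as a linear program in the variables [u; t],
    minimizing sum(t) subject to Phi u = -A^N xi and -t <= u <= t."""
    n = A.shape[0]
    Phi = np.column_stack(
        [np.linalg.matrix_power(A, N - 1 - k) @ b for k in range(N)])
    c = np.concatenate([np.zeros(N), np.ones(N)])   # cost acts on t only
    I = np.eye(N)
    A_ub = np.block([[I, -I], [-I, -I]])            # u - t <= 0, -u - t <= 0
    b_ub = np.zeros(2 * N)
    A_eq = np.hstack([Phi, np.zeros((n, N))])       # feasibility (3)
    b_eq = -np.linalg.matrix_power(A, N) @ xi
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * N + [(0, None)] * N)
    return res.x[:N]
```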

To justify the use of the ℓ1 relaxation, we recall the restricted isometry property [1], defined as follows:

Definition 1.

A matrix Φ satisfies the restricted isometry property (RIP for short) of order s if there exists δ s ∈(0,1) such that

$$ (1-\delta_{s})\|\boldsymbol{u}\|_{2}^{2} \leq \|\Phi\boldsymbol{u}\|_{2}^{2} \leq (1+\delta_{s})\|\boldsymbol{u}\|_{2}^{2} $$

holds for all u ∈ Σ s .

Then, we have the following theorem.

Theorem 1.

Assume that the pair (A,b) is reachable and that N>n. Suppose that the ℓ0 optimization (7) has a unique s-sparse solution. If the matrix Φ given in (2) satisfies the RIP of order 2s with \(\delta _{2s}<\sqrt {2}-1\), then the solution of the ℓ1-optimal control problem (8) coincides with that of the ℓ0-optimal control problem (7).

Proof.

Let u ∗ denote the unique s-sparse solution to (7). By ([28], Theorem 1.2) or ([1], Theorem 1.8), the solution to the ℓ1 optimization (8), which we denote by \(\hat {\boldsymbol {u}}\), obeys

$$ \|\hat{\boldsymbol{u}}-\boldsymbol{u}^{\ast}\|_{2} \leq C_{0} \frac{\sigma_{s}(\boldsymbol{u}^{\ast})}{\sqrt{s}}, $$

where C 0 is a constant given by

$$ C_{0} = 2\cdot \frac{1-(1-\sqrt{2})\delta_{2s}}{1-(1+\sqrt{2})\delta_{2s}}, $$

and

$$ \sigma_{s}(\boldsymbol{u}^{\ast}) \triangleq \min_{\boldsymbol{v}\in\Sigma_{s}}\|\boldsymbol{u}^{\ast}-\boldsymbol{v}\|_{1}. $$

Since u ∗ is s-sparse, that is, u ∗ ∈ Σ s , we have σ s (u ∗ )=0, and hence \(\hat {\boldsymbol {u}}=\boldsymbol {u}^{\ast }\).

3 Numerical optimization by ADMM

The optimization problem in (8) is convex and can be described as a standard linear program [19]. However, for real-time computation in control, such as the model predictive control discussed in Section 4, an algorithm much more efficient than the standard interior-point method is desired. For this purpose, we propose to adopt ADMM [20, 21, 29] for the ℓ1 optimization. Although ADMM can be slow to converge to the exact optimal value, it is shown in ([21], Section 3.2) that ADMM often converges to modest accuracy within a few tens of iterations. This property is especially favorable in model predictive control, since the computational error generated by the ADMM algorithm can often be reduced by the feedback control mechanism; see the simulation results in Section 5.

3.1 Alternating direction method of multipliers (ADMM)

Here, we briefly review the ADMM algorithm. ADMM is an algorithm to solve the following type of optimization:

$$ \underset{\boldsymbol{y} \in{\mathbb{R}}^{\mu}, \boldsymbol{z} \in{\mathbb{R}}^{\nu}}{\text{minimize }} ~f(\boldsymbol{y}) + g(\boldsymbol{z})~~\text{subject to}~~C\boldsymbol{y}+D\boldsymbol{z}=\boldsymbol{c} $$
(9)

where \(f:{\mathbb {R}}^{\mu }\mapsto {\mathbb {R}}\cup \{\infty \}\) and \(g:{\mathbb {R}}^{\nu }\mapsto {\mathbb {R}}\cup \{\infty \}\) are closed and proper convex functions, and \(C\in {\mathbb {R}}^{\kappa \times \mu }\), \(D\in {\mathbb {R}}^{\kappa \times \nu }\), \(\boldsymbol {c}\in {\mathbb {R}}^{\kappa }\). For this optimization problem, we define the augmented Lagrangian by

$$ \begin{aligned} L_{\rho}(\boldsymbol{y},\boldsymbol{z},\boldsymbol{w}) & \triangleq f(\boldsymbol{y}) + g(\boldsymbol{z}) + \boldsymbol{w}^{\top} (C\boldsymbol{y}+D\boldsymbol{z}-\boldsymbol{c})\\ &+\frac{\rho}{2}\|C\boldsymbol{y}+D\boldsymbol{z}-\boldsymbol{c}\|_{2}^{2}, \end{aligned} $$
(10)

where ρ>0 is called the “penalty parameter” (or the step size; see the third line of the ADMM algorithm below). Then the algorithm of ADMM is described as

$$ \begin{aligned} \boldsymbol{y}[j+1] &:=\underset{\boldsymbol{y}\in{\mathbb{R}^{\mu}}} {\text{arg}\,\text{min}}\, L_{\rho}(\boldsymbol{y},\boldsymbol{z}[j],\boldsymbol{w}[j]),\\ \boldsymbol{z}[j+1] &:= \underset{\boldsymbol{z}\in{\mathbb{R}^{\nu}}} {\text{arg}\,\text{min}}\, L_{\rho}(\boldsymbol{y}[j+1],\boldsymbol{z},\boldsymbol{w}[j]),\\ \boldsymbol{w}[j+1] &:= \boldsymbol{w}[j] + \rho\bigl(C\boldsymbol{y}[j+1]+D\boldsymbol{z}[j+1]-\boldsymbol{c}\bigr),\\ j&=0,1,2,\dots, \end{aligned} $$
(11)

where ρ>0, \(\boldsymbol {y}[\!0]\in {\mathbb {R}}^{\mu }\), \(\boldsymbol {z}[\!0]\in {\mathbb {R}}^{\nu }\), and \(\boldsymbol {w}[\!0]\in {\mathbb {R}}^{\kappa }\) are given before the iterations.

Assuming that the unaugmented Lagrangian L 0 (i.e., L ρ with ρ=0) has a saddle point, the ADMM algorithm is known to converge to a solution of the optimization problem (9) ([21], Section 3.2).

3.2 ADMM for ℓ1-optimal control

Here we derive the ADMM algorithm for the ℓ1-optimal control (8). The optimization (8) can be described in the standard form (9) as follows:

$$\underset{\boldsymbol{y}, \boldsymbol{z} \in{\mathbb{R}}^{N}}{\text{minimize}} ~~{\mathcal{I}}_{{\mathcal{U}}_{\boldsymbol{\xi}}}(\boldsymbol{y}) + \|\boldsymbol{z}\|_{1}~~\text{subject to}~~\boldsymbol{y}-\boldsymbol{z}=\boldsymbol{0}, $$

where \({\mathcal {I}}_{{\mathcal {U}}_{\boldsymbol {\xi }}}\) is the indicator operator for \({\mathcal {U}}_{\boldsymbol {\xi }}\), that is

$${\mathcal{I}}_{{\mathcal{U}}_{\boldsymbol{\xi}}}(\boldsymbol{y}) \triangleq \left\{\begin{array}{ll} 0,& \text{~if~} \boldsymbol{y}\in{\mathcal{U}}_{\boldsymbol{\xi}},\\ \infty, & \text{~otherwise.} \end{array}\right. $$

Then, the ADMM algorithm for the ℓ1-optimal control (8) is given by

$$ \begin{aligned} \boldsymbol{y}[j+1] &:= \Pi (\boldsymbol{z}[j]-\boldsymbol{w}[j]),\\ \boldsymbol{z}[j+1] &:= S_{1/\rho}(\boldsymbol{y}[j+1]+\boldsymbol{w}[j]),\\ \boldsymbol{w}[j+1] &:= \boldsymbol{w}[j] + \boldsymbol{y}[j+1]-\boldsymbol{z}[j+1],\quad j=0,1,2,\dots, \end{aligned} $$
(12)

where Π is the projection operator onto \({\mathcal {U}}_{\boldsymbol {\xi }}\), that is,

$$ \Pi(\boldsymbol{v}) \triangleq \bigl(I-\Phi^{\top}(\Phi\Phi^{\top})^{-1}\Phi\bigr)\boldsymbol{v}-\Phi^{\top} (\Phi\Phi^{\top})^{-1}A^{N}\boldsymbol{\xi}, $$
(13)

Φ is as in (2), and S 1/ρ is the element-wise soft-thresholding operator (see Fig. 1), defined for scalars a by

$$ S_{1/\rho}(a) \triangleq \left\{\begin{array}{ll} a-1/\rho, & \text{~if~} a>1/\rho,\\ 0, & \text{~if~} |a|\leq 1/\rho,\\ a+1/\rho, & \text{~if~} a<-1/\rho. \end{array}\right. $$
(14)
Fig. 1 Soft-thresholding operator S 1/ρ (a)

The operator S 1/ρ is also known as the proximity operator of the ℓ1-norm term in the augmented Lagrangian L ρ . Note that if the pair (A,b) is reachable and N>n, then the matrix Φ is full row rank (see the proof of Lemma 1), and hence the matrix ΦΦ⊤ is nonsingular. Note also that the matrix I−Φ⊤(ΦΦ⊤)−1 Φ and the vector Φ⊤(ΦΦ⊤)−1 A N ξ in (13) can be computed before the iterations in (12), and hence each iteration in (12) is computationally very cheap.
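A compact sketch of the iteration (12)-(14), with the projection matrices of (13) precomputed once (Python/NumPy; function and variable names are ours):

```python
import numpy as np

def admm_l1_control(A, b, xi, N, rho=2.0, n_iter=50):
    """ADMM iteration (12)-(14) for the l1-optimal control (8)."""
    n = A.shape[0]
    Phi = np.column_stack(
        [np.linalg.matrix_power(A, N - 1 - k) @ b for k in range(N)])
    G = Phi.T @ np.linalg.inv(Phi @ Phi.T)    # Phi^T (Phi Phi^T)^{-1}
    P = np.eye(N) - G @ Phi                   # projection part of (13)
    q = G @ (np.linalg.matrix_power(A, N) @ xi)

    def proj(v):                              # Pi in (13)
        return P @ v - q

    def soft(v, kappa):                       # S_kappa in (14), element-wise
        return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

    y = np.zeros(N); z = np.zeros(N); w = np.zeros(N)
    for _ in range(n_iter):                   # the iteration (12)
        y = proj(z - w)
        z = soft(y + w, 1.0 / rho)
        w = w + y - z
    return z                                  # sparse iterate
```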

3.3 Selection of penalty parameter ρ

To use the ADMM algorithm in (12), we must choose the penalty parameter (or step size) ρ appropriately. In general, a large penalty parameter makes the primal residual y[j]−z[j] (in general, C y[j]+D z[j]−c) small, since it places a large penalty on violations of primal feasibility; see (10). On the other hand, a smaller ρ tends to give a sparser output, by the definition of the soft-thresholding operator S 1/ρ ; see (14) or Fig. 1. In practice, ρ is selected by trial and error in simulation. One may try to extend the optimal parameter selection developed for quadratic problems [24, 30] to the ℓ1 optimization (8), for which no optimal selection method is currently available. Alternatively, one can adopt a varying penalty parameter ([21], Section 3.4), using a possibly different ρ[j] at each iteration; a sketch of such an update is given below. See also [31, 32].
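One common heuristic from ([21], Section 3.4.1) is residual balancing: keep the primal residual r[j]=y[j]−z[j] and the dual residual s[j]=ρ(z[j]−z[j−1]) within a constant factor of each other. A sketch of the update, intended to be called once per iteration of the loop in Section 3.2 (the constants μ=10 and τ=2 are the values suggested in [21]; z_prev denotes the previous z-iterate):

```python
import numpy as np

def update_rho(rho, y, z, z_prev, w, mu=10.0, tau=2.0):
    """Residual-balancing update of the ADMM penalty parameter.

    When rho changes, the scaled dual variable w must be
    rescaled so that the unscaled dual variable is preserved.
    """
    r = np.linalg.norm(y - z)                  # primal residual norm
    s = np.linalg.norm(rho * (z - z_prev))     # dual residual norm
    if r > mu * s:
        return tau * rho, w / tau              # enforce feasibility harder
    if s > mu * r:
        return rho / tau, w * tau              # relax the penalty
    return rho, w
```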

4 Model predictive control

Based on the finite-horizon ℓ1-optimal control (8), we here extend it to infinite-horizon control by adopting a model predictive control strategy.¹

4.1 Control law

The control law is described as follows. At time k (k=0,1,2,…), we observe the state \(\boldsymbol {x}[\!k]\in {\mathbb {R}}^{n}\) of the discrete-time plant (1). For this state, we compute the ℓ1-optimal control vector

$$ \hat{\boldsymbol{u}}[\!k] \triangleq \left[\begin{array}{c} \hat{u}_{0}[\!k]\\ \hat{u}_{1}[\!k]\\ \vdots\\ \hat{u}_{N-1}[\!k] \end{array}\right] \triangleq \underset{\boldsymbol{u}\in{\mathcal{U}}_{\boldsymbol{\xi}}}{\text{arg}\,\text{min}}\, \|\boldsymbol{u}\|_{1}, \quad \boldsymbol{\xi} = \boldsymbol{x}[\!k]. $$
(15)

Then, as usual in model predictive control [33, 34], we use the first element \(\hat {u}_{0}[\!k]\) for the control input u[ k], that is, we set

$$ u[\!k] = \hat{u}_{0}[\!k] = \left[1\quad 0 \quad\dots \quad 0 \right]\hat{\boldsymbol{u}}[\!k]. $$
(16)

This control law gives an infinite-horizon closed-loop control system characterized by

$$ \boldsymbol{x}[\!k+1] = A\boldsymbol{x}[\!k] + \boldsymbol{b}\hat{u}_{0}[\!k]. $$
(17)

Since the control vector \(\hat {\boldsymbol {u}}[k]\) is designed to be sparse by the ℓ1 optimization as discussed above, the first element \(\hat {u}_{0}[\!k]\) will often be exactly 0; see, e.g., the vector in (6). A numerical simulation in Section 5 illustrates that the control is indeed often sparse under this model predictive control formulation. A sketch of the receding-horizon loop is given below.
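The loop (15)-(17) is straightforward to implement; the following sketch reuses the admm_l1_control sketch from Section 3.2 (the simulation length T is an illustrative parameter of ours):

```python
import numpy as np

def handsoff_mpc(A, b, x0, N=30, rho=2.0, n_iter=2, T=100):
    """Receding-horizon loop (15)-(17): re-solve (8) at each step
    for the current state and apply only the first element."""
    x = x0.copy()
    states, inputs = [x.copy()], []
    for _ in range(T):
        u_hat = admm_l1_control(A, b, x, N, rho=rho, n_iter=n_iter)  # (15)
        u0 = u_hat[0]                                                # (16)
        x = A @ x + b * u0                                           # (17)
        states.append(x.copy())
        inputs.append(u0)
    return np.array(states), np.array(inputs)
```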

4.2 Stability

We here discuss the stability of the closed-loop system (17) with the model predictive control described above. In fact, we can show the stability of the closed-loop control system by using a standard argument in the stability analysis of model predictive control with a terminal constraint (e.g., ([33], Chapter 6), ([34], Chapter 2), or ([35], Chapter 5)).

The key idea of the stability analysis in model predictive control is to use the value function of the (finite-horizon) optimal control problem as a Lyapunov function. The value function of the ℓ1-optimal control (8) is defined by (see (15))

$$ V(\boldsymbol{\xi}) \triangleq \min_{\boldsymbol{u}\in{\mathcal{U}}_{\boldsymbol{\xi}}} \|\boldsymbol{u}\|_{1}. $$
(18)

The following lemma shows the convexity, continuity, and positive definiteness of the value function V(ξ). These properties are useful for showing that the value function is a Lyapunov function (see the proof of Theorem 2 below).

Lemma 3.

Assume that the pair (A,b) is reachable, A is nonsingular, and N>n. Then V(ξ) is a convex, continuous, and positive definite function on \({\mathbb {R}}^{n}\).

Proof.

First, we prove convexity. Fix initial states \(\boldsymbol {\xi },\boldsymbol {\eta }\in {\mathbb {R}}^{n}\) and a scalar λ∈(0,1). From Lemma 1, there exist ℓ1-optimal controls \(\hat {\boldsymbol {u}}_{\boldsymbol {\xi }}\) and \(\hat {\boldsymbol {u}}_{\boldsymbol {\eta }}\) for ξ and η, respectively. Then the control \(\boldsymbol {\nu }\triangleq \lambda \hat {\boldsymbol {u}}_{\boldsymbol {\xi }} + (1-\lambda)\hat {\boldsymbol {u}}_{\boldsymbol {\eta }}\) is feasible for the initial state \(\boldsymbol {\zeta }\triangleq \lambda \boldsymbol {\xi }+(1-\lambda)\boldsymbol {\eta }\), that is, \( \boldsymbol {\nu } \in {\mathcal {U}}_{\boldsymbol {\zeta }}. \) From the convexity of the ℓ1 norm, we have

$$\begin{aligned} V\bigl(\lambda\boldsymbol{\xi}+(1-\lambda)\boldsymbol{\eta}\bigr) \leq \|\boldsymbol{\nu}\|_{1} &=\bigl\|\lambda \hat{\boldsymbol{u}}_{\boldsymbol{\xi}} + (1-\lambda)\hat{\boldsymbol{u}}_{\boldsymbol{\eta}}\bigr\|_{1}\\ &\leq \lambda \|\hat{\boldsymbol{u}}_{\boldsymbol{\xi}}\|_{1} + (1-\lambda)\|\hat{\boldsymbol{u}}_{\boldsymbol{\eta}}\|_{1}\\ & = \lambda V(\boldsymbol{\xi}) + (1-\lambda) V(\boldsymbol{\eta}). \end{aligned} $$

Next, the continuity of V on \({\mathbb {R}}^{n}\) follows from the convexity and the fact that V(ξ)<∞ for any \(\boldsymbol {\xi }\in {\mathbb {R}}^{n}\), due to Lemma 1.

Finally, we prove the positive definiteness of V. Clearly V(ξ)≥0 for any \(\boldsymbol {\xi }\in {\mathbb {R}}^{n}\), and V(0)=0. Assume V(ξ)=0. Then there exists \(\boldsymbol {u}^{\ast }\in {\mathcal {U}}_{\boldsymbol {\xi }}\) such that ∥u ∗ ∥1=0. This implies u ∗ =0 and hence \(\boldsymbol {0}\in {\mathcal {U}}_{\boldsymbol {\xi }}\), that is, A N ξ=0. Since A is nonsingular, ξ must be 0.

By using the properties proved in Lemma 3, we can show the stability of the closed-loop control system.

Theorem 2.

Suppose that the pair (A,b) is reachable, A is nonsingular, and N>n. Then the closed-loop system with the model predictive control defined by (15) and (16) is stable in the sense of Lyapunov.

Proof.

We here show that the value function (18) is a Lyapunov function of the closed-loop control system. From Lemma 3, we have

  • V(0)=0.

  • V(ξ) is continuous in ξ.

  • V(ξ)>0 for any ξ0.

Then, we show V(x[ k+1])≤V(x[ k]) for the state trajectory x[ k], k=0,1,2,…, under the MPC (see (17)). By the assumptions, we have the ℓ1-optimal control vector \(\hat {\boldsymbol {u}}[\!k]\) as given in (15). From this, define

$$ \tilde{\boldsymbol{u}}[\!k] \triangleq \left[\hat{u}_{1}[\!k] \quad \ldots \quad \hat{u}_{N-1}[\!k] \quad 0 \right]^{\top}. $$

Since there are no uncertainties in the plant model (1), we see \(\tilde {\boldsymbol {u}}[\!k]\in {\mathcal {U}}_{\boldsymbol {x}[\!k+1]}\). Then, we have

$$\begin{aligned} V(\boldsymbol{x}[k+1]) & = \min_{\boldsymbol{u}\in{\mathcal{U}}_{\boldsymbol{x}[k+1]}} \|\boldsymbol{u}\|_{1} \leq \|\tilde{\boldsymbol{u}}[\!k]\|_{1}\\&\quad= -|\hat{u}_{0}[\!k]| + V(\boldsymbol{x}[\!k]) \leq V(\boldsymbol{x}[\!k]). \end{aligned} $$

It follows that V is a Lyapunov function of the closed-loop control system. Therefore, the stability is guaranteed by Lyapunov’s stability theorem.

We should note that if we always used the first element of the sparse feasible control given in (6), then the MPC would generate the all-zero sequence, which obviously cannot stabilize an unstable plant. This shows that not every feasible control guarantees closed-loop stability. It is also worth noting that the continuity of the value function leads to favorable robustness properties of the closed-loop system; see Section 5.

5 Simulation

We now present simulation results for the maximum hands-off MPC described in the previous section, in comparison with ℓ2-based quadratic MPC [33]. Consider the following continuous-time unstable plant:

$$ \dot{\boldsymbol{x}}_{\mathrm{c}}(t) = A_{\mathrm{c}}\boldsymbol{x}_{\mathrm{c}}(t) + \boldsymbol{b}_{\mathrm{c}}u_{\mathrm{c}}(t), $$

with

$$ A_{\mathrm{c}} = \left[\begin{array}{ccc} 3&-1.5&0.5\\ 2&0&0\\ 0&1&0 \end{array}\right],\quad \boldsymbol{b}_{\mathrm{c}} = \left[\begin{array}{c} 0.5\\ 0\\ 0 \end{array}\right]. $$

Note that this plant has the transfer function 1/(s−1)³. We discretize this plant model with sampling period h=0.1 via zero-order hold, using the MATLAB function c2d, to obtain a discrete-time model of the form (1). The resulting matrix and vector are

$$ A = \left[\begin{array}{ccc} 1.3317& -0.1713& 0.0580\\ 0.2321& 0.9836& 0.0055\\ 0.0111& 0.0995& 1.0002\\ \end{array}\right],\quad \boldsymbol{b} = \left[\begin{array}{c} 0.0580\\ 0.0055\\ 0.0002\\ \end{array}\right]. $$
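For readers without MATLAB, the same zero-order-hold discretization can be reproduced with SciPy; a minimal cross-check sketch:

```python
import numpy as np
from scipy.signal import cont2discrete

Ac = np.array([[3.0, -1.5, 0.5],
               [2.0,  0.0, 0.0],
               [0.0,  1.0, 0.0]])
bc = np.array([[0.5], [0.0], [0.0]])

# zero-order-hold discretization with sampling period h = 0.1
A, b, *_ = cont2discrete((Ac, bc, np.eye(3), np.zeros((3, 1))),
                         dt=0.1, method='zoh')
print(np.round(A, 4))   # should reproduce the matrix above (up to rounding)
print(np.round(b, 4))
```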

For the discrete-time plant model, we take the initial state x[ 0]=[ 1,1,1]⊤ and the horizon length N=30. For the ADMM algorithm in (12), we set the penalty parameter to ρ=2, chosen by trial and error. We also set the number of ADMM iterations to N iter=2, so that the computation in (12) is much faster than the interior-point method (see below for details).

For these parameters, we simulate the maximum hands-off MPC. For comparison, we also simulate the quadratic MPC with the following ℓ2 optimization:

$$ \underset{\boldsymbol{u}\in{{\mathcal{U}}_{\boldsymbol{\xi}}}}{\mathrm{minimize~}}~\|\boldsymbol{u}\|_{2}^{2}. $$

Figure 2 shows the control sequences u[ k] obtained by both MPC formulations.

Fig. 2 Maximum hands-off control (solid line) and ℓ2-optimal control (dashed line)

In this figure, the maximum hands-off control is clearly sparse (i.e., there are long time durations over which the control is exactly zero), while the ℓ2-optimal control is smoother but not sparse.

The ℓ2 norm of the resulting state x[ k] is shown in Fig. 3.

Fig. 3 The ℓ2 norm of the state, ∥x[k]∥2, under maximum hands-off control (solid line) and ℓ2-optimal control (dashed line)

From the figure, the maximum hands-off control achieves significantly faster convergence to zero than the ℓ2-optimal control.

Since we set the number of ADMM iterations N iter to 2, a difference remains between the exact solution \(\hat {\boldsymbol {u}}[\!k]\) of (8) with ξ=x[ k] and the approximate solution u ADMM[ k] obtained by ADMM. To elucidate this issue, we describe the control system with ADMM as

$$ \begin{aligned} \boldsymbol{x}[\!k+1] &= A\boldsymbol{x}[\!k] + \boldsymbol{b}\hat{u}[\!k] + \boldsymbol{w}[\!k], \\ \boldsymbol{w}[\!k] & \triangleq \boldsymbol{b}\bigl(u_{\text{ADMM}}[\!k]-\hat{u}[\!k]\bigr), \end{aligned} $$

where \(\hat {u}[\!k]\) and u ADMM[ k] are the first elements of \(\hat {\boldsymbol {u}}[\!k]\) and u ADMM[ k], respectively. That is, the ADMM-based control is equivalent to the exact ℓ1-optimal control with a perturbation w[ k] caused by the inexact ADMM iterations. Figure 4 illustrates the perturbation w[ k], where the exact solution \(\hat {u}[\!k]\) is obtained by directly solving (8) with CVX in MATLAB, based on the primal-dual interior-point method [19]. The CVX solution can be taken as exact, since the maximum relative primal-dual gap over the iterations is 1.49×10−8 in this case. Figure 4 shows that the perturbation also converges to zero thanks to the stabilizing feedback mechanism (recall that, as shown in Lemma 3, the value function is continuous, so the feedback loop can be expected to have favorable robustness properties).

Fig. 4 The ℓ2 norm of the perturbation w[k] by ADMM with N iter=2

Finally, we compare the number of iterations between ADMM and the interior-point-based CVX. The average number of CVX iterations is 10.7, approximately five times that of ADMM (N iter=2). Moreover, the interior-point algorithm must solve a system of linear equations at each iteration, whereas the inverse matrix in (13) can be computed offline for ADMM; hence, the computation times of the interior-point method may be much longer.

6 Conclusions

In this paper, we have introduced the discrete-time maximum hands-off control, which maximizes the length of the time duration over which the control is exactly zero. The design is formulated as an ℓ0 optimization, which we have proved to be equivalent to a convex ℓ1 optimization by using the restricted isometry property. The optimization can be efficiently solved by the alternating direction method of multipliers (ADMM). The extension to model predictive control has been examined, and nominal stability has been proved. Simulation results have illustrated the effectiveness of the proposed method.

6.1 Future work

We conclude with future directions related to the maximum hands-off control. The maximum hands-off control has been proposed in this paper for linear time-invariant systems. It is desirable to extend it to time-varying and nonlinear networked control, such as the Markovian jump systems discussed in [36–38], to which “intelligent methods” have been applied in [39, 40]. We believe the sparsity method can also be combined with fault detection and reliable control methods, as discussed in [41, 42]. Future work further includes an optimal selection method for the penalty parameter ρ in ADMM that takes control performance into account.

7 Endnote

¹It would be desirable to use an infinite-horizon control, such as the H∞ control in, e.g., [36]. However, for the maximum hands-off control discussed in this paper, no methods are available to directly obtain the infinite-horizon control, and model predictive control is a convenient way to extend a finite-horizon control to the infinite horizon.