
Analysis and control of max-plus linear discrete-event systems: An introduction

  • Bart De Schutter
  • Ton van den Boom
  • Jia Xu
  • Samira S. Farahani
Open Access Article

Abstract

The objective of this paper is to provide a concise introduction to the max-plus algebra and to max-plus linear discrete-event systems. We present the basic concepts of the max-plus algebra and explain how it can be used to model a specific class of discrete-event systems with synchronization but no concurrency. Such systems are called max-plus linear discrete-event systems because they can be described by a model that is “linear” in the max-plus algebra. We discuss some key properties of the max-plus algebra and indicate how these properties can be used to analyze the behavior of max-plus linear discrete-event systems. Next, some control approaches for max-plus linear discrete-event systems, including residuation-based control and model predictive control, are presented briefly. Finally, we discuss some extensions of the max-plus algebra and of max-plus linear systems.

Keywords

Max-plus linear systems · Max-plus algebra · Analysis of discrete-event systems · Model-based control of max-plus linear systems · Residuation-based control · Model predictive control · Survey

1 Introduction

In recent years both industry and the academic world have become more and more interested in techniques to model, analyze, and control complex discrete-event systems (DESs) such as flexible manufacturing systems, telecommunication networks, multiprocessor operating systems, railway networks, traffic control systems, logistic systems, intelligent transportation systems, computer networks, multi-level monitoring and control systems, and so on. Although in general DESs lead to a nonlinear description in conventional algebra, there exists a subclass of DESs for which this model becomes “linear” when it is formulated in the max-plus algebra (Baccelli et al. 1992; Cuninghame-Green 1979; Heidergott et al. 2006; Butkovič 2010), which has maximization and addition as its basic operations. More specifically, DESs in which only synchronization and no concurrency or choice occur can be modeled using the operations maximization (corresponding to synchronization: a new operation starts as soon as all preceding operations have been finished) and addition (corresponding to the duration of activities: the finishing time of an operation equals the starting time plus the duration). This leads to a description that is “linear” in the max-plus algebra. Therefore, DESs with synchronization but no concurrency are called max-plus linear DESs.

In the early sixties the fact that certain classes of DESs can be described by max-plus linear models was discovered independently by a number of researchers, among whom Cuninghame-Green (1961, 1962) and Giffler (1960, 1963, 1968). An account of the pioneering work of Cuninghame-Green on max-plus-algebraic system theory for DESs is given in Cuninghame-Green (1979). Related work on dioid theory and its applications has been undertaken by Gondran and Minoux (1976, 1984b, 1987). In the late eighties and early nineties the topic attracted renewed interest due to the research of Cohen et al. (1985, 1989), Olsder (1986), Olsder and Roos (1988), Olsder et al. (1990a), and Gaubert (1990, 1992, 1993), which resulted in the publication of Baccelli et al. (1992). Since then, several other researchers have entered the field.

The class of DESs that can be described by a max-plus linear time-invariant model is only a small subclass of the class of all DESs. However, for max-plus linear DESs there are many efficient analytic methods available to assess the characteristics and the performance of the system since one can use the properties of the max-plus algebra to analyze max-plus linear models in a very efficient way (as opposed to, e.g., computer simulation where, before being able to determine the steady-state behavior of a given DES, one may first have to simulate the transient behavior, which in some cases might require a large amount of computation time).

As will be illustrated later on in the paper, there exists a remarkable analogy between the basic operations of the max-plus algebra (maximization and addition) on the one hand, and the basic operations of conventional algebra (addition and multiplication) on the other hand. As a consequence, many concepts and properties of conventional algebra also have a max-plus analogue. This analogy also makes it possible to translate many concepts, properties, and techniques from conventional linear system theory to system theory for max-plus linear DESs. However, there are also some major differences that prevent a straightforward translation of properties, concepts, and algorithms from conventional linear algebra and linear system theory to max-plus algebra and max-plus linear system theory for DESs. Hence, there is a need for a dedicated theory and dedicated methods for max-plus linear DESs.

In this paper we give an introduction to the max-plus algebra and to max-plus linear systems. We highlight the most important properties and analysis methods of the max-plus algebra, discuss some important characteristics of max-plus linear DESs, and give a concise overview of performance analysis and control methods for max-plus linear DESs. More extensive overviews of the max-plus algebra and max-plus linear systems can be found in Baccelli et al. (1992), Cuninghame-Green (1979), Gaubert (1992), Heidergott et al. (2006), and Hardouin et al. (2018). The history of how the max-plus algebra has become an important tool in discrete-event systems since the late seventies is described in Komenda et al. (2018).

The main feature of the current survey compared to these previous works is its compactness and its focus on analysis and model-based control for max-plus linear systems, in particular residuation-based control and model predictive control. We also include an extensive qualitative comparison between residuation-based control and model predictive control for max-plus linear systems. In addition, we provide several worked examples for basic max-plus concepts, we include several references to recent literature, and we present some results not included in previous surveys (such as two-sided systems of max-plus linear equations, systems of max-plus-algebraic polynomial equations and inequalities, and model predictive control for max-plus linear systems).

2 Max-plus algebra

2.1 Basic operations of the max-plus algebra

The basic operations of the max-plus algebra (Baccelli et al. 1992; Cuninghame-Green 1979; Heidergott et al. 2006) are maximization and addition, which will be represented by ⊕ and ⊗ respectively:
$$ x \oplus y = \max(x,y) \quad \text{ and } \quad x \otimes y = x + y $$
for \(x,y \in \mathbb {R}_{\varepsilon } \stackrel {\text {def}}{=} \mathbb {R} \cup \{ -\infty \}\). The reason for using these symbols is that there is a remarkable analogy between ⊕ and conventional addition, and between ⊗ and conventional multiplication: many concepts and properties from linear algebra (such as the Cayley-Hamilton theorem, eigenvectors and eigenvalues, and Cramer’s rule) can be translated to the max-plus algebra by replacing + by ⊕ and × by ⊗ (see, e.g., Baccelli et al. (1992, Chapters 2, 3); Heidergott et al. (2006, Chapters 2, 5); Cuninghame-Green (1979); Gaubert (1992); and Olsder and Roos (1988)). Therefore, we also call ⊕ the max-plus-algebraic addition, and ⊗ the max-plus-algebraic multiplication. Note, however, that one of the major differences between conventional algebra and max-plus algebra is that in general there do not exist inverse elements with respect to ⊕ in \(\mathbb {R}_{\varepsilon }\). The zero element for ⊕ is \(\varepsilon \stackrel {\text {def}}{=} -\infty \): we have \(x \oplus \varepsilon = x = \varepsilon \oplus x\) and \(x \otimes \varepsilon = \varepsilon = \varepsilon \otimes x\) for all \(x \in \mathbb {R}_{\varepsilon }\). The structure \((\mathbb {R}_{\varepsilon },\oplus ,\otimes )\) is called the max-plus algebra.

In the sequel we denote the set of non-negative integers by \(\mathbb {N}=\{0,1,2,\ldots \}\). Let \(r \in \mathbb {R}\). The r-th max-plus-algebraic power of \(x \in \mathbb {R}\) is denoted by \({x}^{{\scriptscriptstyle \otimes }^{\scriptstyle {r}}}\) and corresponds to r ⋅ x in conventional algebra. If \(x \in \mathbb {R}\) then \({x}^{{\scriptscriptstyle \otimes }^{\scriptstyle {0}}} = 0\) and the inverse element of x w.r.t. ⊗ is \({x}^{{\scriptscriptstyle \otimes }^{\scriptstyle {-1}}} = -x\). There is no inverse element for ε w.r.t. ⊗ since ε is absorbing for ⊗. If r > 0 then \({\varepsilon }^{{\scriptscriptstyle \otimes }^{\scriptstyle {r}}} = \varepsilon \); if r < 0 then \({\varepsilon }^{{\scriptscriptstyle \otimes }^{\scriptstyle {r}}}\) is not defined. In this paper we set \({\varepsilon }^{{\scriptscriptstyle \otimes }^{\scriptstyle {0}}} = 0\) by definition.
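As a quick illustration, the scalar operations can be mimicked with ordinary floating-point arithmetic, using −∞ for ε. The following minimal Python sketch is ours and not part of the original exposition:

```python
import math

EPS = -math.inf  # the max-plus zero element "epsilon"

def oplus(x, y):
    """Max-plus addition: x ⊕ y = max(x, y)."""
    return max(x, y)

def otimes(x, y):
    """Max-plus multiplication: x ⊗ y = x + y."""
    return x + y

def mp_power(x, r):
    """r-th max-plus power: x^{⊗r} = r * x, with x^{⊗0} = 0 (also for eps)."""
    if r == 0:
        return 0.0  # by definition, eps^{⊗0} = 0 as well
    if x == EPS and r < 0:
        raise ValueError("eps^{⊗r} is undefined for r < 0")
    return r * x

# eps is neutral for ⊕ and absorbing for ⊗:
print(oplus(3, EPS), otimes(3, EPS))    # 3 -inf
print(mp_power(2, 3), mp_power(2, -1))  # 6 -2
```

Using `-math.inf` for ε is convenient because Python's `max` and `+` then realize ⊕ and ⊗ directly, including the absorbing behavior of ε under ⊗.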

The rules for the order of evaluation of the max-plus-algebraic operators correspond to those of conventional algebra. So max-plus-algebraic power has the highest priority, and max-plus-algebraic multiplication has a higher priority than max-plus-algebraic addition.

2.2 Max-plus-algebraic matrix operations

The basic max-plus-algebraic operations are extended to matrices as follows. If \(A, B \in \mathbb {R}_{\varepsilon }^{m \times n}\) and \(C \in \mathbb {R}_{\varepsilon }^{n \times p}\) then
$$ \begin{array}{@{}rcl@{}} (A \oplus B )_{i j} &=& a_{i j} \oplus b_{i j} = \max(a_{i j} , b_{i j} ) \end{array} $$
(1)
$$ \begin{array}{@{}rcl@{}} (A \otimes C)_{i j} &=& {\displaystyle\bigoplus_{k=1}^{n}} a_{i k} \otimes c_{k j} = \max_{k=1,\dots,n} (a_{i k} + c_{k j} ) \end{array} $$
(2)
for all i, j. Note the analogy between (1)–(2) and the definitions of matrix sum and product in conventional linear algebra.

Example 1

Consider \(A = \left [\begin {array}{lll} 2 & 3 & \varepsilon \\ 1 & \varepsilon & 0 \\ 2 & -1 & 3 \end {array}\right ]\) and \(B = \left [\begin {array}{lll} \varepsilon & 5 & -1 \\ 3 & \varepsilon & -2 \\ \varepsilon & -4 & 7 \end {array}\right ]\). Following (1)–(2), we have:

$$ \begin{array}{@{}rcl@{}} A \oplus B & =& \left[\begin{array}{lll} 2\oplus\varepsilon & 3\oplus5 & \varepsilon\oplus -1 \\ 1\oplus3 & \varepsilon\oplus\varepsilon & 0\oplus -2 \\ 2\oplus\varepsilon & -1\oplus-4 & 3\oplus7 \end{array}\right] = \left[\begin{array}{lll} 2 & 5 & -1 \\ 3 & \varepsilon & 0 \\ 2 & -1 & 7 \end{array}\right] \\ A \otimes B & =& \left[\begin{array}{llll} 2\otimes\varepsilon \oplus 3\otimes 3 \oplus \varepsilon\otimes\varepsilon & 2\otimes5 \oplus 3\otimes\varepsilon \oplus \varepsilon\otimes -4 & 2\otimes -1 \oplus 3\otimes -2 \oplus \varepsilon\otimes 7 & \\ 1\otimes\varepsilon \oplus \varepsilon\otimes 3 \oplus 0\otimes\varepsilon & 1\otimes5 \oplus \varepsilon\otimes\varepsilon \oplus 0\otimes -4 & 1\otimes -1 \oplus \varepsilon\otimes -2 \oplus 0\otimes 7 \\ 2\otimes\varepsilon \oplus -1\otimes 3 \oplus 3\otimes\varepsilon & 2\otimes5 \oplus -1\otimes\varepsilon \oplus 3\otimes -4 & 2\otimes -1 \oplus -1\otimes -2 \oplus 3\otimes 7 \end{array}\right] \\ & = &\left[\begin{array}{lll} \varepsilon \oplus 6 \oplus \varepsilon & 7 \oplus \varepsilon \oplus \varepsilon & 1 \oplus 1 \oplus \varepsilon \\ \varepsilon \oplus \varepsilon \oplus \varepsilon & 6 \oplus \varepsilon \oplus -4 & 0 \oplus \varepsilon \oplus 7 \\ \varepsilon \oplus 2 \oplus \varepsilon & 7 \oplus \varepsilon \oplus -1 & 1 \oplus -3 \oplus 10 \end{array}\right] = \left[\begin{array}{lll} 6 & 7 & 1 \\ \varepsilon & 6 & 7 \\ 2 & 7 & 10 \end{array}\right] \enspace . \end{array} $$

The matrix \(\varepsilon _{m \times n}\) is the m × n max-plus-algebraic zero matrix: \((\varepsilon _{m \times n})_{i j} = \varepsilon \) for all i, j; and the matrix \(E_{n}\) is the n × n max-plus-algebraic identity matrix: \((E_{n})_{i i} = 0\) for all i and \((E_{n})_{i j} = \varepsilon \) for all i, j with \(i \neq j\). If the size of the max-plus-algebraic identity matrix or the max-plus-algebraic zero matrix is not specified, it should be clear from the context. The max-plus-algebraic matrix power of \(A \in \mathbb {R}_{\varepsilon }^{n \times n}\) is defined as follows: \({A}^{{\scriptscriptstyle \otimes }^{\scriptstyle {0}}} = E_{n}\) and \({A}^{{\scriptscriptstyle \otimes }^{\scriptstyle {k}}} = A \otimes {A}^{{\scriptscriptstyle \otimes }^{\scriptstyle {k-1}}}\) for \(k\in \mathbb {N} \setminus \{0\}\).
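The matrix operations (1)–(2) and the max-plus matrix power are equally easy to emulate. The following illustrative Python sketch (the function names are ours) reproduces the matrices of Example 1:

```python
import math

EPS = -math.inf  # max-plus zero element

def mp_add(A, B):
    """Entrywise max-plus matrix sum: (A ⊕ B)_{ij} = max(a_{ij}, b_{ij})."""
    return [[max(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mp_mul(A, B):
    """Max-plus matrix product: (A ⊗ B)_{ij} = max_k (a_{ik} + b_{kj})."""
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mp_eye(n):
    """Max-plus identity E_n: zeros on the diagonal, eps elsewhere."""
    return [[0 if i == j else EPS for j in range(n)] for i in range(n)]

def mp_matpow(A, k):
    """k-th max-plus matrix power, with A^{⊗0} = E_n."""
    R = mp_eye(len(A))
    for _ in range(k):
        R = mp_mul(A, R)
    return R

# The matrices of Example 1:
A = [[2, 3, EPS], [1, EPS, 0], [2, -1, 3]]
B = [[EPS, 5, -1], [3, EPS, -2], [EPS, -4, 7]]
print(mp_add(A, B))  # [[2, 5, -1], [3, -inf, 0], [2, -1, 7]]
print(mp_mul(A, B))  # [[6, 7, 1], [-inf, 6, 7], [2, 7, 10]]
```

The printed matrices coincide with A ⊕ B and A ⊗ B as computed by hand in Example 1.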

2.3 Connection with conventional algebra via exponentials

Olsder and Roos (1988) have introduced a link between conventional algebra and the max-plus algebra, based on asymptotic equivalences, that establishes an analogy between the basic operations of the max-plus algebra (\(\max \limits \) and +) on the one hand, and the basic operations of conventional algebra (addition and multiplication) on the other hand. As a result, many concepts and properties of conventional algebra also have a max-plus analogue. In particular, Olsder and Roos (1988) used this link to show that every matrix has at least one max-plus-algebraic eigenvalue and to prove a max-plus-algebraic version of Cramer’s rule and of the Cayley-Hamilton theorem. In addition, this analogy makes it possible to translate many concepts, properties, and techniques from conventional linear system theory to system theory for max-plus linear DESs.

In De Schutter and De Moor (1997) the link introduced by Olsder and Roos (1988) has been extended and formalized. Now we recapitulate the reasoning of De Schutter and De Moor (1997) but in a slightly different form that is mathematically more rigorous.

First we extend the conventional definition of asymptotic equivalence so that we can also allow asymptotic equivalence to 0. Recall that f is asymptotically equivalent to g in the neighborhood of \(\infty \), denoted by \({f(s)} \sim {g(s)} , {s\rightarrow \infty }\), if \(\displaystyle \lim _{s \rightarrow \infty } \frac { f(s) }{g(s)} = 1\). This definition in principle requires that there is no real number K such that g is identically zero on \([K,\infty )\). However, we additionally say that a function f is asymptotically equivalent to 0 in the neighborhood of \(\infty \), written \({f(s)} \sim {0} , {s\rightarrow \infty }\), if there exists a real number L such that f(s) = 0 for all \(s \geqslant L\).

In this section we consider exponentials of the form eνs with s > 0. Since we want to allow exponents that are equal to ε, we set eεs equal to 0 for all positive real values of s by definition. For all \(x, y, z \in \mathbb {R}_{\varepsilon }\) we now have
$$ \begin{array}{@{}rcl@{}} x \oplus y = z &\Leftrightarrow& {e^{x s} + e^{y s}}\sim{(1+\delta_{x=y}) e^{z s}}, {s\rightarrow\infty} \end{array} $$
(3)
$$ \begin{array}{@{}rcl@{}} x \otimes y = z &\Leftrightarrow& e^{x s} \cdot e^{y s} = e^{z s} \qquad \text{for all } s > 0 \end{array} $$
(4)
where δx=y = 0 if xy and δx=y = 1 if x = y. The relations (3) and (4) show that there exists a connection between the operations ⊕ and ⊗ performed on elements of \(\mathbb {R}_{\varepsilon }\) and the operations + and × performed on exponentials.

2.4 Connection with graph theory

There exists a close relation between the max-plus algebra (and related structures) and graphs (see, e.g., Baccelli et al. (1992, Chapter 2); Gondran and Minoux (1976, 1984a)).

Definition 1 (Precedence graph)

Consider \(A \in \mathbb {R}_{\varepsilon }^{n \times n}\). The precedence graph of A, denoted by \(\mathcal {G}(A)\), is a weighted directed graph with vertices 1, 2, …, n and an arc (j, i) with weight aij for each aijε.

It is easy to verify that every weighted directed graph corresponds to the precedence graph of an appropriately defined matrix with entries in \(\mathbb {R}_{\varepsilon }\).

Now we can give a graph-theoretic interpretation of the max-plus-algebraic matrix power. Let \(A \in \mathbb {R}_{\varepsilon }^{n \times n}\). If \(k \in \mathbb {N} \setminus \{0\}\) then we have
$$ \begin{array}{@{}rcl@{}} ({A}^{{\scriptscriptstyle\otimes}^{\scriptstyle{k}}})_{i j} = {\displaystyle\max_{i_{1},i_{2},\ldots,i_{k-1}\in\{1,\dots,n\}}} \! (a_{i i_{1}} + a_{i_{1} i_{2}} + {\ldots} + a_{i_{k-1} j} ) \end{array} $$
for all i, j. Hence, \(({A}^{{\scriptscriptstyle \otimes }^{\scriptstyle {k}}})_{i j}\) is the maximal weight of all paths of \(\mathcal {G}(A)\) of length k that have j as their initial vertex and i as their final vertex, where we assume that if there does not exist a path of length k from j to i, then the maximal weight is equal to ε by definition.

Example 2

Consider matrix A defined in Example 1. The precedence graph \(\mathcal {G}(A)\) of A is given in Fig. 1. Let k = 2. By direct computation (cf. Example 1), we get
$$ {A}^{{\scriptscriptstyle\otimes}^{\scriptstyle{2}}}= A \otimes A =\left[\begin{array}{lll} 4 & 5 & 3 \\ 3 & 4 & 3 \\ 5 & 5 & 6 \end{array}\right] \enspace . $$
Now we can check that \(({A}^{{\scriptscriptstyle \otimes }^{\scriptstyle {2}}})_{i j}\) is the maximal weight of all paths of \(\mathcal {G}(A)\) of length 2 that have j as their initial vertex and i as their final vertex. These paths and their corresponding weights are shown in Table 1. As one can see, the maximum weights are equal to the entries of \({A}^{{\scriptscriptstyle \otimes }^{\scriptstyle {2}}}\).□
Fig. 1

Precedence graph of the matrix A of Examples 1 and 2. The vertices are indicated in an encircled bold italic font, and the weights are indicated next to the arcs in a regular font

Table 1

All possible paths of length 2 for the matrix A and the graph \(\mathcal {G}(A)\) of Example 2, and the corresponding weights. The maximum weights are indeed equal to the entries of \(\protect {A}^{{\scriptscriptstyle \otimes }^{\scriptstyle {2}}}\) (listed in the second column)

Entry            Value   Paths (weight)                                Maximum weight
(A^⊗2)_{11}      4       1 → 2 → 1 (4), 1 → 1 → 1 (4)                  4
(A^⊗2)_{12}      5       2 → 1 → 1 (5)                                 5
(A^⊗2)_{13}      3       3 → 2 → 1 (3)                                 3
(A^⊗2)_{21}      3       1 → 1 → 2 (3), 1 → 3 → 2 (2)                  3
(A^⊗2)_{22}      4       2 → 1 → 2 (4), 2 → 3 → 2 (−1)                 4
(A^⊗2)_{23}      3       3 → 3 → 2 (3)                                 3
(A^⊗2)_{31}      5       1 → 1 → 3 (4), 1 → 2 → 3 (0), 1 → 3 → 3 (5)   5
(A^⊗2)_{32}      5       2 → 3 → 3 (2), 2 → 1 → 3 (5)                  5
(A^⊗2)_{33}      6       3 → 2 → 3 (−1), 3 → 3 → 3 (6)                 6
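The graph-theoretic interpretation can also be checked mechanically. The following illustrative Python sketch (ours, with 0-based vertex indices) enumerates all paths of length k in \(\mathcal {G}(A)\) by brute force and compares the maximal path weights with the entries of the max-plus matrix power:

```python
import math
from itertools import product

EPS = -math.inf  # max-plus zero element

def mp_mul(A, B):
    """Max-plus matrix product: (A ⊗ B)_{ij} = max_k (a_{ik} + b_{kj})."""
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def max_path_weight(A, k, i, j):
    """Maximal weight over all paths of length k in G(A) from vertex j to
    vertex i: max over i1,...,i_{k-1} of a_{i,i1} + a_{i1,i2} + ... + a_{i_{k-1},j}."""
    n = len(A)
    best = EPS
    for mids in product(range(n), repeat=k - 1):
        seq = (i,) + mids + (j,)
        best = max(best, sum(A[seq[t]][seq[t + 1]] for t in range(k)))
    return best

A = [[2, 3, EPS], [1, EPS, 0], [2, -1, 3]]
A2 = mp_mul(A, A)
# every entry of A^{⊗2} equals the maximal weight of a length-2 path:
assert all(A2[i][j] == max_path_weight(A, 2, i, j)
           for i in range(3) for j in range(3))
print(A2)  # [[4, 5, 3], [3, 4, 3], [5, 5, 6]]
```

The brute-force enumeration matches Table 1 entry by entry; for larger k or n one would of course use the matrix power directly rather than enumerate paths.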

A directed graph \(\mathcal {G}\) is called strongly connected if for any two different vertices i, j of the graph, there exists a path from i to j.

Definition 2 (Irreducible matrix)

A matrix \(A \in \mathbb {R}_{\varepsilon }^{n \times n}\) is called irreducible if its precedence graph \(\mathcal {G}(A)\) is strongly connected.

If we reformulate this in the max-plus algebra then a matrix \(A \in \mathbb {R}_{\varepsilon }^{n \times n}\) is irreducible if
$$ \begin{array}{@{}rcl@{}} (A \oplus {A}^{{\scriptscriptstyle\otimes}^{\scriptstyle{2}}} \oplus {\ldots} \oplus {A}^{{\scriptscriptstyle\otimes}^{\scriptstyle{n-1}}})_{i j} \neq \varepsilon \quad \text{for all } i,j \text{ with } i \neq j , \end{array} $$
since this condition means that for two arbitrary vertices i and j of \(\mathcal {G}(A)\) with ij there exists at least one path (of length 1,2,… or n − 1) from j to i.

Example 3

Let A be defined as in Example 1. The precedence graph \(\mathcal {G}(A)\) of A is given in Fig. 1. Clearly, \(\mathcal {G}(A)\) is strongly connected as there exists a path from any node in \(\mathcal {G}(A)\) to any other node, and hence A is irreducible.□
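Irreducibility can be tested directly from the max-plus condition above; the short Python sketch below is illustrative (the function names are ours):

```python
import math

EPS = -math.inf  # max-plus zero element

def mp_mul(A, B):
    """Max-plus matrix product: (A ⊗ B)_{ij} = max_k (a_{ik} + b_{kj})."""
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def is_irreducible(A):
    """Check that (A ⊕ A^{⊗2} ⊕ ... ⊕ A^{⊗(n-1)})_{ij} != eps for all
    i != j, i.e. that the precedence graph G(A) is strongly connected."""
    n = len(A)
    S, P = A, A
    for _ in range(n - 2):
        P = mp_mul(P, A)
        S = [[max(s, p) for s, p in zip(rs, rp)] for rs, rp in zip(S, P)]
    return all(S[i][j] != EPS for i in range(n) for j in range(n) if i != j)

A = [[2, 3, EPS], [1, EPS, 0], [2, -1, 3]]
print(is_irreducible(A))                   # True (cf. Example 3)
print(is_irreducible([[0, EPS], [1, 0]]))  # False: no arc, hence no path, from 2 to 1
```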

3 Some basic problems in the max-plus algebra

In this section we present some basic max-plus-algebraic problems and some methods to solve them.

3.1 Max-plus-algebraic eigenvalue problem

Definition 3 (Max-plus-algebraic eigenvalue)

Let \(A \in \mathbb {R}_{\varepsilon }^{n \times n}\). If there exist \(\lambda \in \mathbb {R}_{\varepsilon }\) and \(v \in \mathbb {R}_{\varepsilon }^{n}\) with \(v \neq \varepsilon _{n \times 1}\) such that \(A \otimes v = \lambda \otimes v\), then we say that λ is a max-plus-algebraic eigenvalue of A and that v is a corresponding max-plus-algebraic eigenvector of A.

It can be shown that every matrix \(A \in \mathbb {R}_{\varepsilon }^{n \times n}\) has at least one max-plus-algebraic eigenvalue (Baccelli et al. 1992, Section 3.2.4). However, in contrast to linear algebra, the total number of max-plus-algebraic eigenvalues of an n by n matrix (taking multiplicities into account) is in general less than n. Moreover, if a matrix is irreducible, it has exactly one max-plus-algebraic eigenvalue (see, e.g., Cohen et al. 1985).

The max-plus-algebraic eigenvalue has the following graph-theoretic interpretation. If \(\lambda _{\max \limits }\) is the maximal average weight over all elementary circuits of \(\mathcal {G}(A)\), then \(\lambda _{\max \limits }\) is a max-plus-algebraic eigenvalue of A. Here, an elementary circuit is a circuit in which no vertex appears more than once, except for the initial vertex, which appears exactly twice.

There exist several efficient algorithms to determine max-plus-algebraic eigenvalues, such as Karp's algorithm (Karp 1978; Cohen et al. 1985) or the power algorithm of Cochet-Terrasson et al. (1998).

To determine the max-plus-algebraic eigenvectors corresponding to a given max-plus-algebraic eigenvalue, the following procedure can be applied (Karp 1978; Cohen et al. 1985).

First we introduce the Kleene star operator of the matrix A:
$$ \begin{array}{@{}rcl@{}} A^{\star} = E_{n} \oplus A \oplus {A}^{{\scriptscriptstyle\otimes}^{\scriptstyle{2}}} \oplus \dots \end{array} $$
(5)
The entries of \(A^{\star }\) have the following meaning: \((A^{\star })_{i j}\) is the maximal weight over all paths of arbitrary length in \(\mathcal {G}(A)\) from node j to node i. We also define
$$ A^{+} = A \otimes A^{\star} = A \oplus {A}^{{\scriptscriptstyle\otimes}^{\scriptstyle{2}}} \oplus {A}^{{\scriptscriptstyle\otimes}^{\scriptstyle{3}}} \oplus \dots $$
Let λ be a non-ε max-plus-algebraic eigenvalue of \(A \in \mathbb {R}_{\varepsilon }^{n \times n}\) and consider the matrix Aλ defined by \((A_{\lambda })_{i j} = a_{i j} - \lambda \). Since all circuits in \(\mathcal {G}(A_{\lambda })\) have a non-positive weight, the matrix \(A_{\lambda }^{+}\) will have entries in \(\mathbb {R}_{\varepsilon }\). Now if \((A_{\lambda }^{+})_{ii} = 0\) for some i, then \((A_{\lambda }^{+})_{\cdot i}\), the i th column of \(A_{\lambda }^{+}\), is a max-plus-algebraic eigenvector of A for the eigenvalue λ. This can be verified as follows. In general we have \(A^{\star } = E_{n} \oplus A^{+}\). Since \((A_{\lambda }^{+})_{ii} = 0\), this implies that \((A_{\lambda }^{\star })_{\cdot i} = (A_{\lambda }^{+})_{\cdot i}\), and hence \(A_{\lambda } \otimes (A_{\lambda }^{+})_{\cdot i} = (A_{\lambda }^{+})_{\cdot i}\). Since \(A_{\lambda } = A - \lambda \), or equivalently \(A = \lambda \otimes A_{\lambda }\), it follows that \(A \otimes (A_{\lambda }^{+})_{\cdot i} = \lambda \otimes (A_{\lambda }^{+})_{\cdot i}\). Hence, the i th column of \(A_{\lambda }^{+}\) is indeed a max-plus-algebraic eigenvector of A.
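The procedure above can be sketched in a few lines of Python. In this illustrative sketch (ours) λ is computed as the maximal average circuit weight via \(\max _{1 \leqslant k \leqslant n} \max _{i} ({A}^{{\scriptscriptstyle \otimes }^{\scriptstyle {k}}})_{ii}/k\) rather than with Karp's or the power algorithm:

```python
import math

EPS = -math.inf  # max-plus zero element

def mp_mul(A, B):
    """Max-plus matrix product: (A ⊗ B)_{ij} = max_k (a_{ik} + b_{kj})."""
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mp_eigen(A):
    """For an irreducible A: eigenvalue lambda (the maximal average circuit
    weight) and one eigenvector, taken as a zero-diagonal column of A_lambda^+."""
    n = len(A)
    # lambda = max over k = 1..n and i of (A^{⊗k})_{ii} / k
    lam, P = EPS, A
    for k in range(1, n + 1):
        if k > 1:
            P = mp_mul(P, A)
        lam = max(lam, max(P[i][i] for i in range(n)) / k)
    # A_lambda^+ = A_lam ⊕ A_lam^{⊗2} ⊕ ... ⊕ A_lam^{⊗n}
    Alam = [[a - lam for a in row] for row in A]
    S, P = Alam, Alam
    for _ in range(n - 1):
        P = mp_mul(P, Alam)
        S = [[max(s, p) for s, p in zip(rs, rp)] for rs, rp in zip(S, P)]
    i0 = next(i for i in range(n) if S[i][i] == 0)  # column with zero diagonal
    return lam, [row[i0] for row in S]

A = [[2, 3, EPS], [1, EPS, 0], [2, -1, 3]]
lam, v = mp_eigen(A)
print(lam, v)  # 3.0 [-3.0, -3.0, 0.0]
```

The output matches Example 4 below: λ = 3 and the eigenvector is the third column of \(A_{\lambda }^{+}\).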

Example 4

Consider the (irreducible) matrix A of Examples 1 and 3. The elementary circuits of \(\mathcal {G}(A)\) are listed in Table 2. The maximum average weight is 3. Hence, λ = 3 is a max-plus-algebraic eigenvalue of A. We have \(A_{\lambda } = \left [\begin {array}{lll} -1 & 0 & \varepsilon \\ -2 & \varepsilon & -3 \\ -1 & -4 & 0 \end {array}\right ]\) and \(A_{\lambda }^{+} = \left [\begin {array}{lll} -1 & 0 & -3 \\ -2 & -2 & -3 \\ -1 & -1 & 0 \end {array}\right ]\). Since \((A_{\lambda }^{+})_{33}=0\), the third column of \(A_{\lambda }^{+}\) is a max-plus-algebraic eigenvector of A. Indeed, with \(v=\left [\begin {array}{lll} -3 & -3 & 0 \end {array}\right ]^{\mathrm {T}}\), we find \(A \otimes v = 3 \otimes v = \left [\begin {array}{lll} 0 & 0 & 3 \end {array}\right ]^{T}\).□
Table 2

The elementary circuits of the precedence graph of the matrix A of Examples 1, 3, and 4

Circuit       Length   Weight   Average weight
1 → 1         1        2        2
3 → 3         1        3        3
1 → 2 → 1     2        4        2
2 → 3 → 2     2        −1       −0.5

We also have the following property (see, e.g., Baccelli et al. (1992, Chapter 3), Cohen et al. (1985), and Gaubert (1994)):

Theorem 1

If \(A \in \mathbb {R}_{\varepsilon }^{n \times n}\) is irreducible, then
$$ \begin{array}{@{}rcl@{}} \exists k_{0} \in \mathbb{N}, \exists c \in \mathbb{N} \setminus \{0\} \text{ such that } \forall k \geqslant k_{0} : {A}^{{\scriptscriptstyle\otimes}^{\scriptstyle{k+c}}} = {\lambda}^{{\scriptscriptstyle\otimes}^{\scriptstyle{c}}} \otimes {A}^{{\scriptscriptstyle\otimes}^{\scriptstyle{k}}} \end{array} $$
where λis the (unique) max-plus-algebraic eigenvalue of A.

In the case where A is not irreducible the behavior of \({A}^{{\scriptscriptstyle \otimes }^{\scriptstyle {k}}}\) is more complex (see, e.g., Baccelli et al. (1992, Chapter 3); Heidergott et al. (2006, Chapters 3, 4); De Schutter (2000)).

Example 5

For the matrix A of Example 1 we have
$$ \begin{array}{@{}rcl@{}} A &=& \left[\begin{array}{rrr} 2 & 3 & \varepsilon \\ 1 & \varepsilon & 0 \\ 2 & -1 & 3 \end{array}\right],\qquad {A}^{{\scriptscriptstyle\otimes}^{\scriptstyle{2}}} = \left[\begin{array}{rrr} 4 & 5 & 3 \\ 3 & 4 & 3 \\ 5 & 5 & 6 \end{array}\right], \\ {A}^{{\scriptscriptstyle\otimes}^{\scriptstyle{3}}} &=& \left[\begin{array}{rrr} 6 & 7 & 6 \\ 5 & 6 & 6 \\ 8 & 8 & 9 \end{array}\right],\qquad\quad {A}^{{\scriptscriptstyle\otimes}^{\scriptstyle{4}}} = \left[\begin{array}{rrr} 8 & 9 & 9 \\ 8 & 8 & 9 \\ 11 & 11 & 12 \end{array}\right], \\ {A}^{{\scriptscriptstyle\otimes}^{\scriptstyle{5}}} &=& \left[\begin{array}{rrr} 11 & 11 & 12 \\ 11 & 11 & 12 \\ 14 & 14 & 15 \end{array}\right],\qquad {A}^{{\scriptscriptstyle\otimes}^{\scriptstyle{6}}} = \left[\begin{array}{rrr} 14 & 14 & 15 \\ 14 & 14 & 15 \\ 17 & 17 & 18 \end{array}\right], \\ {A}^{{\scriptscriptstyle\otimes}^{\scriptstyle{7}}} &=& \left[\begin{array}{rrr} 17 & 17 & 18 \\ 17 & 17 & 18 \\ 20 & 20 & 21 \end{array}\right], \qquad {A}^{{\scriptscriptstyle\otimes}^{\scriptstyle{8}}} = \left[\begin{array}{rrr} 20 & 20 & 21 \\ 20 & 20 & 21 \\ 23 & 23 & 24 \end{array}\right], \ldots \end{array} $$
It can be verified that for \(k \geqslant 5\) we have \(\big ({A}^{{\scriptscriptstyle \otimes }^{\scriptstyle {k+1}}}\big )_{i j} = \big ({A}^{{\scriptscriptstyle \otimes }^{\scriptstyle {k}}}\big )_{i j} + 3 = 3 \otimes \big ({A}^{{\scriptscriptstyle \otimes }^{\scriptstyle {k}}}\big )_{i j}\) for all i, j ∈{1,2,3}. So \({A}^{{\scriptscriptstyle \otimes }^{\scriptstyle {k+1}}} = 3 \otimes {A}^{{\scriptscriptstyle \otimes }^{\scriptstyle {k}}}\) with c = 1 and k0 = 5, for k = 5,6,…□
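This eventually periodic behavior is easy to confirm numerically; in the illustrative sketch below (ours) we take c = 1 and k0 = 5, the values found for this particular A in Example 5:

```python
import math

EPS = -math.inf  # max-plus zero element

def mp_mul(A, B):
    """Max-plus matrix product: (A ⊗ B)_{ij} = max_k (a_{ik} + b_{kj})."""
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, 3, EPS], [1, EPS, 0], [2, -1, 3]]
lam, k0, c = 3, 5, 1  # eigenvalue, and the (empirically smallest) k0 and c

powers = {1: A}
for k in range(2, 13):
    powers[k] = mp_mul(powers[k - 1], A)

# A^{⊗(k+c)} = lam^{⊗c} ⊗ A^{⊗k}, i.e. entrywise addition of c*lam, for k >= k0:
for k in range(k0, 12):
    assert powers[k + c] == [[x + c * lam for x in row] for row in powers[k]]
# ... but not yet for k = 4:
assert powers[5] != [[x + lam for x in row] for row in powers[4]]
print("periodicity holds from k0 = 5 on")
```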

For given matrices \(A, B \in \mathbb {R}_{\varepsilon }^{n \times n}\) the generalized or two-sided max-plus-algebraic eigenproblem (Cuninghame-Green and Butkovič 2008; Gaubert and Sergeev 2013; Butkovič and Jones 2016) consists in finding \(\lambda \in \mathbb {R}_{\varepsilon }\) and a vector \(v \in \mathbb {R}_{\varepsilon }^{n}\) with non-ε entries such that \(A \otimes v = \lambda \otimes B \otimes v\).

Another generalized eigenvalue problem is considered by Cochet-Terrasson et al. (1998), who define the generalized max-plus-algebraic eigenproblem for \(A \in \mathbb {R}_{\varepsilon }^{n \times n}\) as finding λ and v such that \(\bigoplus _{t \in \mathbb {N}} A_{t} \otimes {\lambda }^{{\scriptscriptstyle \otimes }^{\scriptstyle {-t}}} \otimes v = v\).

Heidergott et al. (2006, Chapter 3) use the concept of the generalized eigenmode of a regular matrix A, defined as a pair of vectors (η, v) with \(\eta ,v \in \mathbb {R}^{n}\) such that A ⊗ (k ⋅ η + v) = (k + 1) ⋅ η + v for all \(k \in \mathbb {N}\). The vector η coincides with the cycle time vector and can be seen as an extended eigenvalue, while v plays the role of the eigenvector. In Subiono and van der Woude (2017) a generalized power algorithm has been presented that computes the generalized eigenmode.

3.2 Systems of max-plus linear equations

In this section we consider three types of systems of max-plus linear equations, namely \(A \otimes x = b\), \(x = A \otimes x \oplus b\), and \(A \otimes x \oplus b = C \otimes x \oplus d\).

3.2.1 \(A \otimes x = b\)

Let \(A \in \mathbb {R}_{\varepsilon }^{n \times n}\) and \(b \in \mathbb {R}_{\varepsilon }^{n}\). In general, the system of max-plus linear equations \(A \otimes x = b\) will not always have a solution, even if A is square or if it has more columns than rows. Therefore, the concept of subsolution has been introduced (see Cuninghame-Green (1979, Chapter 14) and Baccelli et al. (1992, Section 3.2.3)).

Definition 4 (Subsolution)

Let \(A \in \mathbb {R}_{\varepsilon }^{n \times n}\) and \(b \in \mathbb {R}_{\varepsilon }^{n}\). We say that \(x \in \mathbb {R}_{\varepsilon }^{n}\) is a subsolution of the system of max-plus linear equations \(A \otimes x = b\) if \(A \otimes x \leqslant b\).

Although the system \(A \otimes x = b\) does not always have a solution, it is always possible to determine the largest subsolution if we allow components that are equal to \(\infty \) in the solution and if we assume that \(\varepsilon \otimes \infty = \infty \otimes \varepsilon = \varepsilon \) by definition. The largest subsolution \(\hat {x}\) of \(A \otimes x = b\) is then given by
$$ \begin{array}{@{}rcl@{}} \hat{x}_{j} = {\displaystyle\min_{i}} (b_{i} - a_{i j} ) \qquad \text{for } j=1,2,\ldots,n , \end{array} $$
(6)
or equivalently,
$$ \hat{x} = -(A^{\mathrm{T}}\otimes (-b)) $$

Example 6

Consider the matrix A of Example 1 and let \(b = \left [\begin {array}{lll} 1 & 2 & 3 \end {array}\right ]^{\mathrm {T}}\). The system of equations \(A \otimes x = b\) does not have a solution. However, the largest subsolution is given by \(\hat {x}= \left [\begin {array}{lll} -1 & -2 & 0 \end {array}\right ]^{\mathrm {T}}\), and we have \(A \otimes \hat {x} = \left [\begin {array}{lll} 1 & 0 & 3 \end {array}\right ]^{\mathrm {T}}\protect \rule {0mm}{2.6ex}\leqslant b\).□

Note that for the largest subsolution \(\hat {x}\) we have \(A \otimes \hat {x} \leqslant b\). In some cases, one may want to minimize the difference between Ax and b, i.e., to find x such that \(\smash [b]{{\displaystyle \max \limits _{i}} } | b_{i} - (A \otimes x)_{i} |\) is minimized. A solution \(\tilde {x}\) of this problem is given by
$$ \begin{array}{@{}rcl@{}} \tilde{x} = \hat{x} \otimes \frac{\delta}{2} \qquad \text{with } \delta = {\displaystyle\max_{i}} (b_{i} - (A \otimes \hat{x} )_{i} ) \enspace . \end{array} $$
(7)
We then have \({\displaystyle \max \limits _{i}} | b_{i} - (A \otimes \tilde {x})_{i} | = \frac {\delta }{2}\).
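Both formulas are easy to try out numerically. The illustrative Python sketch below (ours; it handles the convention ε ⊗ ∞ = ε explicitly) reproduces the numbers of Example 6 and the sup-norm solution (7):

```python
import math

EPS, INF = -math.inf, math.inf

def mp_mul_vec(A, x):
    """(A ⊗ x)_i = max_j (a_{ij} + x_j), with eps ⊗ inf = eps by convention."""
    def ot(a, b):
        return EPS if EPS in (a, b) else a + b  # eps absorbs, even against inf
    return [max(ot(a, xj) for a, xj in zip(row, x)) for row in A]

def largest_subsolution(A, b):
    """x_hat = -(A^T ⊗ (-b)), i.e. x_hat_j = min_i (b_i - a_{ij})."""
    m, n = len(A), len(A[0])
    return [min((b[i] - A[i][j]) if A[i][j] != EPS else INF for i in range(m))
            for j in range(n)]

A = [[2, 3, EPS], [1, EPS, 0], [2, -1, 3]]
b = [1, 2, 3]
xh = largest_subsolution(A, b)  # largest x with A ⊗ x <= b
print(xh, mp_mul_vec(A, xh))    # [-1, -2, 0] [1, 0, 3]

# solution minimizing max_i |b_i - (A ⊗ x)_i|, cf. (7):
delta = max(bi - yi for bi, yi in zip(b, mp_mul_vec(A, xh)))
xt = [x + delta / 2 for x in xh]
print(max(abs(bi - yi) for bi, yi in zip(b, mp_mul_vec(A, xt))))  # 1.0 (= delta/2)
```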

3.2.2 \(x = A \otimes x \oplus b\)

Let \(A \in \mathbb {R}_{\varepsilon }^{n \times n}\) and \(b \in \mathbb {R}_{\varepsilon }^{n}\). Since the operation ⊕ is not invertible, an equation of the form \(x = A \otimes x \oplus b\) cannot in general be recast into the form \(\tilde {A} \otimes x = b\) for some matrix \(\tilde {A}\).

If the entries of \(A^{\star }\) (see (5)) all belong to \(\mathbb {R}_{\varepsilon }\), then the least solution of \(x = A \otimes x \oplus b\) is given by (Baccelli et al. 1992, Section 3.2.3.1):
$$ x = A^{\star} \otimes b \enspace . $$
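For a quick numerical check, the Kleene star can be truncated at the (n−1)-st power when \(\mathcal {G}(A)\) has no circuit of positive weight. The 2 × 2 data below is our own hypothetical example, not taken from the paper:

```python
import math

EPS = -math.inf  # max-plus zero element

def mp_mul(A, B):
    """Max-plus matrix product: (A ⊗ B)_{ij} = max_k (a_{ik} + b_{kj})."""
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kleene_star(A):
    """A^* = E_n ⊕ A ⊕ ... ⊕ A^{⊗(n-1)}; the entries stay in R_eps
    when every circuit of G(A) has non-positive weight."""
    n = len(A)
    S = [[0 if i == j else EPS for j in range(n)] for i in range(n)]  # E_n
    P = S
    for _ in range(n - 1):
        P = mp_mul(P, A)
        S = [[max(s, p) for s, p in zip(rs, rp)] for rs, rp in zip(S, P)]
    return S

# hypothetical data: both circuits (the two self-loops) have negative weight
A = [[-1, EPS], [0, -2]]
b = [0, 1]
x = [max(r + bj for r, bj in zip(row, b)) for row in kleene_star(A)]  # A^* ⊗ b
print(x)  # [0, 1]
# x is indeed a solution of x = A ⊗ x ⊕ b:
lhs = [max(max(a + xj for a, xj in zip(row, x)), bi) for row, bi in zip(A, b)]
assert lhs == x
```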

3.2.3 \(A \otimes x \oplus b = C \otimes x \oplus d\)

A system of two-sided max-plus linear equations can be formulated as follows (Walkup and Borriello 1998):
$$ \begin{array}{@{}rcl@{}} A \otimes x \oplus b = C \otimes x \oplus d, \end{array} $$
(8)
with \(A, C \in \mathbb {R}_{\varepsilon }^{m \times n}\), \(b,d \in \mathbb {R}_{\varepsilon }^{m \times 1}\), and \(x \in \mathbb {R}_{\varepsilon }^{n \times 1}\). Note that the i th equation in Eq. (8) can be expanded as
$$ \begin{array}{@{}rcl@{}} \Bigg[\bigoplus\limits_{j=1}^{n} a_{i j}\otimes x_{j}\Bigg]\oplus b_{i} = \Bigg[\bigoplus\limits_{j=1}^{n} c_{i j} \otimes x_{j}\Bigg]\oplus d_{i}. \end{array} $$
The maximum solution to an arbitrary system of max-plus linear equations can be obtained using the following three steps:
  • translate each linear max-plus equation into a small set of upper bound constraints, each of which bounds the values of a single variable from above (see Walkup and Borriello 1998, Section 2.1).

  • employ the max-plus closure operation to find the maximum solution to a special subset of the upper bound constraints (see Walkup and Borriello 1998, Section 2.2).

  • use that subset’s maximum solution to guide the choice of a new constraint subset which will have a smaller maximum solution (see Walkup and Borriello 1998, Section 2.3).

The last two steps are repeated until either the process converges upon a solution that meets all the upper bound constraints, or it is found that the system is infeasible since some variable has a maximum solution of ε.

The specific case A ⊗ x = C ⊗ x has been considered in Cuninghame-Green and Butkovic (2003).

3.3 Systems of max-plus-algebraic multivariate polynomial equations and inequalities

A system of multivariate polynomial equations and inequalities in the max-plus algebra is defined as follows:
  • Given a set of integers \(\{m_{k}\}_{k\in \mathcal {K}}\), sets of coefficients \(\{a_{k i}\}_{k\in \mathcal {K}, i\in \mathcal {I}}\) and \(\{b_{k}\}_{k\in \mathcal {K}}\), and a set of exponents \(\{c_{k i j}\}_{k\in \mathcal {K}, i\in \mathcal {I}, j\in \mathcal {J}}\), where \(\mathcal {I} = \{{1},\dots ,{m_{k}}\}\), \(\mathcal {J} = \{{1},\dots ,{n}\}\), and \(\mathcal {K} = \{1,\ldots ,p_{\text {eq}},p_{\text {eq}}+1,\ldots ,p_{\text {eq}}+p_{\text {ineq}}\}\), find \(x \in \mathbb {R}^{n}\) such that
    $$ \begin{array}{@{}rcl@{}} && \bigoplus\limits_{i=1}^{m_{k}} a_{k i} \otimes \bigotimes\limits_{j=1}^{n} {x}_{j}^{{\otimes}^{{c}_{k i j}}} = b_{k} \quad \text{for } k=1,2,\ldots,p_{\text{eq}}, \\ && \bigoplus\limits_{i=1}^{m_{k}} a_{k i} \otimes \bigotimes\limits_{j=1}^{n} {x}_{j}^{{\otimes}^{{c}_{k i j}}} \leqslant b_{k} \quad \text{for } k=p_{\text{eq}}+1,\ldots,p_{\text{eq}}+p_{\text{ineq}}. \end{array} $$
Note that the exponents ckij can be negative or real. In conventional algebra the above equations can be written as
$$ \begin{array}{@{}rcl@{}} \max_{i=1,\dots,m_{k}} \big(a_{k i} + {\displaystyle\sum\limits_{j=1}^{n}} c_{k i j} x_{j} \big) = b_{k} & &\quad \text{for } k=1,2,\ldots,p_{\text{eq}}, \\ \max_{i=1,\dots,m_{k}} \big(a_{k i} + {\displaystyle\sum\limits_{j=1}^{n}} c_{k i j} x_{j} \big) \leqslant b_{k} &&\quad \text{for } k=p_{\text{eq}}+1,\ldots,p_{\text{eq}}+p_{\text{ineq}}. \end{array} $$
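In this conventional form each left-hand side is simply a maximum of affine functions of x, so checking whether a candidate x satisfies such a system takes a few lines. A sketch with hypothetical data (the vectors a, matrices C, bounds b, and the point x below are illustrative, not taken from the text):

```python
import numpy as np

def mp_poly(a_k, C_k, x):
    """Value of one max-plus polynomial in conventional form:
    max_i (a_ki + sum_j c_kij * x_j)."""
    return np.max(a_k + C_k @ x)

# Hypothetical instance: one equation and one inequality in two variables.
x = np.array([2.0, 3.0])
a_eq, C_eq, b_eq = np.array([0.0, -1.0]), np.array([[1.0, 0.0], [0.0, 1.0]]), 2.0
a_in, C_in, b_in = np.array([1.0, 1.0]), np.array([[0.5, 0.0], [0.0, -1.0]]), 3.0

eq_ok = np.isclose(mp_poly(a_eq, C_eq, x), b_eq)    # equality constraint holds?
ineq_ok = mp_poly(a_in, C_in, x) <= b_in            # inequality constraint holds?
```

Solving such a system (rather than merely checking a candidate) is the hard part, which is where the ELCP reformulation discussed next comes in.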
In De Schutter and De Moor (1996), De Schutter (1996), and De Schutter and De Moor (1998) it has been shown that the above problem and related max-plus problems such as computing max-plus matrix decompositions, transformation of max-plus linear state space models, state space realization of max-plus linear systems, construction of matrices with a given max-plus characteristic polynomial, and solving systems of max-min-plus equations can be recast as a so-called extended linear complementarity problem (ELCP), which is defined as follows:
  • Given \(A \in \mathbb {R}^{p\times n}\), \(B \in \mathbb {R}^{q \times n}\), \(c \in \mathbb {R}^{p}\), \(d \in \mathbb {R}^{q}\) and m subsets ϕj of \(\{{1,2},\dots ,{p}\}\), find \(x \in \mathbb {R}^{n}\) such that
    $$ \begin{array}{@{}rcl@{}} {\displaystyle\sum\limits_{j=1}^{m}} {\displaystyle{\prod}_{i \in \phi_{j}}^{}} (A x - c)_{i} = 0 \end{array} $$
    (9)
    subject to \(A x \geqslant c\) and Bx = d.
Algorithms for solving ELCPs can be found in De Schutter and De Moor (1995) (for computing the entire solution set) and in De Schutter et al. (2002) (for finding only one solution).

4 Max-plus linear state space models

DESs with only synchronization and no concurrency can be modeled by a max-plus-algebraic model of the following form (Baccelli et al. 1992; Cuninghame-Green 1979; Heidergott et al. 2006):
$$ \begin{array}{@{}rcl@{}} x(k) & =& A \otimes x(k-1) \oplus B \otimes u(k) \end{array} $$
(10)
$$ \begin{array}{@{}rcl@{}} y(k) & =& C \otimes x(k) \end{array} $$
(11)
with \(A \in \mathbb {R}_{\varepsilon }^{n \times n}\), \(B \in \mathbb {R}_{\varepsilon }^{n \times m}\), and \(C \in \mathbb {R}_{\varepsilon }^{l \times n}\), where m is the number of inputs and l the number of outputs. The vector x represents the state, u is the input vector, and y is the output vector of the system. It is important to note that in (10)–(11) the components of the input, the output, and the state are event times, and that the counter k in (10)–(11) is an event counter. For a manufacturing system (see also Example 7 below), u(k) typically represents the time instants at which raw material is fed to the system for the k th time, x(k) the time instants at which the machines start processing the k th batch of intermediate products, and y(k) the time instants at which the k th batch of finished products leaves the system.

Due to the analogy with conventional linear time-invariant systems, a DES that can be modeled by (10)–(11) will be called a max-plus linear time-invariant DES.

Typical examples of systems that can be modeled as max-plus linear DESs are production systems, railroad networks, urban traffic networks, queuing systems, and legged robots (Baccelli et al. 1992; Cuninghame-Green 1979; Heidergott et al. 2006; Lopes et al. 2014). We will now illustrate in detail how the behavior of a simple manufacturing system can be described by a max-plus linear model of the form (10)–(11).

Example 7

Consider the system of Fig. 2. This production system consists of three processing units: P1, P2, and P3. Raw material is fed to P1 and P2, processed, and sent to P3 where assembly takes place. The processing times for P1, P2, and P3 are respectively d1 = 12, d2 = 11, and d3 = 7 time units. We assume that it takes t2 = 2 time units for the raw material to get from the input source to P2 and that it takes t4 = 1 time unit for the finished products of processing unit P2 to reach P3. The other transportation times (t1, t3, and t5) are assumed to be negligible. We assume that at the input of the system and between the processing units there are buffers with a capacity that is large enough to ensure that no buffer overflow will occur. We consider the situation where initially all buffers are empty and none of the processing units contains raw material or intermediate products. This corresponds in fact to the case where x(0) = ε3×1; if initially some buffers or some processing units are non-empty, then we will have x(0)≠ε3×1.
Fig. 2

A simple production system

A processing unit can only start working on a new product if it has finished processing the previous product. We assume that each processing unit starts working as soon as all parts are available. Define
  • u(k): time instant at which raw material is fed to the system for the k th time,

  • xi(k): time instant at which the i th processing unit starts working for the k th time,

  • y(k): time instant at which the k th finished product leaves the system.

Let us now determine the time instant at which processing unit P1 starts working for the k th time. If we feed raw material to the system for the k th time, then this raw material is available at the input of processing unit P1 at time t = u(k) + 0. However, P1 can only start working on the new batch of raw material as soon as it has finished processing the previous, i.e., the (k − 1)st, batch. Since the processing time on P1 is d1 = 12 time units, the (k − 1)st intermediate product will leave P1 at time t = x1(k − 1) + 12. Since P1 starts working on a batch of raw material as soon as the raw material is available and the current batch has left the processing unit, this implies that we have
$$ \begin{array}{@{}rcl@{}} x_{1}(k) = \max (x_{1}(k-1)+12, u(k)+0 ) \end{array} $$
(12)
for \(k =1, 2, \dots \) The condition that initially processing unit P1 is empty and idle corresponds to the initial condition x1(0) = ε since then it follows from (12) that x1(1) = u(1), i.e., the first batch of raw material that is fed to the system will be processed immediately.
Using a similar reasoning we find the following expressions for the time instants at which P2 and P3 start working for the k th time and for the time instant at which the k th finished product leaves the system:
$$ \begin{array}{@{}rcl@{}} x_{2}(k) & = & \max (x_{2}(k-1)+11, u(k)+2 ) \end{array} $$
(13)
$$ \begin{array}{@{}rcl@{}} x_{3}(k) & = &\max (x_{1}(k)+12+0, x_{2}(k)+11+1, x_{3}(k-1)+7 ) \\ & = &\max (x_{1}(k-1)+24, x_{2}(k-1)+23, x_{3}(k-1)+7, u(k)+14 ) \end{array} $$
(14)
$$ \begin{array}{@{}rcl@{}} y(k) & = &x_{3}(k)+7+0 \end{array} $$
(15)
for \(k=1,2,\dots \) The condition that initially all buffers are empty corresponds to the initial condition x1(0) = x2(0) = x3(0) = ε.
Let us now rewrite the evolution equations of the production system using the symbols ⊕ and ⊗. It is easy to verify that (12) can be rewritten as
$$ x_{1}(k) = 12 \otimes x_{1}(k-1) \oplus 0 \otimes u(k) \enspace . $$
If we also do this for (13)–(15) and if we rewrite the resulting equations in max-plus-algebraic matrix notation, we obtain
$$ \begin{array}{@{}rcl@{}} x(k) & =& \left[\begin{array}{rrr} 12 & \varepsilon & \varepsilon \\ \varepsilon & 11 & \varepsilon \\ 24 & 23 & 7 \end{array}\right] \otimes x(k-1) \oplus \left[\begin{array}{r} 0 \\ 2 \\ 14 \end{array}\right] \otimes u(k) \\ y(k) & =& \left[\begin{array}{rrr} \varepsilon & \varepsilon & 7 \end{array}\right] \otimes x(k) \end{array} $$
where \(x(k) = \left [\begin {array}{lll} x_{1}(k) & x_{2}(k) & x_{3}(k) \end {array}\right ]^{\mathrm {T}}\). Note that this is a model of the form (10)–(11).
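The model can be simulated directly by iterating (10)–(11). A minimal sketch for the matrices derived above, with ε represented by −∞:

```python
import numpy as np

EPS = -np.inf  # epsilon, the max-plus zero element

# System matrices of the production system derived above
A = np.array([[12.0, EPS, EPS],
              [EPS, 11.0, EPS],
              [24.0, 23.0, 7.0]])
B = np.array([0.0, 2.0, 14.0])
C = np.array([EPS, EPS, 7.0])

def step(x_prev, u_k):
    """One event step: x(k) = A (x) x(k-1) (+) B (x) u(k), y(k) = C (x) x(k)."""
    x = np.maximum(np.max(A + x_prev[None, :], axis=1), B + u_k)
    y = np.max(C + x)
    return x, y

x = np.full(3, EPS)                   # x(0) = eps: all buffers initially empty
ys = []
for u_k in [1.0, 8.0, 15.0, 19.0]:    # feeding times u(1), ..., u(4)
    x, y = step(x, u_k)
    ys.append(y)
# ys now holds the departure times y(1), ..., y(4): [22, 33, 44, 56]
```

The feeding times 1, 8, 15, 19 are the ones used later in Example 8, and the simulation reproduces the departure times given there.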

In the next section we shall use this production system to illustrate some of the max-plus-algebraic techniques that can be used to analyze max-plus linear time-invariant DESs.

5 Performance analysis and control of max-plus linear systems

5.1 Analysis of max-plus linear systems

Now we present some analysis techniques for max-plus linear DESs that can be described by a model of the form (10)–(11).

First we determine the input-output behavior of the max-plus linear DES. We have
$$ \begin{array}{@{}rcl@{}} x(1) & =& A \otimes x(0) \oplus B \otimes u(1) \\ x(2) & =& A \otimes x(1) \oplus B \otimes u(2) \\ & =& {A}^{{\scriptscriptstyle\otimes}^{\scriptstyle{2}}} \otimes x(0) \oplus A \otimes B \otimes u(1) \oplus B \otimes u(2) \end{array} $$
etc., which yields \( x(k) = {A}^{{\scriptscriptstyle \otimes }^{\scriptstyle {k}}} \otimes x(0) \oplus {\displaystyle \bigoplus _{i=1}^{k}} {A}^{{\scriptscriptstyle \otimes }^{\scriptstyle {k-i}}} \otimes B \otimes u(i) \) for k = 1,2,… Hence,
$$ \begin{array}{@{}rcl@{}} y(k) = C \otimes {A}^{{\scriptscriptstyle\otimes}^{\scriptstyle{k}}} \otimes x(0) \oplus {\displaystyle\bigoplus_{i=1}^{k}} C \otimes {A}^{{\scriptscriptstyle\otimes}^{\scriptstyle{k-i}}} \otimes B \otimes u(i) \end{array} $$
(16)
for k = 1,2,…

Consider two input sequences \(u_{1} = \{ u_{1}(k) \}_{k=1}^{\infty }\) and \(u_{2} = \{ u_{2}(k) \}_{k=1}^{\infty }\). Let \(y_{1} = \{ y_{1}(k) \}_{k=1}^{\infty }\) be the output sequence that corresponds to the input sequence u1 (with initial condition x1(0) = x1,0) and let \(y_{2} = \{ y_{2}(k) \}_{k=1}^{\infty }\) be the output sequence that corresponds to the input sequence u2 (with initial condition x2(0) = x2,0). Let \(\alpha , \beta \in \mathbb {R}_{\varepsilon }\). From (16) it follows that the output sequence that corresponds to the input sequence \(\alpha \otimes u_{1} \oplus \beta \otimes u_{2} = \{ \alpha \otimes u_{1}(k) \oplus \beta \otimes u_{2}(k) \}_{k=1}^{\infty }\) (with initial condition α ⊗ x1,0 ⊕ β ⊗ x2,0) is given by α ⊗ y1 ⊕ β ⊗ y2. This explains why DESs that can be described by a model of the form (10)–(11) are called max-plus linear.

Let \(p \in \mathbb {N} \setminus \{0\}\). If we define
$$ \begin{array}{@{}rcl@{}} Y & =& \left[\begin{array}{llll} y^{\mathrm{T}}(1) & y^{\mathrm{T}}(2) & {\ldots} & y^{\mathrm{T}}(p) \end{array}\right]^{\mathrm{T}} \\ U & =& \left[\begin{array}{llll} u^{\mathrm{T}}(1) & u^{\mathrm{T}}(2) & {\ldots} & u^{\mathrm{T}}(p) \end{array}\right]^{\mathrm{T}} \enspace , \end{array} $$
then (16) results in
$$ \begin{array}{@{}rcl@{}} Y = H \otimes U \oplus G \otimes x(0) \end{array} $$
(17)
with
$$ \begin{array}{@{}rcl@{}} H = \left[\begin{array}{cccc} C \otimes B & \varepsilon & {\ldots} & \varepsilon \\ C \otimes A \otimes B & C \otimes B & {\ldots} & \varepsilon \\ {\vdots} & {\vdots} & {\ddots} & {\vdots} \\ C \otimes {A}^{{\scriptscriptstyle\otimes}^{\scriptstyle{p-1}}} \otimes B & C \otimes {A}^{{\scriptscriptstyle\otimes}^{\scriptstyle{p-2}}} \otimes B & \ldots & C \otimes B \end{array}\right] , {\kern.7cm} G = \left[\begin{array}{c} C \\ C \otimes A \\ {\vdots} \\ C \otimes {A}^{{\scriptscriptstyle\otimes}^{\scriptstyle{p-1}}} \end{array}\right] . \end{array} $$
For the production system of Example 7 this means that if we know the time instants at which raw material is fed to the system and we know the initial state x(0), we can compute the time instants at which the finished products will leave the system. Often we assume that x(0) = εn×1. For the simple production system of Example 7 this would mean that all the buffers are initially empty.

Example 8

Consider the production system of Example 7. Define \(Y = \left [\begin {array}{llll} y(1) & y(2) & y(3) & y(4) \end {array}\right ]^{\mathrm {T}}\) and \(U = \left [\begin {array}{llll} u(1) & u(2) & u(3) & u(4) \end {array}\right ]^{\mathrm {T}}\). If x(0) = ε3×1 then we have Y = HU with
$$ \begin{array}{@{}rcl@{}} H = \left[\begin{array}{rrrr} 21 & \varepsilon & \varepsilon & \varepsilon \\ 32 & 21 & \varepsilon & \varepsilon \\ 43 & 32 & 21 & \varepsilon \\ 55 & 43 & 32 & 21 \end{array}\right] \enspace . \end{array} $$
If we feed raw material to the system at time instants u(1) = 1, u(2) = 8, u(3) = 15, u(4) = 19, the finished products will leave the system at time instants y(1) = 22, y(2) = 33, y(3) = 44, and y(4) = 56 since
$$ H \otimes \left[\begin{array}{r} 1 \\ 8 \\ 15 \\ 19 \end{array}\right] = \left[\begin{array}{r} 22 \\ 33 \\ 44 \\ 56 \end{array}\right] \enspace . $$
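The matrix H of (17) can be assembled directly from A, B, and C. The sketch below rebuilds the H above from the matrices of Example 7, exploiting its lower block-triangular Toeplitz structure, and then evaluates Y = H ⊗ U:

```python
import numpy as np

EPS = -np.inf
A = np.array([[12.0, EPS, EPS], [EPS, 11.0, EPS], [24.0, 23.0, 7.0]])
B = np.array([[0.0], [2.0], [14.0]])   # column vector
C = np.array([[EPS, EPS, 7.0]])        # row vector

def mp_mat(Am, Bm):
    """Max-plus matrix product: (A (x) B)_ij = max_k (A_ik + B_kj)."""
    return np.max(Am[:, :, None] + Bm[None, :, :], axis=1)

p = 4
blocks = []                 # C (x) A^i (x) B for i = 0..p-1 (first column of H)
CA = C.copy()
for _ in range(p):
    blocks.append(mp_mat(CA, B)[0, 0])
    CA = mp_mat(CA, A)

H = np.full((p, p), EPS)    # lower-triangular Toeplitz structure of (17)
for i in range(p):
    for j in range(i + 1):
        H[i, j] = blocks[i - j]

U = np.array([1.0, 8.0, 15.0, 19.0])
Y = np.max(H + U[None, :], axis=1)    # Y = H (x) U  (since x(0) = eps)
```

The first column comes out as 21, 32, 43, 55 and Y as 22, 33, 44, 56, matching the values displayed above.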
Now we consider the autonomous max-plus linear DES described by
$$ \begin{array}{@{}rcl@{}} x(k+1) & =& A \otimes x(k) \\ y(k) & =& C \otimes x(k) \end{array} $$
with x(0) = x0. For the production system of Example 7 this would mean that we start from a situation in which some internal buffers and the input buffers are initially not empty (if x0≠εn×1) and that afterwards the raw material is fed to the system at such a rate that the input buffers never become empty.
If the system matrix A of the autonomous DES is irreducible, then we can compute the “ultimate” behavior of the autonomous DES by solving the max-plus-algebraic eigenvalue problem A ⊗ v = λ ⊗ v. By Theorem 1 there exist integers \(k_{0} \in \mathbb {N}\) and \(c \in \mathbb {N} \setminus \{0\}\) such that \(x(k+c) = {\lambda }^{{\scriptscriptstyle \otimes }^{\scriptstyle {c}}} \otimes x(k)\) for all \(k \geqslant k_{0}\). This means that
$$ \begin{array}{@{}rcl@{}} x_{i}(k+c) - x_{i}(k) = c \lambda \end{array} $$
(18)
for i = 1,2,…, n and for all \(k \geqslant k_{0}\). From this relation it follows that for a production system the average time duration of a cycle of the production process when the system has reached its cyclic behavior will be given by λ. The average production rate will then be equal to \(\frac {1}{\lambda }\). This also enables us to calculate the utilization levels of the various machines in the production process. Furthermore, some algorithms to compute the eigenvalue also yield the critical paths of the production process and the bottleneck machines (Cohen et al. 1985).
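The eigenvalue λ equals the maximum cycle mean of the precedence graph of A. One simple (though not the most efficient; Karp's algorithm is faster) way to compute it uses the identity \(\lambda = \max_{1 \leqslant k \leqslant n} \max_i (A^{\otimes^k})_{ii}/k\). A sketch, applied to the A of Example 7; note that this A is reducible, so the value returned is only the largest of its eigenvalues:

```python
import numpy as np

EPS = -np.inf

def mp_mat(Am, Bm):
    """Max-plus matrix product: (A (x) B)_ij = max_k (A_ik + B_kj)."""
    return np.max(Am[:, :, None] + Bm[None, :, :], axis=1)

def max_cycle_mean(A):
    """Maximum cycle mean of A, i.e. its largest max-plus eigenvalue:
    lambda = max over k = 1..n of max_i (A^k)_ii / k."""
    n = A.shape[0]
    lam, P = EPS, A.copy()
    for k in range(1, n + 1):
        d = np.max(np.diag(P))     # heaviest closed walk of length k
        if d > EPS:
            lam = max(lam, d / k)
        P = mp_mat(P, A)           # next max-plus power of A
    return lam

A = np.array([[12.0, EPS, EPS],
              [EPS, 11.0, EPS],
              [24.0, 23.0, 7.0]])
lam = max_cycle_mean(A)   # 12.0: the slowest machine dictates the cycle time
```

For this A the result is 12, i.e., the cycle mean of the slowest processing unit P1.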

Example 9

The system matrix A of the production system of Example 7 is not irreducible and it does not lead to a behavior of the form (18). In fact, it can be verified that A has three eigenvalues λ1 = 12, λ2 = 11, and λ3 = 7, with corresponding eigenvectors
$$ v_{1} = \left[\begin{array}{r} 0 \\ \varepsilon \\ 12 \end{array}\right], v_{2}= \left[\begin{array}{r} \varepsilon \\ 0 \\ 12 \end{array}\right], \text{ and } v_{3} = \left[\begin{array}{r} \varepsilon \\ \varepsilon \\ 0 \end{array}\right]. $$
Now consider \(x(0) = \left [\begin {array}{lll} 0 & 1 & 12 \end {array}\right ]^{\mathrm {T}}\) and compute x(k + 1) = A ⊗ x(k) for \(k=0,\dots ,4\). This yields
$$ x(1) = \left[\begin{array}{r} 12 \\ 12 \\ 24 \end{array}\right], x(2) = \left[\begin{array}{r} 24 \\ 23 \\ 36 \end{array}\right], x(3) = \left[\begin{array}{r} 36 \\ 34 \\ 48 \end{array}\right], x(4) = \left[\begin{array}{r} 48 \\ 45 \\ 60 \end{array}\right], \text{ and } x(5) = \left[\begin{array}{r} 60 \\ 56 \\ 72 \end{array}\right]. $$
So by choosing k0 = 1, we see that for \(k \geqslant k_{0}\) the first and the third elements of x satisfy xi(k + c) − xi(k) = cλ1 for i = 1, 3 with c = 1, but the second element of x satisfies x2(k + c) − x2(k) = cλ2 with c = 1, which means that (18) does not hold in this case.

5.2 Control of max-plus linear DES

The basic control problem for max-plus linear DESs consists in determining the optimal input times (e.g., feeding times of raw material or starting times of processes or activities) for a given reference signal (e.g., due dates for the finished products or completion dates for processes or activities). In the literature many different approaches are described to solve this problem. Among these the most common ones are based on residuation and on model predictive control (MPC). Residuation essentially consists in finding the largest solution to a system of max-plus inequalities with the input times as variables and the due dates as upper bounds. The MPC approach is essentially based on the minimization of the error between the actual output times and the due dates, possibly subject to additional constraints on the inputs and the outputs.

Remark 1

For simplicity, we only consider single-input single-output (SISO) systems in this section. Note however that MPC can very easily be extended to multi-input multi-output (MIMO) systems.

5.2.1 Residuation-based control

The basic control problem for max-plus linear DESs consists in determining the optimal feeding times of raw material to the system and/or the optimal starting times of the (internal) processes.

Consider (17) with x(0) = εn×1. If we know the vector Y of latest times at which the finished products have to leave the system, we can compute the vector U of (latest) time instants at which raw material has to be fed to the system by solving the system of max-plus linear equations H ⊗ U = Y, if this system has a solution, or by determining the largest subsolution of H ⊗ U = Y, i.e., determining the largest U such that \(H \otimes U \leqslant Y\). This approach is also based on residuation (Blyth and Janowitz 1972).

Note that the method above corresponds to just-in-time production. However, if we have perishable goods it is sometimes better to minimize the maximal deviation between the desired and the actual finishing times. In that case we have to solve the problem
$$ {\displaystyle\min_{U}} {\displaystyle\max_{i}} | (Y - H \otimes U )_{i} |. $$
(19)
This problem can be solved using formula (7).

Example 10

Let us again consider the production system of Example 7 and the matrix H and the vectors U and Y as defined in Example 8. If the finished parts should leave the system before time instants 21, 32, 48, and 55 and if we want to feed the raw material to the system as late as possible, then we should feed raw material to the system at time instants 0, 11, 23, 34 since the largest subsolution of \( H \otimes U = Y = \left [\begin {array}{llll} 21 & 32 & 48 & 55 \end {array}\right ]^{\mathrm {T}} \) is \(\hat {U} = \left [\begin {array}{llll} 0 & 11 & 23 & 34 \end {array}\right ]^{\mathrm {T}}\). The actual output times \(\hat {Y}\) are given by \(\hat {Y} = H \otimes \hat {U} = \left [\begin {array}{llll} 21 & 32 & 44 & 55 \end {array}\right ]^{\mathrm {T}}\). Note that \(\hat {Y}\leq Y\). The largest deviation δ between the desired and the actual output times is equal to 4. The input times that minimize this deviation are given by \( \tilde {U} = \hat {U} \otimes \frac {\delta }{ 2 } = \hat {U} \otimes 2 = \left [\begin {array}{llll} 2 & 13 & 25 & 36 \end {array}\right ]^{\mathrm {T}}\). The corresponding output times are given by \(\tilde {Y} = \left [\begin {array}{llll} 23 & 34 & 46 & 57 \end {array}\right ]^{\mathrm {T}}\). Note that the largest deviation between the desired finishing and the actual finishing times is now equal to \(\frac {\delta }{ 2 }=2\), which means that the maximal deviation between the desired (Y) and the actual (\(\tilde {Y}\)) finishing times is minimized.
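The computations of Example 10 can be reproduced in a few lines. A sketch using the H of Example 8:

```python
import numpy as np

EPS = -np.inf
H = np.array([[21.0, EPS, EPS, EPS],
              [32.0, 21.0, EPS, EPS],
              [43.0, 32.0, 21.0, EPS],
              [55.0, 43.0, 32.0, 21.0]])
Y = np.array([21.0, 32.0, 48.0, 55.0])   # due dates

def mp_mat_vec(M, v):
    """Max-plus matrix-vector product: (M (x) v)_i = max_j (M_ij + v_j)."""
    return np.max(M + v[None, :], axis=1)

# Largest subsolution of H (x) U = Y: just-in-time feeding times
U_hat = np.min(Y[:, None] - H, axis=0)   # [0, 11, 23, 34]
Y_hat = mp_mat_vec(H, U_hat)             # actual outputs: [21, 32, 44, 55]

# Shift by delta/2 to minimize the maximal deviation, cf. (7) and (19)
delta = np.max(Y - Y_hat)                # 4
U_tilde = U_hat + delta / 2              # [2, 13, 25, 36]
Y_tilde = mp_mat_vec(H, U_tilde)         # [23, 34, 46, 57]
```

The computed feeding times, output times, and deviation δ = 4 match the values derived in the example.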

The residuation-based approach for computing the optimal feeding times is used in one form or another in Boimond and Ferrier (1996); Cottenceau et al. (2001); Goto (2008); Hardouin et al. (2009); Houssin et al. (2013); Lahaye et al. (2008); Maia et al. (2003); Menguy et al. (1997, 2000a, b); Menguy and Boimond (1998).

In particular, Libeaut and Loiseau (1995) have applied the geometric approach and residuation theory in order to find the optimal input. The geometric approach is a collection of mathematical concepts developed to achieve a better and neater insight into the most important features of multi-variable linear dynamical systems in connection with compensator and regulator synthesis problems. It is based on the state space representation and it also easily links SISO and MIMO systems and clarifies in a concise and elegant way some common properties that cannot be obtained by the transform-based techniques usually adopted in the SISO case. Related work can be found in Ilchmann (1989). Using these results, in Libeaut and Loiseau (1995) the set of admissible initial conditions of a linear system is defined and characterized geometrically and the optimal input is computed by applying residuation theory. In Boimond and Ferrier (1996) the Internal Model Control (IMC) structure used in conventional control theory is extended to deterministic max-plus linear DESs. The IMC philosophy relies on the internal model principle, which states that control can be achieved only if the control system encapsulates, either implicitly or explicitly, some representation of the process to be controlled; a comprehensive explanation can be found in Garcia and Morari (1982). The controller design raises the problem of model inversion, where the residuation approach also plays an important role. In Menguy et al. (1997), a feedback control structure is proposed to be able to take into account a possible mismatch between the system and its model. Instead of adopting the commonly used IMC approach for closed-loop systems, the authors proposed another closed-loop control structure consisting in applying an open-loop control approach that is modified by using the system output behavior. 
In fact, the model is initially supposed to be exact; subsequently, the control structure is modified by using the system output in order to be as close as possible to the optimal system control. The optimization problem is solved using residuation. The method of Menguy et al. (1998) is also based on inverting a dynamic system and applying residuation theory. The proposed control structure is based on adaptive control and encompasses both identification and inversion of a dynamic system. In Lahaye et al. (2008), a just-in-time optimal control for a first-in first-out (FIFO) time event graph is proposed based on residuation theory. The aim is to compute the largest control u such that the firing dates of output transitions (or simply the output signals) occur at the latest before the desired ones. In Brunsch et al. (2012), Brunsch and Raisch (2012), and David-Henriet et al. (2017) residuation-based control is applied for max-plus linear systems arising in the context of manufacturing systems and of high-throughput screening (e.g., for the pharmaceutical industry).

Note that the sequence \(u(1),u(2),\dots ,u(N)\) of the feeding times should be non-decreasing as it corresponds to a sequence of consecutive feeding times. However, in general a residuation-based solution does not always satisfy this property. This problem can be overcome by projection onto the set of non-decreasing sequences as in the approach proposed by Menguy et al. (2000a). In addition, an initial state x(0) that is not necessarily equal to εn×1 can be included and an explicit form can be derived for the residuation controller. The resulting expression involves the operations minimization and addition from the min-plus algebra, which is the dual of the max-plus algebra and which has minimization (\(\oplus ^{\prime }\)) and addition (\(\otimes ^{\prime }\)) as basic operations with the zero element \(\varepsilon ^{\prime }=\infty \). More specifically, the residuation controller is now given by the following expression (van den Boom and De Schutter 2014):
$$ \hat{U} = S \otimes^{\prime} (-H^{T}) \otimes^{\prime} \left( G \otimes x(0) \oplus H \otimes U_{0} \oplus Y\right) $$
(20)
where U0 and S are respectively a vector with p components and a p by p matrix defined as
$$ \begin{array}{@{}rcl@{}} U_{0} & =& \left[\begin{array}{rrrr} u(0) & u(0) & {\cdots} & u(0) \end{array}\right]^{\mathrm{T}} \\ S & =& \left[\begin{array}{cccc} 0 & & {\cdots} & 0 \\ \varepsilon^{\prime} & {\ddots} & {\ddots} & {\vdots} \\ {\vdots} & {\ddots} & {\ddots} & \\ \varepsilon^{\prime} & {\cdots} & \varepsilon^{\prime} & 0 \end{array}\right] \enspace . \end{array} $$
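Formula (20) amounts to two min-plus matrix-vector products. A sketch using the H of Example 8 and the due dates Y of Example 10, assuming x(0) = ε and u(0) = ε so that the term in parentheses reduces to Y:

```python
import numpy as np

EPS, EPSP = -np.inf, np.inf   # max-plus zero eps and min-plus zero eps'

H = np.array([[21.0, EPS, EPS, EPS],
              [32.0, 21.0, EPS, EPS],
              [43.0, 32.0, 21.0, EPS],
              [55.0, 43.0, 32.0, 21.0]])
Y = np.array([21.0, 32.0, 48.0, 55.0])
p = 4

def minplus_mat_vec(M, v):
    """Min-plus product: (M (x)' v)_i = min_j (M_ij + v_j)."""
    return np.min(M + v[None, :], axis=1)

# With x(0) = eps and u(0) = eps, G (x) x(0) (+) H (x) U0 (+) Y reduces to Y.
S = np.full((p, p), EPSP)
S[np.triu_indices(p)] = 0.0   # zeros on and above the diagonal, eps' below

U_hat = minplus_mat_vec(S, minplus_mat_vec(-H.T, Y))   # formula (20)
```

The inner product \((-H^{\mathrm{T}}) \otimes^{\prime} Y\) is exactly the residuation (largest subsolution), and multiplying by S from the left replaces each entry by the minimum of itself and all later entries, which enforces a non-decreasing sequence of feeding times. Here the result is [0, 11, 23, 34], as in Example 10.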

5.2.2 Model predictive control

A somewhat more advanced control approach for max-plus linear DESs has been developed by De Schutter and van den Boom (2001). This approach is an extension to max-plus linear DESs of model predictive control (MPC) (Camacho and Bordons 1995; Maciejowski 2002; Rawlings and Mayne 2009), which was originally developed for time-driven systems.

The main advantage of the MPC method of De Schutter and van den Boom (2001), compared to other available methods for control design for max-plus linear DESs, is that it allows the inclusion of general linear inequality constraints on the inputs, states, and outputs of the system.

In MPC for max-plus linear DESs at each event step k the controller computes the input sequence \(u(k),\dots ,u(k+N_{\mathrm {p}}-1)\) that optimizes a performance criterion J over the next Np event steps, where Np is called the prediction horizon, subject to various constraints on the inputs, states, and outputs of the system. Typically, the performance criterion
$$ J = J_{\text{out}} + \lambda J_{\text{in}} $$
aims at minimizing the difference or the tardiness with respect to a due date signal r(⋅), while at the same time making the inputs as large as possible (just-in-time production); here λ ∈ (0,1) is a weighting factor. A typical choice for the output cost function is
$$ \begin{array}{@{}rcl@{}} J_{\text{out}} = \sum\limits_{j=0}^{N_{\mathrm{p}}-1} \sum\limits_{i=1}^{n_{y}} \max(y_{i}(k+j)-r_{i}(k+j),0) \end{array} $$
(21)
and for the input criterion function a typical choice is
$$ \begin{array}{@{}rcl@{}} J_{\text{in}} = -\sum\limits_{j=0}^{N_{\mathrm{p}}-1} \sum\limits_{l=1}^{n_{u}} u_{l}(k+j). \end{array} $$
(22)
Note that Jout penalizes late delivery and Jin penalizes early feeding times (i.e., it favors just-in-time feeding).
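To make the two terms concrete, the sketch below evaluates (21) and (22) for one prediction window in the single-input single-output case, reusing the output times of Example 8 as y, the due dates of Example 10 as r, and the feeding times of Example 8 as u (purely for illustration; the weighting factor λ = 0.05 is a hypothetical value):

```python
import numpy as np

def J_out(y, r):
    """Output cost (21): total tardiness, sum of max(y_i - r_i, 0)."""
    return float(np.sum(np.maximum(np.asarray(y) - np.asarray(r), 0.0)))

def J_in(u):
    """Input cost (22): -sum(u); minimizing it pushes the feeding times
    as late as possible (just-in-time feeding)."""
    return float(-np.sum(u))

y = [22.0, 33.0, 44.0, 56.0]   # predicted output times (cf. Example 8)
r = [21.0, 32.0, 48.0, 55.0]   # due dates (cf. Example 10)
u = [1.0, 8.0, 15.0, 19.0]     # feeding times

lam = 0.05                     # weighting factor, hypothetical value
J = J_out(y, r) + lam * J_in(u)
```

Here three of the four deliveries are one time unit late, so Jout = 3, while Jin = −43 rewards the lateness of the feeding times.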
Minimizing J subject to the system equations results in an optimization problem that has to be solved at each event step k. In order to reduce the computational complexity, often a control horizon Nc is introduced with Nc < Np and it is assumed that the feeding rate (Δu(k + j) = u(k + j) − u(k + j − 1)) is constant after event step k + Nc, i.e., u(k + Nc + ℓ) = u(k + Nc − 1) for \(\ell =0,\dots ,N_{\mathrm {p}}-N_{\mathrm {c}}-1\). Moreover, we can have lower and upper bound constraints on Δx(k + j), Δy(k + j), y(k + j), or linear constraints on x(k + j), y(k + j), and u(k + j). In general, this leads to a system of linear inequalities \(A_{\mathrm {c}}(k) \tilde {x}(k) + B_{\mathrm {c}}(k) \tilde {y}(k) + C_{\mathrm {c}}(k) \tilde {u}(k) \leqslant d_{\mathrm {c}}(k)\), for properly defined matrices and vectors Ac(k), Bc(k), Cc(k), and dc(k), and where
$$ \begin{array}{@{}rcl@{}} \tilde{u}(k) & =& \left[\begin{array}{lll}u(k) & {\dots} & u(k+N_{\mathrm{p}}-1) \end{array}\right]^{\text{T}} \\ \tilde{x}(k) & =& \left[\begin{array}{lll}x(k) & {\dots} & x(k+N_{\mathrm{p}}-1) \end{array}\right]^{\text{T}} \\ \tilde{y}(k) & =& \left[\begin{array}{lll}y(k) & {\dots} & y(k+N_{\mathrm{p}}-1) \end{array}\right]^{\text{T}} \enspace . \end{array} $$
Hence, at event step k the following optimization problem has to be solved:
$$ \begin{array}{@{}rcl@{}} && \min_{\tilde{u}(k),\tilde{x}(k), \tilde{y}(k)} J \end{array} $$
(23)
$$ \begin{array}{@{}rcl@{}} && \text{subject to} \\ && \quad x(k+j) = A \otimes x(k+j-1) \oplus B \otimes u(k+j-1) \quad \text{for } j=0,\dots,N_{\mathrm{p}}-1 \end{array} $$
(24)
$$ \begin{array}{@{}rcl@{}} && \quad y(k+j) = C \otimes x(k+j) ~~~\quad\qquad\qquad\qquad\qquad\qquad \text{for } j=0,\dots,N_{\mathrm{p}}-1 \end{array} $$
(25)
$$ \begin{array}{@{}rcl@{}} &&\quad {\Delta} u(k+j) \geq 0 \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \text{for } j=0,\dots,N_{\mathrm{c}}-1 \end{array} $$
(26)
$$ \begin{array}{@{}rcl@{}} && \quad u(k+N_{\mathrm{c}}+j) = u(k+N_{\mathrm{c}}-1) \qquad\qquad\qquad\quad\qquad \text{for } j=0,\dots,N_{\mathrm{p}}-N_{\mathrm{c}}-1 \end{array} $$
(27)
$$ \begin{array}{@{}rcl@{}} && \quad A_{\mathrm{c}}(k) \tilde{x}(k) + B_{\mathrm{c}}(k) \tilde{y}(k) + C_{\mathrm{c}}(k) \tilde{u}(k) \leqslant d_{\mathrm{c}}(k) \enspace . \end{array} $$
(28)
MPC uses a receding horizon approach. This means that once the optimal input sequence has been determined only the input for the first event step is applied to the system. Next, at event step k + 1 the new state of the system is determined or measured, the prediction window is shifted, and the whole process is repeated again. This receding horizon approach introduces a feedback mechanism, which makes it possible to reduce the effects of possible disturbances and model mismatch errors.

More information on MPC for max-plus-linear systems can be found in De Schutter and van den Boom (2001); Goto (2009); Necoara et al. (2007, 2009a); van den Boom and De Schutter (2002a). MPC for max-plus-linear systems with partial synchronization is proposed in (David-Henriet et al. 2016).

5.3 Comparison of residuation-based control and MPC for max-plus-linear systems

In this section we compare the control methods based on residuation with the ones based on MPC. We discuss four items: constraint handling, cost functions, computation time, and implementation. We end the section with a worked example.

5.3.1 Constraint handling

First note that solving the problem (23)–(25) with output criterion (21) and input criterion (22), but without constraints (26)–(28), is equivalent to solving the residuation problem with p = Np in a receding horizon setting. If we also take constraint (26) into account, we obtain the result from Menguy et al. (2000a) with a non-decreasing input signal, for which the explicit solution is given by (20).

As already mentioned in the previous section, the main advantage of MPC, compared to other available methods for control design for max-plus linear DESs, is that it allows the inclusion of general linear inequality constraints on the inputs, states, and outputs of the system. This is done by writing the control law as the result of the constrained optimization problem (23)–(28). So far, MPC is the only known method that can handle the constraint (28).

5.3.2 Different cost functions

In Section 5.2.2 we have discussed MPC with the performance criterion J = Jout + λJin, where Jout and Jin are given by (21) and (22), respectively. This performance criterion results in a just-in-time control strategy, comparable to the one used in residuation-based control.

However, it has been shown in De Schutter and van den Boom (2001) that MPC can handle a broad range of other performance criteria. Some alternative output criteria are
$$ \begin{array}{@{}rcl@{}} J_{\text{out},1} &=& \sum\limits_{j=1}^{N_{\mathrm{p}}} \sum\limits_{i=1}^{n_{y}} \Big| \>y_{i}(k+j)-r_{i}(k+j) \Big| \end{array} $$
(29)
$$ \begin{array}{@{}rcl@{}} J_{\text{out},2} &= &\sum\limits_{j=1}^{N_{\mathrm{p}}} \sum\limits_{i=1}^{n_{y}} \Big| {\Delta}^{2} y_{i}(k+j) \Big| \end{array} $$
(30)
where (29) does not only penalize late delivery but also any deviation from the due date, and (30) can be used if we want to balance the output rates. Some alternative input criteria are
$$ \begin{array}{@{}rcl@{}} J_{\text{in},1} &=& \sum\limits_{j=1}^{N_{\mathrm{p}}} \sum\limits_{i=1,\ldots,n_{y},\>l=1,\ldots,n_{u}} \Bigl| \>y_{i}(k+j)-u_{l}(k+j-1) \Bigr| \end{array} $$
(31)
$$ \begin{array}{@{}rcl@{}} J_{\text{in},2} &=& \sum\limits_{j=1}^{N_{\mathrm{p}}-1} \sum\limits_{l=1}^{n_{u}} \Bigl| {\Delta}^{2} u_{l}(k+j) \Bigr| \end{array} $$
(32)
where (31) minimizes the maximum holding time of products in the system, and (32) balances the input rates.
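For a SISO system these criteria can be evaluated directly from the predicted sequences. A minimal Python sketch of (29) and (32), where the second-order difference Δ²u(k) is taken as u(k) − 2u(k−1) + u(k−2) (the data in the usage example below are made up):

```python
def j_out_1(y, r):
    """Output criterion (29): penalize any deviation from the due dates,
    i.e. the sum of |y(k+j) - r(k+j)| over the prediction horizon."""
    return sum(abs(yj - rj) for yj, rj in zip(y, r))

def j_in_2(u):
    """Input criterion (32): balance the input rates by penalizing the
    second-order differences Delta^2 u(k) = u(k) - 2*u(k-1) + u(k-2)."""
    return sum(abs(u[j] - 2 * u[j - 1] + u[j - 2]) for j in range(2, len(u)))
```

For example, `j_out_1([21, 36], [14, 33])` gives 10, and an evenly spaced input sequence such as `[0, 5, 10, 15]` gives `j_in_2` equal to 0, reflecting perfectly balanced input rates.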

5.3.3 Computation time

An important advantage of residuation-based control w.r.t. MPC is that, if the constraint (28) is not present, residuation-based control offers an analytic solution that can be computed very efficiently, whereas in MPC an optimization problem has to be solved numerically at every event step.
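The analytic solution has the flavor of a residuation computation: the greatest input u satisfying H ⊗ u ⩽ r, for a max-plus "impulse response" matrix H, is given by a closed-form min/minus expression. A minimal sketch (the matrix H and the due dates r in the usage example are made up; this is an illustration of the residuation principle, not the exact formula (19) or (20)):

```python
EPS = float('-inf')  # max-plus zero element (epsilon)

def residuation_input(H, r):
    """Greatest input u such that H (x) u <= r in the max-plus sense,
    i.e. max_l (H[i][l] + u[l]) <= r[i] for every row i.
    The closed-form (residuated) solution is u[l] = min_i (r[i] - H[i][l])."""
    n_in = len(H[0])
    return [min(r[i] - H[i][l] for i in range(len(H))) for l in range(n_in)]
```

Because this is a closed-form expression, no iterative optimization is needed, which is the efficiency advantage referred to above.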

In some cases (De Schutter and van den Boom 2001) the MPC optimization results in a linear programming problem, which can be solved numerically and on-line in a finite number of iterative steps using reliable and fast algorithms. This is the case if we, e.g., solve the problem (23)–(28) with output criterion (21) and input criterion (22), and if the matrix Bc in constraint (28) satisfies [Bc]ij ≥ 0 for all i, j.

If the performance criterion is not convex, the problem has to be solved with other optimization techniques, such as an ELCP algorithm (De Schutter and De Moor 1995), mixed integer linear programming (Heemels et al. 2001; Bemporad and Morari 1999), or an optimistic programming algorithm (Xu et al. 2014).

The time required for the on-line optimization means that model predictive control is not always suitable for fast systems and/or complex problems.

5.3.4 Implementation

Another difference between residuation-based controllers and controllers based on MPC is the implementation complexity.

The control law of a residuation-based controller can be written as an analytic expression, which can be implemented on a programmable logic controller (PLC), a distributed control system (DCS), or a programmable automation controller (PAC) in a straightforward way.

Most MPC controllers in industry use personal computers or dedicated microprocessor-based systems to manipulate the data and to perform the calculations, usually needing dedicated optimization software. This means that the final step in the controller design (implementation) is more complex for MPC controllers than for residuation-based controllers.

For application in industry it is important that controllers are cheap; the additional performance and scope of an MPC controller should therefore be weighed against its higher implementation costs.

5.3.5 Worked example

To make the above explanations more tangible, let us compare the residuation methods (19) and (20) with the MPC method (23)–(28). We assign u(0) = 15 as the initial input value and x(0) = [0 2 14]T as the initial state. The reference signal is given by \(\{r(k)\}_{k=0}^{15} = 14, 33, 57, 76, 85, 108, 108, 108, 126, 140, 154, 168, 182, 196, 210, 224\). We compare three controller implementations:

Residuation-1

Here we use (19) to compute the optimal input sequence. For event step k = 1 the optimal sequence is given by \(\{u_{\text{opt}}^{\text{res-1}}(j)\}_{j=0}^{15} = 15, 12, 29, 41, 53, 65, 76, 87, 105, 119, 133, 147, 161, 175, 189, 203\). The corresponding output sequence is given by \(\{y_{\text{opt}}^{\text{res-1}}(j)\}_{j=0}^{15} = 21, 33, 50, 62, 74, 86, 97, 108, 126, 140, 154, 168, 182, 196, 210, 224\).

Residuation-2

Here we use (20) to compute the optimal, non-decreasing input sequence. For event step k = 1 the optimal sequence is given by \(\{u_{\text{opt}}^{\text{res-2}}(j)\}_{j=0}^{15} = 15, 15, 29, 41, 53, 65, 76, 87, 105, 119, 133, 147, 161, 175, 189, 203\). The corresponding output sequence is given by \(\{y_{\text{opt}}^{\text{res-2}}(j)\}_{j=0}^{15} = 21, 36, 50, 62, 74, 86, 97, 108, 126, 140, 154, 168, 182, 196, 210, 224\).

Model predictive control

We solve the MPC problem (23)–(28) for Np = 10, Nc = 5, where (28) is a constraint on the input increment: Δu(k + j) ≤ 15, j = 0,…,15. Considering (21) and (22), the performance criterion is defined as J = Jout + λJin with λ = 0.05. The optimal input sequence obtained for event step k = 1 is \(\{u_{\text{opt}}^{\text{MPC}}(j)\}_{j=0}^{15} = 15, 15, 29, 41, 53, 65, 76, 87, 102, 117, 132, 147, 161, 175, 189, 203\). The corresponding output sequence is given by \(\{y_{\text{opt}}^{\text{MPC}}(j)\}_{j=0}^{15} = 21, 36, 50, 62, 74, 86, 97, 108, 123, 138, 153, 168, 182, 196, 210, 224\).
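Output sequences such as the ones above can be checked by simulating the max-plus state recursion x(k) = A ⊗ x(k−1) ⊕ B ⊗ u(k), y(k) = C ⊗ x(k) of (10)–(11). A minimal Python sketch (the small system matrices in the usage example are made up for illustration and are not those of this worked example):

```python
EPS = float('-inf')  # max-plus zero element (epsilon)

def mp_mat_vec(M, v):
    """Max-plus matrix-vector product: (M (x) v)_i = max_j (M[i][j] + v[j])."""
    return [max(m + vj for m, vj in zip(row, v)) for row in M]

def mp_oplus(v, w):
    """Max-plus vector addition (oplus): elementwise maximum."""
    return [max(a, b) for a, b in zip(v, w)]

def simulate(A, B, C, x0, u_seq):
    """Simulate x(k) = A (x) x(k-1) (+) B (x) u(k), y(k) = C (x) x(k)."""
    x, ys = x0, []
    for u in u_seq:
        x = mp_oplus(mp_mat_vec(A, x), mp_mat_vec(B, [u]))
        ys.append(mp_mat_vec(C, x)[0])
    return ys
```

For instance, with A = [[2, ε], [3, 1]], B = [[0], [ε]], C = [[ε, 0]], x(0) = [0, 0], and inputs u = 1, 2, the simulated outputs are 3 and 5.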

In Fig. 3 the tracking error y(k) − r(k), k = 0,…,15, is given for the two residuation approaches and the MPC approach. All start with a tracking error of y(0) − r(0) = 7. For k = 1 the tracking error of the residuation-1 method decreases to zero, while the residuation-2 method and the MPC method still have a tracking error. This is because the residuation-1 solution allows a decreasing input signal (note that \(u_{\text{opt}}^{\text{res-1}}(1) = 12 < 15 = u_{\text{opt}}^{\text{res-1}}(0)\)) and is therefore infeasible in reality. This infeasibility is caused by the fact that the solution aims to satisfy the condition \(\tilde{y}_{\text{opt}}^{\text{res-1}}(k) = \tilde{r}(k)\), where \(\tilde{y}_{\text{opt}}^{\text{res-1}}\) is the obtained output sequence and \(\tilde{r}\) is the desired one, which cannot be met using a non-decreasing input sequence. This is due to the fact that the output obtained from the autonomous system is always less than or equal to the one obtained from (10)–(11). So, if the output of the autonomous system is not less than or equal to the reference signal, it is impossible for the real system to satisfy this condition.
Fig. 3 Tracking error y − r obtained from residuation and MPC

The residuation-2 method (Libeaut and Loiseau 1995; Menguy et al. 1997) includes the non-decreasing input constraint, and it yields a non-decreasing, and thus feasible, input sequence. Note that the residuation-2 approach is equivalent to solving the MPC problem (23)–(26) for Np = 15.

For k ∈ {8, 9, 10} we observe that the tracking error in the MPC approach is negative. This is due to the fact that the input increment signal hits the constraint Δu(k) ≤ 15. Both residuation methods show input signals with an increment larger than 15, and thus do not satisfy all the imposed constraints.

6 Related work in modeling, performance analysis, identification, and control

6.1 Modeling and performance analysis

Addad et al. (2010) present an approach to evaluate the response time in networked automation systems that use a client/server protocol. The developments introduced are derived from modeling the entire architecture in the form of timed event graphs, as well as from the resulting state space representation in max-plus algebra. Another method for deriving a max-plus linear state space representation for repetitive FIFO systems is presented in Goto and Masuda (2008).

An interesting topic is the use of network calculus as a tool to analyze the performance of communication networks and queuing networks, in particular to obtain deterministic bounds. Although network calculus was originally based on the min-plus algebra, alternative formulations can be developed based on the max-plus algebra (Liebeherr 2017). In Bouillard and Thierry (2008) efficient algorithms implementing network calculus operations for some classical functions are provided, together with an analysis of their computational complexity.

Other related work on modeling and analysis of different types of systems using the max-plus algebra can be found in Declerck (2011), Shang and Sain (2009), Tao et al. (2013), Addad et al. (2012), Lu et al. (2012), Goto and Takahashi (2012), Nait-Sidi-Moh et al. (2009), Addad et al. (2011), Su and Woeginger (2011), Ferreira Cândido et al. (2018), and Adzkiya et al. (2015).

In van den Boom and De Schutter (2006, 2012) switching max-plus linear systems were introduced as an extension of max-plus linear systems. Such a system can switch between different modes, where in each mode the system is described by a max-plus-linear state equation and a max-plus-linear output equation, with different system matrices for each mode.

6.2 Identification and verification

In Schullerus et al. (2006) the problem of designing adequate input signals for state space identification of max-plus linear systems is addressed. This work improves on existing methods by adding constraints that arise from safety or performance requirements on the input and output signals, while also reducing the computational burden.

Observer design for max-plus linear systems is considered in Hardouin et al. (2010) and Hardouin et al. (2017). Stochastic filtering of max-plus linear systems with bounded disturbances is discussed in Santos-Mendes et al. (2019).

Adzkiya et al. (2013) develop a method to generate finite abstractions (i.e., simplified representations that still capture a given behavior or feature of the original system) of max-plus-linear models. This approach enables the study — in a computationally efficient way — of general properties of the original max-plus-linear model by verifying (via model checking) equivalent logical specifications over the finite abstraction. Related work is reported in Esmaeil Zadeh Soudjani et al. (2016).

6.3 Control

In Katz (2007) the extension of the geometric approach to linear systems over the max-plus algebra is presented. This approach is based on the state space representation rather than using residuation methods; however, it is still different from the MPC approach. The aim is to compute the set of initial states for which there exists a sequence of control vectors that makes the state of system (10) converge in the given space. Related work is described in Shang (2013) and Shang et al. (2016).

An important topic in the study of control for max-plus linear systems is the incorporation of uncertainty in the system parameters. Note that noise and parameter uncertainty in max-plus linear systems appear in a max-plus-multiplicative way, as perturbations of the system parameters (Olsder et al. 1990b), usually leading to delays in the system, even if the uncertainty has zero mean. This perturbation can have a bounded character or it can be modeled in a stochastic way. The bounded case has been studied in Lahaye et al. (1999), Lhommeau et al. (2004), van den Boom and De Schutter (2002b), and Necoara et al. (2009b). In van den Boom and De Schutter (2004) it is shown that the stochastic MPC problem can be recast as a convex optimization problem. To reduce the complexity of the stochastic MPC optimization problem, Farahani et al. (2016) use the moments of a random variable to obtain an approximate solution using less computation time. Xu et al. (2019) introduced chance constraints in the MPC problem for stochastic max-plus linear systems.

Related work on control of max-plus DESs can be found in Amari et al. (2012), Başar and Bernhard (1995), Commault (1998), Declerck and Alaoui (2010), Hruz and Zhou (2007), Maia et al. (2011), McEneaney (2004), Kordonis et al. (2018), Gonçalves et al. (2017), and Wang et al. (2017).

7 Summary

This paper has presented a survey of the basic notions of the max-plus algebra and max-plus linear discrete-event systems (DESs). We have introduced the basic operations of the max-plus algebra and presented some of the main definitions, theorems, and properties of the max-plus algebra. Next, we have given an introduction to max-plus linear DESs and discussed some elementary analysis and control methods for max-plus linear DESs, including worked examples.

Footnotes

  1. The weight of a path is defined as the sum of the weights of the arcs in the path.

  2. The average weight of a path is the weight of the path divided by the length of the path.

  3. Note the analogy between the definition of \(A^{*}\) and the Taylor series expansion of \((I-A)^{-1}\) in conventional algebra. This is related to the solution of linear equations of the form \(x = A \otimes x \oplus b\), see Section 3.2.2.

  4. If λ = ε, then any vector v such that vi = ε whenever the ith column of A contains a non-ε entry will be a max-plus-algebraic eigenvector of A.

  5. Accordingly, \(\varepsilon \otimes \varepsilon = \varepsilon\).

  6. Due to the structure of the system matrix A it is easy to verify that \(\big({A}^{{\scriptscriptstyle \otimes }^{\scriptstyle {k}}}\big)_{11} = {12}^{{\scriptscriptstyle \otimes }^{\scriptstyle {k}}}\) and \(\big({A}^{{\scriptscriptstyle \otimes }^{\scriptstyle {k}}}\big)_{22} = {11}^{{\scriptscriptstyle \otimes }^{\scriptstyle {k}}}\) for all \(k \geqslant 1\). Hence, the relation given in Theorem 1 does not hold in this case.

  7. Recall that in this section we consider SISO systems (cf. Remark 1).

  8. See van den Boom and De Schutter (2002a) for a discussion of causality issues that arise in this context for max-plus linear DESs and that do not play a role for conventional time-driven systems.

References

  1. Addad B, Amari S, Lesage JJ (2010) Analytic calculus of response time in networked automation systems. IEEE Trans Autom Sci Eng 7(4):858–869CrossRefGoogle Scholar
  2. Addad B, Amari S, Lesage JJ (2011) Client-server networked automation systems reactivity: Deterministic and probabilistic analysis. IEEE Trans Autom Sci Eng 8(3):540–548CrossRefGoogle Scholar
  3. Addad B, Amari S, Lesage JJ (2012) Networked conflicting timed event graphs representation in (max, plus) algebra. Discrete Event Dyn Syst Theory Appl 22 (4):429–449zbMATHCrossRefGoogle Scholar
  4. Adzkiya D, De Schutter B, Abate A (2013) Finite abstractions of max-plus-linear systems. IEEE Trans Autom Control 58(12):3039–3053MathSciNetzbMATHCrossRefGoogle Scholar
  5. Adzkiya D, De Schutter B, Abate A (2015) Computational techniques for reachability analysis of max-plus-linear systems. Automatica 53:293–302MathSciNetzbMATHCrossRefGoogle Scholar
  6. Amari S, Demongodin I, Loiseau JJ, Martinez C (2012) Max-plus control design for temporal constraints meeting in timed event graphs. IEEE Trans Autom Control 57(2):462–467MathSciNetzbMATHCrossRefGoogle Scholar
  7. Baccelli F, Cohen G, Olsder G, Quadrat J (1992) Synchronization and Linearity. Wiley, New YorkzbMATHGoogle Scholar
  8. Başar T, Bernhard P (1995) \(\mathrm {H}_{\infty }\)-optimal Control and Related Minimax Design Problems, 2nd edn. Birkhauser, BostonzbMATHGoogle Scholar
  9. Bemporad A, Morari M (1999) Control of systems integrating logic, dynamics, and constraints. Automatica 35(3):407–427MathSciNetzbMATHCrossRefGoogle Scholar
  10. Blyth T, Janowitz M (1972) Residuation Theory. Pergamon Press, OxfordzbMATHGoogle Scholar
  11. Boimond J, Ferrier J (1996) Internal model control and max-algebra: Controller design. IEEE Trans Autom Control 41(3):457–461MathSciNetzbMATHCrossRefGoogle Scholar
  12. Bouillard A, Thierry E (2008) An algorithmic toolbox for network calculus. Discrete Event Dyn Syst 18(1):3–49MathSciNetzbMATHCrossRefGoogle Scholar
  13. Brunsch T, Raisch J (2012) Modeling and control of high-throughput screening systems in a max-plus algebraic setting. Eng Appl of Artif Intell 25(4):720–727CrossRefGoogle Scholar
  14. Brunsch T, Raisch J, Hardouin L (2012) Modeling and control of high-throughput screening systems. Control Eng Practice 20(1):14–23CrossRefGoogle Scholar
  15. Butkovič P (2010) Max-Linear Systems: Theory and algorithms. Springer Monographs in Mathematics. Springer, BerlinzbMATHCrossRefGoogle Scholar
  16. Butkovič P, Jones D (2016) On special cases of the generalized max-plus eigenproblem. SIAM J Matrix Anal Appl 37(3):1002–1021MathSciNetzbMATHCrossRefGoogle Scholar
  17. Camacho E, Bordons C (1995) Model Predictive Control in the Process Industry. Springer, BerlinCrossRefGoogle Scholar
  18. Cochet-Terrasson J, Cohen G, Gaubert S, McGettrick M, Quadrat J (1998) Numerical computation of spectral elements in max-plus algebra. In: Proceedings of the IFAC Conference on System Structure and Control (SSC’98), Nantes, France, pp 699–706Google Scholar
  19. Cohen G, Dubois D, Quadrat J, Viot M (1985) A linear-system-theoretic view of discrete-event processes and its use for performance evaluation in manufacturing. IEEE Trans Autom Control 30(3):210–220MathSciNetzbMATHCrossRefGoogle Scholar
  20. Cohen G, Moller P, Quadrat J, Viot M (1989) Algebraic tools for the performance evaluation of discrete event systems. Proc IEEE 77(1):39–58CrossRefGoogle Scholar
  21. Commault C (1998) Feedback stabilization of some event graph models. IEEE Trans Autom Control 43(10):1419–1423MathSciNetzbMATHCrossRefGoogle Scholar
  22. Cottenceau B, Hardouin L, Boimond JL, Ferrier JL (2001) Model reference control for timed event graphs in dioids. Automatica 37(9):1451–1458zbMATHCrossRefGoogle Scholar
  23. Cuninghame-Green R (1961) Process synchronisation in a steelworks – a problem of feasibility. In: Banbury J, Maitland J (eds) Proceedings of the 2nd International Conference on Operational Research (Aix-en-Provence, France 1960). English Universities Press, London, pp 323–328Google Scholar
  24. Cuninghame-Green R (1962) Describing industrial processes with interference and approximating their steady-state behaviour. Oper Res Q 13(1):95–100CrossRefGoogle Scholar
  25. Cuninghame-Green R (1979) Minimax Algebra, Lecture Notes in Economics and Mathematical Systems, vol 166. Springer, BerlinGoogle Scholar
  26. Cuninghame-Green R, Butkovic P (2003) The equation a × x = by over \((\max \limits ,+)\). Theor Comput Sci 293:3–12zbMATHCrossRefGoogle Scholar
  27. Cuninghame-Green R, Butkovič P (2008) Generalised eigenproblem in max algebra. In: Proceedings of the 9th International Workshop on Discrete Event Systems (WODES’08), Göteborg, Sweden, pp 236–241Google Scholar
  28. David-Henriet X, Hardouin L, Raisch JBC (2016) Model predictive control for discrete event systems with partial synchronization. Automatica 70:9–13MathSciNetzbMATHCrossRefGoogle Scholar
  29. David-Henriet X, Hardouin L, Raisch J (2017) Max-plus-linear systems for modeling and control of manufacturing problems. In: Ghezzi L, Hömberg D, Landry C (eds) Math for the Digital Factory, Mathematics in Industry, vol 27. Springer, Berlin, pp 37–60Google Scholar
  30. De Schutter B (1996) Max-algebraic system theory for discrete event systems. PhD thesis, Faculty of Applied Sciences K.U.Leuven, Leuven, BelgiumGoogle Scholar
  31. De Schutter B (2000) On the ultimate behavior of the sequence of consecutive powers of a matrix in the max-plus algebra. Linear Algebra Appl 307(1–3):103–117MathSciNetzbMATHCrossRefGoogle Scholar
  32. De Schutter B, De Moor B (1995) The extended linear complementarity problem. Math Program 71(3):289–325MathSciNetzbMATHCrossRefGoogle Scholar
  33. De Schutter B, De Moor B (1996) A method to find all solutions of a system of multivariate polynomial equalities and inequalities in the max algebra. Discrete Event Dyn Syst: Theory Appl 6(2):115–138zbMATHCrossRefGoogle Scholar
  34. De Schutter B, De Moor B (1997) The singular value decomposition in the extended max algebra. Linear Algebra Appl 250:143–176MathSciNetzbMATHCrossRefGoogle Scholar
  35. De Schutter B, De Moor B (1998) The QR decomposition and the singular value decomposition in the symmetrized max-plus algebra. SIAM J Matrix Anal Appl 19 (2):378–406MathSciNetzbMATHCrossRefGoogle Scholar
  36. De Schutter B, van den Boom T (2001) Model predictive control for max-plus-linear discrete event systems. Automatica 37(7):1049–1056zbMATHCrossRefGoogle Scholar
  37. De Schutter B, Heemels W, Bemporad A (2002) On the equivalence of linear complementarity problems. Oper Res Lett 30(4):211–222MathSciNetzbMATHCrossRefGoogle Scholar
  38. Declerck P (2011) From extremal trajectories to token deaths in p-time event graphs. IEEE Trans Autom Control 56(2):463–467MathSciNetzbMATHCrossRefGoogle Scholar
  39. Declerck P, Alaoui MKD (2010) Optimal control synthesis of timed event graphs with interval model specifications. IEEE Trans Autom Control 55(2):518–523MathSciNetzbMATHCrossRefGoogle Scholar
  40. Esmaeil Zadeh Soudjani S, Adzkiya D, Abate A (2016) Formal verification of stochastic max-plus-linear systems. IEEE Trans Automat Control 61(10):2861–2876MathSciNetzbMATHCrossRefGoogle Scholar
  41. Subiono FK, van der Woude J (2017) On a generalization of power algorithms over max-plus algebra. Discrete Event Dyn Syst 27:181–203MathSciNetzbMATHCrossRefGoogle Scholar
  42. Farahani SS, van den Boom TJ, van der Weide H, De Schutter B (2016) An approximation method for computing the expected value of max-affine expressions. Eur J Control 27:17–27MathSciNetzbMATHCrossRefGoogle Scholar
  43. Ferreira Cândido R, Hardouin L, Lhommeau M, Santos Mendes R (2018) Conditional reachability of uncertain max plus linear systems. Automatica 94:426–435MathSciNetzbMATHCrossRefGoogle Scholar
  44. Garcia CE, Morari M (1982) Internal model control: a unifying review and some new results. Ind Eng Chem Process Des Dev 21(2):308–323CrossRefGoogle Scholar
  45. Gaubert S (1990) An algebraic method for optimizing resources in timed event graphs. In: Bensoussan A, Lions J (eds) Proceedings of the 9th International Conference on Analysis and Optimization of Systems (Antibes, France, Jun. 1990), Berlin, Germany: Springer-Verlag, Lecture Notes in Control and Information Sciences, vol 144, pp 957–966Google Scholar
  46. Gaubert S (1992) Théorie des systèmes linéaires dans les dioïdes PhD thesis. Ecole Nationale Supérieure des Mines de, Paris, FranceGoogle Scholar
  47. Gaubert S (1993)Google Scholar
  48. Gaubert S (1994) On rational series in one variable over certain dioids. Tech. Rep. 2162 INRIA, Le Chesnay, FranceGoogle Scholar
  49. Gaubert S, Sergeev S (2013) The level set method for the two-sided eigenproblemGoogle Scholar
  50. Giffler B (1960) Mathematical solution of production planning and scheduling problems. Tech. rep., IBM ASDDGoogle Scholar
  51. Giffler B (1963) Scheduling general production systems using schedule algebra. Nav Res Logist Quart 10(3):237–255CrossRefGoogle Scholar
  52. Giffler B (1968) Schedule algebra: a progress report. Nav Res Logist Quart 15 (2):255–280CrossRefGoogle Scholar
  53. Gonçalves V, Maia C, Hardouin L (2017) On max-plus linear dynamical system theory: The regulation problem. Automatica 75:202–209MathSciNetzbMATHCrossRefGoogle Scholar
  54. Gondran M, Minoux M (1976) Eigenvalues and eigenvectors in semimodules and their interpretation in graph theory. In: Proceedings of the 9th International Mathematical Programming Symposium, Budapest, Hungary, pp 333–348Google Scholar
  55. Gondran M, Minoux M (1984a) Graphs and Algorithms. Wiley, ChichesterzbMATHGoogle Scholar
  56. Gondran M, Minoux M (1984b) Linear algebra in dioids: A survey of recent results. Annals of Discrete Mathematics 19:147–163MathSciNetzbMATHGoogle Scholar
  57. Gondran M, Minoux M (1987) Dioïd theory and its applications. In: Algèbres exotiques et systèmes à evénements discrets, CNRS/CNET/INRIA seminar, CNET, Issy-les-Moulineaux, France, pp 57–75Google Scholar
  58. Goto H (2008) Dual representation and its online scheduling method for event-varying DESs with capacity constraints. Int J Control 81(4):651–660MathSciNetzbMATHCrossRefGoogle Scholar
  59. Goto H (2009) Robust MPL scheduling considering the number of in-process jobs. Eng Appl Artif Intell 22(4–5):603–607CrossRefGoogle Scholar
  60. Goto H, Masuda S (2008) Monitoring and scheduling methods for MIMO-FIFO systems utilizing max-plus linear representation. Industrial Engineering & Management Systems 7(1):23–33Google Scholar
  61. Goto H, Takahashi H (2012) Efficient representation of the state equation in max-plus linear systems with interval constrained parameters. IEICE Transactions on Fundamentals of Electronics Communications and Computer Sciences E95A(2):608–612CrossRefGoogle Scholar
  62. Hardouin L, Cottenceau B, Lhommeau M, Corronc EL (2009) Interval systems over idempotent semiring. Linear Algebra Appl 431(5-7):855–862MathSciNetzbMATHCrossRefGoogle Scholar
  63. Hardouin L, Maia CA, Cottenceau B, Lhommeau M (2010) Observer design for (max,+) linear systems. IEEE Trans Autom Control 55(2):538–543zbMATHCrossRefGoogle Scholar
  64. Hardouin L, Shang Y, Maia CA, Cottenceau B (2017) Observer-based controllers for max-plus linear systems. IEEE Trans Autom Control 62(5):2153–2165MathSciNetzbMATHCrossRefGoogle Scholar
  65. Hardouin L, Cottenceau B, Shang Y, Raisch J (2018) Control and state estimation for max-plus linear systems. Foundations and Trends in Systems and Control 6(1):1–116CrossRefGoogle Scholar
  66. Heemels W, De Schutter B, Bemporad A (2001) Equivalence of hybrid dynamical models. Automatica 37(7):1085–1091zbMATHCrossRefGoogle Scholar
  67. Heidergott B, Olsder G, van der Woude J (2006) Max Plus at Work. Princeton University Press, PrincetonGoogle Scholar
  68. Houssin L, Lahaye S, Boimond JL (2013) Control of (max,+)-linear systems minimizing delays. Discrete Event Dyn Syst 23(3):261–276MathSciNetzbMATHCrossRefGoogle Scholar
  69. Hruz B, Zhou M (2007) Modeling and Control of Discrete-event Dynamic Systems, 1st edn. Springer, LondonzbMATHGoogle Scholar
  70. Ilchmann A (1989) Time-varying control systems : a geometric approach. IMA J Math Control Inf 6:411–440MathSciNetzbMATHCrossRefGoogle Scholar
  71. Karp R (1978) A characterization of the minimum cycle mean in a digraph. Discret Math 23:309–311MathSciNetzbMATHCrossRefGoogle Scholar
  72. Katz R (2007) Max-plus (a, B)-invariant spaces and control of timed discrete-event systems. IEEE Trans Autom Control 52(2):229–241MathSciNetzbMATHCrossRefGoogle Scholar
  73. Komenda J, Lahaye S, Boimond JL, van den Boom T (2018) Max-plus algebra in the history of discrete event systems. Annu Rev Control 45:240–249MathSciNetCrossRefGoogle Scholar
  74. Kordonis I, Maragos P, Papavassilopoulos G (2018) Stochastic stability in max-product and max-plus systems with Markovian jumps. Automatica 92:123–132MathSciNetzbMATHCrossRefGoogle Scholar
  75. Lahaye S, Boimond JL, Hardouin L (1999) Timed event graphs with variable resources: asymptotic behavior, representation in (min,+) algebra. Journal Europé,en des Systèmes Automatisés 33:1015–1032Google Scholar
  76. Lahaye S, Boimond JL, Ferrier JL (2008) Just-in-time control of time-varying discrete event dynamic systems in (max,+) algebra. Int J Prod Res 46(19):5337–5348zbMATHCrossRefGoogle Scholar
  77. Lhommeau M, Hardouin L, Cottenceau B, Jaulin L (2004) Interval analysis and dioid: application to robust controller design for timed event graphs. Automatica 40(11):1923–1930MathSciNetzbMATHCrossRefGoogle Scholar
  78. Libeaut L, Loiseau J (1995) Admissible initial conditions and control of timed event graphs. In: Proceedings of the 34th IEEE Conference on Decision and Control, New Orleans, Louisiana, pp 2011–2016Google Scholar
  79. Liebeherr J (2017) Duality of the max-plus and min-plus network calculus. Foundations and Trends in Networking 11(3–4):139–282zbMATHCrossRefGoogle Scholar
  80. Lopes G, Kersbergen B, van den Boom T, De Schutter B, Babuška R (2014) Modeling and control of legged locomotion via switching max-plus models. IEEE Trans Robot 30(3):652–665CrossRefGoogle Scholar
  81. Lu Q, Madsen M, Milata M, Ravn S, Fahrenberg U, Larsen KG (2012) Reachability analysis for timed automata using max-plus algebra. J Logic and Algebr Progr 81(3):298–313MathSciNetzbMATHCrossRefGoogle Scholar
  82. Maciejowski J (2002) Predictive Control with Constraints. Prentice Hall, HarlowzbMATHGoogle Scholar
  83. Maia C, Hardouin L, Santos-Mendes R, Cottenceau B (2003) Optimal closed-loop control of timed event graphs in dioids. IEEE Trans Autom Control 48 (12):2284–2287MathSciNetzbMATHCrossRefGoogle Scholar
  84. Maia C, Andrade C, Hardouin L (2011) On the control of max-plus linear system subject to state restriction. Automatica 47(5):988–992MathSciNetzbMATHCrossRefGoogle Scholar
  85. McEneaney W (2004) Max-plus eigenvector methods for nonlinear h-infinity problems: Error analysis. SIAM J Control Optim 43:379–412MathSciNetzbMATHCrossRefGoogle Scholar
  86. Menguy E, Boimond J, Hardouin L (1997) A feedback control in max-algebra. In: Proceedings of the European Control Conference (ECC’97), Brussels, Belgium, paper 487Google Scholar
  87. Menguy E, Boimond J, Hardouin L (1998) Adaptive control for linear systems in max-algebra. In: Proceedings of the International Workshop on Discrete Event Systems (WODES’98), Cagliari, Italy, pp 481–488Google Scholar
  88. Menguy E, Boimond J, Hardouin L, Ferrier J (2000a) A first step towards adaptive control for linear systems in max algebra. Discrete Event Dyn Syst: Theory and Applications 10(4):347–367MathSciNetzbMATHCrossRefGoogle Scholar
  89. Menguy E, Boimond J, Hardouin L, Ferrier J (2000b) Just-in-time control of timed event graphs: Update of reference input, presence of uncontrollable input. IEEE Trans Automat Control 45(11):2155–2158MathSciNetzbMATHCrossRefGoogle Scholar
  90. Nait-Sidi-Moh A, Manier MA, EI Moudni A (2009) Spectral analysis for performance evaluation in a bus network. Eur J Oper Res 193(1):289–302MathSciNetzbMATHCrossRefGoogle Scholar
  91. Necoara I, De Schutter B, van den Boom T, Hellendoorn H (2007) Stable model predictive control for constrained max-plus-linear systems. Discrete Event Dyn Syst: Theory and Applications 17(3):329–354MathSciNetzbMATHCrossRefGoogle Scholar
  92. Necoara I, De Schutter B, van den Boom T, Hellendoorn H (2009a) Robust control of constrained max-plus-linear systems. Int J Robust Nonlin Control 19 (2):218–242MathSciNetzbMATHCrossRefGoogle Scholar
  93. Necoara I, De Schutter B, van den Boom T, Hellendoorn H (2009b) Robust control of constrained max-plus-linear systems. Int J Robust Nonlin Control 19 (2):218–242MathSciNetzbMATHCrossRefGoogle Scholar
  94. Olsder G (1986) On the characteristic equation and minimal realizations for discrete-event dynamic systems. In: Bensoussan A, Lions J (eds) Proceedings of the 7th International Conference on Analysis and Optimization of Systems (Antibes, France), Berlin, Germany: Springer-Verlag, Lecture Notes in Control and Information Sciences, vol 83, pp 189–201Google Scholar
  95. Olsder G, Roos C (1988) Cramer and cayley-Hamilton in the max algebra. Linear Algebra Appl 101:87–108MathSciNetzbMATHCrossRefGoogle Scholar
  96. Olsder G, Resing J, de Vries R, Keane M, Hooghiemstra G (1990a) Discrete event systems with stochastic processing times. IEEE Trans Automat Contr 35(3):299–302MathSciNetzbMATHCrossRefGoogle Scholar
  97. Olsder G, Resing J, de Vries R, Keane M, Hooghiemstra G (1990b) Discrete event systems with stochastic processing times. IEEE Transactions on Automatic Control, pp 299–302Google Scholar
  98. Rawlings J, Mayne D (2009) Model predictive control: Theory and design nob hill publishing. Madison, WisconsinGoogle Scholar
  99. Santos-Mendes R, Hardouin L, Lhommeau M (2019) Stochastic filtering of max-plus linear systems with bounded disturbances. IEEE Transactions on Automatic Control To appear.  https://doi.org/10.1109/TAC.2018.2887353
  100. Schullerus G, Krebs V, De Schutter B, van den Boom T (2006) Input signal design for identification of max-plus-linear systems. Automatica 42(6):937–943
  101. Shang Y (2013) Towards geometric control of max-plus linear systems with applications to queueing networks. Int J Syst Sci 44(1):34–49
  102. Shang Y, Sain MK (2009) Fixed poles in the model matching problem for systems over semirings. Linear Algebra Appl 430(8):2368–2388
  103. Shang Y, Hardouin L, Lhommeau M, Maia C (2016) An integrated control strategy to solve the disturbance decoupling problem for max-plus linear systems with applications to a high throughput screening system. Automatica 63:338–348
  104. Su R, Woeginger G (2011) String execution time for finite languages: Max is easy, min is hard. Automatica 47(10):2326–2329
  105. Tao TG, Liu GP, Mu XW (2013) Max-plus matrix method and cycle time assignability and feedback stabilizability for min-max-plus systems. Math Control Signal Syst 25(2):197–229
  106. van den Boom T, De Schutter B (2002a) Properties of MPC for max-plus-linear systems. Eur J Control 8(5):453–462
  107. van den Boom T, De Schutter B (2006) Modelling and control of discrete event systems using switching max-plus-linear systems. Control Eng Pract 14(10):1199–1211
  108. van den Boom T, De Schutter B (2012) Modeling and control of switching max-plus-linear systems with random and deterministic switching. Discrete Event Dyn Syst 12(3):293–332
  109. van den Boom T, De Schutter B (2014) Model predictive control of manufacturing systems with max-plus algebra. In: Campos J, Seatzu C, Xie X (eds) Formal Methods in Manufacturing. CRC Press, chap 12
  110. van den Boom TJ, De Schutter B (2002b) MPC for perturbed max-plus-linear systems. Syst Control Lett 45(1):21–33
  111. van den Boom TJ, De Schutter B (2004) Model predictive control for perturbed max-plus-linear systems: a stochastic approach. Int J Control 77(3):302–309
  112. Walkup E, Borriello G (1998) A general linear max-plus solution technique. In: Gunawardena J, Atiyah M, Taylor J (eds) Idempotency: Proceedings of the Workshop on Idempotency, Bristol, UK, October 1994. Cambridge University Press, Cambridge
  113. Wang C, Tao Y, Yang P, Liu Z (2017) Dimension reduction and feedback stabilization for max-plus linear systems and applications in VLSI array processors. IEEE Trans Autom Control 62(12):6353–6368
  114. Xu J, De Schutter B, van den Boom T (2014) Model predictive control for max-plus-linear systems via optimistic optimization. In: Proceedings of the 12th IFAC-IEEE International Workshop on Discrete Event Systems (WODES'14), Cachan, France, pp 111–116
  115. Xu J, van den Boom T, De Schutter B (2019) Model predictive control for stochastic max-plus linear systems with chance constraints. IEEE Trans Autom Control 64(1):337–342

Copyright information

© The Author(s) 2019

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Delft Center for Systems and Control, Delft University of Technology, Delft, The Netherlands
  2. Engineering Systems and Services, Delft University of Technology, Delft, The Netherlands
