1 Introduction

We consider the mathematical modelling and control of phenomena of collective dynamics under uncertainty. These phenomena have been studied in several fields, such as socio-economics, biology, and robotics, where systems of interacting particles are given by self-propelled agents, such as animals and robots, see e.g. [1, 7, 15, 24, 32]. Those particles interact according to a possibly nonlinear model encoding various social rules, such as attraction, repulsion, and alignment. A particular feature of such models is their rich dynamical structure, which includes different types of emerging patterns, such as consensus, flocking, and milling [17, 23, 29, 45, 50]. Understanding the impact of control inputs on such complex systems is of great relevance for applications. Results in this direction allow the design of optimized actions such as collision-avoidance protocols for swarm robotics [14, 26, 46, 48], pedestrian evacuation in crowd dynamics [16, 22], supply chain policies [18, 34], and the quantification of interventions in traffic management [31, 49, 51] or in opinion dynamics [27, 28]. Further, the introduction of uncertainty into the mathematical modelling of real-world phenomena seems unavoidable for applications, since often at most statistical information on the modelling parameters is available. These parameters are typically estimated from experiments or derived from heuristic observations [5, 8, 37]. To produce effective predictions and to describe and understand physical phenomena, we may incorporate parameters reflecting the uncertainty in the interaction rules and/or external disturbances [13].

Here, we are concerned with the robustness of controls influencing the evolution of the collective motion of an interacting agent system. The controls we consider aim to stabilize the system’s dynamics under external uncertainty. From a mathematical point of view, a description of self-organized models is provided by complex system theory, where the overall dynamics are described by a large-scale system of ordinary differential equations (ODEs).

More precisely, we consider the control of high-dimensional dynamics accounting for N agents with state \(v_i(t,\theta ) \in {\mathbb {R}}^d,\, i=1,\ldots ,N\), evolving according to

$$\begin{aligned} \frac{d}{dt}{v}_i(t,\theta ) = \sum _{j=1}^N a_{ij}(v_j(t,\theta )-v_i(t,\theta ))+ u_i(t,\theta ) +\sum _{k=1}^Z \theta _k, \qquad v_i(0)=v_{i}^0, \end{aligned}$$
(1.1)

where \(A=[a_{ij}]\in \mathbb {R}^{N\times N}\) defines the nature of the pairwise interaction among agents, and \(\theta =(\theta _1,\ldots ,\theta _Z)^\top \in \mathbb {R}^{Z\times d}\) is a random input vector with independent components, distributed according to a given probability density \(\rho \equiv \rho _1\otimes \ldots \otimes \rho _Z\). The control signal \(u_i(t,\theta )\in \mathbb {R}^d\) is designed to stabilize the state toward a target state \({\bar{v}}\in \mathbb {R}^{N\times d}\), and its action is influenced by the random parameter \(\theta \). This is also due to the fact that later we will be interested in closed–loop or feedback controls on the state \((v_1,\dots ,v_N)\), which in turn depend on the unknown parameter \(\theta \).

Of particular interest will be controls designed via the minimization of a linear quadratic (parametric) regulator functional such as

$$\begin{aligned} \min _{u(\cdot ,\theta )} {J}(u;v^0):= \int _0^{+\infty } \exp ( - r \tau ) \left[ v^\top Q v +\nu u^\top R u \right] \,d\tau , \end{aligned}$$
(1.2)

with Q a positive semi-definite matrix of order N, R a positive definite matrix of order N, and r a discount factor. In this case, the linear quadratic dynamics allow for an optimal control \(u^*\) stabilising the desired state \(v_d=0\), expressed in feedback form and obtained by solving the associated matrix Riccati equations. These aspects will be addressed in more detail below.
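As a concrete illustration, the discounted Riccati equation can be solved with standard numerical tools by absorbing the discount into the drift matrix. The following sketch uses SciPy and the all-to-all interaction matrix introduced later in Sect. 2; all parameter values are illustrative only.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative parameters (hypothetical values)
N, p_bar, nu, r = 4, 1.0, 0.5, 0.1

# All-to-all Laplacian-type interaction matrix, cf. (2.4)
A = (p_bar / N) * np.ones((N, N)) - p_bar * np.eye(N)
B = np.eye(N)
Q = np.eye(N) / N
R = (nu / N) * np.eye(N)          # so that B R^{-1} B^T = (N/nu) Id_N

# The discounted ARE  0 = -rK + KA + A^T K - (N/nu) K^2 + Q  is the
# standard ARE for the shifted drift A - (r/2) Id_N, since K is symmetric.
K = solve_continuous_are(A - (r / 2) * np.eye(N), B, Q, R)

# Residual of the discounted Riccati equation; should vanish
res = -r * K + K @ A + A.T @ K - (N / nu) * K @ K + Q
```

Here `solve_continuous_are` returns the stabilizing solution; the shift by \(r/2\) is an exact reformulation of the discounted equation.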

In order to assess the performance of the controls and quantify their robustness, we propose estimates based on the concept of \({\mathcal {H}}_\infty \) control. Different approaches have been studied in this setting and applied to first-order and higher-order multiagent systems, see e.g. [21, 40,41,42,43,44]; in particular, for an interpretation of \({\mathcal {H}}_\infty \) control as a dynamic game we refer to [6]. Here we will study an approach based on the derivation of sufficient conditions, in terms of linear matrix inequalities (LMIs), for the \({\mathcal {H}}_\infty \) control problem. In this way, consensus robustness is ensured for a general feedback formulation of the control action. Additionally, we consider the large–agent limit and show that robustness is guaranteed independently of the number of agents.

Furthermore, we will discuss the numerical realization of system (1.1) employing uncertainty quantification techniques. In general, at the numerical level, techniques for uncertainty quantification can be classified into non-intrusive and intrusive methods. In a non-intrusive approach, the underlying model is solved for fixed samples with deterministic schemes, and statistics of interest are determined by numerical quadrature; typical examples are Monte Carlo and stochastic collocation methods [19, 54]. In the intrusive case, the dependency of the solution on the stochastic input is described by a truncated series expansion in terms of orthogonal functions, and a new system is deduced for the unknown coefficients of the expansion. One of the most popular techniques of this type is based on stochastic Galerkin (SG) methods. In particular, generalized polynomial chaos (gPC) has gained increasing popularity in uncertainty quantification (UQ), for which spectral convergence in the random field is observed under suitable regularity assumptions [19, 35, 36, 52, 54]. The methods developed here make use of stochastic Galerkin for the microscopic dynamics, while in the mean-field case we combine SG in the random space with a Monte Carlo method in the physical variables.
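To fix ideas, a non-intrusive computation can be sketched as follows: the deterministic alignment dynamics are integrated by forward Euler for i.i.d. samples of a scalar uncertainty, and statistics are obtained by plain averaging. The solver, parameters, and sample sizes below are hypothetical choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def solve_deterministic(theta_sum, v0, p_bar=1.0, dt=0.01, steps=500):
    # Forward Euler for the uncontrolled dynamics
    #   dv_i/dt = p_bar * (mean(v) - v_i) + theta_sum   (scalar states, d = 1)
    v = v0.copy()
    for _ in range(steps):
        v += dt * (p_bar * (v.mean() - v) + theta_sum)
    return v

# Non-intrusive Monte Carlo: one deterministic solve per sample of theta
v0 = rng.normal(size=10)
samples = [solve_deterministic(rng.uniform(-0.1, 0.1), v0) for _ in range(200)]
mean_state = np.mean(samples, axis=0)   # MC estimate of E_theta[v(T)]
```

Each sample is an independent deterministic solve, which is why non-intrusive methods parallelize trivially but converge slowly in the number of samples.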

The manuscript is organized as follows: in Sect. 2 we introduce the problem setting and propose different feedback control laws; in Sect. 3 we reformulate the problem in the setting of \({\mathcal {H}}_\infty \) control and provide conditions for the robustness of the controls in the microscopic and mean-field cases. Section 4 is devoted to the description of numerical strategies for the simulation of the agent systems, and to different numerical experiments which assess the performance of the controls and compare the different methods.

2 Control of Interacting Agent System with Uncertainties

We begin by introducing the notation for the control of high-dimensional systems of interacting agents with random inputs. We consider the evolution of N agents with state \(v(t,\theta )\in {\mathbb {R}}^{ N\times d}\) as follows

$$\begin{aligned} \frac{d}{dt}{v}_i(t,\theta ) = \frac{1}{N}\sum _{j=1}^N \bar{p}(v_j(t,\theta )-v_i(t,\theta )) + u_i(t,\theta ) + \sum _{k = 1}^Z \theta _k\, \end{aligned}$$
(2.1)

with deterministic initial data \(v_i(0)=v_{i}^0\) for \(i=1,\ldots ,N\). For the rest of the article, we focus on the model (2.1), namely Eq. (1.1) with \(a_{ij} = \bar{p}/N\), where \(\bar{p} \in {\mathbb {R}}\) is a constant, equal for every pair of agents. The \(\theta _k \in \Omega _k\subseteq {\mathbb {R}}^{d}\) for \(k=1,\ldots ,Z\) are random inputs, distributed according to a compactly supported probability density \(\rho \equiv \rho _1 \otimes \dots \otimes \rho _Z\), i.e., \(\rho _k(\theta )\ge 0\) a.e., \(\text {supp}(\rho _k)\subseteq \Omega _k \) and \(\int _{\Omega _k} \rho _k(\theta )\, d\theta =1\). For simplicity, we also assume that the random inputs have zero average, \({\mathbb {E}}[\theta _k] =0\) (see Remark 2.1 for the general case \({\mathbb {E}}[\theta _k] \ne 0\)). The control signal \(u(t,\theta )\in \mathbb {R}^{N\times d}\) is designed by minimizing the (parameterized) objective

$$\begin{aligned} u^*(\cdot ,\theta ) = \arg \min _{u(\cdot ,\theta )} {J}(u;v^0):= \int _0^{+\infty } \exp (-r \tau ) \left( \frac{1}{N}\sum _{j=1}^N \left( \vert v_j(\tau ,\theta ) - \bar{v} \vert ^2 + \nu \vert u_j(\tau ,\theta )\vert ^2\right) \right) \,d\tau , \end{aligned}$$
(2.2)

where \(\nu >0\) is a penalization parameter for the control energy, and \(\vert \cdot \vert \) denotes the usual Euclidean norm in \({\mathbb {R}}^d\). The discount factor \(\exp (-r \tau )\) is introduced to ensure that the integral is well-posed.

We assume that \(\bar{v}\) is a prescribed consensus point; namely, in the context of this work we are interested in reaching a consensus velocity \(\bar{v}\in \mathbb {R}^d\) such that \(v_{1}=\ldots =v_{N}=\bar{v}\), and w.l.o.g. we can assume \(\bar{v} = 0\). Note that \(\bar{v}=0\) is also the steady state of the dynamics in the absence of disturbances. Hence, we may view \(u(\cdot ,\theta )\) as a stabilizing control for the zero steady state of the system. Furthermore, we will be interested in feedback controls u.

Recall that the (deterministic) linear model (2.1), without uncertainties, allows a feedback stabilization obtained by solving the resulting optimal control problem through a Riccati equation [2, 3, 33]. The functional J in (2.2), in the absence of disturbances, reads as follows

$$\begin{aligned} J(u;v^0) = \int _0^{+\infty } \exp (-r \tau ) \left( v^\top Q v+ \nu u^\top R u \right) \, d\tau \end{aligned}$$

where \(Q \equiv R = \frac{1}{N}\text {Id}_N\). In this case the controlled dynamics (2.1) are reformulated in matrix–vector notation

$$\begin{aligned} \frac{d}{dt}v(t)= A v(t) + B u(t), \qquad u(t) = - \frac{N}{\nu } K v(t), \end{aligned}$$
(2.3)

with \(B=\text {Id}_N\) the identity matrix of order N, and

$$\begin{aligned} (A)_{ij} = {\left\{ \begin{array}{ll} &{}a_d=\frac{\bar{p}(1-N)}{N},\qquad i=j,\\ &{}a_o=\frac{\bar{p}}{N},\qquad \qquad i\ne j.\\ \end{array}\right. } \end{aligned}$$
(2.4)

The matrix K associated with the feedback form of the optimal control has to fulfill a Riccati matrix equation of the following form

$$\begin{aligned} 0= - r K +KA+A^\top K-\frac{N}{\nu } KK + Q. \end{aligned}$$
(2.5)

For a general linear system, we need to solve an \(N\times N\) matrix equation to find K, which can be costly for large-scale agent-based dynamics. However, we can use the same argument as in [3] and exploit the symmetric structure of the Laplacian matrix A to reduce the algebraic Riccati equation. Unlike [3], which investigates the case of a finite terminal time, here we state the following proposition for the infinite-horizon case with discount factor r.

Proposition 2.1

(Properties of the Algebraic Riccati Equation (ARE)) For the linear dynamics (2.3), the solution of the Riccati equation (2.5) reduces to the solution of

$$\begin{aligned} \begin{aligned} 0&= -r k_d -\frac{2\bar{p}(N-1)}{N}({k}_d -k_o) - \frac{N}{\nu }\left( {k}_d^2+(N-1){k}_o^2\right) + \frac{1}{N}, \, \\ 0&= -r k_o + \frac{ 2\bar{p}}{N}(k_d-k_o) - \frac{N}{\nu }\left( 2{k}_d {k}_o+(N-2){k}_o^2\right) , \end{aligned} \end{aligned}$$
(2.6)

where \(k_d, k_o\) are the entries of the matrix K in the algebraic Riccati equation (2.5). In particular, K is given by

$$\begin{aligned} (K)_{ij}=\delta _{ij}k_d+(1-\delta _{ij})k_o, \end{aligned}$$

where \(\delta _{ij}\) is the Kronecker delta.
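Numerically, the reduction of Proposition 2.1 can be exploited by solving the two scalar equations (2.6) with a root finder and reassembling the structured matrix K. The sketch below uses SciPy's `fsolve` with hypothetical parameter values and checks the result against the full matrix equation (2.5).

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative parameters (hypothetical values)
N, p_bar, nu, r = 10, 1.0, 0.5, 0.1

def reduced_are(x):
    # The two scalar equations (2.6) for the diagonal and off-diagonal entries
    kd, ko = x
    f1 = (-r * kd - 2 * p_bar * (N - 1) / N * (kd - ko)
          - N / nu * (kd**2 + (N - 1) * ko**2) + 1 / N)
    f2 = (-r * ko + 2 * p_bar / N * (kd - ko)
          - N / nu * (2 * kd * ko + (N - 2) * ko**2))
    return [f1, f2]

kd, ko = fsolve(reduced_are, x0=[1 / N, 0.0])

# Reassemble the structured K and check it solves the full equation (2.5)
K = ko * np.ones((N, N)) + (kd - ko) * np.eye(N)
A = (p_bar / N) * np.ones((N, N)) - p_bar * np.eye(N)
res = -r * K + K @ A + A.T @ K - (N / nu) * K @ K + np.eye(N) / N
```

Solving two scalar equations instead of an \(N\times N\) matrix equation is what makes the approach viable for large-scale agent dynamics.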

In order to allow the limit of infinitely many agents \(N\rightarrow \infty \), we introduce the following scalings

$$\begin{aligned} {k}_d \leftarrow N k_d, \quad {k}_o \leftarrow N^2 k_o,\quad \alpha (N) = \frac{N-1}{N}, \end{aligned}$$

and keeping the same notation also for the scaled variables \(k_d,k_o\), the system (2.6) reads

$$\begin{aligned} \begin{aligned} 0&= -r k_d -2\bar{p}\alpha (N)\left( k_d- \frac{k_o}{N}\right) - \frac{1}{\nu }\left( k_d^2+\frac{\alpha (N)}{N}k_o^2\right) + 1, \\ 0&= -r k_o + 2\bar{p}\left( k_d- \frac{k_o}{N}\right) - \frac{1}{\nu }\left( 2k_dk_o+\alpha (N)k_o^2-\frac{1}{N}k_o^2\right) . \end{aligned} \end{aligned}$$
(2.7)

The previous considerations motivate extending formula (2.3) to the parametric case (2.1). Hence, in the presence of parametric uncertainty, the feedback control is written explicitly as follows

$$\begin{aligned} u_i(t,\theta ) = - \frac{1}{\nu } \left( K v(t,\theta ) \right) _i = - \frac{1}{\nu }\left( \left( k_d-\frac{k_o}{N}\right) v_i(t,\theta ) + \frac{k_o}{N} \sum _{j=1}^N v_j(t,\theta )\right) . \end{aligned}$$
(2.8)
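A minimal simulation sketch of (2.1) with the feedback (2.8), in dimension d = 1 and with placeholder scaled gains \(k_d, k_o\) (in practice these come from (2.7)), illustrates the stabilizing effect: the state is driven to a small neighbourhood of the consensus \(\bar{v}=0\), with a residual offset proportional to the disturbance.

```python
import numpy as np

# Hypothetical parameters; kd, ko are placeholder scaled Riccati gains
rng = np.random.default_rng(0)
N, p_bar, nu, Z, dt, steps = 50, 1.0, 0.5, 2, 0.01, 2000
kd, ko = 0.8, 0.3
theta = rng.uniform(-0.05, 0.05, size=Z)    # one fixed zero-mean sample of theta

v = rng.normal(size=N)
for _ in range(steps):
    interaction = p_bar * (v.mean() - v)    # (1/N) sum_j p_bar (v_j - v_i)
    u = -(1 / nu) * ((kd - ko / N) * v + (ko / N) * v.sum())   # feedback (2.8)
    v = v + dt * (interaction + u + theta.sum())
```

After integration the agents have essentially reached consensus; the common limit value is of the order of the disturbance, in line with the robustness analysis of Sect. 3.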

The question arises whether the given feedback is robust with respect to the uncertainties \(\theta \). In the following, we will provide a measure for the robustness of the control (2.8) in the framework of \({\mathcal {H}}_\infty \) control. Some additional remarks allow us to generalize formula (2.8).

Remark 2.1

(Non-zero average) In the presence of general uncertainties with known expectations, we modify the control (2.8) for model (2.1) by including a correction term given by the expected values of the random inputs,

$$\begin{aligned} u_i(t,\theta ) = - \frac{1}{\nu }\left( \left( k_d-\frac{k_o}{N}\right) v_i(t,\theta ) + \frac{k_o}{N} \sum _{j=1}^N v_j(t,\theta )\right) - \sum _{k = 1}^Z \mu _k, \end{aligned}$$
(2.9)

where \(\mu _k={\mathbb {E}}[\theta _k]\) for \(k=1,\ldots ,Z\).

Remark 2.2

(Averaged control) In the case of a deterministic feedback control, we may consider the expectation of the objective (2.2) subject to the noisy model (2.1)

$$\begin{aligned} {\bar{u}}^*(\cdot ) = \arg \min _{u(\cdot )} {\mathbb {E}} \left[ \int _0^{+\infty } \frac{1}{2}( v^\top Q v + \nu u^\top R u) \ dt \right] , \end{aligned}$$

where we introduce the matrices \(Q=R= \frac{1}{N} \text {Id}_N\). In this case, the following deterministic optimal feedback control is deduced

$$\begin{aligned} {\bar{u}}_i(t) = -\frac{1}{\nu } \left( {k}_d {\mathbb {E}}\left[ v_i(t,\theta ) \right] + \frac{{k}_o}{N} \sum _{j \ne i}^N {\mathbb {E}}\left[ v_j(t,\theta ) \right] \right) - \sum _{k = 1}^Z \mu _k, \end{aligned}$$
(2.10)

where \(\mu _k={\mathbb {E}}[\theta _k]\) for \(k=1,\ldots ,Z\), and \({k}_d\), \({k}_o\) satisfy equations (2.7).

We refer to Appendix 1 for detailed computations for the synthesis of (2.10).

Remark 2.3

(Time-independent uncertainty) In this work we consider an uncertainty \(\theta \) that is constant in time, since we want to use as little information as possible about the system: we do not assume knowledge of the evolution of \(\theta \) in time. We can view the constant \(\theta \) as the maximum value over time drawn from the support of its distribution,

$$\begin{aligned} \theta = \max _t \theta (t). \end{aligned}$$

Taking the maximum value of \(\theta \) is equivalent to considering the extremal cases in the support, where the uncertainty is maximal. In doing so we aim for a robust control in the worst-case scenario. Nevertheless, the results we will derive in this article (in particular Theorem 3.2) also hold for a time-dependent uncertainty in the microscopic case, see e.g. [21, 42]. For the mean-field limit considered in Sect. 3.1, a time-dependent noise would lead to a diffusion term; this case is not treated in the manuscript but is planned as future work.

3 Robustness in the \({\mathcal {H}}_\infty \) Setting

In the context of \({\mathcal {H}}_\infty \) theory, controllers are synthesized to achieve stabilization with guaranteed performance. In this section, we exploit the theory of linear matrix inequalities (LMIs) to show the robustness of the control. The introduction of LMI methods in control has dramatically expanded the types and complexity of the systems we can control. In particular, it is possible to use LMI solvers to synthesize optimal or suboptimal controllers and estimators for multiple classes of state-space systems; without giving a complete list of references, we refer to [9, 20, 38, 47].

Consider the linear system (2.1) with control (2.8) in the following reformulation

$$\begin{aligned} \frac{d}{dt}{v}(t) = {\widehat{A}} v(t)+ {\widehat{B}} \theta , \end{aligned}$$
(3.1)

where we consider the random input vector \(\theta = (\theta _1,\ldots ,\theta _Z)^\top \in {\mathbb {R}}^{Z\times d}\), and the matrices

$$\begin{aligned} {\widehat{A}} = A -\frac{1}{\nu }{K}, \qquad {\widehat{B}} = \mathbbm {1}_{N\times Z}. \end{aligned}$$
(3.2)

with \(\mathbbm {1}\) a matrix of ones of dimension \(N\times Z\). We introduce the frequency transfer function \(\hat{G}(s):= (s \text {Id}_N-{\widehat{A}})^{-1} {\widehat{B}}\), such that \({\hat{G}} \in R{\mathcal {H}}_\infty \), the set of proper rational functions with no poles in the closed right half of the complex plane. The norm \(\Vert \cdot \Vert _{{\mathcal {H}}_\infty }\) measures the size of the transfer function in the following sense:

$$\begin{aligned} \Vert \hat{G} \Vert _{{\mathcal {H}}_\infty } = \text {ess}\sup _{\omega \in {\mathbb {R}}} {\bar{\sigma }} (\hat{G}(i \omega )), \end{aligned}$$
(3.3)

where, for a given matrix P, \({{\bar{\sigma }}} (P)\) denotes the largest singular value of P. The general \({\mathcal {H}}_\infty \)-optimal control problem consists of finding a stabilizing feedback controller \(u = -\frac{1}{\nu }K v\) which minimizes the cost function (3.3); we refer to Appendix 1 and to [21] for more details.

However, the direct minimization of the cost \(\Vert \hat{G} \Vert _{{\mathcal {H}}_\infty }\) is in general a very hard task, and possibly unfeasible by direct methods. To reduce the complexity, one possibility consists in finding conditions on the stabilizing controller that guarantee a norm bound for a given threshold \(\gamma >0\),

$$\begin{aligned} \Vert \hat{G} \Vert _{{\mathcal {H}}_\infty } \le \gamma . \end{aligned}$$
(3.4)

Hence, the robustness of a given control \(u= -\frac{1}{\nu }K v\) is measured in terms of the smallest \(\gamma \) satisfying (3.4). In order to provide a quantitative result, we rely on the following lemma.
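For the structured matrices of (3.2), the bound (3.4) can be checked directly by sampling the largest singular value of \(\hat{G}(i\omega )\) on a frequency grid. The sketch below uses illustrative parameters and placeholder gains; it also exploits two structural facts noted in the comments, namely that \({\widehat{A}}\) here is symmetric and Hurwitz and that \({\widehat{B}}\) has rank one.

```python
import numpy as np

# Illustrative parameters; kd, ko are placeholder scaled gains from (2.7)
N, Z, p_bar, nu = 20, 3, 1.0, 0.5
kd, ko = 0.8, 0.3

A = (p_bar / N) * np.ones((N, N)) - p_bar * np.eye(N)
K = (ko / N) * np.ones((N, N)) + (kd - ko / N) * np.eye(N)   # gain of (2.8)
A_hat = A - K / nu
B_hat = np.ones((N, Z))

def sigma_max(omega):
    # Largest singular value of G(i*omega) = (i*omega*Id - A_hat)^{-1} B_hat
    G = np.linalg.solve(1j * omega * np.eye(N) - A_hat, B_hat)
    return np.linalg.svd(G, compute_uv=False)[0]

omegas = np.linspace(0.0, 50.0, 2001)
h_inf = max(sigma_max(w) for w in omegas)    # grid approximation of (3.3)

# Since A_hat is symmetric and Hurwitz, the supremum is attained at omega = 0;
# B_hat has rank one and 1_N is an eigenvector of A_hat, so in this example
# h_inf = sqrt(N * Z) / mu with mu = (kd + ko * (N - 1) / N) / nu.
mu = (kd + ko * (N - 1) / N) / nu
```

For general (non-symmetric) closed-loop matrices the peak need not sit at \(\omega = 0\), which is why a frequency sweep, or the LMI conditions below, is needed.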

Lemma 3.1

Given the frequency transfer function \(\hat{G}\) associated to (3.1), a necessary and sufficient condition to guarantee the \({\mathcal {H}}_\infty \) bound (3.4) for \(\gamma >0\) is the existence of a positive definite square matrix X of order N, \(X> 0\), such that the following algebraic Riccati equation holds

$$\begin{aligned} A^\top X + XA -\frac{1}{\nu } {K}^\top X -\frac{1}{\nu } X{K} + \frac{1}{\gamma } XX + \frac{1}{\gamma } \text {Id}_N = 0. \end{aligned}$$
(3.5)

For a detailed proof of this result, we refer to Appendix 1.

Theorem 3.2

Consider system (3.1) with structure induced by (2.1), and a square matrix X with the following structure

$$\begin{aligned} (X)_{ij}= {\left\{ \begin{array}{ll} x_d,\qquad i=j,\\ x_o,\qquad i\ne j. \end{array}\right. }\qquad \end{aligned}$$

Then, for N sufficiently large and finite, the control \(u=-\frac{1}{\nu }K v\) given by equation (2.8) is \({\mathcal {H}}_\infty \)-robust for any \(\gamma \) and \(c_N\) such that \(\gamma \ge \frac{1}{c_N}\), \(c_N>0\), where

$$\begin{aligned} c_N = \bar{p} + \frac{1}{\nu } (k_d - \frac{k_o}{N}). \end{aligned}$$
(3.6)

Proof

Under the hypotheses of the theorem, (3.5) reduces to the following system of equations

$$\begin{aligned} 0&= \frac{2\bar{p}(1-N)}{N}x_d + \frac{2\bar{p}(N-1)}{N}x_o - \frac{2}{\nu } {k}_d x_d - \frac{2(N-1)}{\nu N}{k}_o x_o \nonumber \\&\quad + \frac{1}{\gamma } x_d^2 + \frac{N-1}{\gamma }x_o^2 +\frac{1}{\gamma }, \end{aligned}$$
(3.7)
$$\begin{aligned} 0&= \frac{2\bar{p}(1-N)}{N}x_o + \frac{2\bar{p}}{N} x_d + \frac{2\bar{p}(N-2)}{N}x_o - \frac{2}{\nu } {k}_d x_o - \frac{2}{\nu N}{k}_o x_d \nonumber \\&\quad - \frac{2(N-2)}{\nu N} {k}_o x_o + \frac{2}{\gamma } x_d x_o + \frac{N-2}{\gamma }x_o^2. \end{aligned}$$
(3.8)

We scale the off-diagonal elements \(x_o\) of X according to

$$\begin{aligned} \quad {\tilde{x}}_o = \sqrt{N} x_o, \end{aligned}$$

which, as we will see later, is also the consistent scaling for a mean-field description of X.

Then, the previous system reads

$$\begin{aligned} \begin{aligned} 0&= \frac{2\bar{p}(1-N)}{N}x_d + \frac{2\bar{p}(N-1)}{N\sqrt{N}}{\tilde{x}}_o - \frac{2}{\nu } k_d x_d\\&\quad - \frac{2(N-1)}{\nu N \sqrt{N}}k_o {\tilde{x}}_o + \frac{1}{\gamma } x_d^2 + \frac{N-1}{\gamma N}{\tilde{x}}_o^2 +\frac{1}{\gamma }, \\ 0&= \frac{2\bar{p}(1-N)}{N\sqrt{N}}{\tilde{x}}_o + \frac{2\bar{p}}{N} x_d + \frac{2\bar{p}(N-2)}{N\sqrt{N}}{\tilde{x}}_o \\&\quad - \frac{2}{\nu \sqrt{N}} k_d {\tilde{x}}_o - \frac{2}{\nu N}k_o x_d - \frac{2(N-2)}{\nu N \sqrt{N}} k_o {\tilde{x}}_o \\&\quad + \frac{2}{\gamma \sqrt{N}} x_d {\tilde{x}}_o + \frac{N-2}{\gamma N}{\tilde{x}}_o^2. \end{aligned} \end{aligned}$$
(3.9)

From these two equations of system (3.9), and setting

$$\begin{aligned} c = \bar{p} + \frac{k_d}{\nu }, \quad \alpha = \bar{p} - \frac{k_o}{\nu }, \quad \beta = \frac{k_d + k_o}{\nu }. \end{aligned}$$
(3.10)

we obtain two second order equations for \(x_d\) and \({\tilde{x}}_o\)

$$\begin{aligned} \begin{aligned} 0&= x_d^2 -2\gamma c x_d + {\tilde{x}}_o^2 + \frac{2\gamma \alpha }{\sqrt{N}} {\tilde{x}}_o + 1 + {\mathcal {O}} \left( \frac{1}{N}\right) , \\ 0&= {\tilde{x}}_o^2 - \frac{2\gamma }{\sqrt{N}} \Bigl ( \beta - \frac{x_d}{\gamma } \Bigr ) {\tilde{x}}_o + {\mathcal {O}}\left( \frac{1}{N}\right) . \end{aligned} \end{aligned}$$
(3.11)

For N sufficiently large, their solutions are given by

$$\begin{aligned} x_d^{\pm } = \gamma c \pm \sqrt{\gamma ^2c^2 - 1 - {\tilde{x}}_o^2 - \frac{2\gamma \alpha }{\sqrt{N}}{\tilde{x}}_o}, \quad {\tilde{x}}_o^{-} = 0, \quad {\tilde{x}}_o^{+} = \frac{2\gamma }{\sqrt{N}} \Bigl ( \beta - \frac{x_d}{\gamma } \Bigr ). \end{aligned}$$

Note that we are interested in the large-particle limit and hence allow the quantities denoted by \({\mathcal {O}} \left( \frac{1}{N}\right) \) to be not exactly zero, but to tend to zero at rate \(\frac{1}{N}\). Hence, we write the matrix X as

$$\begin{aligned} X = \frac{{\tilde{x}}_o^{\pm } }{\sqrt{N}} \ \mathbbm {1}_{N} + \left( x_d^{\pm } - \frac{{\tilde{x}}_o^{\pm } }{\sqrt{N}}\right) \text {Id}_{N}, \end{aligned}$$

and the eigenvalues of X are

$$\begin{aligned} \lambda _i = \lambda = x_d^{\pm } - \frac{{\tilde{x}}_o^{\pm } }{\sqrt{N}} \quad \text {for} \quad i = 1,\dots , N-1 \quad \text {and} \quad \lambda _N = x_d^{\pm } + (N-1) \frac{{\tilde{x}}_o^{\pm } }{\sqrt{N}}. \end{aligned}$$

Subtracting the two equations in (3.9), we find the following second-order equation in the variable \(\lambda = x_d - \frac{{\tilde{x}}_o}{\sqrt{N}}\):

$$\begin{aligned} \lambda ^2 - 2\gamma \left( \bar{p} + \frac{1}{\nu } ({k}_d - \frac{k_o}{N})\right) \lambda + 1 = 0, \end{aligned}$$

with solutions \( \lambda ^{\pm } = x_d - \frac{{\tilde{x}}_o}{\sqrt{N}} = \gamma c_N \pm \sqrt{ \gamma ^2 c_N^2 - 1}, \) where

$$\begin{aligned} c_N = \bar{p} + \frac{1}{\nu } \left( k_d - \frac{k_o}{N}\right) . \end{aligned}$$
(3.12)

Therefore, we have \( \lambda _i = \lambda ^{\pm } \quad \text {for} \quad i = 1,\dots , N-1\) and \(\lambda _N = \lambda ^{\pm } + N \frac{{\tilde{x}}_o}{\sqrt{N}}.\) Hence, there exists a matrix X satisfying (3.5), for

$$\begin{aligned} x_o = x_o^- = 0. \end{aligned}$$

In this case, the eigenvalues are \(\lambda _i = \lambda _N= \lambda ^{\pm }\). In order to ensure the positive definiteness of X, the eigenvalues \(\lambda ^{\pm } = \gamma c_N \pm \sqrt{\gamma ^2 c_N^2-1}\) need to be non–negative. This can be expressed in terms of the choice of the parameters \(\gamma \) and \(c_N\): first of all, we need \(\gamma \ge \frac{1}{c_N}\) to ensure the existence of the square root. In addition, provided that \(c_N>0\), we obtain

$$\begin{aligned}&\lambda ^+> 0 \text{ and } \lambda ^- >0. \end{aligned}$$

This finishes the proof. \(\square \)

We emphasize that Theorem 3.2 offers a robustness condition in the case of uncertainty with zero mean. The result can be generalized to uncertainties with known expectations by adding a correction term to the matrix \({\widehat{A}}\) in (3.2), given by the expected values of the random inputs.

Remark 3.1

We observe that Theorem 3.2 quantifies the robustness of the feedback control through a lower bound depending on the parameters of the model. In particular, a smaller value of \(\gamma \) is achieved for larger values of \(c_N\) in (3.6): for example, if we fix the values \({\bar{p}}, k_d,k_o\) and N, the control becomes more robust for decreasing values of the penalization \(\nu \).

3.1 Mean-Field Estimates for \({\mathcal {H}}_\infty \) Control

Large systems of interacting agents can be efficiently represented at different scales to reduce the complexity of the microscopic description. Of particular interest are models able to describe the evolution of a density of agents and its moments [10, 12, 30].

In this section, we analyse the robustness of controls in the case of a large number of agents, i.e. \(N\gg 1\), by means of the mean-field limit of the interacting system. Hence, we consider the density distribution of agents \(f=f(t,v,\theta )\) to describe the collective behaviour of the ensemble of agents. The empirical joint probability distribution of agents for the system (2.1) is given by

$$\begin{aligned} f^N(t,v,\theta ) = \frac{1}{N} \sum _{i=1}^N \delta (v-v_i(t,\theta )), \end{aligned}$$

where \(\delta (\cdot )\) is the Dirac measure concentrated on the trajectories \(v_i(t,\theta )\), which depend on the stochastic variable \(\theta = (\theta _1,\ldots ,\theta _Z)\).

Hence, assuming enough regularity, in particular that the agents remain in a fixed compact domain for all N and over the whole time interval [0, T], the mean-field limit of dynamics (2.1) is obtained formally as

$$\begin{aligned} \partial _t f(t,v,\theta ) = - \nabla _v\cdot \left( f(t,v,\theta ) \left( \left( \bar{p}-\dfrac{k_o}{\nu }\right) m_1[f](t,\theta ) - \left( \bar{p}+\frac{k_d}{\nu }\right) v + \sum _{k=1}^Z\theta _k \right) \right) , \end{aligned}$$
(3.13)

with initial data \( f(0,v,\theta ) = f^0(v,\theta )\).

The latter is obtained as the limit of \(f^N(0,v,\theta )\) in the Wasserstein distance, given a sequence of initial agent configurations [10, 11]. The quantity \(m_1[f]\) denotes the first moment of f with respect to v,

$$\begin{aligned} m_1[f](t,\theta ) = \int _{\mathbb {R}^d} v f(t,v,\theta ) dv. \end{aligned}$$

In the many-particle limit, we recover a mean-field estimate of the \({\mathcal {H}}_\infty \) condition similar to Theorem 3.2. Indeed, for \(N\rightarrow \infty \) the nonlinear system (3.9) yields

$$\begin{aligned} 0 = -2\bar{p}x_d - \frac{2}{\nu } k_d x_d + \frac{1}{\gamma } x_d^2 + \frac{1}{\gamma }x_o^2 +\frac{1}{\gamma }, \qquad 0 = \frac{1}{\gamma }x_o^2. \end{aligned}$$

Hence, for any fixed finite N the matrix X is diagonal with the entry

$$\begin{aligned} x_d^{\pm } = \gamma \Bigl ( \bar{p}+\frac{k_d}{\nu }\Bigr ) \pm \sqrt{\gamma ^2\Bigl ( \bar{p}+\frac{k_d}{\nu }\Bigr )^2-1} + O\left( \frac{1}{N}\right) . \end{aligned}$$

To ensure that X is positive definite, we only have to assume that \(\gamma \ge \dfrac{1}{c},\) where \(\gamma \) is the bound on the \({\mathcal {H}}_\infty \) norm, and \(c = \bar{p}+k_d/\nu + O(\frac{1}{N})\) corresponds to the value defined by equation (3.12). This shows that for any N there exists a positive definite matrix that guarantees robust stabilization. Note that this condition is the limit of the finite-dimensional conditions of Lemma 3.1 and Theorem 3.2.
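This limiting condition is easy to verify numerically: with \(x_o = 0\), the diagonal entry must solve the scalar quadratic \(x^2 - 2\gamma c x + 1 = 0\), which has real non-negative roots precisely when \(\gamma \ge 1/c\). A minimal check with illustrative numbers (the gain \(k_d\) is a placeholder):

```python
import numpy as np

# Hypothetical values for p_bar, nu and a placeholder gain kd
p_bar, nu, kd = 1.0, 0.5, 0.8
c = p_bar + kd / nu
gamma = 1.5 / c                     # any gamma >= 1/c yields real roots

for sign in (+1.0, -1.0):
    x = gamma * c + sign * np.sqrt(gamma**2 * c**2 - 1)
    # x solves the N -> infinity limit of (3.9): x^2 - 2*gamma*c*x + 1 = 0
    assert abs(x**2 - 2 * gamma * c * x + 1) < 1e-12
    assert x > 0                    # X = x * Id is positive definite
```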

Remark 3.2

For explicit values of the Riccati coefficients we can characterize the previous estimates more precisely. In particular, for \(N\rightarrow \infty \) system (2.7) reduces to

$$\begin{aligned} 0 = \frac{k_d^2}{\nu } +\left( 2\bar{p}+r\right) k_d - 1,\qquad 0 = \frac{k_o^2}{\nu } + \left( \frac{2}{\nu }k_d +r\right) k_o - 2\bar{p}k_d, \end{aligned}$$

with solutions

$$\begin{aligned} k_d^{\pm } = -\nu \left( \bar{p} + \frac{r}{2} \right) \pm \nu \sqrt{\left( \bar{p} + \frac{r}{2} \right) ^2+\frac{1}{\nu }},\qquad k_o^{\pm } = -\left( k_d+ \frac{\nu r}{2} \right) \pm \sqrt{\left( k_d+ \frac{\nu r}{2} \right) ^2+2 \nu \bar{p} k_d}. \end{aligned}$$

In this particular case, the condition of Theorem 3.2 becomes

$$\begin{aligned} \gamma \ge \frac{\sqrt{\nu }}{{\sqrt{\left( {\bar{p}} +\frac{r}{2}\right) ^2\nu +1}}-\frac{r\sqrt{\nu }}{2}}. \end{aligned}$$
(3.14)

In Fig. 1 we depict the lower bound of \(\gamma \) for \(r=0\) and different values of \(\nu \) and \(\bar{p}\). As expected, smaller values of \(\gamma \), hence more robustness, are obtained if the penalization factor \(\nu \) is small or if \(\bar{p}\) is large. The latter corresponds to a stronger attraction between agents.

Fig. 1 Numerical computation of the lower bound value of \(\gamma \) in (3.14) as a function of \(\nu \) and \(\bar{p}\) when \(r=0\), i.e. \(\gamma (\nu , \bar{p}) = \sqrt{{\nu }/(\bar{p}^2 \nu +1)}\)
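The monotonicity visible in Fig. 1 can be reproduced directly from (3.14); the short sketch below (illustrative parameter ranges) confirms that the bound grows with \(\nu \) and decreases with \(\bar{p}\).

```python
import numpy as np

def gamma_bound(nu, p_bar, r=0.0):
    # Lower bound (3.14) on gamma in the limit N -> infinity
    return np.sqrt(nu) / (np.sqrt((p_bar + r / 2) ** 2 * nu + 1)
                          - r * np.sqrt(nu) / 2)

nus = np.linspace(0.01, 2.0, 200)
g = gamma_bound(nus, p_bar=1.0)
monotone_in_nu = np.all(np.diff(g) > 0)           # smaller nu -> smaller bound
stronger_attraction = gamma_bound(0.5, 2.0) < gamma_bound(0.5, 1.0)
```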

4 Numerical Approximation of the Uncertain Dynamics

In this section, we present numerical tests based on the linear microscopic and mean-field equations in the presence of uncertainties. In particular, we give numerical evidence of the robustness of the feedback control (2.8) and illustrate a comparison with the averaged control (A.3). For the numerical approximation of the random space, we employ the stochastic Galerkin (SG) method, belonging to the class of generalized polynomial chaos (gPC) techniques [39, 54]. In the mean-field setting, the evolution of the density distribution is approximated with a Monte Carlo (MC) method, in a similar spirit to the particle-based gPC techniques developed in [13].

4.1 SG Approximation for Robust Constrained Interacting Agent Systems

We approximate the dynamics using a stochastic Galerkin approach applied to the interacting particle system with uncertainties [4, 13]. A polynomial chaos expansion provides a way to represent a random variable with finite variance as a function of a finite-dimensional random vector, using a polynomial basis orthogonal with respect to the distribution of this random vector. Depending on the distribution, different expansion types are distinguished, as shown in Table 1.

Table 1 The different choices for the polynomial expansions

We first recall some basic notions of gPC approximation techniques; for the sake of simplicity, we consider a one-dimensional setting for the dynamical state \(v_i\), i.e., \(d=1\). Let \((\Omega ,{\mathcal {F}}, P)\) be a probability space, where \(\Omega \) is an abstract sample space, \({\mathcal {F}}\) a \(\sigma \)-algebra of subsets of \(\Omega \), and P a probability measure on \({\mathcal {F}}\). Let us define a random variable

$$\begin{aligned} \theta : (\Omega , {\mathcal {F}}) \rightarrow (I_\Theta , {\mathcal {B}}({\mathbb {R}}^Z)) \end{aligned}$$
(4.1)

where \(I_\Theta \subseteq {\mathbb {R}}^Z\) is the range of \(\theta \) and \({\mathcal {B}}({\mathbb {R}}^Z)\) is the Borel \(\sigma \)-algebra of subsets of \({\mathbb {R}}^Z\). We recall that Z is the dimension of the random input \(\theta =(\theta _1,\ldots ,\theta _Z)\), whose components are assumed to be independent.

We consider the linear spaces generated by orthogonal polynomials of \(\theta _j\) with degree up to M: \(\lbrace \Phi _{k_j}^{(j)} (\theta )\rbrace _{k_j=0}^M\), with \(j = 1,\dots , Z\). Assuming that the probability law for the function \(v_i(t,\theta )\) has a finite second order moment, the complete polynomial chaos expansion of \(v_i\) is given by

$$\begin{aligned} v_i (t,\theta ) = \sum _{k_1,\ldots ,k_Z \in {\mathbb {N}}} \hat{v}_{i,k_1\dots k_Z} (t) \prod _{j=1}^Z\Phi _{k_j}^{(j)}(\theta _{j}), \end{aligned}$$

where the coefficients \(\hat{v}_{i,k_1\dots k_Z} (t)\) are defined as

$$\begin{aligned} \hat{v}_{i,k_1\dots k_Z}(t) = {\mathbb {E}}_\theta \left[ v_i(t,\theta ) \prod _{j=1}^Z\Phi ^{(j)}_{k_j} (\theta _j)\right] \end{aligned}$$

where the expectation operator \({\mathbb {E}}_{\theta }\) is computed with respect to the joint distribution \(\rho =\rho _1\otimes \ldots \otimes \rho _Z\), and where \(\{\Phi ^{(j)}_{k}(\theta _j)\}_k\) is a set of polynomials which constitute the optimal basis with respect to the known distribution \(\rho (\theta _j)\) of the random variable \(\theta _j\), such that

$$\begin{aligned} {\mathbb {E}}_{\theta _j} \left[ \Phi ^{(j)}_{k}(\theta _j) \Phi ^{(j)}_h(\theta _j) \right]&= {\mathbb {E}}_{\theta _j} \left[ \Phi ^{(j)}_h(\theta _j)^2\right] \delta _{hk}, \end{aligned}$$

with \(\delta _{hk}\) the Kronecker delta. From the numerical point of view, the SG series expansion may converge with exponential order, unlike Monte Carlo techniques, whose order is \({\mathcal {O}}(1/\sqrt{M_s})\), with \(M_s\) the number of samples. Considering the noisy model (2.1) with control \(u_i(t)\) in (2.8), we have

$$\begin{aligned} {\dot{v}}_i (t,\theta ) = \frac{\bar{p}}{N} \sum _{j = 1}^N \Bigl (v_j (t,\theta ) - v_i (t,\theta )\Bigr ) - \frac{1}{N \nu } \sum _{j = 1}^N \Bigl (k_o(t)v_j (t,\theta ) + k_d(t) v_i (t,\theta )\Bigr ) + \sum _{j = 1}^Z \theta _j. \end{aligned}$$
(4.2)

We apply the SG decomposition to the solution of the differential equation \(v_i(t,\theta )\) in (4.2) and to the stochastic variables \(\theta _l\), and for \(i=1,\dots , N, \ l=1,\dots ,Z\), we have

$$\begin{aligned} \begin{aligned} v_i^M (t,\theta )&=\sum _{k_1,\ldots ,k_Z =0}^M \hat{v}_{i,k_1\dots k_Z} (t) \prod _{j=1}^Z\Phi _{k_j}^{(j)}(\theta _{j}), \\ \theta _l^M (\theta )&= \sum _{k_1,\ldots ,k_Z =0}^M {\hat{\theta }}_{l,k_1\dots k_Z} \prod _{j=1}^Z\Phi _{k_j}^{(j)}(\theta _{j}), \end{aligned} \end{aligned}$$
(4.3)

with

$$\begin{aligned} {\hat{\theta }}_{l,k_1\dots k_Z} = {\mathbb {E}}_\theta \left[ \theta _l \prod _{j=1}^Z\Phi ^{(j)}_{k_j} (\theta _j)\right] = {\mathbb {E}}_{\theta _l} \left[ \theta _l \Phi ^{(l)}_{k_l} (\theta _l)\right] \prod _{j=1, j\ne l}^Z {\mathbb {E}}_{\theta _j} \left[ \Phi ^{(j)}_{k_j} (\theta _j)\right] . \end{aligned}$$
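The factorization above relies on the independence of the components of \(\theta \). A minimal numerical check, for the illustrative choice of Z = 2 inputs uniform on (-1, 1) and the corresponding Legendre basis, is sketched below; all names are assumptions of the sketch, not notation from the paper.

```python
# Check that the tensorized expectation E[theta_1 Phi_k1(theta_1) Phi_k2(theta_2)]
# factorizes into a product of one-dimensional expectations, for independent
# theta_1, theta_2 ~ U(-1, 1) with Legendre basis polynomials (illustrative).
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

nodes, w = leggauss(30)
w = 0.5 * w                                   # fold in the density rho = 1/2

def E1(g):                                    # one-dimensional expectation
    return np.sum(w * g(nodes))

P = lambda k: Legendre.basis(k)               # k-th Legendre polynomial

def tensor_coeff(k1, k2):
    # two-dimensional quadrature of E[theta_1 Phi_k1(theta_1) Phi_k2(theta_2)]
    T1, T2 = np.meshgrid(nodes, nodes, indexing="ij")
    W = np.outer(w, w)
    return np.sum(W * T1 * P(k1)(T1) * P(k2)(T2))

for k1 in range(4):
    for k2 in range(4):
        lhs = tensor_coeff(k1, k2)
        rhs = E1(lambda t: t * P(k1)(t)) * E1(lambda t: P(k2)(t))
        assert abs(lhs - rhs) < 1e-12
# only (k1, k2) = (1, 0) survives, with value E[theta_1^2] = 1/3
print(tensor_coeff(1, 0))
```

For the Legendre basis only the multi-index with \(k_l = 1\) and all other indices zero gives a nonzero coefficient, since \(\theta _l\) coincides with the degree-one basis polynomial.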

Then we obtain the following polynomial chaos expansion

$$\begin{aligned}&\frac{d}{dt} \sum _{k_1,\ldots ,k_Z =0}^M \hat{v}_{i,k_1\dots k_Z} \prod _{j=1}^Z\Phi _{k_j}^{(j)}(\theta _{j}) \nonumber \\&\quad = \frac{1}{N} \sum _{h = 1}^N \sum _{k_1,\ldots ,k_Z =0}^M \left[ \left( \bar{p} - \frac{k_o}{\nu } \right) \hat{v}_{h,k_1\dots k_Z}- \left( \bar{p} + \frac{k_d}{\nu } \right) \hat{v}_{i,k_1\dots k_Z} \right] \prod _{j=1}^Z\Phi _{k_j}^{(j)}(\theta _{j}) \nonumber \\&\qquad + \sum _{l=1}^Z \sum _{k_1,\ldots ,k_Z =0}^M {\hat{\theta }}_{l,k_1\dots k_Z} \prod _{j=1}^Z\Phi _{k_j}^{(j)}(\theta _{j}). \end{aligned}$$
(4.4)

Multiplying by \(\prod _{j=1}^Z\Phi _{k_j}^{(j)}(\theta _{j}) \), integrating with respect to the distribution \(\rho (\theta )\), and normalizing by the squared norms of the basis polynomials, we end up with

$$\begin{aligned} \frac{d}{dt} \hat{v}_{i,k_1\dots k_Z}&= \frac{1}{N} \sum _{h = 1}^N \left[ \left( \bar{p} - \frac{k_o}{\nu } \right) \hat{v}_{h,k_1\dots k_Z} - \left( \bar{p} + \frac{k_d}{\nu } \right) \hat{v}_{i,k_1\dots k_Z} \right] \end{aligned}$$
(4.5)
$$\begin{aligned}&+ \sum _{l= 1}^Z {\mathbb {E}}_{\theta _l} \left[ \theta _l \Phi ^{(l)}_{k_l} (\theta _l)\right] \prod _{j=1, j\ne l}^Z {\mathbb {E}}_{\theta _j} \left[ \Phi ^{(j)}_{k_j} (\theta _j)\right] . \end{aligned}$$
(4.6)

For the numerical tests, we approximate the integrals using quadrature rules.
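Each one-dimensional expectation can be approximated with a quadrature rule matched to the corresponding distribution. A brief sketch (all parameter values illustrative) using Gauss–Legendre nodes for a uniform input and probabilists' Gauss–Hermite nodes for a Gaussian one:

```python
# Quadrature rules matched to the input distributions: Gauss-Legendre for a
# uniform input, probabilists' Gauss-Hermite for a Gaussian input.
# Parameter values (a, b, mu, sigma, node counts) are illustrative.
import numpy as np
from numpy.polynomial.legendre import leggauss
from numpy.polynomial.hermite_e import hermegauss

# uniform theta ~ U(a, b): map Legendre nodes from (-1, 1) to (a, b)
a, b = -10.0, 10.0
x, w = leggauss(40)
nodes_u = 0.5 * (b - a) * x + 0.5 * (a + b)
weights_u = 0.5 * w                   # density 1/(b-a) times Jacobian (b-a)/2

# Gaussian theta ~ N(mu, sigma^2): probabilists' Hermite nodes, rescaled
mu, sigma = 0.0, 2.0
y, v = hermegauss(40)
nodes_g = mu + sigma * y
weights_g = v / np.sum(v)             # normalize to a probability measure

E_u = np.sum(weights_u * nodes_u)     # E[theta] for the uniform input
E_g = np.sum(weights_g * nodes_g)     # E[theta] for the Gaussian input
Var_g = np.sum(weights_g * (nodes_g - mu) ** 2)
print(E_u, E_g, Var_g)                # approx 0, 0, sigma^2 = 4
```

Any projection integral \({\mathbb {E}}_{\theta _j}[g(\theta _j)]\) is then a weighted sum of \(g\) evaluated at the nodes.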

Remark 4.1

For model (2.10), where the control is averaged with respect to the random sources,

$$\begin{aligned} u_i = -\frac{1}{\nu } \left( {k}_d {\mathbb {E}}_\theta \left[ v_i \right] + \frac{{k}_o}{N} \sum _{h \ne i}^N {\mathbb {E}}_\theta \left[ v_h\right] + {s}\sum _{j = 1}^Z {\mathbb {E}}_\theta \left[ \theta _j \right] \right) , \end{aligned}$$
(4.7)

the SG approximation is given by

$$\begin{aligned} \begin{aligned} \frac{d}{dt} \hat{v}_{i,k_1\dots k_Z}&= \frac{\bar{p}}{N} \sum _{h = 1}^N \left( \hat{v}_{h,k_1\dots k_Z} - \hat{v}_{i,k_1\dots k_Z} \right) - \frac{ \prod _{j=1}^Z {\mathbb {E}}_{\theta _j} \left[ \Phi ^{(j)}_{k_j} (\theta _j)\right] }{ \prod _{j=1}^Z {\mathbb {E}}_{\theta _j} \left[ \left( \Phi ^{(j)}_{k_j} (\theta _j)\right) ^2\right] } \\&\qquad \left( {k}_d \hat{v}_{i,00\dots 0} + \frac{{k}_o}{N} \sum _{h \ne i}^N \hat{v}_{h,00\dots 0} + s \sum _{j = 1}^Z \mu _j \right) \\&\qquad + \sum _{l= 1}^Z {\mathbb {E}}_{\theta _l} \left[ \theta _l \Phi ^{(l)}_{k_l} (\theta _l)\right] \prod _{j=1, j\ne l}^Z {\mathbb {E}}_{\theta _j} \left[ \Phi ^{(j)}_{k_j} (\theta _j)\right] . \end{aligned} \end{aligned}$$
(4.8)

Here \(\mu _j = {\mathbb {E}}_{\theta _j}\left[ \theta _j\right] \). We recover the mean and the variance of the random variable \(v_i(\theta )\) as

$$\begin{aligned} {\mathbb {E}}_\theta [v_i(\theta )]&= \int _{{\mathbb {R}}^Z} v_i(\theta ) d\rho \ \approx \ \hat{v}_{i,00\dots 0} , \\ {\mathbb {V}}_\theta [v_i(\theta )]&= \int _{{\mathbb {R}}^Z} ( v_i(\theta ) - \hat{v}_{i,00\dots 0} )^2 d\rho \ \approx \ \sum _{k_1,\ldots ,k_Z =0}^M \hat{v}_{i,k_1\dots k_Z}^2 - \hat{v}_{i,00\dots 0}^2 . \end{aligned}$$
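These moment formulas can be checked on a toy random state with one uniform input. The sketch below uses normalized Legendre polynomials, so that the variance is exactly the sum of squared coefficients minus the squared mean; the test function v is an illustrative choice, not one from the paper.

```python
# Recover mean and variance of v(theta) from its gPC coefficients, for one
# input theta ~ U(-1, 1), using *normalized* Legendre polynomials so that
# E[phi_k^2] = 1. The toy state v(theta) = 2 + 3 theta + theta^2 is illustrative.
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

nodes, weights = leggauss(40)

def expectation(vals):
    return 0.5 * np.sum(weights * vals)      # E[.] w.r.t. U(-1, 1)

def phi(k, t):
    # normalized Legendre: E[phi_k^2] = 1 under U(-1, 1)
    return Legendre.basis(k)(t) * np.sqrt(2 * k + 1)

v = lambda t: 2.0 + 3.0 * t + t ** 2         # toy random state v(theta)
M = 10
v_hat = np.array([expectation(v(nodes) * phi(k, nodes)) for k in range(M + 1)])

mean = v_hat[0]                              # E[v] = zeroth coefficient
var = np.sum(v_hat ** 2) - v_hat[0] ** 2     # V[v] = sum of remaining squares
print(mean, var)                             # close to 7/3 and 139/45
```

The exact values follow from \({\mathbb {E}}[\theta ^2]=1/3\) and \({\mathbb {E}}[\theta ^4]=1/5\) for \(\theta \sim {\mathcal {U}}(-1,1)\).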

4.2 Numerical Tests

In this section we present different numerical tests on microscopic and mean-field dynamics to compare the robustness of the controls described in Sect. 2. We analyze one- and two-dimensional dynamics; in every test we consider the attractive case with \(\bar{p}=1\). The initial distribution of agents \(v_0\) is chosen such that consensus towards the target \({\bar{v}} = 0\) would not be reached without control action. We implement the SG approximations in (4.4) and (4.8) and integrate the resulting system in time up to the final time \(T=1\) with a fourth-order Runge–Kutta method.
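The time integration of the SG coefficient system (4.5) can be sketched as follows, with a classical Runge–Kutta-4 step. The gains k_o, k_d are illustrative constants here (in the paper they come from the control synthesis), one uncertainty channel is assumed, and the projected noise is taken zero-mean, so every multi-index evolves under the same linear map.

```python
# RK4 integration of the SG coefficient system (4.5) with constant,
# illustrative gains and zero-mean projected noise (assumptions of this sketch).
import numpy as np

N, M = 100, 10              # agents, gPC modes
p_bar, nu = 1.0, 0.01       # attraction strength and penalization
k_o, k_d = 0.01, 0.05       # illustrative constant control gains
theta_hat = np.zeros(M + 1) # projected noise, zero for a zero-mean input

def rhs(V):
    # V has shape (N, M+1): row i holds the gPC coefficients of agent i
    mean_coeffs = V.mean(axis=0)                 # (1/N) sum_h v_hat_h
    return ((p_bar - k_o / nu) * mean_coeffs
            - (p_bar + k_d / nu) * V) + theta_hat

def rk4_step(V, dt):
    k1 = rhs(V)
    k2 = rhs(V + 0.5 * dt * k1)
    k3 = rhs(V + 0.5 * dt * k2)
    k4 = rhs(V + dt * k3)
    return V + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(0)
V = np.zeros((N, M + 1))
V[:, 0] = rng.uniform(10, 20, size=N)   # mean mode from v_0 ~ U(10, 20)
dt, steps = 1e-3, 1000                  # integrate up to T = 1
for _ in range(steps):
    V = rk4_step(V, dt)
print(np.abs(V[:, 0]).max())            # mean states driven towards 0
```

With these gains each agent's mean mode decays like \(e^{-(\bar{p}+k_d/\nu )t}\), consistent with consensus at \({\bar{v}}=0\).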

In Test 1 (Sect. 4.2.1) and Test 2 (Sect. 4.2.2) we consider a dynamics with \(Z= 2\) additive uncertainties represented by two different uniform distributions, while in Test 3 (Sect. 4.2.3) we present the case of a random variable \(\theta _1\) with Gaussian distribution \({\mathcal {N}}(\mu , \sigma ^2)\) and \(\theta _2\) with uniform distribution \({\mathcal {U}}(a, b)\). Observe that, even though the results of this work are proven for bounded uncertainties, Test 3 shows that, numerically, they remain valid for unbounded Gaussian noise. The choice of normal and uniform distributions for the stochastic parameters corresponds to Hermite and Legendre polynomial chaos expansions, respectively, as shown in Table 1. In every test we use \(M=10\) terms of the SG decomposition.

4.2.1 Test 1: One-Dimensional Microscopic Consensus Dynamics

In the one-dimensional microscopic case we take \(N=100\) agents and a uniform initial distribution of agents, \(v_0\sim {\mathcal {U}}(10,20)\). Figure 2 shows the means, as continuous and dashed lines, and the confidence regions of the two noisy dynamics for different distributions \(\rho _1, \rho _2\). The shaded region is computed as the region between the values

$$\begin{aligned} \frac{1}{N} \sum _{i=1}^N \left( {\mathbb {E}}_\theta [v_i(\theta )] \right) \ \mp \ \max _{i=1,\dots ,N} \left( \sqrt{{\mathbb {V}}_\theta [v_i(\theta )]} \right) . \end{aligned}$$
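This band can be assembled directly from the per-agent gPC means and variances; a minimal sketch, where the two input arrays are placeholder data standing in for the quantities recovered from the SG coefficients:

```python
# Confidence band of the type shown in Fig. 2: average of the per-agent means,
# widened by the largest per-agent standard deviation. Arrays are placeholders.
import numpy as np

rng = np.random.default_rng(1)
E_v = rng.normal(0.0, 0.1, size=100)     # E_theta[v_i], one value per agent
V_v = rng.uniform(0.0, 0.05, size=100)   # V_theta[v_i], one value per agent

center = E_v.mean()                      # (1/N) sum_i E_theta[v_i]
halfwidth = np.sqrt(V_v).max()           # max_i sqrt(V_theta[v_i])
band = (center - halfwidth, center + halfwidth)
print(band)
```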
Fig. 2

Test 1. Comparison between the two controls in (2.8) and (2.10) applied to the multiagent system with uncertainties, in terms of dynamics mean and variance for different \(\rho _1, \rho _2\)

Numerical results show that both controls are capable of driving the agents to the desired state even when the dynamics depend on random inputs. Moreover, we observe that, with the \({\mathcal {H}}_\infty \) control, the variance of the uncertain dynamics is stabilized over time, while in the case of the averaged control the variance keeps growing. This is because the averaged control has information only on the mean value of the state and of the uncertainty, whereas the \({\mathcal {H}}_\infty \) feedback control depends directly on the state and, as a consequence, on the randomness of the dynamics. This is also expected in view of the robustness estimate on the feedback control.

4.2.2 Test 2: One-Dimensional Mean-Field Consensus Dynamics

In the mean-field limit, the Monte Carlo (MC) method is employed for the approximation of the distribution function \(f(t,v,\theta )\) in the phase space whereas the random space at the particle level is approximated through the SG technique.

In this MC-SG scheme, we sample the agent system by Monte Carlo with \(N_s = 10^4\) agents and then apply the SG scheme at the microscopic level. The probability density \(f(t,v,\theta )\) is then reconstructed as the histogram of \(v(t,\theta )\); the reconstruction of the mean density uses 50 bins. The mean and the variance of this statistical quantity are computed as follows

$$\begin{aligned} \begin{aligned} {\mathbb {E}}_\theta [f(t,v,\theta )]&= \int \int f(t,v,\theta ) d\rho _1(\theta _1) d\rho _2(\theta _2)\\&\quad \approx \sum _{l,h=1}^L f(t,v,\theta ^{lh}) \rho _1(\theta _1^l) \rho _2(\theta _2^h) \omega _1^l \omega _2^h, \\ {\mathbb {V}}_\theta [f(t,v,\theta )]&= \int \int f(t,v,\theta )^{2} d\rho _1(\theta _1) d\rho _2(\theta _2) - \left( {\mathbb {E}}_\theta [f(t,v,\theta )] \right) ^2 \\&\approx \sum _{l,h=1}^L f(t,v,\theta ^{lh})^{2} \rho _1(\theta _1^l) \rho _2(\theta _2^h) \omega _1^l \omega _2^h - \left( {\mathbb {E}}_\theta [f(t,v,\theta )] \right) ^2, \end{aligned} \end{aligned}$$
(4.9)

where for \(l,h = 1,\dots , L\) and \(\hat{v}_{k,j} \in {\mathbb {R}}^{N_s}\), \(f(t,v,\theta ^{lh}) \) is reconstructed as the histogram of the data \(v(\theta ^{lh}) = \sum _{k,j=0}^M \hat{v}_{k,j} \Phi _k(\theta _1^l) \Psi _j(\theta _2^h).\)
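The post-processing step (4.9) can be sketched as a tensorized quadrature loop over the random nodes. In the sketch below, the particle ensemble and its dependence on \(\theta \) are placeholders standing in for the SG reconstruction \(v(\theta ^{lh})\), and the inputs are taken uniform on (-1, 1) for simplicity:

```python
# MC-SG post-processing in the spirit of (4.9): bin-by-bin mean and variance of
# the reconstructed density over a tensor grid of Gauss-Legendre nodes.
# The ensemble and the theta-dependence below are illustrative placeholders.
import numpy as np
from numpy.polynomial.legendre import leggauss

L, bins, Ns = 20, 50, 10_000
nodes, w = leggauss(L)
w = 0.5 * w                           # fold in the uniform density rho = 1/2

rng = np.random.default_rng(2)
base = rng.normal(0.0, 1.0, size=Ns)  # placeholder particle ensemble
edges = np.linspace(-6.0, 6.0, bins + 1)

def f_hist(t1, t2):
    # placeholder reconstruction: histogram of shifted particle data,
    # standing in for v(theta^{lh}) from the SG expansion (4.3)
    hist, _ = np.histogram(base + t1 + 0.5 * t2, bins=edges, density=True)
    return hist

Ef = np.zeros(bins)                   # E_theta[f], bin by bin
Ef2 = np.zeros(bins)                  # E_theta[f^2], bin by bin
for l in range(L):
    for h in range(L):
        fh = f_hist(nodes[l], nodes[h])
        Ef += w[l] * w[h] * fh
        Ef2 += w[l] * w[h] * fh ** 2
Vf = Ef2 - Ef ** 2                    # pointwise variance of the density
```

The mean density Ef integrates to one over the bins, and Vf is nonnegative bin by bin, as expected for a variance.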

Fig. 3

Test 2. Mean-field one-dimensional case. a Mean and b standard deviation over time for the \({\mathcal {H}}_\infty \) control in (2.8), with parameters \(\bar{p}= 1,\ \nu = 0.01, \ \rho _1 \sim {\mathcal {U}}(-10,10), \ \rho _2\sim {\mathcal {U}}(-25,25)\)

Fig. 4

Test 2. Mean-field one-dimensional case. a Mean and b standard deviation over time for the averaged control in (2.10), with parameters \(\bar{p}= 1, \ \nu = 0.01, \ \rho _1\sim {\mathcal {U}}(-10,10), \ \rho _2\sim {\mathcal {U}}(-25,25)\)

We consider the same parameters as in Test 1 for the one-dimensional microscopic case, where \(\theta _1, \theta _2\) are uncertainties with uniform distributions \({\mathcal {U}}(-10,10)\) and \({\mathcal {U}}(-25,25)\), respectively. We approximate the integrals in (4.9) using a Gauss–Legendre quadrature rule with \(L = 40\) quadrature points. Figures 3 and 4 show a behavior consistent with the microscopic case in the right plot of Fig. 2, where we used exactly the same parameters; in particular, we observe less dispersion of the density for the control of type (2.8).

4.2.3 Test 3: Two-Dimensional Microscopic Consensus Dynamics

In this section we investigate the two-dimensional microscopic case. Observe that, unlike the previous tests, here we numerically treat the case of an unbounded Gaussian distribution \(\rho _1\) added to a uniform one \(\rho _2\). We take \(N=100\) agents and an initial configuration uniformly distributed on a 2D disc, as shown in Fig. 5. The 2D means and confidence regions of the two noisy dynamics can be seen in Fig. 6, for different values of the penalization factor \(\nu \) and different distributions \(\rho _1, \rho _2\).

Fig. 5

Test 3: Initial distribution of the agents in the two-dimensional setting, \(v_0\in \mathbb {R}^{2N}\)

We recall that, for the control u in Eq. (2.8), the \({\mathcal {H}}_\infty \) signal norm of the transfer function \(\hat{G}\) related to the state-space system (3.1) is bounded as

$$\begin{aligned} \Vert \hat{G} \Vert _{{\mathcal {H}}_\infty } \le \gamma . \end{aligned}$$

From Theorem 3.2 we know that the \({\mathcal {H}}_\infty \) control u is robust with a constant \(\gamma > \frac{1}{c_N}\), \(c_N>0\). We compute the value of \(c_N\) for the two cases in Fig. 6: \(c_N = 14.29\) for the penalization factor \(\nu =0.01\), and \(c_N = 4.55\) for \(\nu = 0.1\). As expected, we observe smaller confidence regions for the smaller value of \(\nu \), interpreted as the control cost.

Fig. 6

Test 3. Two-dimensional case. Comparison between the two controls in (2.8) and (2.10) applied to the uncertain model, in terms of dynamics mean and variance for different values of \(\nu , \rho _1, \rho _2\)

5 Conclusions

The introduction of uncertainties in multiagent systems is of paramount importance for the description of realistic phenomena. Here we focused on the mathematical modelling and control of collective dynamics with random inputs, and we investigated the robustness of controls by proposing estimates based on \({\mathcal {H}}_\infty \) theory in the linear setting. Reformulating the control problem as a robust \({\mathcal {H}}_\infty \) control problem, we derived sufficient conditions in terms of linear matrix inequalities (LMIs) to ensure the control performance, independently of the type of random inputs. Moreover, a robustness analysis was provided also in a mean-field framework, showing results consistent with the microscopic scale. Different numerical tests were proposed to compare the \({\mathcal {H}}_\infty \) control with a control synthesized by minimizing the expectation of a function with respect to the random inputs. The numerical methods developed here make use of the stochastic Galerkin (SG) expansion for the microscopic dynamics, while in the mean-field case we combine an SG expansion in the random space with a Monte Carlo method in the physical variables. The numerical experiments show that both controls are capable of driving the average particle trajectories towards a consensus state in the presence of multiple sources of randomness and in different dimensions. We further observe that, in the \({\mathcal {H}}_\infty \) setting, the variance is stabilized over time; this is not surprising, since the \({\mathcal {H}}_\infty \) control accounts for the random state in feedback form, whereas in the noiseless control setting the uncertainty is averaged out. Nonetheless, these results confirm the quality of the robustness estimates for the controls of the uncertain dynamics. Further analysis is needed to extend these results in the \({\mathcal {H}}_\infty \) setting to nonlinear dynamics with uncertainties. This could be studied, for example, by introducing the so-called Hamilton–Jacobi–Isaacs equation, whose solution can be extremely challenging due to the high dimensionality of multiagent systems.