The COVID-19 pandemic has affected society significantly. Various actions have been taken to mitigate the spread of the infection, such as travel bans, social distancing, and mask-wearing. The recent development of vaccines yields a breakthrough in fighting this infectious disease. According to a recent effectiveness study [17], vaccines including Pfizer, Moderna, and Janssen (J&J) show approximately 66–95% efficacy at preventing both mild and severe symptoms of COVID-19. The deployment of COVID-19 vaccines is therefore an urgent and timely task. Many countries have implemented phased distribution plans that prioritize vaccinating the elderly and healthcare workers. Meanwhile, shipping vaccines is expensive due to cold chain transportation [30]. An effective distribution strategy is necessary to contain the infectious disease and prevent further deaths.

In this work, we propose a novel mean-field control model based on [28]. We consider two controls for containing the pandemic: relocation of populations and distribution of vaccines. The first control was discussed thoroughly in [28], where we addressed spatial effects in pandemic modeling by introducing a mean-field control problem into a spatial SIR model. By adding spatial velocity fields to the classical disease model, the model finds an optimal strategy for relocating the different populations (susceptible, infected, and recovered) to control the epidemic's propagation. For vaccine distribution, our model accounts for several aspects of the vaccine, including manufacturing, delivery, and consumption. Our goal is to find an optimal strategy for moving the population and distributing vaccines that minimizes the total number of infected individuals, the amount of population movement, and the transportation cost of the vaccine, subject to a limited vaccine supply. To this end, we combine the two controls and propose the following constrained optimization problem:

$$\begin{aligned} \min _{(\rho _i,v_i)_{i\in \{S,I,R,V\}},f} \;{G}\left( (\rho _i,v_i)_{i\in \{S,I,R,V\}},f\right) \quad (G\, \text {defined from} (2.8)) \end{aligned}$$

subject to

$$\begin{aligned} \left\{ \begin{aligned}&\partial _t \rho _S + \nabla \cdot (\rho _S v_S) = - \beta \rho _S K * \rho _I + \frac{\eta _S^2}{2} \Delta \rho _S - \theta _1 \rho _V \rho _S&(t,x)\in (0,T)\times \Omega ,\\&\partial _t \rho _I + \nabla \cdot (\rho _I v_I) = \beta \rho _S K * \rho _I - \gamma \rho _I + \frac{\eta ^2_I}{2} \Delta \rho _I&(t,x) \in (0,T)\times \Omega ,\\&\partial _t \rho _R + \nabla \cdot (\rho _R v_R) = \gamma \rho _I + \frac{\eta ^2_R}{2} \Delta \rho _R + \theta _1 \rho _V \rho _S&(t,x) \in (0,T)\times \Omega ,\\&\partial _t \rho _V = f(t,x) - \theta _2 \rho _V \rho _S&(t,x) \in (0,T')\times \Omega ,\\&\partial _t \rho _V + \nabla \cdot (\rho _V v_V) = - \theta _2 \rho _V \rho _S&(t,x) \in [T',T)\times \Omega ,\\ \end{aligned}\right. \end{aligned}$$
and


$$\begin{aligned} \left\{ \begin{aligned}&0 \le f(t,x) \le f_{max}&(t,x) \in [0,T'] \times \Omega _{factory},\\&f(t,x) = 0&(t,x) \in [0,T'] \times \Omega \backslash \Omega _{factory},\\&\rho _V(t,x) \le C_{factory}&(t,x) \in [0,T'] \times \Omega _{factory}. \end{aligned}\right. \end{aligned}$$

In our model, the populations are described by densities \(\rho _i\) (\(i\in \{S, I, R\}\)), representing the susceptible, infected, and recovered groups. The term \(\rho _V(t,x)\) describes the density distribution of the vaccine over the spatial domain at location x and time t. The control variables \(v_i\) (\(i\in \{S, I ,R \}\)) create velocity fields over the time-space domain that move the corresponding populations. As for vaccines, the control variable \(v_V\) represents the vaccine's transportation strategy, and the control variable \(f(t,x)\) describes how many vaccines are produced at a specific time and location. The objective function G is the sum of terminal costs \({\mathcal {E}}_{final}\) and running costs \({\mathcal {E}}_{running}\). The terminal costs \({\mathcal {E}}_{final}\) represent the goals our control should achieve at the terminal time, such as minimizing the total number of infected individuals and maximizing the total number of recovered (immune) persons. The running costs \({\mathcal {E}}_{running}\) include the transportation costs of the vaccines and of the different population classes. We discuss the cost functionals in more detail in Sect. 2.3. As for the constraints of the optimization problem, the five partial differential equations in \(\rho _i\), \(v_i\) (\(i\in \{S, I, R, V\}\)) describe the dynamics of the population classes and the vaccines in terms of densities and velocities. The inequalities on \(f(t,x)\) model the limitations of vaccine manufacturing: vaccines are produced only at factory locations \(\Omega _{factory}\), with a maximal daily production rate \(f_{max}\). The dynamics of the vaccine density \(\rho _V\) share similarities with unnormalized optimal transport [27]; both study mass transportation with a source term that creates mass.

We solve the main problem with an algorithm based on the first-order Primal-Dual Hybrid Gradient (PDHG) method [6, 7]. Due to the multiplicative interaction terms \(\rho _S K*\rho _I\) and \(\rho _V\rho _S\), the optimization problem has nonlinear PDE constraints, whereas the PDHG method handles only linear constraints. We therefore use the extension of PDHG from [10], which solves nonsmooth optimization problems with nonlinear operators between function spaces. We further extend the method with the preconditioning operator from [21], which provides a suitable choice of variable norms and yields a convergence rate independent of the nonlinear operator. As a result, the algorithm converges locally to the saddle point with step length parameters independent of the finite-difference mesh size; see Sect. 3.1 for details.
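To fix ideas, the basic PDHG iteration that underlies this approach can be illustrated on a toy problem. The sketch below, assuming a quadratic objective \(G(u)=\tfrac{1}{2}\Vert u-b\Vert ^2\) and a single linear constraint \(Au=0\) with hypothetical data, is a minimal analogue of the linearly constrained case, not the paper's nonlinear solver:

```python
import numpy as np

# Minimal PDHG (Chambolle-Pock) sketch for min_u 0.5*||u - b||^2 subject to Au = 0,
# written as the saddle problem min_u max_p 0.5*||u - b||^2 + <Au, p>.
def pdhg(A, b, n_iter=5000):
    m, n = A.shape
    L = np.linalg.norm(A, 2)       # operator norm ||A||
    tau = sigma = 0.9 / L          # step sizes satisfying tau*sigma*||A||^2 < 1
    u = np.zeros(n)
    u_bar = u.copy()
    p = np.zeros(m)
    for _ in range(n_iter):
        p = p + sigma * (A @ u_bar)                          # dual ascent step
        u_new = (u / tau + b - A.T @ p) / (1.0 / tau + 1.0)  # proximal step on G
        u_bar = 2.0 * u_new - u                              # extrapolation
        u = u_new
    return u

# For this toy problem the saddle point is the projection of b
# onto the null space of A.
A = np.array([[1.0, 1.0, 0.0]])
b = np.array([1.0, 0.0, 2.0])
u = pdhg(A, b)
```

Here the minimizer is the Euclidean projection of b onto \(\{u:Au=0\}\), namely \((0.5,-0.5,2)\); the iteration recovers it to the step-size-independent tolerance.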

Many mathematical models have been proposed to predict the course of the COVID-19 epidemic. Recently proposed models take more real-world factors into consideration and tend to be more effective in quantitative forecasting. In particular, there have been studies on the impact of measures such as lockdowns, social distancing, and mask-wearing [12, 13, 16]. Data-driven approaches and machine learning techniques have also been integrated to better estimate epidemic parameters and improve predictions of pandemic trends [32, 35]. Meanwhile, optimal control serves as an important tool in pandemic control: one seeks the strategy that minimizes the total number of infected people while keeping certain costs at a minimum. Several works focus on mitigating the epidemic with limited medical supplies, such as ICU capacity [8], face masks [31], and vaccines [19, 22, 24, 29, 37]. In [22], an optimal vaccine distribution strategy is proposed under a limited total amount of vaccines and a maximal daily supply. The work [29] first uses an inverse problem to determine the parameters of the SIR model; it then formulates two optimal control problems, with mono- and multi-objective formulations, and solves for the optimal strategy of vaccine administration. Other non-pharmaceutical interventions have also been considered within the scope of optimal control of epidemics, including social distancing, school closures, and lockdowns [18, 23, 36]. The work [23] computes the optimal non-pharmaceutical intervention strategy based on an extended SEIR model in the absence of a vaccine. The mean-field control problem can be viewed as a particular type of optimal control applied to individuals through the population density.

Mean-field games (controls), introduced by [20, 26], describe deterministic (stochastic) differential games as the number of players tends to infinity, where each player interacts with the others through the distribution of all players in the state space. It is a thriving research direction with applications in economics, crowd motion, industrial engineering, and more [3, 14, 25]. Numerical methods have been developed to obtain quantitative information from such mean-field game (control) models, especially when the state space is high-dimensional [1, 2, 5, 34]. Multi-population mean-field game (control) problems have also drawn considerable attention [4, 9, 15]. This type of problem studies interactions on two levels: between agents of the same population and between populations. Our model is a multi-population mean-field control problem whose population dynamics are described by reaction-diffusion equations adopted from the epidemic model, together with controls over vaccine production and distribution. In this way, we obtain a novel mean-field control problem.

The rest of the paper is organized as follows. Section 2 proposes a novel multi-population mean-field control model and explains how population movement and vaccine distribution are integrated into a constrained optimization problem. Section 3 discusses the challenges in numerically solving this mean-field control model, proposes a first-order primal-dual algorithm to solve it, and shows the local convergence of the algorithm. Lastly, in Sect. 4, we present numerical experiments with different model parameter choices and discuss their implications on mean-field controls.


In this section, we review the classical SIR model. Based on it, we formulate the spatial SIR dynamics with vaccine distribution, namely SIRV dynamics. We then introduce a variational problem to control the SIRV dynamics.

Classical SIR model

The SIR epidemic model describes an infectious disease epidemic via an ordinary differential equation system

$$\begin{aligned} \left\{ \begin{aligned}&\frac{dS}{dt} = -\beta IS,\\&\frac{dI}{dt} = \beta IS - \gamma I ,\\&\frac{dR}{dt} = \gamma I . \end{aligned}\right. \end{aligned}$$

The population is divided into three classes: susceptible, infected, and recovered. Assuming a closed population without births or deaths, the model uses S(t), I(t), and R(t) to represent the number of individuals in each compartment at time t. The SIR model has two parameters: \(\beta \) is the effective contact rate at which susceptible individuals become infected, and \(\gamma \) is the recovery rate of infected individuals. The simplicity of this model allows one to predict an infectious disease epidemic by estimating only a few parameters. However, it is limited by the assumption of homogeneous mixing, which means that every individual has an equal probability of disease-causing contact. As a result, the predictions lack spatial information and may not help (local) governments make policies or relocate medical resources. This motivates us to study the spatial SIR model. Moreover, the SIR model does not consider the latent period between when a person is exposed to a disease and when they become infected; this leads to extensions of the SIR model, such as the SEIR model. Our proposed model has a flexible structure and generalizes naturally to such epidemiological models.
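As an illustration, the ODE system above can be integrated with a simple forward-Euler scheme; the parameter values and initial data below are purely illustrative:

```python
import numpy as np

# Forward-Euler integration of the classical SIR system.
# Parameter values (beta, gamma) and initial data are illustrative only.
def simulate_sir(S0=0.99, I0=0.01, R0=0.0, beta=0.3, gamma=0.1, T=160.0, dt=0.1):
    S, I, R = S0, I0, R0
    for _ in range(int(T / dt)):
        dS = -beta * I * S              # susceptibles becoming infected
        dI = beta * I * S - gamma * I   # net change of the infected class
        dR = gamma * I                  # infected recovering
        S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
    return S, I, R

S, I, R = simulate_sir()
# With no births or deaths, dS + dI + dR = 0, so S + I + R is conserved.
total = S + I + R
```

Because the right-hand sides sum to zero, the discrete scheme conserves the total population exactly (up to floating-point rounding), mirroring the closed-population assumption.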

Spatial SIR variational problem with vaccine distribution

In [28], we add the spatial dimension to the S, I, R functions. Let \(\Omega \subset {\mathbb {R}}^d\) be a bounded domain. Consider the following density functions

$$\begin{aligned} \rho _S, \rho _I, \rho _R:\ [0,T]\times \Omega \rightarrow [0,\infty ). \end{aligned}$$

Here, \(\rho _S\), \(\rho _I\), and \(\rho _R\) represent the distributions of the susceptible, infected, and recovered populations, respectively. We assume that each \(\rho _i\), \(i\in \{S,I,R\}\), moves over the spatial domain \(\Omega \) with a velocity \(v_i\), where the \(v_i\), \(i\in \{S,I,R\}\), are our control variables. With the change of variables \(m_i=\rho _i v_i\), we define the momenta

$$\begin{aligned} m_S, m_I, m_R :[0,T]\times \Omega \rightarrow {\mathbb {R}}^d \end{aligned}$$

that govern the corresponding density flows. In the following, instead of using control variables \(v_i\), we replace them with \(\frac{m_i}{\rho _i}\) and regard \(m_i\) as the control variables.

We can describe the flows of the densities by the following continuity equations.

$$\begin{aligned} \left\{ \begin{aligned}&\partial _t \rho _S + \nabla \cdot m_S = - \beta \rho _S K * \rho _I + \frac{\eta _S^2}{2} \Delta \rho _S,\\&\partial _t \rho _I + \nabla \cdot m_I = \beta \rho _S K * \rho _I - \gamma \rho _I + \frac{\eta ^2_I}{2} \Delta \rho _I,\\&\partial _t \rho _R + \nabla \cdot m_R = \gamma \rho _I + \frac{\eta ^2_R}{2} \Delta \rho _R,\\&\rho _S(0,\cdot ), \rho _I(0,\cdot ), \rho _R(0,\cdot ) \text { are given.} \end{aligned}\right. \end{aligned}$$

This system of continuity equations describes the flows of the three population densities while satisfying the SIR dynamics. The nonnegative constants \(\eta _i\) (\(i\in \{S,I,R\}\)) are the coefficients of the viscosity terms; these terms can also be understood as noise terms inferred from data. \(K=K(x,y)\) is a symmetric positive definite kernel with \((K*\rho )(t,x) = \int _{\Omega }K(x,y)\rho (t,y)\,dy\). In this model, we consider the Gaussian kernel

$$\begin{aligned} \begin{aligned} K(x,y)&= \frac{1}{\sqrt{(2\pi )^d}} \prod ^d_{k=1} \frac{1}{\sigma _k} \exp {\left( -\frac{|x_k-y_k|^2}{2\sigma _k^2}\right) }. \end{aligned} \end{aligned}$$

The kernel convolution describes the spreading rate of the infectious disease over the spatial domain. In addition, we impose Neumann boundary conditions on \(\partial \Omega \). Since we do not consider births or deaths in our model, the total population is conserved for all \(t\in [0, T]\), which leads to the following equality

$$\begin{aligned} \frac{\partial }{\partial t} \int _\Omega \rho _S(t,x)+\rho _I(t,x)+\rho _R(t,x) dx = 0. \end{aligned}$$
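As a concrete illustration of the kernel term, the convolution \(K*\rho \) with the Gaussian kernel can be discretized by simple quadrature on a uniform grid; the sketch below uses a hypothetical 1-d grid and an illustrative \(\sigma \):

```python
import numpy as np

# Discretization sketch of (K * rho)(x) = ∫_Ω K(x, y) rho(y) dy on a uniform
# 1-d grid over Omega = [0, 1].  Grid size, sigma, and rho are illustrative.
n = 200
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

sigma = 0.1
diff = x[:, None] - x[None, :]
K = np.exp(-diff**2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)  # K(x, y), d = 1

rho = np.exp(-((x - 0.5) ** 2) / 0.01)   # a bump of infected density (illustrative)
K_rho = (K @ rho) * dx                   # quadrature for ∫ K(x, y) rho(y) dy
# The convolution smooths rho: its peak is lower than the peak of rho, and the
# symmetry of rho about x = 0.5 is preserved because K(x, y) = K(y, x).
```

The smoothing effect is what spreads the local infection pressure over neighboring locations in the dynamics above.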

In this paper, we consider the optimization problem for the distribution of vaccines. We add an extra function \(\rho _V:[0,T]\times \Omega \rightarrow [0,\infty )\) representing the vaccine density in \(\Omega \) at each time \(t\in [0,T]\). The vaccine distribution is described by the following PDEs:

$$\begin{aligned} \begin{aligned}&\partial _t \rho _V = f(t,x) - \theta _2 \rho _V \rho _S&t\in (0,T'),\\&\partial _t \rho _V + \nabla \cdot m_V = - \theta _2 \rho _V \rho _S&t\in [T',T), \quad 0<T'<T, \end{aligned} \end{aligned}$$

where \(m_V:[T',T)\times \Omega \rightarrow {\mathbb {R}}^d\) is a momentum, \(\theta _2\) represents the utilization rate of vaccines, and \(f:(0,T')\times \Omega \rightarrow [0,\infty )\) represents the production rate of vaccines at \(x \in \Omega \) for \(0<t<T'\). During \(0<t<T'\), the vaccines are produced at a rate f and used at a rate \(\theta _2 \rho _V\rho _S\). During \(T'\le t < T\), the vaccines are delivered to the areas where the susceptible population is located, and they are used at the rate \(\theta _2 \rho _V \rho _S\). In summary, the first equation describes the production of vaccines, and the second describes their delivery. For all \(0<t<T\), the susceptible population is vaccinated wherever vaccines are available in the same area. We are now ready to introduce the new system of equations for the SIRV model.

$$\begin{aligned} \left\{ \begin{aligned}&\partial _t \rho _S + \nabla \cdot m_S = - \beta \rho _S K * \rho _I + \frac{\eta _S^2}{2} \Delta \rho _S - \theta _1 \rho _V \rho _S&(t,x)\in (0,T)\times \Omega ,\\&\partial _t \rho _I + \nabla \cdot m_I = \beta \rho _S K * \rho _I - \gamma \rho _I + \frac{\eta ^2_I}{2} \Delta \rho _I&(t,x) \in (0,T)\times \Omega ,\\&\partial _t \rho _R + \nabla \cdot m_R = \gamma \rho _I + \frac{\eta ^2_R}{2} \Delta \rho _R + \theta _1 \rho _V \rho _S&(t,x) \in (0,T)\times \Omega ,\\&\partial _t \rho _V = f(t,x) - \theta _2 \rho _V \rho _S&(t,x) \in (0,T')\times \Omega ,\\&\partial _t \rho _V + \nabla \cdot m_V = - \theta _2 \rho _V \rho _S&(t,x) \in [T',T)\times \Omega ,\\&\rho _S(0,\cdot ), \rho _I(0,\cdot ), \rho _R(0,\cdot ), \rho _V(0,\cdot ) \text { are given.} \end{aligned}\right. \end{aligned}$$

In the first and third equations, we add the terms \( - \theta _1 \rho _V \rho _S\) and \( + \theta _1 \rho _V \rho _S\), respectively. The constant \(\theta _1\) represents the vaccine efficiency and \(\theta _1 \rho _V(t,x) \rho _S(t,x)\) represents the vaccinated population at \((t,x) \in (0,T)\times \Omega \). We denote a set \({\mathbb {S}} := \{S,I,R,V\}\) and define a nonlinear operator A as follows

$$\begin{aligned} \begin{aligned} A((\rho _i,m_i)_{i\in {\mathbb {S}}}, f) := (&\partial _t \rho _S + \nabla \cdot m_S - \frac{\eta _S^2}{2} \Delta \rho _S + \beta \rho _S K * \rho _I + \theta _1 \rho _S \rho _V,\\&\partial _t \rho _I + \nabla \cdot m_I - \frac{\eta _I^2}{2} \Delta \rho _I - \beta \rho _S K * \rho _I + \gamma \rho _I,\\&\partial _t \rho _R + \nabla \cdot m_R - \frac{\eta _R^2}{2} \Delta \rho _R - \gamma \rho _I - \theta _1 \rho _S \rho _V,\\&\partial _t \rho _V - f {\mathcal {X}}_{[0,T')}(t) + \nabla \cdot m_V {\mathcal {X}}_{[T',T]}(t) + \theta _2 \rho _S \rho _V ), \end{aligned} \end{aligned}$$

where \({\mathcal {X}}_C:[0,T]\rightarrow {\mathbb {R}}\) is the indicator function that equals 1 on C and 0 otherwise.
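During the production phase, the \(\rho _V\)-component of the dynamics reduces, at a fixed factory location with f and \(\rho _S\) held constant, to the scalar ODE \(\partial _t \rho _V = f - \theta _2 \rho _V \rho _S\). A forward-Euler sketch with hypothetical constants:

```python
# Pointwise sketch of the production-phase dynamics
#   d(rho_V)/dt = f - theta2 * rho_V * rho_S
# at a fixed factory location.  All constants below are hypothetical.
f, theta2, rho_S = 0.5, 2.0, 0.25
dt, n_steps = 0.01, 5000

rho_V = 0.0
for _ in range(n_steps):
    rho_V += dt * (f - theta2 * rho_V * rho_S)

# With constant coefficients, the density relaxes toward the balance point
# f / (theta2 * rho_S), where production and consumption cancel.
equilibrium = f / (theta2 * rho_S)
```

This balance point illustrates how production and consumption compete in the fourth equation; in the full model, of course, f and \(\rho _S\) vary in time and space.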

The cost functional

The cost functional we propose in this paper extends that of [28]. We design the cost functional so that the solution \((\rho _i, m_i)\), \(i\in {\mathbb {S}}\), satisfies the following criteria:

  (i) minimize the transportation cost for moving each population;

  (ii) minimize the total number of infected people and the total number of susceptible people by maximizing the usage of the vaccines at time T;

  (iii) maximize the total number of recovered people at time T;

  (iv) avoid high concentrations of population and vaccines at each time \(t\in (0,T)\);

  (v) minimize the amount of vaccines produced during \(t \in (0,T')\);

  (vi) minimize the transportation cost for delivering vaccines during \(t \in (T',T)\).

Item (i) can be described by

$$\begin{aligned} \int ^T_0 \int _\Omega F_i(\rho _i(t,x), m_i(t,x)) dx\, dt, \end{aligned}$$

for \(i\in \{S,I,R\}\) where

$$\begin{aligned} F_i (\rho _i, m_i) = {\left\{ \begin{array}{ll} \frac{\alpha _i |m_i|^2}{2 \rho _i} &{} \text {if } \rho _i> 0,\\ 0 &{} \text {if } \rho _i = 0 \text { and } |m_i| = 0,\\ \infty &{} \text {if } \rho _i = 0 \text { and } |m_i| > 0, \end{array}\right. } \end{aligned}$$

which is convex, lower semi-continuous, and 1-homogeneous with respect to \((\rho _i,m_i)\). The parameter \(\alpha _i\) characterizes the cost of moving \(\rho _i\) with velocity \(\frac{m_i}{\rho _i}\); a larger \(\alpha _i\) means it is more expensive to move \(\rho _i\). Note that this function comes from the quadratic kinetic energy: substituting \(m_i = \rho _i v_i\) into formula (2.5) gives

$$\begin{aligned} F_i(\rho _i,m_i) = \frac{\alpha _i |m_i|^2}{2 \rho _i} = \frac{\alpha _i}{2} \rho _i |v_i|^2. \end{aligned}$$
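This identity, together with the 1-homogeneity of \(F_i\), can be checked directly; a small sketch with an illustrative \(\alpha _i\):

```python
# The ground cost F(rho, m) from (2.5), with an illustrative alpha.
def kinetic_cost(rho, m, alpha=1.0):
    if rho > 0:
        return alpha * abs(m) ** 2 / (2.0 * rho)
    return 0.0 if m == 0 else float('inf')   # convention on the zero-density set

rho, v = 0.8, 1.5
m = rho * v                      # momentum m = rho * v
# F(rho, rho*v) equals the kinetic energy (alpha/2) * rho * v^2, and F is
# 1-homogeneous: F(c*rho, c*m) = c * F(rho, m) for c > 0.
energy = 0.5 * rho * v**2
```

The zero-density convention keeps the cost finite only where momentum vanishes with the density, which is what makes the functional lower semi-continuous.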

Items (ii) and (iii) can be described by the terminal costs of the cost functional

$$\begin{aligned} {\mathcal {E}}_i(\rho _i(T,\cdot ))&= \int _\Omega e_i(\rho _i(T,x))\, dx \quad (i=S,I,V),\\ {\mathcal {E}}_R(\rho _R(T,\cdot ))&= \int _\Omega e_R\left( 1 - \rho _R(T,x)\right) \, dx, \end{aligned}$$

where the functions \(e_i:[0, \infty )\rightarrow [0, \infty )\) are convex and lower semi-continuous. We also minimize the terminal cost for \(\rho _V\), because maximizing the usage of vaccines is equivalent to minimizing the number of vaccines left at the terminal time T. The total number of recovered people can be maximized by penalizing the density at the terminal time whenever \(\rho _R(T,x)\) is far from 1 for \(x\in \Omega \). In this paper, we use the quadratic cost function

$$\begin{aligned} e_i(t) = \frac{a_i}{2}t^2, \quad (t\in [0,\infty )), \end{aligned}$$

where \(a_i\) is some constant.

For Item (iv), the cost functionals for the concentration of the total population and of the vaccines can be represented by

$$\begin{aligned} \int ^T_0 {\mathcal {G}}_P(\rho _S(t,\cdot ) + \rho _I(t,\cdot ) + \rho _R(t,\cdot ))\,dt,\quad \int ^T_0 {\mathcal {G}}_V(\rho _V(t,\cdot ))\,dt, \end{aligned}$$
where


$$\begin{aligned} {\mathcal {G}}_P(u) = \int _\Omega g_P(u(x))\,dx,\quad {\mathcal {G}}_V(u) = \int _\Omega g_V(u(x))\,dx, \end{aligned}$$

for \(u:\Omega \rightarrow [0,\infty )\) and convex and lower semi-continuous functions \(g_P, g_V:[0,\infty )\rightarrow [0,\infty )\). Similar to \(e_i\) (2.6) from Item (ii), we use quadratic functions for \(g_P\) and \(g_V\).

Items (v) and (vi) are criteria specific to the vaccine distribution. By PDE (2.2), the vaccines are produced during \(0<t<T'\) by the function f. We use a functional similar to (2.7) to minimize the amount of vaccines produced by f. Thus, we set the functional

$$\begin{aligned} \int ^{T'}_0 {\mathcal {G}}_0 (f(t,\cdot )) \, dt = \int ^{T'}_0 \int _\Omega g_0(f(t,x))\, dx \, dt, \end{aligned}$$

where \(g_0:[0,\infty )\rightarrow [0,\infty )\) is a convex and lower semi-continuous function.

The vaccines are delivered during \(T'< t< T\). Similar to Item (i), we set

$$\begin{aligned} \int ^T_{T'} \int _\Omega F_V(\rho _V, m_V) \, dx\,dt, \end{aligned}$$

where \(F_V\) has the same definition as (2.5).

The total cost functional we consider is then

$$\begin{aligned} \begin{aligned} {G}((\rho _i,m_i)_{i\in {\mathbb {S}}},f)&= \sum _{i\in {\mathbb {S}}} {\mathcal {E}}_i(\rho _i(T,\cdot ))\\&\quad + \int ^T_0 \int _\Omega \sum _{i = S,I,R} F_i(\rho _i, m_i) \,dx\, dt +\int ^T_{T'} \int _\Omega F_V(\rho _V,m_V)\,dx\, dt\\&\quad + \int ^T_0 {\mathcal {G}}_P((\rho _S + \rho _I + \rho _R)(t,\cdot )) + {\mathcal {G}}_V(\rho _V(t,\cdot )) \, dt\\&\quad + \int ^{T'}_0 {\mathcal {G}}_0(f(t,\cdot ))\, dt\\&\quad + \frac{\lambda }{2}\int ^T_0\int _\Omega f^2 + \sum _{i\in {\mathbb {S}}} \rho _i^2 + |m_i|^2 \,dx\,dt. \end{aligned} \end{aligned}$$

From the perspective of a control problem, the first term on the right-hand side of (2.8) is the terminal cost, while the remaining terms account for the running costs. The quadratic terms in the last line form a \(\lambda \)-strongly convex functional. A functional F is \(\lambda \)-strongly convex if for any \(u=((\rho _i,m_i)_{i\in {\mathbb {S}}},f)\), F satisfies

$$\begin{aligned} F({{\tilde{u}}}) \ge F(u) + \partial F(u)({{\tilde{u}}}-u) + \frac{\lambda }{2} \Vert {{\tilde{u}}}-u\Vert _{L^2}^2, \quad \text {for all } {{\tilde{u}}}=(({{\tilde{\rho }}}_i,{{\tilde{m}}}_i)_{i\in {\mathbb {S}}},{{\tilde{f}}}), \end{aligned}$$

where \(\Vert {{\tilde{u}}} - u\Vert _{L^2}^2\) is defined as

$$\begin{aligned} \int ^T_0 \int _\Omega ({{\tilde{f}}} - f)^2 + \sum _{i\in {\mathbb {S}}}(\tilde{\rho }_i - \rho _i)^2+|{{\tilde{m}}}_i - m_i|^2\,dx\,dt \end{aligned}$$

and \(\partial F\) denotes the convex subdifferential of F. Since \({\mathcal {E}}_i\), \(F_i\), \({\mathcal {G}}_i\) are convex and lower semi-continuous, G is \(\lambda \)-strongly convex as the sum of convex and \(\lambda \)-strongly convex functionals. The strong convexity of G is important, as the algorithm in this paper requires the objective cost functional to be strongly convex (Theorem 3.3).

Constraints for vaccine production

In addition to the constraints from (2.3), we adopt the following constraints to reflect the limited vaccine production capacity:

$$\begin{aligned} \begin{aligned}&0 \le f(t,x) \le f_{max}&(t,x) \in [0,T'] \times \Omega _{factory},\\&f(t,x) = 0&(t,x) \in [0,T'] \times \Omega \backslash \Omega _{factory},\\&\rho _V(t,x) \le C_{factory}&(t,x) \in [0,T'] \times \Omega _{factory} \end{aligned} \end{aligned}$$

where \(\Omega _{factory} \subset \Omega \) indicates the factory area where vaccines are produced and \(f_{max}\) is a nonnegative constant representing the maximum vaccine production rate. In the third inequality, the nonnegative constant \(C_{factory}\) limits the vaccine density at the factory, and hence the total number of vaccines produced during \(0< t < T'\):

$$\begin{aligned} \int _0^{T'} \int _{\Omega } \rho _V(t,x)\,dx\,dt \le C_{factory} T' |\Omega _{factory}|. \end{aligned}$$

Constraints (2.9) can be imposed by choosing the following functionals for \({\mathcal {G}}_V\) and \({\mathcal {G}}_0\):

$$\begin{aligned} \begin{aligned} {\mathcal {G}}_V(\rho _V(t,\cdot ))&= \int _\Omega g_V(\rho _V(t,x))\, dx + i_{[-\infty , C_{factory}]}(\rho _V(t,\cdot )),\\ {\mathcal {G}}_0(f(t,\cdot ))&= \int _\Omega g_0 (f(t,x)) + i_{\Omega _{factory}}(x)f(t,x) \, dx + i_{[-\infty , f_{max}]}(f(t,\cdot )) \end{aligned} \end{aligned}$$

where the functionals \(i_{[-\infty , C_{factory}]}\) and \(i_{[-\infty , f_{max}]}\) are defined as

$$\begin{aligned} \begin{aligned} i_{[a,b]} (u)= {\left\{ \begin{array}{ll} 0, &{} a \le u(x) \le b\quad \text {for all }x\in \Omega ,\\ \infty , &{} \text {otherwise} \end{array}\right. } \end{aligned} \end{aligned}$$

where \(a, b\) are constants and \(u:\Omega \rightarrow {\mathbb {R}}\) is a function. The function \(i_{\Omega _{factory}}(x)\) is defined as

$$\begin{aligned} i_{\Omega _{factory}}(x) = {\left\{ \begin{array}{ll} 0, &{} x \in \Omega _{factory},\\ \infty , &{} x \in \Omega \backslash \Omega _{factory}. \end{array}\right. } \end{aligned}$$

This function forces \(f(t,x)=0\) for \((t,x)\in (0,T')\times (\Omega \backslash \Omega _{factory})\); thus vaccines are produced only in \(\Omega _{factory}\).
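In a discretized solver, these indicator terms act as hard constraints that can be enforced by projection onto the feasible set. A sketch on a hypothetical 1-d grid, with the factory region, \(f_{max}\), \(C_{factory}\), and the iterates all illustrative:

```python
import numpy as np

# Sketch: projecting unconstrained iterates onto the constraint set encoded
# by the indicator functionals above.  Grid, mask, and constants are hypothetical.
n = 100
x = np.linspace(0.0, 1.0, n)
factory_mask = (x > 0.4) & (x < 0.6)          # a stand-in for Omega_factory

f_max, C_factory = 2.0, 5.0
rng = np.random.default_rng(0)
f = rng.normal(1.0, 2.0, n)                   # an unconstrained iterate for f
rho_V = rng.uniform(0.0, 8.0, n)              # an unconstrained iterate for rho_V

# Enforce 0 <= f <= f_max inside the factory and f = 0 elsewhere,
# and cap rho_V by C_factory inside the factory.
f_proj = np.clip(f, 0.0, f_max) * factory_mask
rho_V_proj = np.where(factory_mask, np.minimum(rho_V, C_factory), rho_V)
```

The projection is cheap and separable across grid points, which is why indicator terms fit naturally into proximal-type algorithms such as PDHG.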

Remark 2.1

The formulation is not limited to the SIR epidemic model. For example, we can describe the SIRD (Susceptible-Infected-Recovered-Deceased) epidemic model by adding an extra density \(\rho _D\) for the deceased population, with a mortality rate \(\mu \):

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t \rho _S + \nabla \cdot m_S = - \beta \rho _S K * \rho _I + \frac{\eta _S^2}{2} \Delta \rho _S - \theta _1 \rho _V \rho _S &{} (t,x)\in (0,T)\times \Omega ,\\ \partial _t \rho _I + \nabla \cdot m_I = \beta \rho _S K * \rho _I - \gamma \rho _I - \mu \rho _I + \frac{\eta ^2_I}{2} \Delta \rho _I &{} (t,x) \in (0,T)\times \Omega ,\\ \partial _t \rho _R + \nabla \cdot m_R = \gamma \rho _I + \frac{\eta ^2_R}{2} \Delta \rho _R + \theta _1 \rho _V \rho _S &{} (t,x) \in (0,T)\times \Omega ,\\ \partial _t \rho _D = \mu \rho _I + \frac{\eta ^2_D}{2} \Delta \rho _D &{} (t,x) \in (0,T)\times \Omega ,\\ \partial _t \rho _V = f(t,x) - \theta _2 \rho _V \rho _S &{} (t,x) \in (0,T')\times \Omega ,\\ \partial _t \rho _V + \nabla \cdot m_V = - \theta _2 \rho _V \rho _S &{} (t,x) \in [T',T)\times \Omega ,\\ \rho _S(0,\cdot ), \rho _I(0,\cdot ), \rho _R(0,\cdot ), \rho _D(0,\cdot ), \rho _V(0,\cdot ) \text { are given.} \end{array}\right. } \end{aligned}$$


From the definition of the cost functional and constraint (2.3), we have the following minimization problem:

$$\begin{aligned}&\inf _{(\rho _i,m_i)_{i\in {\mathbb {S}}},f} \Big \{ {G}((\rho _i,m_i)_{i\in {\mathbb {S}}}, f): \text {subject to~(2.3)}\Big \}. \end{aligned}$$

We first define the inner product of vectors of functions in \(L^2\). Given vectors of functions \(u=(u_1(t,x),u_2(t,x),\cdots ,u_k(t,x))\) and \(v=(v_1(t,x),v_2(t,x),\cdots ,v_k(t,x))\) with \(u_i,v_i: [0,T]\times \Omega \rightarrow {\mathbb {R}}\), the \(L^2\) inner product of vectors u and v and \(L^2\) norm of u are defined by

$$\begin{aligned} \langle u, v \rangle _{L^2} = \sum ^k_{i=1} (u_i, v_i)_{L^2},\quad \Vert u\Vert _{L^2}^2 = \langle u, u \rangle _{L^2} \end{aligned}$$

where \((\cdot ,\cdot )_{L^2([0,T]\times \Omega )}\) is the \(L^2\) inner product such that

$$\begin{aligned} (u,v)_{L^2([0,T]\times \Omega )} = \int ^T_0 \int _\Omega u(t,x) v(t,x) \, dx \, dt. \end{aligned}$$

We introduce dual variables \((\phi _i)_{i\in {\mathbb {S}}}\), one for each continuity equation in (2.4). Using the dual variables and the definitions of the inner products, we convert the minimization problem into a saddle point problem:

$$\begin{aligned} \inf _{(\rho _i,m_i)_{i\in {\mathbb {S}}},f} \sup _{(\phi _i)_{i\in {\mathbb {S}}}}\, {\mathcal {L}}((\rho _i,m_i,\phi _i)_{i\in {\mathbb {S}}},f), \end{aligned}$$

where \({\mathcal {L}}\) is the Lagrangian functional defined as

$$\begin{aligned}&{\mathcal {L}}((\rho _i,m_i,\phi _i)_{i\in {\mathbb {S}}},f)\\&= {G} ((\rho _i,m_i)_{i\in {\mathbb {S}}},f) -\langle A((\rho _i,m_i)_{i\in {\mathbb {S}}},f) , (\phi _i)_{i\in {\mathbb {S}}} \rangle _{L^2}\\&= {G} ((\rho _i,m_i)_{i\in {\mathbb {S}}},f)\nonumber \\&\quad - \int ^T_0 \int _\Omega \phi _S \left( \partial _t \rho _S + \nabla \cdot m_S + \beta \rho _S K * \rho _I + \theta _1 \rho _S \rho _V - \frac{\eta _S^2}{2} \Delta \rho _S \right) \, dx \, dt\nonumber \\&\quad - \int ^T_0 \int _\Omega \phi _I \left( \partial _t \rho _I + \nabla \cdot m_I - \beta \rho _S K * \rho _I + \gamma \rho _I - \frac{\eta _I^2}{2} \Delta \rho _I \right) \, dx \, dt\nonumber \\&\quad - \int ^T_0 \int _\Omega \phi _R \left( \partial _t \rho _R + \nabla \cdot m_R - \gamma \rho _I - \theta _1 \rho _S \rho _V - \frac{\eta _R^2}{2} \Delta \rho _R \right) \, dx \, dt\nonumber \\&\quad - \int ^{T}_0 \int _\Omega \phi _V \left( \partial _t \rho _V - f {\mathcal {X}}_{[0,T')}(t) + \nabla \cdot m_V {\mathcal {X}}_{[T',T]}(t) + \theta _2 \rho _S \rho _V \right) \, dx \, dt. \end{aligned}$$

For brevity, we denote

$$\begin{aligned} u = ((\rho _i,m_i)_{i\in {\mathbb {S}}},f),\quad p = (\phi _i)_{i\in {\mathbb {S}}}. \end{aligned}$$

We can rewrite the Lagrangian as

$$\begin{aligned} {\mathcal {L}}(u,p) = {G}(u) - \langle A(u), p \rangle _{L^2}, \end{aligned}$$

where the nonlinear operator A(u) is defined as

$$\begin{aligned} A(u) = ( A_S(u), A_I(u), A_R(u), A_V(u) ), \end{aligned}$$
$$\begin{aligned} \begin{aligned} A_S(u)&= \partial _t \rho _S + \nabla \cdot m_S - \frac{\eta _S^2}{2} \Delta \rho _S + \beta \rho _S K * \rho _I + \theta _1 \rho _S \rho _V,\\ A_I(u)&= \partial _t \rho _I + \nabla \cdot m_I - \frac{\eta _I^2}{2} \Delta \rho _I - \beta \rho _S K * \rho _I + \gamma \rho _I,\\ A_R(u)&= \partial _t \rho _R + \nabla \cdot m_R - \frac{\eta _R^2}{2} \Delta \rho _R - \gamma \rho _I - \theta _1 \rho _S \rho _V,\\ A_V(u)&= \partial _t \rho _V - f {\mathcal {X}}_{[0,T')}(t) + \nabla \cdot m_V {\mathcal {X}}_{[T',T]}(t) + \theta _2 \rho _S \rho _V. \end{aligned} \end{aligned}$$

As noted in [28], the duality gap, i.e., the difference between the primal and dual optimal values, may not be zero, because the nonconvex maps \((\rho _S,\rho _I) \mapsto \rho _S K * \rho _I\) and \((\rho _S,\rho _V) \mapsto \rho _S \rho _V\) make the feasible set nonconvex. We circumvent this problem by linearizing the nonlinear operator at a base point \({\bar{u}}\):

$$\begin{aligned} A(u) \approx {\bar{A}}_{{{\bar{u}}}}(u) = A({\bar{u}}) + [\nabla A({\bar{u}})](u-{\bar{u}}). \end{aligned}$$
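For a bilinear term such as \(\rho _S \rho _V\), this expansion reads \({\bar{\rho }}_S \rho _V + \rho _S {\bar{\rho }}_V - {\bar{\rho }}_S {\bar{\rho }}_V\), and the remainder is exactly quadratic in the perturbation. A small numerical check on random data (purely illustrative):

```python
import numpy as np

# First-order expansion of the bilinear map B(a, b) = a * b (a stand-in for
# rho_S * rho_V) around a base point (a0, b0):
#   B(a, b) ≈ a0 * b + a * b0 - a0 * b0.
rng = np.random.default_rng(0)
a0, b0 = rng.random(5), rng.random(5)   # base point, playing the role of u_bar
a = a0 + 0.01 * rng.random(5)           # small perturbations of the base point
b = b0 + 0.01 * rng.random(5)

linearized = a0 * b + a * b0 - a0 * b0
remainder = a * b - linearized
# Algebraically, remainder = (a - a0) * (b - b0): second order in the
# perturbation, so the linearization is accurate near the base point.
```

This is why the linearized Lagrangian is a good local model: the error it introduces vanishes quadratically as the iterates approach the base point.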

In our formulation, the linearized operator \({{\bar{A}}}_{{{\bar{u}}}}(u)\) can be written as follows.

$$\begin{aligned} \begin{aligned} {{\bar{A}}}_{{{\bar{u}}}}(u)&= ( {{\bar{A}}}_{S {{\bar{u}}}}(u), {{\bar{A}}}_{I {{\bar{u}}}}(u ), {{\bar{A}}}_{R{{\bar{u}}}}(u), {{\bar{A}}}_{V{{\bar{u}}}}(u) ),\\ {{\bar{A}}}_{S {{\bar{u}}}}(u)&= \partial _t \rho _S + \nabla \cdot m_S - \frac{\eta _S^2}{2} \Delta \rho _S + \beta \rho _S K * {{\bar{\rho }}}_I + \theta _1 \rho _S {{\bar{\rho }}}_V,\\ {{\bar{A}}}_{I {{\bar{u}}}}(u)&= \partial _t \rho _I + \nabla \cdot m_I - \frac{\eta _I^2}{2} \Delta \rho _I - \beta \rho _S K * {{\bar{\rho }}}_I + \gamma \rho _I,\\ {{\bar{A}}}_{R {{\bar{u}}}}(u)&= \partial _t \rho _R + \nabla \cdot m_R - \frac{\eta _R^2}{2} \Delta \rho _R - \gamma \rho _I - \theta _1 \rho _S {{\bar{\rho }}}_V,\\ {{\bar{A}}}_{V {{\bar{u}}}}(u)&= \partial _t \rho _V - f {\mathcal {X}}_{[0,T')}(t) + \nabla \cdot m_V {\mathcal {X}}_{[T',T]}(t) + \theta _2 \rho _V {{\bar{\rho }}}_S, \end{aligned} \end{aligned}$$

where \({{\bar{u}}} = (({{\bar{\rho }}}_i,{{\bar{m}}}_i)_{i\in {\mathbb {S}}},\bar{f})\). We define the linearized Lagrangian as

$$\begin{aligned} \bar{{\mathcal {L}}}_{{{\bar{u}}}}(u,p) = {G}(u) - \langle {\bar{A}}_{\bar{u}}(u), p \rangle _{L^2}. \end{aligned}$$

In [10], the author developed a primal-dual algorithm using the linearized Lagrangian (Algorithm (3.5)) and proved that the sequence \((u^{(k)},p^{(k)})^\infty _{k=1}\) generated by the algorithm converges to the saddle point \((u_*,p_*)\) (in Sect. 3.1, we prove local convergence to the saddle point provided \((u^{(0)},p^{(0)})\) is sufficiently close to it). By the first-order optimality conditions (also known as the Karush–Kuhn–Tucker (KKT) conditions), the saddle point satisfies

$$\begin{aligned} \begin{aligned}{}[\nabla A(u_*)]^T p_*&\in \partial {G}(u_*),\\ A(u_*)&= 0. \end{aligned} \end{aligned}$$

In the next proposition, we present the equations derived from the KKT conditions (2.17).

Proposition 2.2

By KKT conditions, the saddle point \(((\rho _i, m_i, \phi _i)_{i\in {\mathbb {S}}},f)\) of (2.13) satisfies the following equations.

$$\begin{aligned}&\partial _t \phi _S - \frac{\alpha _S}{2} |\nabla \phi _S|^2 + \frac{\eta ^2_S}{2}\Delta \phi _S + \frac{\delta {\mathcal {G}}_P}{\delta \rho }(\rho _S+\rho _I+\rho _R) + \beta (\phi _I - \phi _S) K * \rho _I\\&\quad + \rho _V\bigl ( \theta _1 (\phi _R - \phi _S) - \theta _2 \phi _V \bigr )= 0 \quad (t,x)\in (0,T)\times \Omega ,\\&\partial _t \phi _I - \frac{\alpha _I}{2} |\nabla \phi _I|^2 + \frac{\eta ^2_I}{2}\Delta \phi _I + \frac{\delta {\mathcal {G}}_P}{\delta \rho } (\rho _S+\rho _I+\rho _R)\\&\quad + \beta K * \left( \rho _S (\phi _I - \phi _S) \right) + \gamma (\phi _R - \phi _I) = 0 \quad (t,x)\in (0,T)\times \Omega ,\\&\partial _t \phi _R - \frac{\alpha _R}{2} |\nabla \phi _R|^2 + \frac{\eta ^2_R}{2}\Delta \phi _R + \frac{\delta {\mathcal {G}}_P}{\delta \rho } (\rho _S+\rho _I+\rho _R)= 0 \quad (t,x)\in (0,T)\times \Omega ,\\&\partial _t \phi _V + \frac{\delta {\mathcal {G}}_V}{\delta \rho } (\rho _V) + \rho _S \bigl ( \theta _1 (\phi _R - \phi _S) - \theta _2 \phi _V \bigr )= 0 \quad (t,x)\in (0,T')\times \Omega ,\\&\partial _t \phi _V - \frac{\alpha _V}{2}|\nabla \phi _V|^2 + \frac{\delta {\mathcal {G}}_V}{\delta \rho } (\rho _V) + \rho _S \bigl ( \theta _1 (\phi _R - \phi _S) - \theta _2 \phi _V \bigr )= 0 \quad (t,x)\in (T',T)\times \Omega ,\\&\partial _t \rho _S -\frac{1}{\alpha _S}\nabla \cdot (\rho _S\nabla \phi _S) + \beta \rho _S K * \rho _I + \theta _1 \rho _S \rho _V - \frac{\eta _S^2}{2} \Delta \rho _S = 0 \quad (t,x)\in (0,T)\times \Omega ,\\&\partial _t \rho _I -\frac{1}{\alpha _I}\nabla \cdot (\rho _I \nabla \phi _I) - \beta \rho _S K *\rho _I + \gamma \rho _I - \frac{\eta ^2_I}{2} \Delta \rho _I = 0 \quad (t,x)\in (0,T)\times \Omega ,\\&\partial _t \rho _R -\frac{1}{\alpha _R} \nabla \cdot (\rho _R \nabla \phi _R) - \gamma \rho _I - \theta _1 \rho _S \rho _V - \frac{\eta ^2_R}{2} \Delta \rho _R= 0 \quad (t,x)\in (0,T)\times \Omega ,\\&\partial _t \rho _V - f + \theta _2 \rho _S \rho _V = 0 \quad (t,x)\in (0,T')\times \Omega ,\\&\partial _t \rho _V -\frac{1}{\alpha _V} \nabla \cdot (\rho _V \nabla \phi _V) + \theta _2 \rho _S \rho _V = 0 \quad (t,x)\in (T',T)\times \Omega ,\\&\frac{\delta {\mathcal {G}}_0}{\delta f}(f) + \phi _V = 0 \quad (t,x)\in (0,T')\times \Omega ,\\&\phi _i(T,\cdot ) = \frac{\delta {\mathcal {E}}_i}{\delta \rho (T,\cdot )} (\rho _i(T,\cdot )), \quad i \in {\mathbb {S}}. \end{aligned}$$

The terms \(\frac{\delta {\mathcal {G}}_P}{\delta \rho }\), \(\frac{\delta {\mathcal {G}}_V}{\delta \rho }\), \(\frac{\delta {\mathcal {G}}_0}{\delta f}\), and \(\frac{\delta {\mathcal {E}}_i}{\delta \rho (T,\cdot )}\) are functional derivatives. In other words, given a smooth functional \(F:{\mathcal {H}}\rightarrow {\mathbb {R}}\), where \({\mathcal {H}}\) is a separable Hilbert space and \(\rho \in {\mathcal {H}}\), we say that a map \(\frac{\delta F}{\delta \rho }\) is the functional derivative of F with respect to \(\rho \) if it satisfies

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} \frac{F(\rho +\epsilon h) - F(\rho )}{\epsilon } = \int _\Omega \frac{\delta F}{\delta \rho }(\rho (x)) h(x)\, dx, \end{aligned}$$

for an arbitrary function \(h:\Omega \rightarrow {\mathbb {R}}\).
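For instance, for a quadratic functional of the form used later (such as \({\mathcal {G}}_P\) and \({\mathcal {G}}_V\)), the definition gives

$$\begin{aligned} F(\rho ) = \int _\Omega \frac{d}{2} \rho (x)^2\, dx, \qquad \frac{F(\rho + \epsilon h) - F(\rho )}{\epsilon } = \int _\Omega d\, \rho (x) h(x)\, dx + \frac{\epsilon d}{2} \int _\Omega h(x)^2\, dx, \end{aligned}$$

and letting \(\epsilon \rightarrow 0\) yields \(\frac{\delta F}{\delta \rho }(\rho ) = d \rho \), which is the form of the functional derivatives appearing in Proposition 2.2.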

The dynamical system models the optimal vector field strategies for the S, I, and R populations together with the vaccine distribution. Since it combines the strategies of mean-field control with the SIRV model, we call it the mean-field control SIRV system. The proof of Proposition 2.2 can be found in the Appendix.


In this section, we present an algorithm to solve the proposed SIRV variational problem. We use the primal-dual hybrid gradient (PDHG) algorithm [6, 7], which solves convex optimization problems of the form

$$\begin{aligned} \begin{aligned} \min _u\, f(Au) + g(u), \end{aligned} \end{aligned}$$

where f and g are convex functions and A is a continuous linear operator. The algorithm converts the problem into a saddle point problem by introducing a dual variable p:

$$\begin{aligned} \begin{aligned} \min _u\,\max _p\, g(u) + \langle Au, p \rangle _{L^2} - f^*(p) \end{aligned} \end{aligned}$$

where the \(L^2\) inner product is defined in (2.12) and

$$\begin{aligned} f^*(p) = \sup _{u} \, \langle u, p \rangle _{L^2} - f(u) \end{aligned}$$

is the Legendre transform of f. The method solves the saddle point problem by iterating

$$\begin{aligned} \begin{aligned} u^{(k+1)}&= \text {arg}\min _u \, g(u) + \langle u, A^T p^{(k)} \rangle _{L^2} + \frac{1}{2\tau } \Vert u-u^{(k)}\Vert ^2_{L^2},\\ {\tilde{u}}^{(k+1)}&= 2 u^{(k+1)} - u^{(k)},\\ p^{(k+1)}&= \text {arg}\max _p \, \langle A {\tilde{u}}^{(k+1)}, p \rangle _{L^2} - f^*(p) - \frac{1}{2\sigma } \Vert p-p^{(k)}\Vert ^2_{L^2}. \end{aligned} \end{aligned}$$
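To make the iteration concrete, the following self-contained C++ sketch applies this scheme to a toy problem \(\min _x \frac{1}{2}\Vert x-b\Vert ^2\) subject to \(Ax=c\) with \(A=[1\;\, 1]\). The problem data and the names `pdhg_toy` and `Result` are illustrative, not from our implementation.

```cpp
#include <cassert>
#include <cmath>

// PDHG iteration on a toy instance:
//   min_x 0.5*||x - b||^2   subject to  A x = c,
// with A = [1 1], b = (1, 0), c = 2.  Here g(x) = 0.5*||x - b||^2 and
// f is the indicator of {c}, so f*(p) = c*p and both updates are explicit.
struct Result { double x1, x2, p; };

Result pdhg_toy(int iters, double tau, double sigma) {
    const double b1 = 1.0, b2 = 0.0, c = 2.0;
    double x1 = 0.0, x2 = 0.0, p = 0.0;
    for (int k = 0; k < iters; ++k) {
        // u-update: proximal step on g(x) + <x, A^T p>
        double nx1 = (x1 + tau * (b1 - p)) / (1.0 + tau);
        double nx2 = (x2 + tau * (b2 - p)) / (1.0 + tau);
        // over-relaxation: x_tilde = 2 x^{k+1} - x^k
        double tx1 = 2.0 * nx1 - x1, tx2 = 2.0 * nx2 - x2;
        // p-update: ascent on <A x_tilde, p> - c*p
        p += sigma * (tx1 + tx2 - c);
        x1 = nx1; x2 = nx2;
    }
    return {x1, x2, p};
}
```

With \(\Vert A\Vert ^2 = 2\), the step sizes \(\tau =\sigma =0.5\) satisfy the step size condition, and the iterates approach the exact solution \(x=(1.5,0.5)\), \(p=-0.5\).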

The scheme converges if the step sizes \(\tau \) and \(\sigma \) satisfy

$$\begin{aligned} \tau \sigma \Vert A^T A\Vert _{L^2} < 1, \end{aligned}$$

where \(\Vert \cdot \Vert \) is an operator norm in \(L^2\). However, the constraint of the SIRV variational problem involves a nonlinear operator A. Thus, we use the extension of the algorithm from [10], which solves the nonlinearly constrained optimization problem

$$\begin{aligned} \min _u\,\max _p\, g(u) + \langle A(u), p \rangle _{L^2} - f^*(p), \end{aligned}$$

where A is a nonlinear function. The scheme iterates algorithm (3.1) with a linear approximation of A at a base point \({{\bar{u}}}\)

$$\begin{aligned} A(u) \approx A({{\bar{u}}}) + [\nabla A({{\bar{u}}})](u - {{\bar{u}}}). \end{aligned}$$

Denote \(A_{u} := \nabla A(u)\). We have a linearized saddle point problem

$$\begin{aligned} \min _u\,\max _p\, g(u) + \langle A({{\bar{u}}}) + A_{{{\bar{u}}}}(u-{{\bar{u}}}), p \rangle _{L^2} - f^*(p) \end{aligned}$$

and the scheme iterates

$$\begin{aligned} \begin{aligned} u^{(k+1)}&= \text {arg}\min _u \, g(u) + \langle u, A_{u^{(k)}}^T p^{(k)} \rangle _{L^2} + \frac{1}{2\tau ^{(k)}} \Vert u-u^{(k)}\Vert ^2_{L^2},\\ {\tilde{u}}^{(k+1)}&= 2 u^{(k+1)} - u^{(k)},\\ p^{(k+1)}&= \text {arg}\max _p \, \langle A(u^{(k)}) + A_{u^{(k)}} ({\tilde{u}}^{(k+1)} - u^{(k)}) , p \rangle _{L^2} - f^*(p) - \frac{1}{2\sigma ^{(k)}} \Vert p-p^{(k)}\Vert ^2_{L^2}. \end{aligned} \end{aligned}$$
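A minimal sketch of this linearized scheme on a toy problem with one nonlinear constraint (\(A(x)=x_1x_2-2\); the problem data and the names `nl_pdhg_toy` and `NLResult` are illustrative) looks as follows.

```cpp
#include <cassert>
#include <cmath>

// Linearized PDHG on a toy nonlinearly constrained problem:
//   min_x 0.5*||x - b||^2   subject to  A(x) = x1*x2 - 2 = 0,
// with b = (1, 1).  The constraint is re-linearized at the current
// iterate x^{(k)} in every iteration, exactly as in the scheme above.
struct NLResult { double x1, x2, p; };

NLResult nl_pdhg_toy(int iters, double tau, double sigma) {
    const double b1 = 1.0, b2 = 1.0;
    double x1 = 1.0, x2 = 1.0, p = 0.0;   // start near the solution
    for (int k = 0; k < iters; ++k) {
        double g1 = x2, g2 = x1;          // nabla A(x^k) = (x2, x1)
        double Ax = x1 * x2 - 2.0;        // A(x^k)
        // u-update with the linearized constraint
        double nx1 = (x1 + tau * (b1 - p * g1)) / (1.0 + tau);
        double nx2 = (x2 + tau * (b2 - p * g2)) / (1.0 + tau);
        double tx1 = 2.0 * nx1 - x1, tx2 = 2.0 * nx2 - x2;
        // p-update on A(x^k) + nabla A(x^k) . (x_tilde - x^k)
        p += sigma * (Ax + g1 * (tx1 - x1) + g2 * (tx2 - x2));
        x1 = nx1; x2 = nx2;
    }
    return {x1, x2, p};
}
```

Starting from \((1,1)\), the iterates approach \(x_1=x_2=\sqrt{2}\); note that \(\nabla A\) is re-evaluated at \(u^{(k)}\) in every iteration.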

The paper [10] proves that the sequence \(\{u^{(k)},p^{(k)}\}^\infty _{k=0}\) generated by the algorithm converges to a saddle point \((u_*,p_*)\) satisfying (2.17), provided the step sizes satisfy

$$\begin{aligned} \sigma ^{(k)} \tau ^{(k)} \Vert \nabla A(u^{(k)})\Vert ^2_{L^2} < 1,\quad k=1,2,\cdots . \end{aligned}$$

Suppose A is an unbounded operator, for example \(A = \nabla \). The discrete approximation of the operator norm of A then grows as the grid is refined (Fig. 1 illustrates the relationship between the norm of an unbounded operator and the grid size). Thus, the scheme can converge very slowly on a fine grid. To circumvent this problem, we use the General-proximal Primal-Dual Hybrid Gradient (G-prox PDHG) method from [21], another variant of the PDHG algorithm. This variant provides an appropriate choice of norms for the algorithm, and the authors prove that choosing the proper norms allows larger step sizes than the vanilla PDHG algorithm. The G-prox PDHG iterates

$$\begin{aligned} \begin{aligned} u^{(k+1)}&= \text {arg}\min _u \, g(u) + \langle u, A_{u^{(k)}}^T p^{(k)} \rangle _{L^2} + \frac{1}{2\tau ^{(k)}} \Vert u-u^{(k)}\Vert ^2_{L^2},\\ {\tilde{u}}^{(k+1)}&= 2 u^{(k+1)} - u^{(k)},\\ p^{(k+1)}&= \text {arg}\max _p \, \langle A(u^{(k)}) + A_{u^{(k)}} ({\tilde{u}}^{(k+1)} - u^{(k)}) , p \rangle _{L^2} - f^*(p) - \frac{1}{2\sigma ^{(k)}} \Vert p-p^{(k)}\Vert ^2_{{\mathcal {H}}^{(k)}}, \end{aligned} \end{aligned}$$

where the norm \(\Vert \cdot \Vert _{{\mathcal {H}}^{(k)}}\) is defined as

$$\begin{aligned} \Vert p\Vert ^2_{{\mathcal {H}}^{(k)}} = \Vert A_{u^{(k)}}^T p\Vert ^2_{L^2}. \end{aligned}$$

By choosing the proper norms, the step sizes only need to satisfy

$$\begin{aligned} \sigma ^{(k)} \tau ^{(k)} < 1,\quad k=1,2,\cdots \end{aligned}$$

which are clearly independent of the grid size.
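The effect of the preconditioning can be seen on the same kind of toy equality-constrained problem with a badly scaled constraint matrix. The data and the names `gprox_toy` and `GPResult` below are illustrative only.

```cpp
#include <cassert>
#include <cmath>

// G-prox PDHG on the toy equality-constrained problem
//   min_x 0.5*||x - b||^2   subject to  A x = c,
// where A = [s  s] has a large norm (s = 100).  Measuring the dual
// update in the ||A^T p|| norm amounts to preconditioning the dual
// ascent by (A A^T)^{-1} = 1/(2 s^2), so tau*sigma < 1 suffices
// regardless of s.
struct GPResult { double x1, x2, p; };

GPResult gprox_toy(int iters, double tau, double sigma) {
    const double s = 100.0;               // large operator "norm"
    const double b1 = 1.0, b2 = 0.0, c = 2.0 * s;
    const double AAt = 2.0 * s * s;       // A A^T (a scalar here)
    double x1 = 0.0, x2 = 0.0, p = 0.0;
    for (int k = 0; k < iters; ++k) {
        double nx1 = (x1 + tau * (b1 - s * p)) / (1.0 + tau);
        double nx2 = (x2 + tau * (b2 - s * p)) / (1.0 + tau);
        double tx1 = 2.0 * nx1 - x1, tx2 = 2.0 * nx2 - x2;
        // preconditioned dual ascent: (A A^T)^{-1} applied to the residual
        p += sigma * (s * (tx1 + tx2) - c) / AAt;
        x1 = nx1; x2 = nx2;
    }
    return {x1, x2, p};
}
```

Without the division by \(AA^T\), the step sizes would have to satisfy \(\tau \sigma < 1/(2s^2)\); with it, \(\tau =\sigma =0.9\) works for any s.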

Fig. 1

The image (a) shows u on the unit square domain \([-0.5,0.5]^2\). The plot (b) shows the discrete approximation of \(\Vert \nabla u\Vert ^2_{L^2} = \int _\Omega |\nabla u(x)|^2 \,dx\) with respect to the grid size. In the discrete approximation, the norm of the unbounded operator \(\nabla \) grows as the grid is refined
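The growth of the discrete operator norm in Fig. 1b can be reproduced with a few lines of code. The following sketch estimates \(\Vert \nabla \Vert ^2\) for the one-dimensional forward-difference gradient by power iteration; it is illustrative only and is not the code used for the figure.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Estimate ||grad||^2 = lambda_max(D^T D) for the 1-D forward-difference
// gradient D on n interior points with spacing h = 1/(n+1) and zero
// boundary values.  D^T D is the tridiagonal (-1, 2, -1)/h^2 matrix;
// its largest eigenvalue is estimated by power iteration.
double grad_norm_sq(int n, int iters = 500) {
    double h = 1.0 / (n + 1), lambda = 0.0;
    std::vector<double> v(n), w(n);
    // alternating start vector: close to the highest-frequency mode
    for (int i = 0; i < n; ++i) v[i] = (i % 2 == 0) ? 1.0 : -1.0;
    for (int it = 0; it < iters; ++it) {
        for (int i = 0; i < n; ++i) {
            double left  = (i > 0)     ? v[i - 1] : 0.0;
            double right = (i < n - 1) ? v[i + 1] : 0.0;
            w[i] = (2.0 * v[i] - left - right) / (h * h);
        }
        double norm = 0.0;
        for (double x : w) norm += x * x;
        norm = std::sqrt(norm);
        for (int i = 0; i < n; ++i) v[i] = w[i] / norm;
        lambda = norm;  // ||D^T D v|| for unit v -> lambda_max = ||D||^2
    }
    return lambda;
}
```

Doubling the number of grid points roughly quadruples the estimate, consistent with \(\Vert \nabla \Vert ^2 \approx 4/h^2\) in one dimension.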

Local convergence of the algorithm

In this section, we show that the iterates of algorithm (3.6) converge locally to the saddle point. The local convergence theorem in this paper is mainly based on Theorem 2.11 of [10]; however, we add the preconditioning operator from the G-prox PDHG method. We show that the method converges locally to the saddle point with step sizes independent of the nonlinear operator A.

From algorithm (3.6), \((u^{(k+1)},p^{(k+1)})\) satisfies the following first-order optimality conditions

$$\begin{aligned} \begin{aligned} 0&\in \partial g(u^{(k+1)}) + A_{u^{(k)}}^T p^{(k)} + \frac{1}{\tau ^{(k)}} (u^{(k+1)} - u^{(k)}),\\ 0&\in A(u^{(k)}) + 2 A_{u^{(k)}} (u^{(k+1)} - u^{(k)}) - \partial f^* (p^{(k+1)}) - \frac{1}{\sigma ^{(k)}} A_{u^{(k)}} A_{u^{(k)}}^T (p^{(k+1)}-p^{(k)}), \end{aligned} \end{aligned}$$

which can be rewritten as

$$\begin{aligned} \begin{aligned} 0 \in H_{u^{(k)}} (q^{(k+1)}) + M^{(k)} (q^{(k+1)} - q^{(k)}) \end{aligned} \end{aligned}$$

with \(q=(u,p)\). Here, the monotone operator \(H_{{{\bar{u}}}}\) is defined as

$$\begin{aligned} H_{{{\bar{u}}}} (q) := \begin{pmatrix} \partial g(u) + A_{{{\bar{u}}} }^T p\\ \partial f^*(p) - A({{\bar{u}}}) - A_{{{\bar{u}}}} (u - {{\bar{u}}}) \end{pmatrix} \end{aligned}$$

and the operator \(M^{(k)}\) is defined as

$$\begin{aligned} M^{(k)} := \begin{pmatrix} \frac{1}{\tau ^{(k)}} Id &{} - A_{u^{(k)}}^T\\ - A_{u^{(k)}} &{} \frac{1}{\sigma ^{(k)}} A_{u^{(k)}} A_{u^{(k)}}^T \end{pmatrix}, \end{aligned}$$

where Id is the identity operator.

Recall that from (2.17), the saddle point \(q_*=(u_*,p_*)\) has to satisfy

$$\begin{aligned} 0 \in H_{u_*}(u_*,p_*). \end{aligned}$$

Throughout, we assume that

$$\begin{aligned} \Vert \nabla A(u_*)\Vert > 0\quad \text {and}\quad u\mapsto A(u) \text { is continuous}. \end{aligned}$$

Lemma 3.1

There exist constants \(0<c<C\) and \(R>0\) such that

$$\begin{aligned} c \le \Vert \nabla A(u)\Vert \le C,\quad (\Vert u-u_*\Vert _{L^2} \le R), \end{aligned}$$

where \(\Vert \cdot \Vert \) is an operator norm.


Proof This follows immediately from (3.9) and the fact that the derivative \(\nabla A(u)\) is continuous with respect to u. \(\square \)

Lemma 3.2

Suppose (3.9) holds and let \(\tau ^{(k)} \sigma ^{(k)} <1\). Then there exist constants \(0<\theta < \Theta \) such that

$$\begin{aligned} \theta ^2 \Vert q\Vert _{L^2}^2 \le \langle q, M^{(k)} q \rangle \le \Theta ^2 \Vert q\Vert _{L^2}^2, \end{aligned}$$

where

$$\begin{aligned} \Vert q\Vert _{L^2}^2 = \Vert u\Vert _{L^2}^2 + \Vert p\Vert _{L^2}^2. \end{aligned}$$

A proof of Lemma 3.2 is provided in the appendix.

With the above lemmas, we can use Theorem 2.11 of [10] to show the local convergence of the algorithm.

Theorem 3.3

Let \((u_*, p_*) \in L^2 \times {\mathcal {H}}^{(*)}\) be a solution to (2.17) where \(\Vert p\Vert ^2_{{\mathcal {H}}^{(*)}} = \Vert A_{u_*}^T p\Vert ^2_{L^2}\). Let the step sizes \(\tau ^{(k)}\) and \(\sigma ^{(k)}\) satisfy \(\tau ^{(k)}\sigma ^{(k)}<1\) for all k. Then there exists \(\delta >0\) such that for any initial point \((u^{(0)},p^{(0)})\in L^2 \times {\mathcal {H}}^{(0)}\) satisfying

$$\begin{aligned} \Vert u^{(0)} - u_*\Vert _{L^2}^2 + \Vert p^{(0)} - p_*\Vert _{L^2}^2 < \delta ^2, \end{aligned}$$

the iterates \((u^{(k)}, p^{(k)})\) from (3.6) converge to the saddle point \((u_*, p_*)\).


Proof By Lemma 3.1, Lemma 3.2, and the strong convexity of the functional G from (2.8), we can apply [10, Theorem 2.11], which proves the theorem. \(\square \)

Remark 3.4

[10, Theorem 2.11] requires \(H_{u_*}\) to satisfy the condition called metric regularity. In our formulation, the constraint \(A(u) = 0\) makes \(H_{u_*}\) metrically regular by [11, Section 5.3]. We refer readers to [10, 11, 33] for further details about metric regularity.

Implementation of the algorithm

To apply the algorithm to the minimization problem (2.8), we set

$$\begin{aligned} \begin{aligned} u&= ((\rho _i, m_i)_{i\in {\mathbb {S}}},f),\\ p&= (\phi _i)_{i\in {\mathbb {S}}},\\ g(u)&= {G} (u),\\ f(A(u))&= {\left\{ \begin{array}{ll} 0 &{} \text {if } A(u) = 0,\\ \infty &{} \text {otherwise}, \end{array}\right. }\\ f^*(p)&= 0. \end{aligned} \end{aligned}$$

We use (2.15) for the definition of the operator A. Define the Lagrangian functional as

$$\begin{aligned} {\mathcal {L}}(u,p) := {G}(u) - \langle A(u), p \rangle _{L^2}, \end{aligned}$$

where \(\langle \cdot , \cdot \rangle _{L^2}\) is defined in (2.12). We summarize the algorithm as follows.

(Algorithm 3.1)

Here, \(L^2\) and \(H^{(k)}_i\) norms are defined as

$$\begin{aligned} \Vert u\Vert ^2_{L^2}&= (u,u)_{L^2}= \int ^T_0 \int _\Omega u^2 dx\, dt,\quad \Vert p\Vert ^2_{H^{(k)}_i} = \Vert [\nabla A_i(u^{(k)})]^T p\Vert _{L^2}^2,\quad i \in {\mathbb {S}}, \end{aligned}$$

for any \(u : [0,T] \times \Omega \rightarrow [0,\infty )\). Moreover, the relative error is defined as

$$\begin{aligned} \text {relative error} = \frac{|{G}(\rho _i^{(k+1)},m_i^{(k+1)})- {G}(\rho _i^{(k)},m_i^{(k)})|}{|{G}(\rho _i^{(k)},m_i^{(k)})|}. \end{aligned}$$

In Sect. 4, we use quadratic functions for \({\mathcal {E}}_i\) \((i\in \{S,I,V\})\), \({\mathcal {G}}_P\), \({\mathcal {G}}_V\), and \({\mathcal {G}}_0\). With definition (2.10), we use

$$\begin{aligned} \begin{aligned} {\mathcal {E}}_i(\rho _i(T,\cdot ))&= \int _\Omega \frac{a_i}{2} \rho _i(T,x)^2 \, dx, \quad i=S,I,V,\\ {\mathcal {G}}_P(\rho (t,\cdot ))&= \int _\Omega \frac{d_P}{2} \rho (t,x)^2 \, dx,\\ {\mathcal {G}}_V(\rho (t,\cdot ))&= \int _\Omega \frac{d_V}{2} \rho (t,x)^2\, dx + i_{[-\infty , C_{factory}]}(\rho (t,\cdot )),\\ {\mathcal {G}}_0(f(t,\cdot ))&= \int _\Omega \frac{d_0}{2} f(t,x)^2 + i_{\Omega _{factory}}(x)f(t,x) \, dx + i_{[-\infty , f_{max}]}(f(t,\cdot )). \end{aligned} \end{aligned}$$

Thus, we can write the cost functional as follows:

$$\begin{aligned} \begin{aligned} {G}((\rho _i,m_i)_{i\in {\mathbb {S}}},f)&= \int _\Omega \sum _{i=S,I,V}\frac{a_i}{2}\rho _i(T,\cdot )^2 \, dx \\&\quad + \int ^T_0 \int _\Omega \sum _{i = S,I,R} F_i(\rho _i, m_i) \,dx\, dt +\int ^T_{T'} \int _\Omega F_V(\rho _V,m_V)\,dx\, dt\\&\quad + \int ^T_0 \int _\Omega \frac{d_P}{2} (\rho _S+\rho _I+\rho _R)^2 + \frac{d_V}{2} \rho _V^2 \,dx\,dt\\&\quad + \int ^{T'}_0 \int _\Omega \frac{d_0}{2} f^2 + i_{\Omega _{factory}} f\, dx \, dt\\&\quad + \int ^T_0 i_{[-\infty , C_{factory}]}(\rho _V(t,\cdot )) + i_{[-\infty , f_{max}]}(f(t,\cdot ))\, dt\\&\quad + \frac{\lambda }{2}\int ^T_0\int _\Omega f^2 + \sum _{i\in {\mathbb {S}}} \rho _i^2 + |m_i|^2 \,dx\,dt, \end{aligned} \end{aligned}$$

where \(a_i\), \(d_P\), \(d_V\), \(d_0\) are nonnegative constants. With this cost functional, we find explicit formulas for the updates of the variables \(\rho _i^{(k+1)},m_i^{(k+1)},\phi _i^{(k+1)}\) \((i\in {\mathbb {S}})\), and \(f^{(k+1)}\).

Proposition 3.5

The variables \(\rho _i^{(k+1)},m_i^{(k+1)},\phi _i^{(k+1)}\) (\(i\in {\mathbb {S}}\)), and \(f^{(k+1)}\) from Algorithm 3.1 satisfy the following explicit formulas:

$$\begin{aligned} \rho _S^{(k+1)}&= root_+\Biggl (\frac{\tau }{1+ \tau (d_P + \lambda ) } \biggl (\partial _t\phi _S^{(k)} + \frac{\eta _S^2}{2} \Delta \phi _S^{(k)} - \frac{1}{\tau } \rho _S^{(k)} + \beta \left( \phi _I^{(k)} - \phi _S^{(k)} \right) K*\rho _I^{(k)}\\&\quad + \rho _V^{(k)} \left( \theta _1(\phi _R^{(k)} - \phi _S^{(k)}) - \theta _2 \phi _V^{(k)} \right) + d_P (\rho _I^{(k)} + \rho _R^{(k)})\biggl ), 0, - \frac{\tau \alpha _S |m_S^{(k)}|^2}{2(1+\tau (d_P + \lambda ))} \Biggl ),\\ \rho _I^{(k+1)}&= root_+\Biggl (\frac{\tau }{1+ \tau (d_P + \lambda ) } \biggl (\partial _t\phi _I^{(k)} + \frac{\eta _I^2}{2} \Delta \phi _I^{(k)} - \frac{1}{\tau } \rho _I^{(k)} + \beta K * \left( \rho _S^{(k)}(\phi _I^{(k)} - \phi _S^{(k)}) \right) \\&\quad + \gamma (\phi _R^{(k)} - \phi _I^{(k)}) + d_P (\rho _S^{(k)} + \rho _R^{(k)})\biggl ), 0, - \frac{\tau \alpha _I |m_I^{(k)}|^2}{2(1+\tau (d_P + \lambda ))} \Biggl ) \Biggl ),\\ \rho _R^{(k+1)}&= root_+\Biggl (\frac{\tau }{1+ \tau (d_P + \lambda ) } \biggl (\partial _t\phi _R^{(k)} + \frac{\eta _R^2}{2} \Delta \phi _R^{(k)} - \frac{1}{\tau } \rho _R^{(k)} + d_P (\rho _S^{(k)} + \rho _I^{(k)})\biggl ),\\&\quad 0, - \frac{\tau \alpha _R |m_R^{(k)}|^2}{2(1+\tau (d_P + \lambda ))} \Biggl ),\\ \rho _V^{(k+1)}&= \min \left( C_{factory}, \frac{\tau }{1 + \tau (d_V+\lambda )} \Bigl ( - \partial _t \phi _V^{(k)} - \rho _S^{(k)} (\theta _1(\phi _R^{(k)}-\phi _S^{(k)})-\theta _2\phi _V^{(k)}) + \frac{1}{\tau } \rho _V^{(k)} \Bigr )\right) ,\\&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad (t,x)\in [0,T']\times \Omega \\ \rho _V^{(k+1)}&= root_+\Biggl (\frac{\tau }{1+ \tau (d_V+\lambda ) } \biggl (\partial _t\phi _V^{(k)} + \rho _S(\theta _1(\phi _R-\phi _S)-\theta _2\phi _V) - \frac{1}{\tau } \rho _V^{(k)} \biggl ), \\&\qquad \qquad \qquad \qquad \qquad \qquad \quad 0, - \frac{\tau \alpha _V |m_V^{(k)}|^2}{2(1+\tau (d_V+\lambda ))} \Biggl ),(t,x)\in (T',T]\times \Omega \\ m_i^{(k+1)}&= \frac{\rho _i^{(k+1)}}{\tau 
\alpha _i + (1+\tau \lambda ) \rho _i^{(k+1)}} \left( m_i^{(k)} - \tau \nabla \phi _i^{(k)} \right) , (i\in {\mathbb {S}}),\\ f^{(k+1)}&= \min \left( f_{max}, \frac{\tau }{1 + \tau (d_0+\lambda )} \left( \frac{1}{\tau } f^{(k)} - \phi _V^{(k)} \right) \right) {\mathcal {X}}_{\Omega _{factory}}(x),\\ \phi _S^{(k+\frac{1}{2})}&= \phi _S^{(k)} + \sigma (A_S A_S^T)^{-1} \Bigl ( -\partial _t \rho ^{(k+1)}_S - \nabla \cdot m^{(k+1)}_S - \beta \rho ^{(k+1)}_S K * \rho ^{(k+1)}_I \\&\qquad \qquad \qquad \qquad \qquad \quad - \theta _1 \rho _S^{(k+1)} \rho _V^{(k+1)} + \frac{\eta _S^2}{2} \Delta \rho ^{(k+1)}_S \Bigr ),\\ \phi _I^{(k+\frac{1}{2})}&= \phi _I^{(k)} + \sigma (A_I A_I^T)^{-1} \Bigl ( -\partial _t \rho ^{(k+1)}_I - \nabla \cdot m^{(k+1)}_I + \beta \rho ^{(k+1)}_S K * \rho ^{(k+1)}_I \\&\qquad \qquad \qquad \qquad \qquad \quad - \gamma \rho ^{(k+1)}_I + \frac{\eta _I^2}{2} \Delta \rho ^{(k+1)}_I \Bigr ),&\\ \phi _R^{(k+\frac{1}{2})}&= \phi _R^{(k)} + \sigma (A_R A_R^T)^{-1} \Bigl ( -\partial _t \rho ^{(k+1)}_R - \nabla \cdot m^{(k+1)}_R + \gamma \rho ^{(k+1)}_I \\&\qquad \qquad \qquad \qquad \qquad \quad + \theta _1 \rho _S^{(k+1)} \rho _V^{(k+1)} + \frac{\eta _R^2}{2} \Delta \rho ^{(k+1)}_R\Bigr ),\\ \phi _V^{(k+\frac{1}{2})}&= \phi _V^{(k)} + \sigma (A_V A_V^T)^{-1} \left( -\partial _t \rho _V^{(k+1)} + f^{(k+1)} {\mathcal {X}}_{[0,T')}(t) - \nabla \cdot m_V^{(k+1)} {\mathcal {X}}_{[T',T]}(t) \right. \\&\left. \qquad \qquad \qquad \qquad \qquad \quad - \theta _1 \rho _S^{(k+1)} \rho _V^{(k+1)} \right) \\ \end{aligned}$$

where \(root_+(a,b,c)\) is the positive root of the cubic polynomial \(x^3 + a x^2 + b x +c = 0\), and we approximate \(A_iA_i^T\) as follows:

$$\begin{aligned} A_S A_S^{T}&= -\partial _{tt} + \frac{\eta _S^4}{4} \Delta ^2 - (1 + (\beta +\theta _1) \eta _S^2) \Delta + (\beta +\theta _1)^2,\\ A_I A_I^{T}&= -\partial _{tt} + \frac{\eta _I^4}{4} \Delta ^2 - (1 + (\gamma +\beta ) \eta _I^2) \Delta + (\gamma + \beta )^2,\\ A_R A_R^{T}&= -\partial _{tt} + \frac{\eta _R^4}{4} \Delta ^2 - \Delta ,\\ A_V A_V^{T}&= -\partial _{tt} - \Delta + \theta _2^2. \end{aligned}$$
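The operation \(root_+\) can be implemented robustly by bisection: in the updates above the constant term satisfies \(c \le 0\), so the cubic changes sign on \([0, M]\) for a standard root bound M. The following stand-alone sketch is illustrative and is not the paper's code.

```cpp
#include <cassert>
#include <cmath>

// Evaluate p(x) = x^3 + a x^2 + b x + c (Horner form).
double cubic(double x, double a, double b, double c) {
    return ((x + a) * x + b) * x + c;
}

// Positive root of x^3 + a x^2 + b x + c = 0 by bisection, assuming
// c <= 0 (so p(0) <= 0) as in the proposition above.  Any real root
// satisfies |x| <= 1 + max(|a|, |b|, |c|), which bounds the bracket.
double root_plus(double a, double b, double c) {
    double lo = 0.0;
    double hi = 1.0 + std::fmax(std::fabs(a),
                                std::fmax(std::fabs(b), std::fabs(c)));
    for (int i = 0; i < 200; ++i) {  // bisect to machine precision
        double mid = 0.5 * (lo + hi);
        if (cubic(mid, a, b, c) < 0.0) lo = mid; else hi = mid;
    }
    return 0.5 * (lo + hi);
}
```

For example, \(x^3 - 3x - 2 = (x-2)(x+1)^2\) has the positive root 2, which the bisection recovers.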

We use the FFTW library to compute \((A_i A_i^T)^{-1}\) (\(i\in {\mathbb {S}}\)) and the convolution terms by the Fast Fourier Transform (FFT), which takes \(O(n\log n)\) operations with n the number of grid points. Thus, each iteration of the algorithm takes \(O(n\log n)\) operations.
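The following one-dimensional sketch shows the structure of such a spectral inversion for a constant-coefficient operator like \(A_VA_V^T\). It uses a hand-rolled radix-2 FFT instead of FFTW and is illustrative only; the names `fft` and `helmholtz_solve` are ours.

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>

// Spectral solve of (-d^2/dx^2 + theta^2) u = f on [0,1) with periodic
// boundary conditions, a 1-D stand-in for applying (A_V A_V^T)^{-1}:
// in Fourier space the operator is diagonal with symbol
// (2 pi k)^2 + theta^2, so one FFT, a pointwise division, and one
// inverse FFT suffice.
using cvec = std::vector<std::complex<double>>;
const double kPi = std::acos(-1.0);

void fft(cvec& a, bool invert) {  // radix-2 Cooley-Tukey, size = 2^m
    int n = (int)a.size();
    if (n == 1) return;
    cvec even(n / 2), odd(n / 2);
    for (int i = 0; i < n / 2; ++i) { even[i] = a[2 * i]; odd[i] = a[2 * i + 1]; }
    fft(even, invert); fft(odd, invert);
    double ang = 2.0 * kPi / n * (invert ? -1 : 1);
    for (int i = 0; i < n / 2; ++i) {
        std::complex<double> w = std::polar(1.0, ang * i) * odd[i];
        a[i] = even[i] + w;
        a[i + n / 2] = even[i] - w;
    }
}

std::vector<double> helmholtz_solve(const std::vector<double>& f, double theta) {
    int n = (int)f.size();
    cvec a(f.begin(), f.end());
    fft(a, false);
    for (int k = 0; k < n; ++k) {
        int kk = (k <= n / 2) ? k : k - n;  // signed frequency
        double symbol = std::pow(2.0 * kPi * kk, 2) + theta * theta;
        a[k] /= symbol;                     // divide by the Fourier symbol
    }
    fft(a, true);
    std::vector<double> u(n);
    for (int k = 0; k < n; ++k) u[k] = a[k].real() / n;  // undo scaling
    return u;
}
```

Each application costs one forward FFT, a pointwise division by the Fourier symbol, and one inverse FFT, i.e. \(O(n\log n)\) operations.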


In this section, we present several sets of numerical experiments using Algorithm 3.1 with various parameters. We wrote C++ code to run the numerical experiments. Let \(\Omega = [0,1]^2\) be the unit square in \({\mathbb {R}}^2\) and let the terminal time be \(T=1\). The domain \([0,1]\times \Omega \) is discretized with the regular Cartesian grid below:

$$\begin{aligned} \Delta x_1&= \frac{1}{N_{x_1}},\quad \Delta x_2 = \frac{1}{N_{x_2}},\quad \Delta t = \frac{1}{N_t-1},\\ x_{kl}&= \left( (k+0.5) \Delta x_1, (l+0.5)\Delta x_2 \right) , \quad k = 0,\cdots ,N_{x_1}-1,\quad l = 0,\cdots ,N_{x_2}-1,\\ t_n&= n \Delta t ,\quad n = 0,\cdots ,N_t-1, \end{aligned}$$

where \(N_{x_1}\), \(N_{x_2}\) are the number of discretized points in space and \(N_t\) is the number of discretized points in time. For all the experiments, we use the same set of parameters,

$$\begin{aligned} \alpha _S = 10, \quad \alpha _I = 30, \quad \alpha _R = 20, \quad \alpha _V = 0.005,\\ a_S = 2, \quad a_I = 2, \quad a_R = 0.001, \quad a_V = 0.1,\\ T'=0.5,\quad \sigma =0.01,\quad d_P = 0.4,\quad d_V = 0.4, \quad d_0 = 0.01,\\ \theta _2=0.9,\quad \eta _i = 0.01 \quad (i\in {\mathbb {S}}). \end{aligned}$$

By setting a higher value for \(\alpha _I\), we penalize the infected population's movement more than that of the other populations. Given the limited mobility of infected individuals, this is a reasonable choice for real-world applications. By setting \(T'=1/2\), the solution produces the vaccines during \(0\le t < 1/2\) and delivers them during \(1/2 \le t \le 1\). Furthermore, we fix the parameters for the infection rate and the recovery rate

$$\begin{aligned} \beta = 0.8,\quad \gamma =0.1. \end{aligned}$$

The paper [28] describes how the parameters \(\beta \) and \(\gamma \) affect the propagation of the populations. In this paper, we focus on vaccine production and distribution. Recall from formulation (3.10) that we have the terminal functionals

$$\begin{aligned}{\mathcal {E}}_i(\rho _i(T,\cdot )) = \int _\Omega \frac{a_i}{2}\rho _i(T,x)^2\, dx,\quad i\in \{S,I,V\}. \end{aligned}$$

Thus, the solution to the problem has to minimize the total numbers of susceptible, infected, and vaccines at the terminal time T. The solution reduces the total number of infected by recovering them at a rate \(\gamma \), and it decreases the total number of susceptible as they become infected at a rate \(\beta \) or recovered at a rate \(\theta _1\) (Fig. 2). If \(\beta \) is large and \(\gamma \) is small, the number of infected grows since the inflow from the susceptible exceeds the outflow to the recovered. To minimize the total number of infected, the solution has to vaccinate as many of the susceptible as possible before they become infected. Thus, the vaccines need to be produced and delivered to the susceptible efficiently while satisfying the constraint conditions (2.9).
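This reasoning can be checked on a spatially homogeneous caricature of the dynamics (transport and diffusion dropped). The parameters \(\beta \), \(\gamma \), \(\theta _1\), \(\theta _2\), and \(T'\) follow the text; the production rate `f_rate` and the initial data below are illustrative choices.

```cpp
#include <cassert>
#include <cmath>

struct State { double S, I, R, V; };

// Forward-Euler integration of the homogeneous SIRV caricature:
//   S' = -beta*S*I - theta1*S*V,   I' = beta*S*I - gamma*I,
//   R' =  gamma*I + theta1*S*V,    V' = f*1{t<T'} - theta2*S*V.
State simulate(double f_rate, int steps = 10000) {
    const double beta = 0.8, gamma = 0.1, theta1 = 0.9, theta2 = 0.9;
    const double T = 1.0, Tprod = 0.5, dt = T / steps;
    State s{0.9, 0.1, 0.0, 0.0};
    for (int k = 0; k < steps; ++k) {
        double t = k * dt;
        double inf = beta * s.S * s.I;     // new infections
        double vac = theta1 * s.S * s.V;   // vaccinations S -> R
        double dS = -inf - vac;
        double dI = inf - gamma * s.I;
        double dR = gamma * s.I + vac;
        double dV = ((t < Tprod) ? f_rate : 0.0) - theta2 * s.S * s.V;
        s.S += dt * dS; s.I += dt * dI; s.R += dt * dR; s.V += dt * dV;
    }
    return s;
}
```

With \(\beta =0.8\) and \(\gamma =0.1\), the infected population grows unless the susceptible are vaccinated quickly, matching the discussion above; the S, I, R mass is conserved by the flows in Fig. 2.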

Fig. 2

Visualization of the flow of the three populations. The susceptible become infected at a rate \(\beta \) and recovered at a rate \(\theta _1\); the infected become recovered at a rate \(\gamma \)

We present two experiments that demonstrate how the various factors in the formulation affect the production and the distribution of vaccines.

Experiment 1

In this experiment, we show that Algorithm 3.1 converges independently of the grid size when we use the preconditioning operator defined in Proposition 3.5. Consider the initial densities \(\rho _i\) (\(i\in {\mathbb {S}}\)) and the factory location \(\Omega _{factory}\) given by

$$\begin{aligned} \begin{aligned} \rho _S(0,x)&= \left( 2 \exp (-5[(x_1-0.7)^2 + (x_2 - 0.7)^2]) - 1.5 \right) _+,\\ \rho _I(0,x)&= \left( 2 \exp (-5[(x_1-0.7)^2 + (x_2 - 0.7)^2]) - 1.8 \right) _+,\\ \rho _R(0,x)&= 0,\\ \rho _V(0,x)&= 0,\\ \Omega _{factory}&= B_{0.1}(0.3,0.3), \end{aligned} \end{aligned}$$

where \((x)_+ = \max (x,0)\) and \(B_r(x)\) is a ball of a radius r centered at x. Figure 3 shows the images of initial conditions (4.1).

Fig. 3

Experiment 1: Initial densities of \(\rho _S\) (left) and \(\rho _I\) (right). The green circle indicates \(\Omega _{factory}\)

We compute the solution of the SIRV variational problem (2.11) with the above initial conditions using Algorithm 3.1. For simplicity, we assume the recovered population density \(\rho _R\) does not move. Thus, we use an arbitrarily large value \(\alpha _R = 10^4\) to penalize movement whenever \(|m_R| > 0\). The rest of the parameters are identical to those defined in the preceding section. We ran four simulations with the same initial conditions and step sizes (\(\tau =0.05\), \(\sigma =0.2\)) on four different grid sizes:

\(N_{x_1}\) \(N_{x_2}\) \(N_t\)
32 32 32
64 64 32
128 128 32
256 256 32

The result of the experiment is depicted in Fig. 4, which shows the convergence of the algorithm with respect to the number of iterations for each grid size. The x-axis indicates the iteration number and the y-axis indicates the value of the following Lagrangian functional:

$$\begin{aligned} \begin{aligned}&\tilde{{\mathcal {L}}}((\rho _i,m_i,\phi _i)_{i\in {\mathbb {S}}},f) \\&\quad = \int _\Omega \sum _{i=S,I,V}\frac{a_i}{2}\rho _i(T,\cdot )^2 \, dx + \int ^T_0 \int _\Omega \sum _{i = S,I,R} F_i(\rho _i, m_i) \,dx\, dt \\&\qquad +\int ^T_{T'} \int _\Omega F_V(\rho _V,m_V)\,dx\, dt\\&\qquad + \int ^T_0 \int _\Omega \frac{d_P}{2} (\rho _S+\rho _I+\rho _R)^2 + \frac{d_V}{2} \rho _V^2 \,dx\,dt + \int ^{T'}_0 \int _\Omega \frac{d_0}{2} f^2 \, dx \, dt\\&\qquad - \int ^T_0 \int _\Omega \phi _S \left( \partial _t \rho _S + \nabla \cdot m_S + \beta \rho _S K * \rho _I + \theta _1 \rho _S \rho _V - \frac{\eta _S^2}{2} \Delta \rho _S \right) \, dx \, dt\nonumber \\&\qquad - \int ^T_0 \int _\Omega \phi _I \left( \partial _t \rho _I + \nabla \cdot m_I - \beta \rho _S K * \rho _I + \gamma \rho _I - \frac{\eta _I^2}{2} \Delta \rho _I \right) \, dx \, dt\nonumber \\&\qquad - \int ^T_0 \int _\Omega \phi _R \left( \partial _t \rho _R + \nabla \cdot m_R - \gamma \rho _I - \theta _1 \rho _S \rho _V - \frac{\eta _R^2}{2} \Delta \rho _R \right) \, dx \, dt\nonumber \\&\qquad - \int ^{T}_0 \int _\Omega \phi _V \left( \partial _t \rho _V - f {\mathcal {X}}_{[0,T')}(t) + \nabla \cdot m_V {\mathcal {X}}_{[T',T]}(t) + \theta _2 \rho _S \rho _V \right) \, dx \, dt. \end{aligned} \end{aligned}$$

Note that this Lagrangian functional \(\tilde{{\mathcal {L}}}\) is different from (2.8) and (2.13). The terms with the indicator functions \(i_{\Omega _{factory}}\), \(i_{[-\infty ,t]}\) are removed to avoid representing \(+\infty \) numerically. The absence of these terms may explain why the value of \(\tilde{{\mathcal {L}}}\) increases over the first 500 iterations and decreases afterward. Figure 5 shows the computed solutions at iteration 3000 for four different spatial grid sizes (\(32\times 32\), \(64\times 64\), \(128\times 128\), \(256\times 256\)). Each row of the figure shows the evolution of the vaccine density \(\rho _V\) from time \(t=0\) to \(t=1\) computed on each grid. These figures clearly show that the algorithm converges to the same saddle point independently of the grid size.

Fig. 4

Convergence plot of the algorithm for each grid size (\(N_{x_1}=N_{x_2}=32,64,128,256\)) with the same step sizes (\(\tau =0.05\), \(\sigma =0.2\)). The plot shows that the convergence of the algorithm is independent of the grid size

Fig. 5

Computed solution of a vaccine density variable \(\rho _V\) from Experiment 1. Each row shows the evolution of a vaccine density variable from time \(t=0\) to \(t=1\) with different grid sizes. Row 1: \(32\times 32\), Row 2: \(64\times 64\), Row 3: \(128\times 128\), Row 4: \(256\times 256\)

Experiment 2

In this experiment, we show how the parameters related to the vaccine density variable \(\rho _V\) (\(\theta _1, \theta _2, f_{max}, C_{factory}\)) affect the solution. We use the same initial densities \(\rho _i\) (\(i\in {\mathbb {S}}\)) and f as in Experiment 1. With the initial densities (4.1), we run two simulations with different values of \(\theta _1\), \(f_{max}\), and \(C_{factory}\).

Parameters      Sim 1   Sim 2   Description
\(\theta _1\)       0.5     0.9     Vaccine efficiency
\(f_{max}\)       0.5     10      Maximum production rate of vaccines
\(C_{factory}\)   0.5     2       Maximum amount of vaccines that can be produced at \(x\in \Omega \) during \(0\le t \le \frac{1}{2}\)

Figure 6 compares the results of simulation 1 and simulation 2. The first three plots (Fig. 6a) show the total mass of \(\rho _i\) (\(i=S,I,R\)), i.e.,

$$\begin{aligned} \int _\Omega \rho _i(t,x)\, dx,\quad i=S,I,R,\quad t\in [0,1], \end{aligned}$$

and the last plot (Fig. 6b) shows the total mass of \(\rho _V\) during \(0\le t \le \frac{1}{2}\)

$$\begin{aligned} \int _\Omega \rho _V(t,x)\, dx,\quad t\in \left[ 0,\frac{1}{2}\right] . \end{aligned}$$

The total number of vaccines produced in simulation 1 is smaller than in simulation 2 because the low production rate \(f_{max}\) prevents the solution from producing a large amount of vaccines. Furthermore, the solution in simulation 1 cannot vaccinate a large number of the susceptible due to the small \(\theta _1\). Thus, there are more susceptible and fewer recovered at the terminal time in simulation 1.

Fig. 6

Experiment 2: Comparison between the results of simulation 1 and simulation 2. The first three plots (a) show the total mass of \(\rho _i\) (\(i=S,I,R\)) and the fourth plot (b) shows the total mass of \(\rho _V\) produced at the factory area during the production time \(0\le t < 0.5\)

Experiment 3

This experiment includes spatial obstacles and shows how the algorithm finds a solution that effectively coordinates vaccine production and distribution in the presence of spatial barriers. Denote by \(\Omega _{obs} \subset \Omega \) the set of obstacles. We use the following functionals in the experiment:

$$\begin{aligned} {\mathcal {G}}_P(\rho (t,\cdot ))&= \int _\Omega \sum _{i\in \{S,I,R\}} \frac{d_i}{2} \rho _i^2(t,x) + i_{\Omega _{obs}}(x) \left( \sum _{i\in \{S,I,R\}}\rho _i(t,x) \right) \, dx,\\ {\mathcal {G}}_V(\rho (t,\cdot ))&= \int _\Omega \frac{d_V}{2} \rho _V^2(t,x) + i_{\Omega _{obs}}(x) \rho _V(t,x)\, dx,\\ {\mathcal {E}}_i(\rho (1,\cdot ))&= \int _\Omega \frac{a_i}{2} \rho _i^2(1,x) + i_{\Omega _{obs}}(x) \rho _i(1,x)\, dx,\quad i\in \{S,I,V\},\\ {\mathcal {E}}_R(\rho (1,\cdot ))&= \int _\Omega \frac{a_R}{2} (\rho _R(1,x) - 1)^2 + i_{\Omega _{obs}}(x) \rho _R(1,x)\, dx. \end{aligned}$$

The densities \(\rho _i\) (\(i\in {\mathbb {S}}\)) cannot be positive on \(\Omega _{obs}\) due to \(i_{\Omega _{obs}}\). Thus, the densities are transported around the obstacle \(\Omega _{obs}\). We show two sets of experiments based on this setup.

Single factory

We set the initial densities and \(\Omega _{factory}\) as follows

$$\begin{aligned} \rho _S(0,x)&= \left( 2 \exp (-15((x_1-0.2)^2+(x_2-0.5)^2)) - 1.6 \right) _+\\&\quad +\left( 2 \exp (-15((x_1-0.8)^2+(x_2-0.5)^2)) - 1.6 \right) _+,\\ \rho _I(0,x)&= \left( 2 \exp (-15((x_1-0.2)^2+(x_2-0.5)^2)) - 1.8 \right) _+,\\ \rho _R(0,x)&= 0,\\ \rho _V(0,x)&= 0,\\ \Omega _{factory}&= B_{0.075}(0.5,0.5) \end{aligned}$$

and fix the parameters

$$\begin{aligned} \theta _1 = 0.9,\quad f_{max} = 10, \quad C_{factory} = 2. \end{aligned}$$

The initial densities are shown in Fig. 7.
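The truncated-Gaussian initial densities above can be evaluated on a grid as follows. This is a minimal numpy sketch; the grid resolution and helper name `bump` are illustrative choices, not from the paper.

```python
import numpy as np

def bump(X1, X2, c, cut, scale=2.0, rate=15.0):
    """Truncated Gaussian (scale * exp(-rate * |x - c|^2) - cut)_+
    used for the initial densities of Experiment 3."""
    g = scale * np.exp(-rate * ((X1 - c[0])**2 + (X2 - c[1])**2)) - cut
    return np.maximum(g, 0.0)

n = 64
x = (np.arange(n) + 0.5) / n            # cell centers on [0, 1]
X1, X2 = np.meshgrid(x, x, indexing="ij")

rho_S0 = bump(X1, X2, (0.2, 0.5), 1.6) + bump(X1, X2, (0.8, 0.5), 1.6)
rho_I0 = bump(X1, X2, (0.2, 0.5), 1.8)
# factory region B_{0.075}(0.5, 0.5) as a boolean mask
factory = (X1 - 0.5)**2 + (X2 - 0.5)**2 <= 0.075**2
```

Note that the \((\cdot )_+\) truncation caps \(\rho _S(0,\cdot )\) at \(0.4\) and \(\rho _I(0,\cdot )\) at \(0.2\), producing the compactly supported bumps in Fig. 7.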

Fig. 7

Experiment 3: The initial densities \(\rho _S\) (left) and \(\rho _I\) (right), and the location of the factory (indicated as a green circle)

Fig. 8

Experiment 3: The evolution of densities \(\rho _i\) (\(i\in {\mathbb {S}}\)) without the obstacle over time \(0\le t \le 1\). The first row: the susceptible density \(\rho _S\). The second row: the infected density \(\rho _I\). The third row: the recovered density \(\rho _R\). The fourth row: the vaccine density \(\rho _V\)

Fig. 9

Experiment 3: The evolution of densities \(\rho _i\) (\(i\in {\mathbb {S}}\)) with the obstacle (indicated as a yellow block) over time \(0\le t \le 1\). The first row: the susceptible density \(\rho _S\). The second row: the infected density \(\rho _I\). The third row: the recovered density \(\rho _R\). The fourth row: the vaccine density \(\rho _V\)

Figures 8 and 9 show the evolution of densities without and with the obstacle, respectively. In both simulations, the vaccine density \(\rho _V\) (the fourth row) is transported to the areas where susceptible people are present. In Fig. 9, \(\rho _V\) is transported while avoiding the obstacle on the right. Figure 10 compares the two solutions and quantifies how the presence of the obstacle affects the production and delivery of vaccines. Figure 10a shows the total mass of vaccines in the factory area \(\Omega _{factory}\) during the production time

$$\begin{aligned} \int _{\Omega _{factory}} \rho _V(t,x)\, dx, \quad t\in [0,0.5). \end{aligned}$$

Figure 10b shows the total mass of the vaccines during the delivery time at the left side and the right side of the domain

$$\begin{aligned} \begin{aligned} \int _{\Omega \cap \{x_1<0.5\}} \rho _V(t,x)\, dx&, \quad \text {Left}\\ \int _{\Omega \cap \{x_1\ge 0.5\}} \rho _V(t,x)\, dx&, \quad \text {Right} \end{aligned} \end{aligned}$$

during \(t\in [0.5,1]\). When there is no obstacle, more vaccines are delivered to the right than to the left (Fig. 10b). The number of susceptible people on the left decreases quickly because infected people with a high infection rate are present there. By the time \(\rho _V\) starts to be transported at \(t=0.5\), fewer susceptible people remain on the left, so the solution distributes fewer vaccines there. When there is an obstacle, \(\rho _V\) has to bypass it to reach the susceptible areas, so the kinetic energy cost during the delivery time \(t\in [0.5,1]\) increases on the right. The solution cannot deliver as many vaccines as in the case without the obstacle. Consequently, with an obstacle, fewer vaccines are produced during \(t\in [0,0.5)\) (Fig. 10a) and fewer are delivered to the right during \(t\in [0.5,1]\) (Fig. 10b).
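The left/right mass split plotted in Fig. 10b amounts to summing the discrete density over each half of the domain. A minimal sketch, assuming a uniform grid of cell centers:

```python
import numpy as np

def vaccine_mass_split(rho_V, X1, h):
    """Total vaccine mass on the left (x1 < 0.5) and right (x1 >= 0.5)
    halves of Omega = [0, 1]^2.

    rho_V : 2-D density on a uniform grid with spacing h
    X1    : first coordinates of the cell centers (same shape as rho_V)
    """
    left = rho_V[X1 < 0.5].sum() * h**2
    right = rho_V[X1 >= 0.5].sum() * h**2
    return left, right
```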

Fig. 10

Experiment 3: The left plot shows the total mass of vaccine density \(\rho _V\) during the production time \(t\in [0,0.5)\). The right plot shows the total mass of \(\rho _V\) at the left side of the domain \(\Omega \cap \{x_1<0.5\}\) and at the right side of the domain \(\Omega \cap \{x_1\ge 0.5\}\)

Multiple factories

Similar to the previous experiment, we show how obstacles in the spatial domain affect the production and distribution of vaccines. This experiment uses more complex initial densities, an obstacle set \(\Omega _{obs}\), and three factory locations. We set the initial densities and \(\Omega _{factory}\) as follows

$$\begin{aligned} \rho _S(0,x)&= \left( 2 \exp (-15((x_1-0.8)^2+(x_2-0.8)^2)) - 1.6 \right) _+\\&\quad +\left( 2 \exp (-15((x_1-0.2)^2+(x_2-0.7)^2)) - 1.6 \right) _+\\&\quad +\left( 2 \exp (-15((x_1-0.8)^2+(x_2-0.3)^2)) - 1.6 \right) _+\\&\quad +\left( 2 \exp (-15((x_1-0.2)^2+(x_2-0.2)^2)) - 1.6 \right) _+,\\ \rho _I(0,x)&= \left( 2 \exp (-15((x_1-0.2)^2+(x_2-0.7)^2)) - 1.8 \right) _+\\&\quad + \left( 2 \exp (-15((x_1-0.2)^2+(x_2-0.2)^2)) - 1.8 \right) _+,\\ \rho _R(0,x)&= 0,\\ \rho _V(0,x)&= 0,\\ \Omega _{factory}&= B_{0.075}(0.5,0.2) \, \cup \, B_{0.075}(0.5,0.5) \, \cup \, B_{0.075}(0.5,0.8) \end{aligned}$$

and fix the parameters

$$\begin{aligned} \theta _1 = 0.9,\quad f_{max} = 10, \quad C_{factory} = 2. \end{aligned}$$

The initial densities are shown in Fig. 11.

Fig. 11

Experiment 3: The initial densities \(\rho _S\) (left) and \(\rho _I\) (right), and the locations of the factories (indicated as green circles)

Fig. 12

Experiment 3: The evolution of densities \(\rho _i\) (\(i\in {\mathbb {S}}\)) without the obstacle over time \(0\le t \le 1\). The first row: the susceptible density \(\rho _S\). The second row: the infected density \(\rho _I\). The third row: the recovered density \(\rho _R\). The fourth row: the vaccine density \(\rho _V\)

Fig. 13

Experiment 3: The evolution of densities \(\rho _i\) (\(i\in {\mathbb {S}}\)) with the obstacle (colored yellow) over time \(0\le t \le 1\). The first row: the susceptible density \(\rho _S\). The second row: the infected density \(\rho _I\). The third row: the recovered density \(\rho _R\). The fourth row: the vaccine density \(\rho _V\)

Figures 12 and 13 show the evolution of densities without and with obstacles, respectively. The experiment demonstrates that even with the complex initial densities, the algorithm converges to a reasonable solution consistent with the previous experiments. The vaccine density \(\rho _V\) (the fourth row) is transported to the areas where susceptible people are present while avoiding the obstacles.

Figure 14a shows the total mass of vaccines produced at each factory location during the production time. Without the obstacles, the total mass of \(\rho _V\) at the middle factory is the lowest at time 0.5 because that factory is farthest from the susceptible people; it is more efficient to produce vaccines at the factories closer to the susceptible (the top and the bottom) to reduce the kinetic energy cost during the delivery time \(t\in [0.5,1]\). With the obstacles, however, the middle factory produces the most vaccines. Since the obstacles block the paths from the top and bottom factories to the susceptible people, \(\rho _V\) has to bypass them to reach the target areas. The pathways from the middle factory are blocked less than those from the top and bottom factories, so producing more vaccines at the middle factory is more efficient.

Figure 14b shows the total mass of the vaccines during the delivery time at different locations. The lines in the plot represent the following quantities:

$$\begin{aligned} \begin{aligned}&\int _{\Omega \cap \{x_1<0.5\} \cap \{x_2\ge 0.5\}} \rho _V(t,x)\,dx, \quad \text {Top Left} ,\\&\int _{\Omega \cap \{x_1\ge 0.5\} \cap \{x_2\ge 0.5\}} \rho _V(t,x)\, dx, \quad \text {Top Right}\\&\int _{\Omega \cap \{x_1<0.5\} \cap \{x_2<0.5\}} \rho _V(t,x)\, dx, \quad \text {Bottom Left} ,\\&\int _{\Omega \cap \{x_1\ge 0.5\} \cap \{x_2<0.5\}} \rho _V(t,x)\, dx, \quad \text {Bottom Right} \end{aligned} \end{aligned}$$

over \(t\in [0.5,1]\). With the obstacles, the kinetic energy cost increases since \(\rho _V\) transported from the top and bottom factories has to bypass the obstacles to reach the targets. As a result, fewer vaccines are produced than in the simulation without the obstacles, and fewer vaccines reach the targets.

Fig. 14

Experiment 3: The top plot shows the total mass of vaccine density \(\rho _V\) at three factory locations during the production time \(t\in [0,0.5)\). The bottom plot shows the total mass of \(\rho _V\) at the top left area of the domain \(\Omega \cap \{x_1<0.5\} \cap \{x_2\ge 0.5\}\), at the bottom left area \(\Omega \cap \{x_1<0.5\} \cap \{x_2<0.5\}\), at the top right area \(\Omega \cap \{x_1\ge 0.5\} \cap \{x_2\ge 0.5\}\), and at the bottom right area \(\Omega \cap \{x_1\ge 0.5\}\cap \{x_2 < 0.5\}\) during the distribution time \(t\in [0.5,1]\)

Experiment 4

This experiment compares the vaccine production strategy generated by the algorithm with a strategy that uses fixed production rates. The initial densities and \(\Omega _{factory}\) are set as follows

$$\begin{aligned} \rho _S(0,x)&= \left( 4 \exp (-15((x_1-0.5)^2+(x_2-0.55)^2)) - 1.6 \right) _+,\\ \rho _I(0,x)&= \left( 4 \exp (-15((x_1-0.5)^2+(x_2-0.55)^2)) - 1.8 \right) _+,\\ \rho _R(0,x)&= 0,\\ \rho _V(0,x)&= 0,\\ \Omega _{factory}&= B_{0.04}(0.1,0.3) \, \cup \, B_{0.04}(0.5,0.3) \, \cup \, B_{0.04}(0.9,0.4). \end{aligned}$$

We fix the parameters

$$\begin{aligned} \theta _1 = 0.9,\quad f_{max} = 5, \quad C_{factory} = 1. \end{aligned}$$

The initial densities and locations of factories are shown in Fig. 15.

Fig. 15

Experiment 4: The initial densities \(\rho _S\) (left) and \(\rho _I\) (right), the locations of the factories (indicated as green circles), and the obstacle (colored yellow)

To compare the effect of the optimal vaccine production strategy fairly, we remove the momenta of the S, I, R groups, thereby removing the spatial movements defined by \(m_S\), \(m_I\), \(m_R\). We consider the following PDEs:

$$\begin{aligned} \begin{aligned}&\partial _t \rho _S = - \beta \rho _S K * \rho _I + \frac{\eta _S^2}{2} \Delta \rho _S - \theta _1 \rho _V \rho _S&(t,x)\in (0,T)\times \Omega ,\\&\partial _t \rho _I = \beta \rho _S K * \rho _I - \gamma \rho _I + \frac{\eta ^2_I}{2} \Delta \rho _I&(t,x) \in (0,T)\times \Omega ,\\&\partial _t \rho _R = \gamma \rho _I + \frac{\eta ^2_R}{2} \Delta \rho _R + \theta _1 \rho _V \rho _S&(t,x) \in (0,T)\times \Omega ,\\&\partial _t \rho _V = f(t,x) - \theta _2 \rho _V \rho _S&(t,x) \in (0,T')\times \Omega ,\\&\partial _t \rho _V + \nabla \cdot m_V = - \theta _2 \rho _V \rho _S&(t,x) \in [T',T)\times \Omega . \end{aligned} \end{aligned}$$
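The momentum-free dynamics during the production phase \(t<T'\) can be sketched with an explicit time-stepper. The forward-Euler discretization below is illustrative only: the parameter values, the periodic spectral convolution for \(K*\rho _I\), and the Neumann finite-difference Laplacian are assumptions, not the discretization used in the paper.

```python
import numpy as np

def laplacian(u, h):
    """Five-point Laplacian with homogeneous Neumann boundary (edge padding)."""
    up = np.pad(u, 1, mode="edge")
    return (up[:-2, 1:-1] + up[2:, 1:-1] + up[1:-1, :-2] + up[1:-1, 2:]
            - 4.0 * u) / h**2

def euler_step(rho_S, rho_I, rho_R, rho_V, f, K_hat, h, dt,
               beta=0.3, gamma=0.1, eta2=1e-3, th1=0.9, th2=0.9):
    """One forward-Euler step of the momentum-free SIRV system for t < T'.

    K_hat is the FFT of the interaction kernel K, so K * rho_I is
    evaluated spectrally (periodic boundary).  Parameter values are
    illustrative placeholders.
    """
    KI = np.real(np.fft.ifft2(K_hat * np.fft.fft2(rho_I)))  # K * rho_I
    new_S = rho_S + dt * (-beta * rho_S * KI
                          + 0.5 * eta2 * laplacian(rho_S, h)
                          - th1 * rho_V * rho_S)
    new_I = rho_I + dt * (beta * rho_S * KI - gamma * rho_I
                          + 0.5 * eta2 * laplacian(rho_I, h))
    new_R = rho_R + dt * (gamma * rho_I
                          + 0.5 * eta2 * laplacian(rho_R, h)
                          + th1 * rho_V * rho_S)
    new_V = rho_V + dt * (f - th2 * rho_V * rho_S)
    return new_S, new_I, new_R, new_V
```

Since the infection, recovery, and vaccination terms appear with opposite signs in the S, I, R equations, the total population mass \(\int (\rho _S+\rho _I+\rho _R)\,dx\) is conserved by each step.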

Furthermore, with the momentum terms of the S, I, R groups removed, the cost functional for this experiment is

$$\begin{aligned} \begin{aligned} {G}((\rho _i,m_i)_{i\in {\mathbb {S}}},f)&= \int _\Omega \frac{a_V}{2}\rho _V(T,\cdot )^2 \, dx + \int ^T_{T'} \int _\Omega F_V(\rho _V,m_V)\,dx\, dt\\&\quad + \int ^T_0 \int _\Omega \frac{d_V}{2} \rho _V^2 \,dx\,dt \\&\quad + \int ^{T'}_0 \int _\Omega \frac{d_0}{2} f^2 + i_{\Omega _{factory}} f\, dx \, dt\\&\quad + \int ^T_0 i_{\{\rho (t,\cdot ) \le C_{factory}\}}(\rho (t,\cdot )) + i_{\{f(t,\cdot ) \le f_{max}\}}(f(t,\cdot ))\, dt\\&\quad + \frac{\lambda }{2}\int ^T_0\int _\Omega f^2 + \rho _V^2 + |m_V|^2 \,dx\,dt. \end{aligned} \end{aligned}$$

With the PDEs and the cost functional above, we compare two results. The first uses the optimal vaccine production and distribution strategy generated by Algorithm 3.1. The second uses a fixed vaccine production rate together with the algorithm's distribution strategy; in this case, the factory variable f is fixed as

$$\begin{aligned} f(t,x) = {\left\{ \begin{array}{ll} 1.2,&{} (t,x)\in [0,T']\times \Omega _{factory},\\ 0,&{} (t,x)\in [0,T']\times \Omega \backslash \Omega _{factory}. \end{array}\right. } \end{aligned}$$
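The fixed schedule above can be written directly as a function of time and a factory mask. A minimal sketch, assuming a boolean grid marking \(\Omega _{factory}\):

```python
import numpy as np

def fixed_rate_f(t, factory_mask, rate=1.2, T_prime=0.5):
    """Fixed production schedule used for the comparison:
    f(t, x) = rate on Omega_factory for t in [0, T'], zero elsewhere.

    factory_mask : boolean grid, True on Omega_factory
    """
    if t <= T_prime:
        return np.where(factory_mask, rate, 0.0)
    return np.zeros(factory_mask.shape)
```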

Figure 16 compares these two results. The result from the fixed production rate is labeled "without control," and the result from the optimal vaccine production strategy is labeled "with control." The labels "left," "middle," and "right" refer to the factory locations in Fig. 15. The solid lines, from the fixed production rates, show that all three factories produce identical amounts of vaccines. The dotted lines show the least amount of vaccines at the middle factory and much more at the left and right factories. When vaccines are produced at the middle factory, higher transportation costs are incurred because the vaccines must bypass the obstacle. The obstacle does not block the paths from the left and right factories to the susceptible people. Thus, to minimize transportation costs, it is optimal to utilize the left and right factories more than the middle one.

Fig. 16

Experiment 4: The plot shows the total mass of vaccine densities \(\int ^t_0\int _\Omega \rho _V\,dx\,dt\) during production \(t\in [0,T']\) at each factory location: left, middle, and right. The dotted lines are from the optimal strategy from Algorithm 3.1, and the solid lines are from the fixed production rates

The table below gives a quantitative comparison between the two results.

| Quantity | Description | Algorithm 3.1 | Fixed rates |
| --- | --- | --- | --- |
| \(\int _\Omega \rho _V(\frac{1}{2},x)\,dx\) | Total amount of vaccines produced | \(7.997\times 10^{-3}\) | \(8.411\times 10^{-3}\) |
| \(\int _\Omega \rho _S(1,x)\,dx\) | Number of susceptible people at the terminal time | \(1.520\times 10^{-2}\) | \(1.525\times 10^{-2}\) |
| \(\int _\Omega \rho _I(1,x)\,dx\) | Number of infected people at the terminal time | \(5.133\times 10^{-3}\) | \(5.134\times 10^{-3}\) |
| \(\int ^1_{\frac{1}{2}}\int _\Omega \frac{|m_V|^2}{2\rho _V}\,dx\,dt\) | Transportation cost of vaccines | \(7.339\times 10^{-3}\) | \(7.544\times 10^{-3}\) |
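The transportation-cost entry \(\int ^1_{1/2}\int _\Omega \frac{|m_V|^2}{2\rho _V}\,dx\,dt\) can be approximated on a space-time grid as below. This is a sketch; the array layout and the regularization `eps` for cells where \(\rho _V\) vanishes are assumptions.

```python
import numpy as np

def transport_cost(m_V, rho_V, h, dt, eps=1e-8):
    """Discrete kinetic energy int int |m_V|^2 / (2 rho_V) dx dt.

    m_V   : momenta, shape (nt, 2, n, n)  (two spatial components)
    rho_V : densities, shape (nt, n, n)
    h, dt : spatial and temporal grid spacings
    eps   : floor on the density to avoid division by zero where
            rho_V = 0 (the momentum is zero there anyway)
    """
    dens = np.maximum(rho_V, eps)
    ke = (m_V**2).sum(axis=1) / (2.0 * dens)   # |m_V|^2 / (2 rho_V)
    return ke.sum() * h * h * dt               # space-time quadrature
```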

The first row of the table shows that more vaccines are produced with the fixed production rate. However, the fixed-rate result vaccinates fewer susceptible people and, as a result, has more infected people at the terminal time. It also incurs higher transportation costs. The algorithm finds a more efficient strategy that produces fewer vaccines.