Dynamic Games and Applications

Volume 4, Issue 3, pp 257–289

Robust Control and Hot Spots in Spatiotemporal Economic Systems

  • W. A. Brock
  • A. Xepapadeas
  • A. N. Yannacopoulos


Abstract

We formulate stochastic robust optimal control problems, motivated by applications arising in interconnected economic systems, or spatially extended economies. We study in detail linear quadratic problems and nonlinear problems. We derive optimal robust controls and identify conditions under which concerns about model misspecification at specific site(s) could cause regulation to break down, to be very costly, or to induce pattern formation and spatial clustering. We call sites associated with these phenomena hot spots. We also provide an application of our methods by studying optimal robust control and the potential breakdown of regulation, due to hot spots, in a model where utility for in situ consumption is distance dependent.


Keywords

Stochastic control · Robust control · Riccati equation · Hot spot formation · Pattern formation · Hamilton–Jacobi–Bellman equation · Distance-dependent utility

JEL Classification

C6 · R12 · Q26

1 Introduction

Decision making when the decision-making agent has concerns about possible misspecifications of the model used to derive policy rules, relative to the underlying true model, has been associated with the concept of robustness. Whittle [43] characterizes a rule as robust if it continues to behave well even if the model deviates from a specified or benchmark model, and points out that optimality should be supplemented by robustness. Thus, the desire for robustness emerges when the decision maker regards her model not as the correct one but as an approximation of the correct one, or, to put it differently, when the decision maker has concerns about possible misspecifications of the reference model and wants to incorporate these concerns into the decision-making rules. Robust control problems have traditionally been analyzed in the context of risk-sensitive linear exponential quadratic Gaussian (LEQG) models and \(H^{\infty }\) models (e.g., [8], [43]). The \(H^{\infty }\) criterion implies decision making for protection against the “worst case” and is related to a minimax approach. More recently, [21] interpreted concerns about model misspecification in economics as a situation where a decision maker or a regulator distrusts her model and wants good decisions over a cloud of models that surrounds the regulator’s approximating or benchmark model, models which are difficult to distinguish with finite datasets.1 They then obtain robust decision rules by introducing a fictitious “adversarial agent,” which we will refer to as Nature. Nature promotes robust decision rules by forcing the regulator, who seeks to maximize (minimize) an objective, to explore the fragility of decision rules to departures from the benchmark model. A robust decision rule means that lower bounds on the rule’s performance are determined by Nature, the adversarial agent, who acts as a minimizing (maximizing) agent when constructing these lower bounds. Hansen et al. [23] show that robust control theory can be interpreted as a recursive version of maximin expected utility theory [19]. In this context the decision maker cannot, or does not, formulate a single probability model and maximizes expected utility assuming that the probability weights are chosen by Nature, the adversarial agent.

Decision making when the spatial dimension of the underlying problem is explicitly taken into account, and the decision maker or regulator seeks to determine spatially dependent rules, is attracting increasing interest in economics. The spatial dimension has been brought into the picture through new economic geography models (e.g., [10, 18, 29]), but also through models of resource management (e.g., [12, 13, 38, 39, 44]). In biology, systems with spatially distributed parameters in the dynamics have been used to study pattern formation in biological agents (e.g., [34]), while similar approaches have been used to study groundwater management related to agricultural systems (e.g., [30]). There is also a large literature on robust control of spatially distributed systems modeled by partial differential equations (PDEs), both from the mathematical point of view and from the point of view of applications in engineering.2 We mention only the works that have been most influential for our approach. In the mathematics literature one should note the work of McMillan and Triggiani [33], where the robust control problem of an abstract PDE with uncertain parameters is studied within the framework of game theory (but not in a stochastic framework). McMillan and Triggiani [33] mention a possible breakdown of the robust control procedure, a fact that may be of great importance in applications. It is among the aims of the present work to elaborate on this possible breakdown, providing detailed information and conditions on how this breakdown is likely to occur and its implications for regulation.
In a similar vein, in the engineering literature one should note the work of Armaou and Christofides [4], where robust control of PDEs with uncertain parameters is treated in detail and approximated by a finite dimensional system using the Galerkin method. This work does not specify the nature of the uncertain parameters but simply states that they are unknown functions taking values in some set. Our work is in the spirit of Armaou and Christofides [4] in that it employs a finite dimensional approximation of the PDE in the study of the robust control problem; it is, however, distinct from Armaou and Christofides [4] in that (i) we make specific assumptions on the uncertainty, which is generated by Itō processes; (ii) we derive feedback rules within an explicit optimizing framework in which the regulator maximizes her net benefits, while the adversarial agent determines the performance of the rules by acting as a minimizing agent across locations; and (iii) we focus on the possible breakdown of control due to differences in uncertainty across locations. Our specific assumptions for the uncertainty allow us to specify the set of possible models in a form which permits us to express the system as a stochastic differential game, and through this formulation we obtain explicit conditions for the breakdown of robust control. In this context, we consider the following to be the main contributions of this paper. From the point of view of economic theory, to the best of our knowledge there is no work on the robust control of spatial economies with noise and spatially structured uncertainty. While there is a growing literature on optimal control of spatial economies, there is no work yet on the interaction of time, space, noise, and uncertainty, as all of the literature so far has focused on deterministic models.
In our approach the regulator designs the robust control rules not only with respect to the spatial characteristics of a specific location but also with respect to the degree to which the regulator distrusts her model across locations. This means that if concerns about the benchmark model in a given site differ from concerns in other sites, a spatially dependent robust rule should capture these differences by incorporating the spatial structure of uncertainty. We believe that this approach is novel in the area of regulation of economic systems. From the point of view of robust control outside the field of spatial economics, the main contribution of this work is the detailed study of the failure (rather than the success) of the robust control procedure. Our analysis provides detailed information on how the interaction of uncertainty, noise, and coupling of the various units of the system can lead to breakdown and for all practical purposes to uncontrollability of the system. We also provide detailed conditions on the data of the model, which allow the prediction of when such situations are likely to occur. Therefore, our results are important as indicative of the limitations of robust control regulation.

The introduction of spatially structured uncertainty in the robust control problem allows us to formally identify, for the first time to our knowledge in economics, spatial hot spots, which are sites where robust control breaks down, or sites where robust control is very costly, as a function of the degree of the regulator’s concerns about model misspecification across all sites. We are also able to identify spatial hot spots where the need to apply robust control induces spatial agglomerations and breaks spatial symmetry. From the theory point of view this is, as far as we know, a new source for generating spatial patterns, as compared to the classic Turing diffusion-induced instability [42] and the more recently identified optimal diffusion or spatial-spillover-induced instabilities [12, 13]. In our analysis hot spots are specific sites where uncertainties are such that, when concerns about local misspecifications are incorporated into the decision rules for the entire spatial domain, the global rule could break down, could be very costly, or could induce spatial clustering. We study, at the initial stage, translation invariant systems, because they are useful for deriving closed-form solutions and providing a clear picture of the mechanisms underlying the emergence of hot spots. We then extend our analysis to general linear-quadratic systems, where we completely characterize the solution and derive conditions for the emergence of spatial hot spots. Finally, we derive conditions for the emergence of hot spots in nonlinear robust control systems.

Our results regarding robust control in spatiotemporal systems bring up another point which could be associated with applied policy design and regulation. It has been argued recently (e.g., [20]) that increased interconnectedness has made various networks, such as ecological networks, power grid networks, transportation networks, and financial networks, more unstable. This interconnectedness, and the potential instabilities it induces, can be associated with the hot spots introduced by our model and the impact of local properties on global regulation.3

In the rest of the paper we formalize local concerns with the help of local entropy constraints, and we derive robust control rules for a general linear quadratic model, for a special case of this model in which translation invariance allows the derivation of closed-form results, and for a general nonlinear model. We also show how robust control can be applied using linear quadratic approximations. Finally, we provide an economic application where utility is spatially dependent and consumers consume in situ ecosystem services by traveling to locations. We provide robust decision rules for an optimal linear regulation problem where the objective is to determine the optimal supply of services at each site so that equilibrium local fees are determined.

2 Modeling a Spatial Economy Under Uncertainty

2.1 The Controlled State Equation

Consider the economy as being located on a discrete finite lattice \({\mathfrak {L}}\), taken without loss of generality to be one dimensional, e.g., \({\mathfrak {L}}={\mathbb {Z}}_{N}=\{0,1, \ldots , N-1\}\). By the term “economy” at this point we consider a collection of state variables \(x=\{x_{n}\}\), \(n\in {\mathfrak {L}}\). For fixed \(n\), \(x_{n}\in {\mathbb {R}}\) and corresponds to the state of the economy at lattice site \(n\). We therefore consider the state variable \(x\) as taking values on a (finite dimensional) sequence space. To keep our discussion within a Hilbert space setting we choose to work with the sequence space \(\ell ^{2}:=\ell ^{2}(\mathbb {Z}_{N})=\{\{x_{n}\},\,\,\sum _{n\in {\mathbb {Z}_{N}}}x_{n}^{2}<\infty \}\).4 This space is a Hilbert space with a norm derivable from the inner product \(\langle x,y\rangle =\sum _{n\in {\mathbb {Z}_{N}}}x_{n}y_{n}\) and is in fact equivalent to \({\mathbb {R}}^{N}\). Given this economy we consider a social planning problem modeled as an optimal linear regulator problem (e.g., [31]). The optimal linear regulator problem refers to the optimization of a quadratic objective defined over the whole lattice by exerting on each lattice site a control \(u_{n}\in \mathbb {R}\) where the control for the whole economy is described as a sequence \(u=\{u_{n}\}\), \(n\in {\mathbb {Z}_{N}}\) such that \(u\in \ell ^{2}({\mathbb {Z}}_{N})={\mathbb {R}}^{N}\).5 From now on to simplify notation we will simply denote the state space for the economy by \({\mathbb {R}}^{N}\).

The economy evolves in time, and this is modeled by considering the state of the economy as described by a function \(\check{x}:I\rightarrow {\mathbb {R}} ^{N}\) such that \(\check{x}(t)=\{x_{n}(t)\}\), \(n\in {\mathfrak {L}}\), where \(x_{n}(t) \) is the state of the system at site \(n\) at time \(t\). To ease notation we will use the notation \(x\) for this function and similarly \(u\) for the control exerted on the system. In this paper, we are interested in an infinite horizon economy, and thus, we assume \(I={\mathbb {R}}_{+}\). The evolution of the state of the economy in time is subject to statistical fluctuations (noise), which is introduced into the model via stochastic factors (sources),6 modeled in terms of a stochastic process \(w=\{w_{n}\}\), \(n\in {\mathbb {Z}_{N}}\), which is considered as a vector valued Wiener process on a suitable filtered probability space \((\Omega ,\{\mathcal {F}_{t}\}_{t\in {\mathbb {R}}_{+} },\mathcal {F},{\mathbb {P}})\) (see e.g., [27]). The introduction of noise turns the state of the system for a fixed time \(t\) into an \({\mathbb {R} }^{N}\)-valued random variable; thus, the state of the system can be described as an \({\mathbb {R}}^{N}\)-valued stochastic process. We assume that this stochastic process is the solution of a stochastic differential equation of the form
$$\begin{aligned} dx_{n} = \left( \sum _{m} a_{nm} x_{m} + \sum _{m} b_{nm} u_{m}\right) dt + \sum _{m} c_{nm} dw_{m}, \,\,\, n \in {\mathbb {Z}_{N}} \end{aligned}$$
(1)
where the last term, describing the fluctuations of the state due to the stochasticity, is understood in the sense of the Itō theory of stochastic integration. In compact form this can be expressed as
$$\begin{aligned} dx= ({\mathsf {A}} x + {\mathsf {B}} u) \, dt + {\mathsf {C}} dw \end{aligned}$$
where \({\mathsf {A}},{\mathsf {B}},{\mathsf {C}} : {\mathbb {R}}^{N} \rightarrow {\mathbb {R}}^{N}\) are linear operators, representable by finite matrices with elements \(a_{nm}\), \(b_{nm}\), \(c_{nm}\), respectively. The state equation (1) is an Ornstein–Uhlenbeck equation on the finite dimensional Hilbert space \(\ell ^{2}({\mathbb {Z}}_{N})={\mathbb {R}}^{N}\).

At this point we make some comments concerning the economic intuition behind the state equation (1). Our model is a “spatial” economy in which the state of the economy at point \(m\) has an effect on the state of the economy at point \(n\). This effect is quantified through an influence “kernel” (or rather a discretized version of an influence kernel) which can be represented in terms of a matrix \({\mathsf {A}}=(a_{nm})\). The entry \(a_{nm}\) provides a measure of the influence of the state of the system at point \(m\) on the state of the system at point \(n\). Network effects and knowledge spillovers can be modeled, for example, through a proper choice of \({\mathsf {A}}\). For instance, if the economies do not interact at all, then \(a_{nm}=\delta _{n,m}\), where \(\delta _{n,m}\) is the Kronecker delta, so that \({\mathsf {A}}\) is the identity. If only nearest-neighbor effects are possible, then \(a_{nm}\) is nonzero only if \(m\) is a neighbor of \(n\); such an example is the discrete Laplacian. Similarly, the controls \(u_{m}\) at different points of the lattice are assumed to have an effect on the state of the system at site \(n\), through the term \(\sum _{m}b_{nm}u_{m}\). For example, in a model of a spatial fishery, fishing effort at a given site may affect fish biomass at other sites through biomass movements. A similar interpretation holds for this term as for the term \(\sum _{m}a_{nm}x_{m}\). We will identify the matrices \({\mathsf {A}}=(a_{nm})\) and \({\mathsf {B}}=(b_{nm})\) with operators denoted by the same symbols, acting from \({\mathbb {R}}^{N}\) to \({\mathbb {R}}^{N}\).

Finally, the third term, \(\sum _{m}c_{nm} dw_{m}\), tells us how the uncertainty at site \(m\) affects the uncertainty concerning the state of the system at site \(n\). The matrix \({\mathsf {C}}=(c_{nm})\) can be thought of as the spatial autocorrelation operator for the system.
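To fix ideas, the state equation (1) can be simulated directly. The following sketch is illustrative only: the choice of \({\mathsf {A}}\) as a damped periodic discrete Laplacian, the identity \({\mathsf {B}}\), the scaling of \({\mathsf {C}}\), and all numerical values are our own assumptions, not parameters from the paper.

```python
import numpy as np

# Euler–Maruyama simulation of dx = (A x + B u) dt + C dw on the lattice Z_N.
# A is a periodic discrete Laplacian shifted by -0.5*I so that all eigenvalues
# are strictly negative (the periodic Laplacian alone has a zero mode).
# All parameter values here are illustrative assumptions.
rng = np.random.default_rng(0)
N, dt, T = 8, 0.01, 5.0
steps = int(T / dt)

A = -2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
A[0, -1] = A[-1, 0] = 1.0          # periodic boundary conditions
A -= 0.5 * np.eye(N)               # damping term: removes the zero mode
B = np.eye(N)
C = 0.1 * np.eye(N)                # spatial autocorrelation operator

x = np.ones(N)                     # initial state x^0
u = np.zeros(N)                    # no control exerted in this sketch
for _ in range(steps):
    dw = rng.normal(0.0, np.sqrt(dt), N)   # Wiener increments, variance dt
    x = x + (A @ x + B @ u) * dt + C @ dw

# With u = 0 and A negative definite, the state relaxes toward the target 0,
# fluctuating around it at the scale set by C.
print(np.linalg.norm(x))
```

Replacing \(u=0\) by a feedback rule \(u=-{\mathsf {K}}x\) would turn this into a closed-loop simulation of the regulator's problem.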

2.2 Model Uncertainty

The model of Eq. (1) includes statistical fluctuations that turn the state of the system into a stochastic process. The statistical fluctuations are assumed to be introduced into the model through the stochastic factors \(w=\{w_{n}\}\), \(n \in {\mathbb {Z}}_{N}\), and the probability distribution of these factors (under the probability measure \({\mathbb {P}}\)) is assumed to be the standard multivariate normal distribution \({\mathcal {N}}(0,{\mathsf I}\, t)\), where \({\mathsf I}\) is the \(N\times N\) identity matrix. This means that according to this model the probability that the fluctuations at time \(t\) lie in some Borel subset \(A \subset {\mathbb {R}}^{N}\) is given by \({\mathbb {P}}( w(t) \in A)=\int _{A} (2 \pi t)^{-\frac{N}{2}} \exp \left( -\frac{||x||^2}{2 t} \right) dx\), where \(||x||\) is the Euclidean norm in \({\mathbb {R}}^{N}\). In particular this allows us to calculate the moments of the stochastic factors (under the probability measure \({\mathbb {P}}\)) as \({\mathbb E}_{{\mathbb {P}}}[w_{n}(t)]=0\), \({\mathbb E}_{{\mathbb {P}}}[w_{n}(t)w_{m}(t)]=\delta _{nm} t\), \(n,m \in {\mathbb {Z}}_{N}\), where \(\delta _{nm}\) denotes the Kronecker delta. Upon adopting this assumption concerning the distribution of the stochastic factors \(w\), we may, by using techniques from stochastic analysis, obtain results concerning the distribution of the state of the system \(x\), i.e., obtain results concerning \({\mathbb {P}}(x \in B)\) for any Borel set \(B \subset {\mathbb {R}}^{N}\).
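The stated moments can be checked by direct Monte Carlo sampling. The sketch below (sample size and dimensions are arbitrary choices) verifies numerically that \({\mathbb E}_{{\mathbb {P}}}[w_{n}(t)]=0\) and \({\mathbb E}_{{\mathbb {P}}}[w_{n}(t)w_{m}(t)]=\delta _{nm}t\):

```python
import numpy as np

# Monte Carlo check of the moments of the Wiener factors under P:
# E[w_n(t)] = 0 and E[w_n(t) w_m(t)] = delta_nm * t.
rng = np.random.default_rng(1)
N, t, samples = 4, 2.0, 200_000

w = rng.normal(0.0, np.sqrt(t), size=(samples, N))  # w(t) ~ N(0, t I)
mean = w.mean(axis=0)
cov = (w[:, :, None] * w[:, None, :]).mean(axis=0)

print(np.max(np.abs(mean)))                  # close to 0
print(np.max(np.abs(cov - t * np.eye(N))))   # close to 0
```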

Assume now that there is some uncertainty concerning the “true” statistical distribution of the state of the system. By that we mean that we are no longer certain as to what the true statistical distribution of the stochastic factors \(w\) is, or in other words, we have doubts as to whether \({\mathbb {P}}\) is the correct probability measure describing the fluctuations of the state of the system around its mean behavior. This uncertainty may arise, for example, from incomplete knowledge or from partial observations of system (1). In mathematical terms this means that the probability measure that describes the stochastic process of the factors \(w\) may be a probability measure \({\mathbb {Q}}\) different from \({\mathbb {P}}\), which up to the level of available information for the system is compatible with our empirical estimations. Then, the probability that the fluctuations at time \(t\) lie in the Borel set \(A \subset {\mathbb {R}}^{N}\) will be given by \({\mathbb {Q}}(w(t) \in A)\) rather than \({\mathbb {P}}(w(t) \in A)\), and this will lead to different estimates for the probability distribution of the state of the system. Of course, not all such probability measures are acceptable as alternative descriptions of the system; a minimal requirement is the equivalence of the two measures (which essentially means that the two measures have the same null sets).
We will further restrict the alternative model \({\mathbb {Q}}\), so that it still provides a distribution which is reminiscent of a Gaussian distribution for the stochastic factors, in the sense that, under the measure \({\mathbb {Q}}\), for any \(t\), the expectation of the factor \(w_{n}\) at time \(t\) is no longer \(0\) but equal to \({\mathbb E}_{{\mathbb {Q}}}[ w_{n}(t) ]={\mathbb E}_{{\mathbb {Q}}}[\int _{0}^{t} v_{n}(s) ds]\), where \(v_{n}\) is a stochastic process adapted to the filtration \(\{\mathcal{F}_{t}\}\), whereas the variance remains unchanged.7 Therefore, there is a discrepancy in the rate of change of the mean of the factor \(w_{n}\) as predicted by the two models, equal to \(v_{n}\). This quantity \(v_{n}\) can be considered as a measure of model misspecification, which is directly identified in terms of the consequences it induces on the mean state of factor \(w_{n}\). We will call \(v_{n}\) the information drift related to the stochastic factor \(w_{n}\). Since the factors \(w_{n}\) induce fluctuations in the state of the system, model misspecification results in a change in the mean behavior of the system relative to what the model in Eq. (1) predicts. The total effect of model misspecification on the mean behavior of the state of the system can be explicitly estimated in terms of the full information drift vector \(v=\{v_{n}\}\). This discrepancy in the mean behavior may have serious consequences for the decision maker; therefore, the possibility that the model is misspecified will have to be taken seriously into account in the optimal control problem, and this is the essence of the robust control formulation.

In fact, we expect more than one such probability measure to be compatible with our empirical estimations; therefore, this corresponds to a family of probability measures \(\mathcal {Q}\) such that each \({\mathbb {Q}}\in \mathcal {Q}\) corresponds to an alternative stochastic model (scenario) concerning the state of the system. Considering for the time being a finite horizon \(T\), we restrict ourselves to measures which are equivalent to \({\mathbb {P}}\) and such that the Radon–Nikodym derivatives \(d{\mathbb {Q}}/d{\mathbb {P}}\) are defined through an exponential martingale of the type employed in Girsanov’s theorem,8
$$\begin{aligned} \left. \frac{d{\mathbb {Q}}}{d{\mathbb {P}}}\right| _{\mathcal {F}_{T}}=\mathcal{E}_{T}(v):=\exp \left( \int \limits _{0} ^{T}\sum _{n}v_{n}(t)dw_{n}(t)-\frac{1}{2}\int \limits _{0}^{T}\sum _{n}v_{n} ^{2}(t)dt\right) , \end{aligned}$$
(2)
where \(v=\{v_{n}\}\), \(n\in {\mathbb {Z}_{N}}\), is an \({\mathbb {R}}^{N}\)-valued stochastic process which is measurable and adapted with respect to the filtration \(\{\mathcal {F}_{t}\}\), satisfying the Novikov condition \({\mathbb E}_{{\mathbb {P}}}\left[ \exp \left( \frac{1}{2}\int _{0}^{T} \sum _{n}v_{n}^{2}(t)dt\right) \right] <\infty \). Furthermore, the same theorem guarantees that \(\bar{w}_{n}(t)=w_{n} (t)-\int _{0}^{t}v_{n}(s)ds\) is a \({\mathbb {Q}}\)-Brownian motion for all \(n\in {\mathbb {Z}}_{N}\), where the drift term \(v_{n}\) may be considered as a measure of the model misspecification at lattice site \(n\), in the sense described above. Thus, Girsanov’s theorem (see e.g. [27]) shows that the adoption of the family \(\mathcal {Q}\) of alternative measures concerning the state of the system leads to a family of differential equations for the state variable
$$\begin{aligned} dx_{n}= \left( \sum _{m}a_{nm}x_{m}+\sum _{m}b_{nm}u_{m}+\sum _{m}c_{nm}v_{m} \right) dt+\sum _{m}c_{nm}d\bar{w}_{m},\,\,\,n\in {\mathbb {Z}_{N}}, \end{aligned}$$
subject to initial conditions \(x(0)=\{x_{n}(0)\}\). We will use the notation \(x^{0}\) for the initial state \(x(0)\). The state variables \(x=\{x_{n}\}\) depend on the choice of \(u=\{u_{n}\}\) and \(v=\{v_{n}\}\); therefore, \(x=x^{u,v}\); however, we choose to avoid this notation for simplicity. We therefore tacitly assume that \(x\) indicates the state of the system when the measure \({\mathbb {Q}}\) corresponding to the “information drift” \(v=\{v_{n}\}\) and the control procedure \(u=\{u_{n}\}\) is adopted. In compact form this equation becomes the Ornstein–Uhlenbeck equation
$$\begin{aligned} \begin{aligned}&dx=({\mathsf {A}}x+{\mathsf {B}}u+{\mathsf {C}}v)dt+{\mathsf {C}} d\bar{w},\\&x(0)=x^{0}, \end{aligned} \end{aligned}$$
(3)
where for notational convenience the superscripts \(u\), \(v\) are omitted from \(x\). The well-posedness of the state equation (3) follows from standard results in the theory of stochastic differential equations.
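The effect of the information drift on the mean behavior of the state can be illustrated in the scalar case. The sketch below (all parameter values are hypothetical) integrates the benchmark dynamics and the distorted dynamics of (3) with a constant drift \(v\), driven by the same noise realizations; the extra term \(cv\,dt\) shifts the long-run mean of the state from \(0\) to \(-cv/a\):

```python
import numpy as np

# Scalar illustration of the drift distortion in equation (3):
# benchmark:   dx = a x dt + c dw            (v = 0)
# distorted:   dx = (a x + c v) dt + c dw    (constant information drift v)
# Parameter values are hypothetical; the distorted long-run mean is -c*v/a.
rng = np.random.default_rng(2)
a, c, v = -1.0, 0.5, 2.0
dt, T, paths = 0.01, 10.0, 5000
steps = int(T / dt)

x_p = np.zeros(paths)   # sample paths under the benchmark model
x_q = np.zeros(paths)   # sample paths under the misspecified model
for _ in range(steps):
    dw = rng.normal(0.0, np.sqrt(dt), paths)   # common random numbers
    x_p += a * x_p * dt + c * dw
    x_q += (a * x_q + c * v) * dt + c * dw

print(x_p.mean())   # close to 0
print(x_q.mean())   # close to -c * v / a = 1.0
```

The gap between the two sample means is exactly the kind of discrepancy in mean behavior that the robust formulation is designed to guard against.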

The infinite horizon limit \(T=\infty \) is more delicate, since the validity of Girsanov’s theorem requires further technical assumptions. In the infinite horizon limit, the equivalence of the measures \({\mathbb {P}}\) and \({\mathbb {Q}}\) requires stricter conditions than in the finite horizon case; in particular it requires progressive measurability of the stochastic processes involved and uniform integrability of the exponential martingale \(\{\mathcal{E}_{t}(v)\}_{t \in {\mathbb {R}}_{+}}\) (defined in (2) by setting \(T=t\) for any \(t\)). A detailed discussion of this issue, providing illustrative counterexamples as well as possible pitfalls, can be found on pp. 192–193 of [27], as well as in Chap. 3, pp. 136–151, of [28], where the issue is addressed within the context of infinite horizon stochastic optimal control.

2.3 The Control Objective

We now define the control objective. Let us first fix a model \({\mathbb {Q}}\), i.e., let us assume that the drift \(v\) is fixed. Then the control procedure \(u\) is designed so that the distance of the state of the system, provided by the solution of the stochastic differential equation (3), from a desired target, chosen without loss of generality to be \(x^{T}=0 \in {\mathbb {R}}^{N}\), is minimized at the minimum possible cost, as measured by the amplitude of the control variable \(u\). We may define the cost functional
$$\begin{aligned} J(u;{\mathbb {Q}}):=J(u,v)&={\mathbb {E}}_{{\mathbb {Q}}}\left[ \int \limits _{0}^{\infty }e^{-rt}\sum _{n,m}\left( p_{nm}x_{n}(t)x_{m}(t)+q_{nm}u_{n}(t)u_{m}(t)\right) dt\right] \\&={\mathbb {E}}_{{\mathbb {Q}}}\left[ \int \limits _{0}^{\infty }e^{-rt}(\langle {\mathsf {P}}x(t),x(t)\rangle +\langle {\mathsf {Q}}u(t),u(t)\rangle )dt\right] , \end{aligned}$$
where \(\langle \cdot ,\cdot \rangle \) is the inner product in the Hilbert space \({\mathbb {R}}^{N}\), and \({\mathsf {P}},{\mathsf {Q}}:{\mathbb {R}}^{N} \rightarrow {\mathbb {R}}^{N}\) are symmetric positive operators represented (and identified) by the symmetric positive \(N \times N\) matrices \({\mathsf {P}}=\{p_{nm}\}\) and \({\mathsf {Q}}=\{q_{nm}\}\), respectively, and assume that the decision maker solves the optimal control problem
$$\begin{aligned} \min _{u} J(u;{\mathbb {Q}}) \,\,\, \text{ subject } \text{ to } \,\,\, (3). \end{aligned}$$
The first term in the above cost functional is the total spatio-temporal cost of deviation from the target, and the second is the total spatio-temporal cost of intervention by the decision maker, properly discounted by a temporal discount factor \(r>0\).
This problem corresponds to the optimal control problem assuming the validity of the model \({\mathbb {Q}}\) for the fluctuations (or equivalently the information drift \(v\)); its solution leads to a value function \(V(\cdot ;{\mathbb {Q}}):=V(\cdot ;v) : {\mathbb {R}}^{N} \rightarrow {\mathbb {R}}\), where \(V(x;{\mathbb {Q}})=V(x;v)\) is the minimum of the cost functional for the above optimal control problem if the model \({\mathbb {Q}}\) is chosen and the initial state of the system is \(x\), i.e., the best possible performance of the decision maker under model \({\mathbb {Q}}\). Being uncertain about the true model, the decision maker will opt for the strategy that works in the worst case scenario \({\mathbb {Q}}_{w} \in {\mathcal Q}\), this being the one that maximizes \(V(\cdot ;{\mathbb {Q}})\), i.e., \(V(x;{\mathbb {Q}}_{w})=\sup _{{\mathbb {Q}}\in {\mathcal Q}} V(x;{\mathbb {Q}})\), for every possible initial state \(x \in {\mathbb {R}}^{N}\). Since the family of models \({\mathcal Q}\) is parametrized in terms of the stochastic processes \(v\) representing the information drift, this problem has an equivalent representation in terms of the set of admissible information drifts \(\mathcal{V}\). Model misspecification is penalized by a loss function \(L : \mathcal{{Q}}\rightarrow {\mathbb {R}}_{+}\); therefore, the cost functional is modified to \({\mathfrak J}=J(u;{\mathbb {Q}})-L({\mathbb {Q}})\) to include the cost of model misspecification (related to uncertainty aversion). Various forms of the loss function \(L\) are conceivable, but in this work we will use a quadratic loss function of the form
$$\begin{aligned} L({\mathbb {Q}}):=L(v)&= {\mathbb E}_{{\mathbb {Q}}} \left[ \int \limits _{0}^{\infty } e^{-r t} \sum _{n} \theta _{n} \sum _{m} \bar{r}_{nm} v_{n}(t)v_{m}(t)\,dt \right] \\&= \theta {\mathbb E}_{{\mathbb {Q}}} \left[ \int \limits _{0}^{\infty } e^{-r t} \langle (\Theta {{\bar{\mathsf {R}}}} v)(t),v(t)\rangle \, dt \right] \end{aligned}$$
where \(\theta _{n}>0\) for all \(n\), \(\theta =\max _{n} \theta _{n}\), \(\Theta =\mathrm{diag}\left( \frac{\theta _{0}}{\theta }, \ldots , \frac{\theta _{N-1}}{\theta } \right) \), and \({{\bar{\mathsf {R}}}} =\{\bar{r}_{nm}\}\) is a symmetric positive matrix. This corresponds to a quadratic loss function related to the total spatio-temporal “cost” of model misspecification, properly discounted. Quadratic loss functions are rather common in statistical decision theory, mainly on account of their connection with entropy (see Sect. 2.4). Therefore, the robust control problem can be expressed in the general form
$$\begin{aligned}&\min _{u}\max _{v} J(u;v)-L(v), \\&\text{ subject } \text{ to } \,\,\, (3), \end{aligned}$$
(where we have identified the model \({\mathbb {Q}}\) with the information drift process \(v\)) or using the notation \({\mathsf {R}}=\Theta \bar{{\mathsf {R}}}\), explicitly as
$$\begin{aligned} \begin{aligned}&\min _{u} \max _{v} {\mathbb {E}}_{{\mathbb {Q}}}\left[ \int \limits _{0}^{\infty }e^{-rt} ( \langle ({\mathsf {P}} x)(t),x(t)\rangle + \langle ({\mathsf {Q}} u)(t),u(t)\rangle -\theta \langle ({\mathsf {R}} v)(t),v(t)\rangle ) dt \right] , \\&\qquad \qquad \text{ subject } \text{ to } \,\,\, (3). \end{aligned} \end{aligned}$$
One particularly intuitive way of viewing this problem is as a two-player dynamic game: the first player is the decision maker, while the second player is Nature, who has control over the uncertainty. The first player chooses her actions so as to minimize the distance of the state of the system from a chosen target at the minimum possible cost, whereas the second player is considered by the first player as a malevolent player who tries to frustrate the first player's efforts. This interpretation allows us to use the Hamilton–Jacobi–Bellman–Isaacs (HJBI) equation approach for the solution of the robust control problem.
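A scalar special case makes the possible breakdown concrete. Taking \(N=1\) with dynamics \(dx=(ax+bu+cv)\,dt+c\,dw\) and criterion \({\mathbb E}\int _{0}^{\infty }e^{-rt}(px^{2}+qu^{2}-\theta \rho v^{2})\,dt\), a quadratic guess \(V(x)=kx^{2}\) in the HJBI equation leads (this is our own back-of-the-envelope reduction with hypothetical parameter values, not the paper's general derivation) to the algebraic Riccati equation \(\left( c^{2}/(\theta \rho )-b^{2}/q\right) k^{2}+(2a-r)k+p=0\). The sketch below locates its smallest positive root and shows that the root, and with it the robust rule, disappears once \(\theta \) falls below a critical level:

```python
import numpy as np

# Scalar robust linear regulator (illustrative parameter values): solve
#   (c^2/(theta*rho) - b^2/q) k^2 + (2a - r) k + p = 0
# for the smallest positive root k of V(x) = k x^2. A missing positive root
# signals the breakdown of robust control at that level of theta.
def riccati_root(theta, a=-0.5, b=1.0, c=1.0, p=1.0, q=1.0, rho=1.0, r=0.05):
    A2 = c**2 / (theta * rho) - b**2 / q
    A1 = 2 * a - r
    A0 = p
    if abs(A2) < 1e-12:                 # quadratic degenerates to linear
        k = -A0 / A1
        return k if k > 0 else None
    disc = A1**2 - 4 * A2 * A0
    if disc < 0:
        return None                      # no real root: breakdown
    r1 = (-A1 + np.sqrt(disc)) / (2 * A2)
    r2 = (-A1 - np.sqrt(disc)) / (2 * A2)
    pos = [k for k in (r1, r2) if k > 0]
    return min(pos) if pos else None

for theta in (10.0, 2.0, 1.0, 0.5):
    print(theta, riccati_root(theta))    # the root vanishes for small theta
```

As \(\theta \rightarrow \infty \) the equation reduces to the standard (non-robust) Riccati equation; the critical \(\theta \) at which the positive root vanishes is the scalar analog of a hot spot.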

2.4 Relation with Entropic Constrained Robust Control

A crucial point in robust control is the determination of the set \(\mathcal{{Q}}\) of admissible probability measures \({\mathbb {Q}}\) that may serve as models for the stochastic factors \(w=\{w_{n}\}\) that induce the fluctuations around the mean state of the system (1). The question of defining the notion of distance between two probability measures is a very interesting one, and a number of options arise. We will focus here on the adoption of a (pseudo) distance between two measures \({\mathbb {P}}\) and \({\mathbb {Q}}\), the Kullback–Leibler distance \(\mathcal{{H}}({\mathbb {Q}}\mid {\mathbb {P}})\) (or relative entropy) defined by
$$\begin{aligned} \mathcal{{H}}({\mathbb {Q}}\mid {\mathbb {P}}):={\mathbb E}_{{\mathbb {Q}}}\left[ \ln \left( \frac{d{\mathbb {Q}}}{d{\mathbb {P}}}\right) \right] = \int \limits _{\Omega } \ln \left( \frac{d{\mathbb {Q}}}{d{\mathbb {P}}}\right) \frac{d{\mathbb {Q}}}{d{\mathbb {P}}} d{\mathbb {P}}, \end{aligned}$$
as long as the measure \({\mathbb {Q}}\) is absolutely continuous with respect to \({\mathbb {P}}\), and define the set \(\mathcal{{Q}}\) as the set of measures \({\mathbb {Q}}\) whose distance from a reference measure \({\mathbb {P}}\), as quantified by \(\mathcal{{H}}({\mathbb {Q}}\mid {\mathbb {P}})\), is less than an allowed threshold. It is straightforward to see that if the Radon–Nikodym derivative of the measures is given by an exponential formula of the form (2), the entropy \(\mathcal{{H}}({\mathbb {Q}}\mid {\mathbb {P}})\) reduces to a quadratic functional of the information drift process \(v\), therefore leading to a rather natural characterization of \(\mathcal{{Q}}\) in terms of \(v\) as a “ball” of a certain radius centered at \(v=0\) in the space of information drifts. This will lead us to the adoption of the term entropy ball. If the radius of the entropy ball is too small, then we are too hesitant to digress from the reference model \({\mathbb {P}}\), a fact that can be interpreted as being certain concerning the model employed. If on the other hand the radius of the entropy ball is large, this corresponds to allowing large digressions from the reference model, a fact that can be interpreted as not relying too much on the reference model.
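For a constant deterministic drift \(v\) over a finite horizon \([0,T]\), this reduction of entropy to a quadratic functional can be checked numerically: from (2), \(\ln (d{\mathbb {Q}}/d{\mathbb {P}})=\sum _{n}v_{n}w_{n}(T)-\frac{T}{2}\sum _{n}v_{n}^{2}\), so \(\mathcal{{H}}({\mathbb {Q}}\mid {\mathbb {P}})={\mathbb E}_{{\mathbb {Q}}}[\ln (d{\mathbb {Q}}/d{\mathbb {P}})]=\frac{T}{2}\sum _{n}v_{n}^{2}\). The sketch below (drift values, horizon, and sample size are arbitrary choices) samples \(w(T)\) under \({\mathbb {Q}}\) and compares the Monte Carlo entropy with the quadratic expression:

```python
import numpy as np

# Monte Carlo check that, for a constant information drift v on [0, T],
# the entropy H(Q|P) = E_Q[ln(dQ/dP)] equals the quadratic (T/2) * sum_n v_n^2.
rng = np.random.default_rng(3)
N, T, samples = 3, 1.5, 400_000
v = np.array([0.5, -1.0, 0.25])          # arbitrary constant drift vector

w_bar = rng.normal(0.0, np.sqrt(T), size=(samples, N))  # Q-Brownian motion at T
w = w_bar + v * T                                       # w(T) under Q
log_rn = w @ v - 0.5 * T * np.sum(v**2)                 # ln(dQ/dP), samplewise

print(log_rn.mean())             # Monte Carlo estimate of H(Q|P)
print(0.5 * T * np.sum(v**2))    # quadratic functional of v
```

The two printed numbers agree up to Monte Carlo error, illustrating why the entropy ball is a quadratic "ball" in the drift \(v\).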

In Hansen et al. [23] it is shown that, for a temporal problem, the robust control problem with a discounted entropy constraint can be expressed equivalently as a stochastic control problem with a quadratic constraint on the information drift process. We will show that this result may be extended to spatiotemporal systems, and generalize it by introducing the concept of spatially localized entropy constraints, which are important for spatially extended systems.

To motivate this notion we note that model uncertainty has been introduced into (3) through its effect on the stochastic factors \(w=\{w_{n}\}\), which are assumed to induce uncertainty into the state of the system. The decision maker is not interested in the misspecification of the factors \(w\) per se but in their effect on observable quantities, which are either the state of the system \(x=\{x_{n}\}\) or functionals of the state of the system. To make our approach more concrete, assume momentarily that the decision maker is interested in quantifying the effect that model uncertainty has on the state of the system at site \(n\). The effect of uncertainty on all the stochastic factors \(w=\{w_{m}\}\), \(m \in {\mathbb {Z}}_{N}\), as modeled by the corresponding information drifts \(v=\{v_{m}\}\), is going to have a cumulative effect on the specific quantity \(x_{n}\), and this is modeled in terms of the correlation matrix \({\mathsf {C}}\). By observing the state equation (1) we see that \(x_{n}\) is influenced by a composite stochastic factor \({\mathfrak {f}}_{n}\), whose temporal evolution is given by the Itō stochastic differential equation \(d{\mathfrak {f}}_{n}:=\sum _{m} \sigma _{n m} dw_{m}\). The same argument applies to any site \(n \in {\mathbb {Z}}_{N}\), so we may consider the vector of stochastic factors \({\mathfrak {f}}=\{{\mathfrak {f}}_{n}\}\). As it is the stochastic factors \({\mathfrak {f}}=\{{\mathfrak {f}}_{n}\}\) that directly influence the state of the system, it is natural to assume that the controller is interested in the misspecification of \({\mathfrak {f}}\) rather than in the misspecification of \(w\), and will prefer to impose thresholds on the degree of allowed misspecification in terms of \({\mathfrak {f}}\) directly rather than in terms of \(w\).
Furthermore, it is natural to assume that the modeler may be more concerned about the effect of model misspecification on certain sites than on others; therefore, it is desirable to impose thresholds in a fashion that allows us to be stricter on how much uncertainty we allow in our model for certain sites than for other sites. This leads to the concept of localized entropy constraints. As a concrete example, take climate change, in which model misspecification may have more serious effects for certain sites than others.

At an even more general level, the decision maker may be interested in the effect of model misspecification on linear combinations of the state of the system, e.g., on quantities \(y_{n}=\sum _{m} o_{nm} x_{m}\), \(n \in {\mathbb {Z}}_{N}\), or in compact form on \(y={\mathcal {O}}x\) where \(y=\{y_{n}\}\), \({\mathcal {O}}=\{o_{nm}\}\). In this context, \({\mathcal {O}}\) plays the role of an observation operator, which provides the quantity of interest that the controller wishes to specify as accurately as possible using the model (3). Applying Itō’s lemma to \(y=\{y_{n}\}\), we see that the quantities of interest are subject to the stochastic factors \({\mathfrak {f}}=\{{\mathfrak {f}}_{n}\}\), with temporal evolution given by
$$\begin{aligned} d{\mathfrak {f}}_{n}:=\sum _{m} {\mathfrak {s}}_{n m} dw_{m}, \,\,\, n \in {\mathbb {Z}}_{N}, \end{aligned}$$
where \({\mathfrak {s}}_{nm}=\sum _{\ell } o_{n\ell }\sigma _{\ell m}\). In the special case where \({\mathcal {O}}={\mathsf I}\), \(y\) coincides with the state of the system \(x\).

To make the above discussion more quantitative, consider the stochastic differential equation (5) which provides the evolution of the stochastic factors directly influencing the observable quantity of interest \(y\), and let us view it under two alternative models \({\mathbb {P}}\) and \({\mathbb {Q}}\). Under model \({\mathbb {P}}\), the factors \({\mathfrak {f}}=\{{\mathfrak {f}}_{n}\}\) do not affect the mean behavior of the quantity of interest, since \({\mathbb E}_{{\mathbb {P}}}[{\mathfrak {f}}_{n}(t)]=0\), whereas under the model \({\mathbb {Q}}\) they do, since \({\mathbb E}_{{\mathbb {Q}}}\left[ {\mathfrak {f}}_{n}(t) \right] ={\mathbb E}_{{\mathbb {Q}}}[\int _{0}^{t} \sum _{m}{\mathfrak {s}}_{nm} v_{m}(s)\, ds]\). Denote by \({\mathbb {P}}_{{\mathfrak {f}}}\), \({\mathbb {Q}}_{{\mathfrak {f}}}\) the probability measures associated with the joint distribution of the vector valued family of random variables \({\mathfrak {f}}=\{ {\mathfrak {f}}_{n}\}\), when \({\mathbb {P}}\) and \({\mathbb {Q}}\) are assumed as models for \(w=\{w_{n}\}\), and denote by \(\bar{{\mathbb {P}}}_{n}\) and \(\bar{{\mathbb {Q}}}_{n}\) the marginals of \({\mathbb {P}}_{{\mathfrak {f}}}\) and \({\mathbb {Q}}_{{\mathfrak {f}}}\) that provide the distribution of the stochastic factor \({\mathfrak {f}}_{n}\). These measures suffice to provide the probability that \({\mathfrak {f}}_{n}(t) \in B\) for any Borel set \(B\), and thus provide information on the possible deviation of the observable quantity \(y_{n}(t)\) from its mean behavior, which is the true concern of the controller. Applying the Girsanov theorem we see that the rate of change in the mean at time \(t\) is equal to \(\sum _{m}{\mathfrak {s}}_{nm} v_{m}(t)\), and this is the relevant information drift for the factor \({\mathfrak {f}}_{n}\).
If the information drift \(v\) were a constant vector, then we could explicitly provide the marginal distribution of the factors \({\mathfrak {f}}_{n}\) under the measures \(\bar{{\mathbb {P}}}_{n}\) and \(\bar{{\mathbb {Q}}}_{n}\) as follows: Under \(\bar{{\mathbb {P}}}_{n}\), \({\mathfrak {f}}_{n}(t) \sim {\mathcal {N}}(0, S_{n}\, t)\) where \(S_{n}=\sum _{m} {\mathfrak {s}}_{nm}{\mathfrak {s}}_{mn}\), whereas under \(\bar{{\mathbb {Q}}}_{n}\), \({\mathfrak {f}}_{n}(t) \sim {\mathcal {N}}(M_{n} t , S_{n} \, t)\) where \(M_{n}=\sum _{m} {\mathfrak {s}}_{nm} v_{m}\). Then, the Kullback–Leibler divergence between the measures \(\bar{{\mathbb {Q}}}_{n}\) and \(\bar{{\mathbb {P}}}_{n}\) can be explicitly calculated as \(\mathcal{H}_{n}(t):={\mathcal H}(\bar{{\mathbb {Q}}}_{n}(t) \mid \bar{{\mathbb {P}}}_{n}(t))=\frac{1}{S_{n}} M_{n}^{2} t\) where \(\bar{{\mathbb {Q}}}_{n}(t)\), \(\bar{{\mathbb {P}}}_{n}(t)\) are the above measures conditioned on \(\mathcal{F}_{t}\). The quantity \(\mathcal{H}_{n}(t)\) can be considered as a localized entropy, in the sense that it is the entropy between the conditional measures that provide information concerning the effect of model misspecification on factor \({\mathfrak {f}}_{n}\) by time \(t\). In the general case, where \(v\) is a stochastic process, the above formulae have to be properly modified in terms of conditional expectations and conditional variances, and this leads us to the following definition:

Definition 1

(Localized entropy) Let \(v=\{v_{n}\}\) be the information drift (possibly a stochastic process) modeling misspecification of the stochastic factors \(w=\{w_{n}\}\) as arising from two alternative models \({\mathbb {P}}\) and \({\mathbb {Q}}\), let \({\mathfrak {f}}=\{f_{n}\}\) be the stochastic factors \({\mathfrak {f}}_{n}(t)=\int _{0}^{t}\sum _{m} {\mathfrak {s}}_{n m} dw_{m}\), \(n \in {\mathbb {Z}}_{N}\) that affect the quantities of interest for the model and set \(M_{n}=\sum _{m} {\mathfrak {s}}_{nm} v_{m}\), \( S_{n}=\sum _{m} {\mathfrak {s}}_{nm}{\mathfrak {s}}_{mn}\), \(n \in {\mathbb {Z}}_{N}\). The quantities
$$\begin{aligned} {\mathcal H}_{n}(t)&:= {\mathcal H}(\bar{{\mathbb {Q}}}_{n}(t) \mid \bar{{\mathbb {P}}}_{n}(t))={\mathbb E}_{{\mathbb {Q}}}\left[ \int _{0}^{t} \frac{1}{S_{n}} M_{n}^{2}\, ds\right] , \,\,\, n \in {\mathbb {Z}}_{N}, \\ {\mathcal R}_{n}(t)&:= {\mathcal R}(\bar{{\mathbb {Q}}}_{n}(t) \mid \bar{{\mathbb {P}}}_{n}(t))={\mathbb E}_{{\mathbb {Q}}}\left[ \frac{1}{S_{n}} M_{n}^{2}\right] , \,\,\, n \in {\mathbb {Z}}_{N}, \\ {\mathcal D}_{n}&:= {\mathcal D}(\bar{{\mathbb {Q}}}_{n} \mid \bar{{\mathbb {P}}}_{n}) = \int \limits _{0}^{\infty } e^{-r t} {\mathcal R}_{n}(t) dt,\,\,\, n \in {\mathbb {Z}}_{N}, \end{aligned}$$
are called the localized entropy up to time \(t\), the localized entropy rates at time \(t\), and the localized discounted entropies, respectively, of the marginal measures \(\bar{{\mathbb {Q}}}_{n}\) and \(\bar{{\mathbb {P}}}_{n}\) that provide information for the factor \({\mathfrak {f}}_{n}\). In the special case where \({\mathfrak {s}}_{nm}=\sigma _{nm}\) the localized entropy \({\mathcal H}_{n}\) provides information concerning model misspecification effects for the state of the system at site \(n\).
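A minimal numerical sketch of Definition 1 for a constant information drift (the loading matrix \({\mathfrak {s}}\) is taken symmetric, as in the translation invariant setting of Sect. 3, and all values are ours):

```python
import numpy as np

# Sketch of Definition 1 for a constant information drift v (values ours).
# With s symmetric, S_n = sum_m s_nm s_mn = sum_m s_nm^2 > 0.
rng = np.random.default_rng(0)
N = 5
B = rng.normal(size=(N, N))
s = 0.5 * (B + B.T)              # symmetric factor loadings {s_nm}
v = rng.normal(size=N)           # constant information drift

M = s @ v                        # M_n = sum_m s_nm v_m
S = np.sum(s * s, axis=1)        # S_n = sum_m s_nm s_mn
t, r = 3.0, 0.05
H_local = M**2 / S * t           # localized entropies H_n(t)
R_local = M**2 / S               # localized entropy rates (constant here)
D_local = R_local / r            # discounted entropies: integral of e^{-rt} R_n
```

For a constant drift the rate \({\mathcal R}_{n}\) is time independent, so the discounted entropy is simply \({\mathcal R}_{n}/r\); in the general stochastic case these become conditional expectations.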
Using the quantities in Definition 1 we may impose localized entropy constraints on the allowed models \({\mathbb {Q}}\) in the form \(\mathcal{{Q}}_{n}=\{ {\mathbb {Q}}\, : \, {\mathcal D}_{n} \le H_{n} \}, \,\,\, n \in {\mathbb {Z}}_{N}\), where the allowed discrepancy \(H_{n}\) may differ from site to site. This leads to a characterization of the set of allowed models \(\mathcal{{Q}}\) as \(\mathcal{{Q}}=\bigcap _{n} \mathcal{{Q}}_{n}\). This definition offers us a geometric intuition for \(\mathcal{{Q}}\) as a collection of local entropy balls of varying radius. We may thus consider the robust optimal control problem subject to localized entropic constraints
$$\begin{aligned} \begin{aligned}&\inf _{u} \sup _{{\mathbb {Q}}\in \bigcap _{n} \mathcal{{Q}}_{n}} J(u,{\mathbb {Q}})-L({\mathbb {Q}}) \\&\text{ subject } \text{ to } \,\,\, (3), \end{aligned} \end{aligned}$$
which corresponds to the situation where the policy maker is concerned about the effect of model misspecification on the factors \({\mathfrak {f}}\) rather than on the primary factors \(w\), while allowing for different grades of concern at various lattice points. In the case where \({\mathfrak {s}}_{nm}=\sigma _{nm}\) the above problem takes into account the concerns of the policy maker at point \(n\), with the quantity \(H_{n}\) serving as the level of uncertainty she is willing to accept for lattice site \(n\) (the smaller \(H_{n}\), the less uncertainty is allowed for the specific site). The assumption of spatially varying allowed model uncertainty is not unreasonable, as certain lattice points may be considered more crucial than others; therefore, specific care should be taken for them.

The optimization problem (6) is related to the linear quadratic problem (4) for various choices of the matrix \({\mathsf {R}}\), and this serves as a motivation for the statement of the robust control problem in terms of the linear quadratic differential game (4).

Claim 1

The optimization problem (6) is related to the differential game (4) for the choice \({\mathsf {R}} =\{ r_{nm}\}=\{ \sum _{\ell } \frac{1}{S_{\ell } }\bar{\theta }_{\ell } {\mathfrak {s}}_{\ell n} {\mathfrak {s}}_{\ell m} \}\), where \(\{\theta _{n}\}\) are Lagrange multipliers related to the radii of the local entropy balls \(\{H_{n}\}\), \(\theta =\max _{n}\{\theta _{n}\}\), and \(\bar{\theta }_{n}=\frac{\theta _{n}}{\theta }\), \(n \in {\mathbb {Z}}_{N}\).

Indeed, in order to solve the optimization problem (6) with the localized entropic constraints we will use the Lagrangian \(L= J(u;v)-\sum _{n}\theta _{n}\left( \mathcal {D}_{n}-H_{n}\right) \) where \(\{\theta _{n}\}\) are the Lagrange multipliers needed in order to guarantee that the localized entropic constraints hold. Using Definition 1 this Lagrangian reduces to the quadratic form \(J(u;v)- {\mathbb E}_{{\mathbb {Q}}} \left[ \int _{0}^{\infty } e^{-r t} \sum _{n} \theta _{n} \frac{1}{S_{n}} (\sum _{m} {\mathfrak {s}}_{nm} v_{m})^{2} dt \right] \), which upon rearrangement leads to a payoff functional of the form employed in (4) with \(r_{nm}=\sum _{\ell } \frac{1}{S_{\ell } }\bar{\theta }_{\ell } {\mathfrak {s}}_{\ell n} {\mathfrak {s}}_{\ell m}\).
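The reduction in Claim 1 can be checked mechanically. The sketch below (values ours) assembles \({\mathsf {R}}=\{r_{nm}\}\) from the loadings and site multipliers and confirms it is symmetric positive semidefinite, as a quadratic penalty on \(v\) should be:

```python
import numpy as np

# Sketch of Claim 1 (values ours):
# build r_nm = sum_l (theta_bar_l / S_l) s_ln s_lm.
rng = np.random.default_rng(1)
N = 4
B = rng.normal(size=(N, N))
s = 0.5 * (B + B.T)                          # symmetric loadings {s_lm}
theta = rng.uniform(0.5, 2.0, size=N)        # site multipliers theta_n
theta_bar = theta / theta.max()              # normalized multipliers

S = np.sum(s * s, axis=1)                    # S_l = sum_m s_lm s_ml
R = np.einsum('l,ln,lm->nm', theta_bar / S, s, s)

assert np.allclose(R, R.T)                   # R is symmetric
assert np.linalg.eigvalsh(R).min() > -1e-12  # and positive semidefinite
```

Positive semidefiniteness is immediate from the factorization \({\mathsf {R}}={\mathfrak {s}}^{tr}\,\mathrm{diag}(\bar{\theta }_{\ell }/S_{\ell })\,{\mathfrak {s}}\).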

While the exact relation between \(\{\theta _{n}\}\) and \(\{H_{n}\}\) would require the solution of an algebraic system of equations, we may offer qualitative insight into how the two quantities are related. Large values of \(\theta _{n}\) in general correspond to small values of \(H_{n}\), causing the policy maker to be extremely concerned about model misspecification. An alternative interpretation is that the benchmark model \({\mathbb {P}}\) is a very good description of the system, and the policy maker can rely on it for choosing the optimal policy. On the other hand, small values of \(\theta _{n}\) in general correspond to large values of \(H_{n}\), and the policy maker is rather relaxed concerning the effects of model misspecification at this lattice site. Our subsequent analysis will show that there is a lower bound for \(\theta =\max _{n} \theta _{n}\) below which the robust control procedure breaks down, as the benchmark model is too unreliable to use for policy making and the set \(\mathcal{{Q}}\) which contains the possible models is too large to be of any practical use. In the differential games interpretation, the strategies of the second player (Nature) are too unrestricted and allow her to have the upper hand against the first player. Such a breakdown is called a hot spot. In the way we have formulated the problem, in terms of local entropy constraints, it is possible that such a breakdown may start locally at a point where \(\theta _{n}\) is too low, thus creating a “nucleus” for a hot spot. It is, however, conceivable that, due to the spatial connectivity of the system, this breakdown effect will propagate to other points.

An alternative problem is the robust control problem with global entropic constraints, where we choose \(\mathcal{{Q}}=\{ {\mathbb {Q}}\, : \, \sum _{n} \mathcal{D}_{n} \le H_{0}\}\), i.e., we choose our models from a universe of models such that the total entropy is less than or equal to a given threshold \(H_{0}\). If \({\mathfrak {s}}_{nm}=\delta _{nm}\), then the optimization problem (6) with the global entropic constraint is related to a differential game of the form (4) with \({\mathsf {R}}={\mathsf I}\) and a single Lagrange multiplier \(\theta =\theta _{1}=\cdots = \theta _{N}\). This is equivalent to the problems studied in Hansen et al. [23].

3 Translation Invariant Systems: Closed Form Solution

In this section we treat a special case of the robust control problem (4), which allows a solution in closed form. The closed form solution allows us to obtain a good intuition concerning the qualitative behavior of the solution, which will guide us in the treatment of the general case in later sections. The major simplifying assumption used in this section is that the matrices \({\mathsf {A}}\), \({\mathsf {B}}\), and \({\mathsf {C}}\) correspond to discrete convolutions. This is an assumption which essentially states that e.g., \(a_{nm}=a_{n-m}\), i.e., the effect that a site \(m\) has at site \(n\) depends only on the distance between \(n\) and \(m\) and not on the actual positions of the sites; therefore, the system enjoys translation invariance properties. This assumption allows us to make a great simplifying step toward the resolution of the problem, through the use of the discrete Fourier transform that allows us to express the stochastic differential equation (3) in Fourier space as a decoupled set of stochastic differential equations thus leading to a closed form solution.

For the remainder of the present section we will use the superscript \(tr\) for the transpose and \(a^{(i)}={\mathsf {A}}(:,i)\) for the \(i\)-th column of the matrix \({\mathsf {A}}\) (and similarly for any other matrix), and make the following assumption:

Assumption 1

  (i)

    The matrices \({\mathsf {A}} \), \({\mathsf {B}} \), and \({\mathsf {C}} \) are symmetric and translation invariant.

  (ii)

    The vectors \(a^{(1)}\), \(b^{(1)}\), \(c^{(1)}\), as well as the initial condition \(x^{0}\), the stochastic process \(w\) and the control \(u\) are restricted to \(\mathcal {X}_{R} \subset {\mathbb {R}}^{N}\), where \(\mathcal {X}_{R}:= span\{ {\mathfrak {C}}^{(m)} \, ; m=0, \ldots , N-1\}\) where \({\mathfrak {C}}^{(m)}\) is the vector \({\mathfrak {C}}^{(m)}:=\left( 1,\cos \left( 2\pi \frac{m}{N} \right) , \ldots , \cos \left( 2\pi \frac{nm}{N} \right) , \ldots ,\cos \left( 2\pi \frac{(N-1)m}{N} \right) \right) ^{tr}\).


Remark 1

(a) If the matrix \({\mathsf {A}}\) is translation invariant then \(({\mathsf {A}}x)_{n}=\sum _{m=0}^{N-1} a^{(1)}_{n-m} x_{m}\). Equivalently, the matrix \({\mathsf {A}}\) is circulant, i.e., it is periodicized with period \(N\) (i.e., \(a_{n+N,m}=a_{n,m+N}=a_{nm}\)) and \(a_{n+1,m+1}=a_{n,m}\). For more information concerning the properties of translation invariant operators see [45]. The discrete Laplacian is an example of a matrix that satisfies Assumption 1(i).

(b) The space \({\mathcal X}_{R}\) is a subspace of \({\mathbb R}^{N}\) containing vectors of a specific symmetry. For example, if \(N\) is odd so that \(N=2 \, n +1\), the symmetry is such that for any element \(x\) of \(\mathcal {X}_{R}\) the coordinate \(x_{0}\) is arbitrary, whereas \(x_{1}=x_{N-1}\), \(x_{2}=x_{N-2}\), \(\ldots \), \(x_{n}=x_{n+1}=x_{N-n}\), i.e., \(\mathcal{X}_{R}\) is an \((n+1)\)-dimensional subspace of \({\mathbb {R}}^{N}\).
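Remark 1(a) admits a quick numerical check (the discrete Laplacian stencil and values below are ours): a circulant matrix acts as a discrete convolution with its first column, so the DFT turns its action into an entrywise product:

```python
import numpy as np

# Sketch (values ours): the periodic discrete Laplacian is circulant,
# so F(A x) = F(a^(1)) * F(x) entrywise, with a^(1) its first column.
N = 7
first_col = np.zeros(N)
first_col[[0, 1, -1]] = [-2.0, 1.0, 1.0]     # discrete Laplacian stencil
A = np.array([[first_col[(n - m) % N] for m in range(N)] for n in range(N)])

x = np.random.default_rng(2).normal(size=N)
lhs = np.fft.fft(A @ x)                      # transform of the matrix action
rhs = np.fft.fft(first_col) * np.fft.fft(x)  # entrywise product of transforms
assert np.allclose(lhs, rhs)
```

Note that `numpy.fft.fft` uses the same sign and normalization conventions as Definition 2 below.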

We will also need the definition of the discrete Fourier transform.

Definition 2

(Discrete Fourier transform) Let \(x \in {\mathbb C}^{N}\) be a (complex-valued) vector.
  1.
    The discrete Fourier transform of \(x\) is the complex valued vector defined as
    $$\begin{aligned}&{\mathfrak F}(x):=\hat{x}=( \hat{x}_{0}, \ldots , \hat{x}_{N-1})^{tr}, \\&\hat{x}_{k}=\sum _{n=0}^{N-1} x_{n} \exp \left( -\frac{2\pi i k n}{N} \right) , \,\,\, k=0,1,\ldots , N-1. \end{aligned}$$
  2.
    The inverse Fourier transform of \(\hat{x}\) is given by
    $$\begin{aligned}&{\mathfrak F}^{-1}(\hat{x}):=x=( x_{0}, \ldots , x_{N-1})^{tr}, \\&x_{n}=\frac{1}{N} \sum _{k=0}^{N-1} \hat{x}_{k}\exp \left( \frac{2\pi i k n}{N} \right) , \,\,\, n=0,1,\ldots , N-1. \end{aligned}$$

In general the discrete Fourier transform of a real valued vector is complex valued. While this poses no major problem for the treatment of the problem, it introduces certain complications in the required algebra. However, if we restrict to the subspace \({\mathcal X}_{R}\) (as in Assumption 1), then the Fourier transform of the system (3) is real. The next lemma guarantees this.

Lemma 1

Under Assumption 1 the discrete Fourier transform of equation (3) yields a real valued equation in Fourier space; in other words, the subspace \({\mathcal X}_{R}\) is invariant under the action of the dynamical system (3).


It is easily seen from the properties of the subspace \({\mathcal X}_{R}\) that if \(x \in {\mathcal X}_{R}\) then \(Im(\hat{x})=0\). Furthermore, using Assumption 1 and the notation introduced therein, since the matrices \({\mathsf {A}}\), \({\mathsf {B}}\), and \({\mathsf {C}}\) are translation invariant, their actions on the vectors \(x\), \(u\), \(v\), and \(w\) can be realized as convolutions. Then, by the properties of the discrete Fourier transform, \({\mathfrak F}({\mathsf {A}}x)={\mathfrak {F}}(a^{(1)} \star x)={\mathfrak {F}}(a^{(1)}) \, {\mathfrak {F}}(x)\) (an entrywise product), and similarly for all other terms. Hence by Assumption 1(ii) all the vectors involved in these products are real valued; therefore, the Fourier transformed equation (3) is real valued. \(\square \)
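The first step of the proof can be verified numerically: a random element of \({\mathcal X}_{R}\), built from the cosine vectors \({\mathfrak {C}}^{(m)}\), has a purely real discrete Fourier transform (sketch, values ours):

```python
import numpy as np

# Sketch (values ours): an element of X_R, i.e., a random linear combination
# of the cosine vectors C^(m), has Im(F(x)) = 0 as claimed in Lemma 1.
N = 9                                        # N = 2n + 1 with n = 4
grid = np.arange(N)
rng = np.random.default_rng(3)
coeffs = rng.normal(size=N)
x = sum(c * np.cos(2 * np.pi * m * grid / N) for m, c in enumerate(coeffs))

assert np.allclose(np.imag(np.fft.fft(x)), 0.0, atol=1e-9)
```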

Proposition 1

Let Assumption 1 hold, \({\mathsf {P}}=p\,I\), \({\mathsf {Q}}=q\,I\), and \({\mathsf {R}}=\Theta =I\) where \(I: {\mathbb {R}}^{N} \rightarrow {\mathbb {R}}^{N}\) is the identity operator. Let \(a^{(1)}\), \(b^{(1)}\), \(c^{(1)}\) be the first columns of the matrices \({\mathsf {A}}\), \({\mathsf {B}}\), \({\mathsf {C}}\), define
$$\begin{aligned}&\hat{a}=(\hat{a}_{0},\ldots , \hat{a}_{N-1})={\mathfrak F}(a^{(1)}), \,\,\, \hat{b}=(\hat{b}_{0},\ldots , \hat{b}_{N-1})={\mathfrak F}(b^{(1)}), \\&\quad \hat{c}=(\hat{c}_{0},\ldots , \hat{c}_{N-1})={\mathfrak F}(c^{(1)}),\\&\sigma _{k}^{2}:=1+4\sum _{r=1}^{n}\cos ^{2}\left( 2\pi \frac{rk}{N}\right) , \,\,\, k \in {\mathbb {Z}}_{N}, \end{aligned}$$
and let \(\mathfrak {w}\) be a \({\mathbb {Q}}\)-Wiener process. Then the control system (4) takes the decoupled equivalent form in Fourier space
$$\begin{aligned} \begin{aligned}&\min _{\{\hat{u}_{k}\}}\,\max _{\{\hat{v}_{k}\}}{\mathbb {E}}_{{\mathbb {Q}}}\left[ \int \limits _{0}^{\infty }e^{-rt}\sum _{k}\left( p(\hat{x}_{k}(t))^{2}+q(\hat{u}_{k}(t))^{2} -\theta (\hat{v}_{k}(t))^{2}\right) dt\right] \\&\text{ subject } \text{ to }\\&d\hat{x}_{k}(t)=(\hat{a}_{k}\hat{x}_{k}(t)+\hat{b}_{k}\hat{u}_{k}(t)+\hat{c}_{k}\hat{v}_{k}(t))dt+\hat{c}_{k}\sigma _{k}\,d{\mathfrak {w}}_{k} (t), \,\,\, k\in {\mathbb {Z}}_{N}, \end{aligned} \end{aligned}$$
where all quantities are real valued.


Applying the Fourier transform \({\mathfrak {F}}\) to (3) yields
$$\begin{aligned} d \hat{x}_{k}(t) = (\hat{a}_{k} \hat{x}_{k}(t) + \hat{b}_{k} \hat{u}_{k}(t) + \hat{c}_{k} \hat{v}_{k}(t))dt + \hat{c}_{k} d\hat{\bar{w}}_{k}(t), k \in {\mathbb {Z}}_{N}. \end{aligned}$$
where hats denote Fourier transformed quantities, \(\hat{a}=(\hat{a}_{0},\ldots ,\hat{a}_{N-1})={ \mathfrak F}(a^{(1)})\) is the Fourier transform of the first column of the matrix \({\mathsf {A}}\), and similarly for \({\mathsf {B}}\), \({\mathsf {C}}\).

In view of Assumption 1 and Remark 1(b), since the Wiener process \(\bar{w} \in \mathcal {X}_{R}\), it has the representation \(\bar{w}=(w_{0}, w_{1}, \ldots , w_{n}, w_{n}, w_{n-1}, \ldots , w_{1})\), where \(w_{0},w_{1},\ldots , w_{n}\) are \(n+1\) independent \({\mathbb {Q}}\)-Wiener processes (where \(N=2 n +1\)). Then (see Lemma 1) the discrete Fourier transform \(\hat{\bar{w}}={\mathfrak F}(\bar{w})\) of \(\bar{w}\) is a real valued vector with coordinates \(\hat{\bar{w}}_{k}(t)=w_{0}(t) + 2 \sum _{r=1}^{n} w_{r}(t)\cos \left( 2\pi \frac{r k}{N} \right) \), \(k \in {\mathbb {Z}}_{N}\). Defining the quantities \(\sigma ^{2}_{k}:= 1 + 4 \sum _{r=1}^{n} \cos ^{2}\left( 2\pi \frac{r k}{N}\right) \), \(k \in {\mathbb {Z}}_{N}\), and the stochastic process \(y=(\frac{1}{\sigma _{0}}\hat{\bar{w}}_{0}, \ldots , \frac{1}{\sigma _{N-1}}\hat{\bar{w}}_{N-1})\), we see by a simple application of Lévy’s theorem for the characterization of the Wiener process (see e.g., [27]) that \(y\) is a \({\mathbb {Q}}\)-Wiener process, which will be denoted as \({\mathfrak w}\). Therefore, \(\hat{\bar{w}}_{k}=\sigma _{k} {\mathfrak w}_{k}\), and this allows us to express (8) in the form of the dynamic constraint stated in (7). Lemma 1 guarantees that this system, which is decoupled, and its solution are real valued.

Our choice of \({\mathsf {P}}, {\mathsf {Q}}, {\mathsf {R}}\) as multiples of \(I\) implies that the control functional contains terms of the form \(\sum _{n} x_{n}^2\), \(\sum _{n} u_{n}^{2}\), and \(\sum _{n} v_{n}^{2}\), which can be expressed using the Plancherel theorem (see e.g., [45]) in terms of the sum of the squares of the coordinates of the relevant discrete Fourier transforms. For example, \(\sum _{n} u_{n}^{2}=\frac{1}{N}\sum _{k} \hat{u}_{k}^{2}\), and similarly for the other quantities. All these quantities are real valued on account of Lemma 1. Therefore, the control functional achieves the form stated in (7).\(\square \)
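The symmetrization step in the proof can also be checked numerically. The sketch below (values ours) confirms that the DFT of a symmetrized vector \(\bar{w}\) is real and matches the cosine formula, with squared weights summing to \(\sigma _{k}^{2}\):

```python
import numpy as np

# Sketch (values ours): for w_bar = (w_0, w_1..w_n, w_n..w_1) the DFT is
# real and equals w_0 + 2 * sum_r w_r cos(2*pi*r*k/N); the squared cosine
# weights sum to sigma_k^2 = 1 + 4 * sum_r cos^2(2*pi*r*k/N).
n = 4; N = 2 * n + 1
rng = np.random.default_rng(4)
w = rng.normal(size=n + 1)                   # independent components w_0..w_n
w_bar = np.concatenate([w, w[1:][::-1]])     # symmetrized vector in X_R

k = np.arange(N)
dft = np.fft.fft(w_bar)
formula = w[0] + 2 * sum(w[r] * np.cos(2 * np.pi * r * k / N)
                         for r in range(1, n + 1))
assert np.allclose(dft, formula)             # real, cosine-series form

sigma2 = 1 + 4 * np.sum(
    np.cos(2 * np.pi * np.outer(np.arange(1, n + 1), k) / N)**2, axis=0)
```

Since \(\hat{\bar{w}}_{k}\) is a sum of \(w_{0}\) and the \(w_{r}\) with weights \(1\) and \(2\cos (2\pi rk/N)\), its variance per unit time is exactly \(\sigma _{k}^{2}\), which is the rescaling used in the proof.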

The decoupling of the system in Fourier space greatly facilitates its treatment and allows for explicit solutions using dynamic programming techniques and the HJBI equation (see e.g., [3, 21, 24] and references therein).

Proposition 2

(Nash equilibrium) Let \(\Lambda _{k}=\frac{\hat{c} _{k}^{2} }{2 \theta } - \frac{\hat{b}_{k}^{2}}{2 q}\), \(A_{k}=2 \hat{a}_{k}-r\), and \(M_{k}\) be the smallest positive solution of
$$\begin{aligned} \Lambda _{k} M_{k}^{2}+ A_{k} M_{k} + 2 p =0, \end{aligned}$$
such that \(R_{k}:=\hat{a}_{k} + \Lambda _{k} M_{k}< \frac{r}{2}\). The Nash equilibrium for the stochastic differential game (7) is given by the feedback laws
$$\begin{aligned} \hat{u}_{k}^{*}=-\frac{\hat{b}_{k} M_{k}}{2 q}\hat{x}_{k}^{*}, \,\,\,\, \hat{v}_{k}^{*}=\frac{\hat{c}_{k} \, M_{k} }{2 \theta }\hat{x}_{k}^{*}, \,\,\, k \in {\mathbb {Z}}_{N}, \end{aligned}$$
where \(\hat{x}^{*}=(\hat{x}^{*}_{0},\cdots , \hat{x}^{*}_{N-1})\) is the optimal state which solves the equation
$$\begin{aligned} d\hat{x}_{k}^{*}= R_{k} \hat{x}_{k}^{*} dt + \hat{c}_{k} \sigma _{k} d{\mathfrak {w}}_{k}. \end{aligned}$$


Fix \(k \in {\mathbb {Z}}_{N}\), let \(V_{k}\) be the value function for the corresponding problem (7), and let \(\mathcal {L}_{k}: C^{2}({\mathbb {R}}) \rightarrow C({\mathbb {R}})\) be the infinitesimal generator of the diffusion process \(\{\hat{x}_{k}(t)\}\), \(t \in {\mathbb {R}}_{+}\), defined by
$$\begin{aligned} (\mathcal {L}_{k} \Phi )(\hat{x}_{k} )= (\hat{a}_{k} \hat{x}_{k} + \hat{b}_{k} \hat{u}_{k} + \hat{c}_{k} \hat{v}_{k}) \frac{\partial \Phi }{\partial \hat{x}_{k} } + \frac{1}{2} \hat{c}_{k}^{2} \sigma ^{2}_{k} \frac{\partial ^{2} \Phi }{\partial \hat{x}_{k}^{2} } , \end{aligned}$$
and consider the Hamiltonian function \(\bar{H}_{k}\), which for any function \(\Phi \) of sufficient regularity is defined by
$$\begin{aligned} \bar{H}_{k}\left( \hat{x}_{k}, \frac{\partial \Phi }{\partial \hat{x}_{k} } , \frac{\partial ^{2} \Phi }{\partial \hat{x}_{k}^{2} } \right) :=\inf _{\hat{u}_{k}}\sup _{\hat{v}_{k}} \left( p\hat{x}_{k}^{2}+ q \hat{u}_{k}^{2}- \theta \hat{v}_{k}^{2} + \mathcal {L}_{k} \Phi \right) , \end{aligned}$$
where the optimization problems are considered as static optimization problems over appropriate subsets of \({\mathbb {R}}\). Dynamic programming arguments imply that the value function \(V_{k}\) satisfies the HJBI equation
$$\begin{aligned} r V_{k}= \bar{H}_{k}\left( \hat{x}_{k}, \frac{\partial V_{k} }{\partial \hat{x}_{k} } , \frac{\partial ^{2} V_{k} }{\partial \hat{x}_{k}^{2} } \right) , \end{aligned}$$
and the optimal strategy is obtained from the maximizer and the minimizer of \(\bar{H}_{k}\).
To obtain the exact form of the HJBI equation we must first calculate \(\bar{H}_{k}(\hat{x}_{k},\Phi _{\hat{x}_{k}},\Phi _{\hat{x}_{k}\hat{x}_{k}})\) for any function \(\Phi \), where we use the shorthand notation \(\Phi _{\hat{x}_{k}}= \frac{\partial \Phi }{\partial \hat{x}_{k} } \) and \(\Phi _{\hat{x}_{k}\hat{x}_{k}}= \frac{\partial ^{2} \Phi }{\partial \hat{x}_{k}^{2} } \) for simplicity. The solution of the static maximization problem over \(\hat{v}_{k}\) is given by the first order condition \(\hat{v}_{k}^{*}=\frac{\hat{c}_{k}}{2\theta }\Phi _{\hat{x}_{k}}\), and this corresponds to a maximum value
$$\begin{aligned} \Psi _{k}:=\frac{\hat{c}_{k}^{2} \sigma ^{2}_{k}\Phi _{\hat{x}_{k}\hat{x}_{k}}}{2}+q\,\hat{u}_{k} ^{2}+p\,\hat{x}_{k}^{2}+\hat{b}_{k}\,\Phi _{\hat{x}_{k}}\,\hat{u}_{k}+\hat{a}_{k}\,\Phi _{\hat{x}_{k}}\,\hat{x}_{k}+\frac{\hat{c}_{k}^{2}\Phi _{\hat{x}_{k}}^{2}}{4\theta }. \end{aligned}$$
We now minimize the function \(\Psi _{k}\) with respect to \(\hat{u}_{k}\). The first order condition for the minimum is \(\hat{u}_{k}=-\frac{\hat{b}_{k}\,\Phi _{\hat{x}_{k}}}{2\,q}\) which upon substitution provides an explicit form for the Hamiltonian as
$$\begin{aligned} \bar{H}_{k}(\hat{x}_{k},\Phi _{\hat{x}_{k}},\Phi _{\hat{x}_{k}\hat{x}_{k}})=\frac{\hat{c}_{k}^{2}\sigma ^{2}_{k}\,\Phi _{\hat{x}_{k}\hat{x}_{k}}}{2}+p\,\hat{x}_{k}^{2}+\hat{a}_{k}\,\Phi _{\hat{x}_{k}}\,\hat{x}_{k}-\frac{\hat{b}_{k}^{2}\Phi _{\hat{x}_{k}}^{2}}{4\,q}+\frac{\hat{c}_{k}^{2}\,\Phi _{\hat{x}_{k}}^{2}}{4\theta }. \end{aligned}$$
The HJBI equation thus assumes the form
$$\begin{aligned} \frac{\hat{c}_{k}^{2} \sigma ^{2}_{k} }{2}\,\frac{\partial ^2 V_{k}}{\partial \hat{x}_{k}^2}+p\,\hat{x}_{k}^{2}+\hat{a}_{k}\,\hat{x}_{k}\frac{\partial V_{k}}{\partial \hat{x}_{k}}+\left( -\frac{\hat{b}_{k}^{2}}{4\,q}+\frac{\hat{c}_{k}^{2}}{4\theta }\right) \left( \frac{\partial V_{k}}{\partial \hat{x}_{k}}\right) ^2=r\, V_{k}, \end{aligned}$$
which is a nonlinear second order differential equation. The quadratic nature of the payoff function motivates us to look for solutions of the form
$$\begin{aligned} V_{k}(\hat{x}_{k})=\frac{M_{2,k}}{2}\hat{x}_{k}^{2}+ M_{1,k} \hat{x}_{k} + M_{0,k}, \end{aligned}$$
where \(M_{i,k}\), \(i=0,1,2\) are constants to be specified. Substituting into the HJBI equation and matching coefficients of different orders of \(\hat{x}_{k}\) we obtain that the coefficients \(M_{i,k}\), \(i=0,1,2\) are given by \(M_{1,k}=0\), \( M_{0,k}=\frac{\hat{c}_{k}^{2} \sigma ^{2}_{k} M_{2,k}}{2 r}\), and \(M_{2,k}=:M_{k}\) must be a positive solution of Eq. (9). Substituting this expression in the first order conditions for the static optimization problem that provided the expression for \(\bar{H}_{k}\) yields the feedback form for the optimal strategies of Eq. (10) and substitution into the state equation leads to (11). The dual problem is easily seen to have the same solution; therefore, the proposed solution is a Nash equilibrium. The condition \(R_{k}<\frac{r}{2}\) is needed so that the transversality condition for the infinite horizon optimal control problem holds. \(\square \)

Corollary 1

Assume that either (a) \(\Lambda _{k}<0\) or (b) \(\Lambda _{k}>0\), \(A_{k}<0\), \(8p \Lambda _{k} < A_{k}^{2}\) hold. Then, the Nash equilibrium and optimal path are as in Proposition 2 with
$$\begin{aligned} M_{k}=\frac{-A_{k} -\sqrt{A_{k}^2-8 p \Lambda _{k}}}{2 \Lambda _{k}} \end{aligned}$$
$$\begin{aligned} R_{k}=\frac{r}{2}-\frac{\sqrt{A_{k}^2-8 p \Lambda _{k}}}{2 } \end{aligned}$$


The proof follows by an analysis of the roots of the quadratic equation (9), \(\rho _{k,\pm }=\frac{1}{2\Lambda _{k}}(-A_{k} \pm \sqrt{A_{k}^{2}-8 p \Lambda _{k} })\). It is seen that (a) if \(\Lambda _{k}<0\), then irrespective of the sign of \(A_{k}\), \(\rho _{k,+}<0<\rho _{k,-}\), and (b) if \(\Lambda _{k}>0\), \(A_{k}<0\), and \(8 p \Lambda _{k} < A_{k}^{2}\), then \(0< \rho _{k,-}<\rho _{k,+}\). Calculating \(R_{k}\) we see that only \(\rho _{k,-}\) is acceptable in all possible cases. \(\square \)
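For a single Fourier mode, the Nash equilibrium of Proposition 2 can be computed directly from the closed form of Corollary 1 (all parameter values below are ours, chosen to satisfy case (a)):

```python
import numpy as np

# Sketch of Proposition 2 / Corollary 1 for one Fourier mode (values ours):
# solve Lambda*M^2 + A*M + 2p = 0 via the root rho_- and verify M > 0 and
# the transversality condition R_k < r/2.
p, q, theta, r = 1.0, 1.0, 2.0, 0.05
a_k, b_k, c_k = -0.3, 0.5, 0.4

Lam = c_k**2 / (2 * theta) - b_k**2 / (2 * q)       # Lambda_k
A = 2 * a_k - r                                     # A_k
M = (-A - np.sqrt(A**2 - 8 * p * Lam)) / (2 * Lam)  # root rho_- of (9)
R = a_k + Lam * M                                   # R_k

assert np.isclose(Lam * M**2 + A * M + 2 * p, 0.0)  # M solves the quadratic
assert M > 0 and R < r / 2                          # admissible solution
# robust feedback laws:  u* = -(b_k M / 2q) x,   v* = (c_k M / 2 theta) x
```

The feedback gains in the last comment are those of Eq. (10); the closed optimal state then follows the Ornstein–Uhlenbeck dynamics (11) with drift coefficient \(R_{k}\).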

Remark 2

If \(M_{k}\) is complex, then we cannot construct a solution to the problem using the quadratic ansatz. If \(M_{k}\) is negative, then the solution we construct is not convex in \(x\). While this would be unacceptable in a standard minimization problem, we are reluctant to exclude it on a priori grounds here. However, by interpreting the solution of the problem as the solution of a minimization problem for the worst possible model \({\mathbb {Q}}_{w}\), it makes sense from the economic point of view to ask for \(M_{k}>0\).

The next proposition provides an exact connection of the stochastic differential game (7) with a robust control problem with global entropic constraints.

Proposition 3

Assume that either (a) \(\Lambda _{k}<0\) or (b) \(\Lambda _{k}>0\), \(A_{k}<0\), \(8p \Lambda _{k} < A_{k}^{2}\) hold. A solution of the stochastic differential game (7) for \(\theta \) such that
$$\begin{aligned} \frac{1}{N} \sum _{k} \frac{\hat{c}_{k}^{4}\sigma _{k}^{2} M_{k}^{2}}{2 r \theta ^2} \frac{1}{r-2 R_{k}} \le H_{0} \end{aligned}$$
corresponds to a solution of the robust control problem with global entropic constraint \(\sum _{n} \mathcal{D}_{n} \le H_{0}\). Therefore, as \(\theta \) decreases, \(H_{0}\) increases.


Under the imposed conditions, by Corollary 1 there exists \(M_{k}>0\) such that Proposition 2 holds, and we may calculate the global discounted entropy explicitly. Applying Itō’s lemma to the function \(f(t,x)= \frac{\hat{c}_{k}^{4}\sigma _{k}^{2} M_{k}^{2}}{4\theta ^2}e^{-r t}x^2\), composed with the Itō process \(\hat{x}_{k}\), we find that \(\hat{y}_k:={\mathbb E}_{{\mathbb {Q}}}[e^{-r t}\hat{v}_{k}^2]\) satisfies the deterministic ordinary differential equation
$$\begin{aligned} \hat{y}_{k}'=(2 R_{k}-r )\hat{y}_{k} + \frac{\hat{c}_{k}^{4}\sigma _{k}^{2} M_{k}^{2}}{2\theta ^2} e^{-r t}, \end{aligned}$$
which can be integrated to give
$$\begin{aligned} \hat{y}_{k}(t)= \frac{\hat{c}_{k}^{4}\sigma _{k}^{2} M_{k}^{2}}{2\theta ^2} \frac{1}{2 R_{k}} \left( e^{(2 R_{k}-r)t} -e^{-r t} \right) , \end{aligned}$$
and a further integration over \([0,\infty )\) yields that
$$\begin{aligned} {\mathbb E}_{{\mathbb {Q}}}\left[ \int \limits _{0}^{\infty } e^{-r t}\hat{v}_{k}^{2}(t)\, dt \right] = \frac{\hat{c}_{k}^{4}\sigma _{k}^{2} M_{k}^{2}}{2 r \theta ^2} \frac{1}{r-2 R_{k}}, \,\,\, k \in {\mathbb {Z}}_{N}. \end{aligned}$$
An application of the Plancherel theorem allows us to compute the global entropy explicitly as
$$\begin{aligned} \sum _{n}\mathcal{D}_{n}={\mathbb E}_{{\mathbb {Q}}}\left[ \int \limits _{0}^{\infty } e^{-r t}\sum _{n} v_{n}^2(t) dt \right] =\frac{1}{N} \sum _{k} \frac{\hat{c}_{k}^{4}\sigma _{k}^{2} M_{k}^{2}}{2 r \theta ^2} \frac{1}{r-2 R_{k}}. \end{aligned}$$
This allows us to obtain an explicit connection between the Lagrange multiplier \(\theta \) and the radius of the entropy ball \(H_{0}\). Using (13) yields (12). Note that this relationship is deceptively simple, as \(R_{k}\) and \(M_{k}\) depend on \(\theta \). Nevertheless, it reveals a general trend in the relationship between \(\theta \) and \(H_{0}\): as \(\theta \) decreases, we can easily see that \(H_{0}\) increases. \(\square \)
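The trend just described can be checked numerically for a single mode. The sketch below uses the acceptable root \(\rho_{k,-}\) for \(M_{k}\) and the per-mode summand of the global entropy formula; all parameter values and helper names (`mode_quantities`, `entropy_term`) are our own illustrative assumptions, not taken from the paper.

```python
import math

def mode_quantities(a_hat, b_hat, c_hat, p, q, r, theta):
    # Lambda_k and A_k as defined in the text; M_k is taken to be the
    # acceptable root rho_{k,-} of the quadratic (9), and R_k = a_k + Lambda_k M_k.
    Lam = c_hat**2 / (2 * theta) - b_hat**2 / (2 * q)
    A = 2 * a_hat - r
    disc = A * A - 8 * p * Lam
    assert disc > 0, "no real root: type I hot spot"
    M = (-A - math.sqrt(disc)) / (2 * Lam)
    R = a_hat + Lam * M
    return M, R

def entropy_term(a_hat, b_hat, c_hat, sigma, p, q, r, theta):
    # Per-mode summand of the global discounted entropy formula above.
    M, R = mode_quantities(a_hat, b_hat, c_hat, p, q, r, theta)
    assert r - 2 * R > 0  # the discounted time integral must converge
    return c_hat**4 * sigma**2 * M * M / (2 * r * theta**2) / (r - 2 * R)

# Illustrative parameters satisfying case (a), Lambda_k < 0 (not from the paper).
params = dict(a_hat=-1.0, b_hat=1.0, c_hat=1.0, sigma=1.0, p=1.0, q=1.0, r=0.05)
H0_loose = entropy_term(theta=5.0, **params)
H0_tight = entropy_term(theta=2.0, **params)
assert H0_tight > H0_loose > 0  # smaller theta => larger entropy radius H_0
```

The final assertion reproduces the monotone trend of the proposition: tightening \(\theta\) enlarges the implied entropy ball.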

4 Hot Spot Formation in Translation Invariant Systems

In this section we study the validity of the solution procedure and the qualitative behavior of the controlled system. We will call certain qualitative changes in the behavior of the system hot spots. We will define two types of hot spots:
  • Hot spot of type I: This is a breakdown of the solution procedure, i.e., a set of parameters for which a solution to the above problem does not exist.

  • Hot spot of type II: This corresponds to the case where the solution exists but may lead to spatial pattern formation, i.e., to a spatial instability similar to the Turing instability.

In what follows we discuss the formation of hot spots in the case of finite lattices \({\mathbb {Z}}_{N}\).

4.1 Hot Spots of Type I

The breakdown of the solution procedure is associated with the failure to obtain a solution to the HJBI equation. In the context of the specific type of solutions considered here (see the proof of Proposition 2), this corresponds to the absence of real solutions to the algebraic quadratic equation (9). Therefore, hot spots of type I are related to the nonexistence of solutions to the robust control problem.

Proposition 4

(Type I hot spot creation) Let \(\Lambda _{k}=\frac{\hat{c} _{k}^{2}}{2 \theta } - \frac{\hat{b}_{k}^{2}}{2q}\) and \(A_{k}=2 \hat{a}_{k}-r\). Hot spots of Type I may be created if
$$\begin{aligned} \Lambda _{k}>0 \,\,\, \text{ and } \,\,\, A_{k}^2 < 8 p \Lambda _{k}. \end{aligned}$$


Proof

The occurrence of hot spots of type I is associated with the nonexistence of real roots for the quadratic equation (9). This is equivalent to the discriminant being negative, which can only happen if \(\Lambda _{k}>0\) and \(A_{k}^{2} < 8 p \Lambda _{k}\). Therefore, hot spot formation of type I will occur if the parameter values are such that (14) holds. \(\square \)

Condition (14) for the occurrence of hot spots of Type I is a multiparameter condition whose validity can easily be checked for concrete models. However, a general qualitative result that can be obtained from Proposition 4 is that if \(\theta \) is too low, then condition (14) is always satisfied and hot spots of type I are expected to form. This result is reasonable, as for low values of \(\theta \) the payoff functional of problem (4) is no longer guaranteed to be concave in \(v\), thus causing the assumptions of the minimax theorem that guarantees existence of Nash equilibria to fail. If we recall the interpretation of \(\theta \) as the Lagrange multiplier for the global entropic constraint, we may offer an alternative interpretation of this result by stating that large uncertainty (i.e., a large radius of the entropy ball) causes the robust control procedure to break down. From condition (14), an explicit threshold \(\theta _{cr}\), below which formation of hot spots of Type I is expected, can be obtained as \(\theta _{cr}:=\min _{k\in {\mathbb {Z}}_{N}}\left\{ \frac{p\hat{c}_{k} ^{2}}{\left( \hat{a}_{k}-\frac{r}{2}\right) ^{2}+\frac{p}{q}\hat{b}_{k}^{2} }\right\} \).
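As a quick check of this threshold, the sketch below computes \(\theta_{cr}\) for a small set of illustrative Fourier coefficients (invented, not taken from the paper) and verifies that condition (14) holds at every mode once \(\theta\) drops below it.

```python
def theta_cr(a_hat, b_hat, c_hat, p, q, r):
    # Threshold from the discussion of Proposition 4: below this value of theta,
    # condition (14) holds at every mode k in Z_N.
    return min(
        p * c**2 / ((a - r / 2) ** 2 + (p / q) * b**2)
        for a, b, c in zip(a_hat, b_hat, c_hat)
    )

def condition_14(a, b, c, p, q, r, theta):
    # Condition (14): Lambda_k > 0 and A_k^2 < 8 p Lambda_k.
    Lam = c**2 / (2 * theta) - b**2 / (2 * q)
    A = 2 * a - r
    return Lam > 0 and A * A < 8 * p * Lam

# Illustrative coefficients for N = 3 modes (not from the paper).
a_hat, b_hat, c_hat = [-1.0, -0.5, 0.2], [1.0, 0.8, 0.5], [1.0, 1.2, 0.9]
p, q, r = 1.0, 1.0, 0.05
t_cr = theta_cr(a_hat, b_hat, c_hat, p, q, r)
theta = 0.9 * t_cr  # strictly below the threshold
assert all(condition_14(a, b, c, p, q, r, theta)
           for a, b, c in zip(a_hat, b_hat, c_hat))
```

The assertion confirms the qualitative statement in the text: once \(\theta<\theta_{cr}\), the discriminant of (9) is negative at every mode.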

4.2 Hot Spots of Type II

We now consider the spatial behavior of the optimal path, as given by the Itō stochastic differential equation (11). The optimal path is a random field, thus leading to random patterns in space, some of which may be short lived and generated simply by the fluctuations of the Wiener process. We thus look for the spatial behavior of the mean field, as described by the expectation \(\hat{X}_{k}:={\mathbb {E}}_{{\mathbb {Q}}}[\hat{x} _{k}^{*}]\). By standard linear theory \(\hat{X}_{k}(t)=\hat{X}_{k} (0)\exp (R_{k}t)\), which means that the modes \(k\in {\mathbb {Z}}_{N}\) such that \(R_{k}\ge 0\) exhibit temporal growth, and these modes will dominate the long-term temporal behavior. On the contrary, modes \(k\) such that \(R_{k}<0\) decay as \(t\rightarrow \infty \); therefore, such modes correspond to (short term) transient temporal behavior, not likely to be observable in the long run. The above discussion implies that the long time asymptotic mean behavior of the solution in real space will yield a spatial pattern of the form
$$\begin{aligned} X_{n}(t):={\mathbb {E}}_{{\mathbb {Q}}}[x_{n}(t)]=\sum _{k \, : \, R_{k} \ge 0}\hat{x}_{k}(0)\,\exp (R_{k}t)\,\cos \left( 2\pi \frac{k}{N}n\right) . \end{aligned}$$
The above discussion therefore leads us to an important conclusion for the economic theory of spatially interconnected systems:

If, as an effect of the robust optimal control procedure exerted on the system, there exist modes \(k \in {\mathbb {Z}}_{N}\) such that \(R_{k} >0\), then spatial patterns of the form (15) will form. We will call such patterns an optimal robustness-induced spatial instability or a hot spot of Type II.

The economic significance of this result should be stressed. We show the emergence of a spatial pattern formation instability, which can be triggered by the optimal control procedures exerted on the system; in other words, the emergence of spatial clustering and agglomerations in the economy caused by uncertainty aversion and robust control. This observation extends to nonlinear dynamics, in the weakly nonlinear case.

The next proposition identifies which modes can lead to hot spot of Type II formation (optimal robustness-induced spatial instability) and in this way through Eq. (15) identifies possible spatial patterns that can emerge in the spatial economy.

Proposition 5

(Type II hot spot creation) Assume that either (a) \(\Lambda _{k}<0\) or (b) \(\Lambda _{k}>0\), \(A_{k}<0\), \(8p \Lambda _{k} < A_{k}^{2}\) hold. There exists pattern formation behavior for the optimal path of problem (4) if there exist modes \(k\) such that \(R_{k}>0\), i.e., if there exist modes \(k\) such that
$$\begin{aligned} \hat{a}_{k}(r-\hat{a}_{k}) > - 2 p \Lambda _{k}. \end{aligned}$$


Proof

The expectation \(X_{k}^{*}:={\mathbb {E}}_{{\mathbb {Q}}} [x_{k}^{*}]\), \(k\in {\mathbb {Z}}_{N}\), of the optimal path is given by the solution of the linear deterministic ordinary differential equation \(dX_{k}^{*}(t)=R_{k}X_{k}^{*}(t)\,dt\), \(k\in {\mathbb {Z}}_{N}\). Thus, pattern formation, i.e., hot spot of type II creation, occurs for those \(k\in {\mathbb {Z}}_{N}\) such that \(R_{k}>0\). This condition implies \(r^{2}>A_{k}^{2}-8 p \Lambda _{k}\), which upon rearrangement yields \(\hat{a}_{k}(r-\hat{a}_{k}) > -2 p \Lambda _{k}\). \(\square \)

The condition for instability clearly shows that the actions of the policy maker as well as robustness affect the possibility of creation of hot spots of type II through the term \(\Lambda _{k}\). Suppose that \(\hat{a}_{k}<0\) (so that this mode is stable for the uncontrolled system) and \(\Lambda _{k}<0\). Then, (16) cannot hold, so this mode remains stable in the presence of robust control. If on the contrary \(\hat{a}_{k}<0\) and \(\Lambda _{k}>0\), then condition (16) may hold as long as \(\Lambda _{k}> \frac{|\hat{a}_{k}|(r+ |\hat{a}_{k}|) }{2p}\). This implies that when \(\theta \) is small, robustness is expected to have a destabilizing effect on modes that are stable for the uncontrolled system and thus creates a tendency for pattern formation in the controlled system as an effect of the actions of the decision maker. If \(\theta \) is large, then the control procedure is expected in general to have a stabilizing effect on the system and suppress pattern formation.
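The mode-by-mode selection can be sketched numerically. The snippet below uses the closed form \(R_k = \frac{1}{2}\big(r - \sqrt{A_k^2 - 8p\Lambda_k}\big)\), which follows from \(M_k = \rho_{k,-}\), and checks that it selects exactly the modes satisfying condition (16); the per-mode data are invented values chosen so that conditions (a)/(b) of Proposition 5 hold at every mode, and none of the numbers come from the paper.

```python
import math

# Illustrative per-mode data on Z_6 (not from the paper); p and r are chosen
# so that conditions (a)/(b) of Proposition 5 hold at every mode.
p, r, N = 1.0, 1.0, 6
a_hat = [-1.0, -0.6, -0.4, -0.3, -0.4, -0.6]
Lam   = [-0.2,  0.1,  0.35, 0.3,  0.35, 0.1]

def R_mode(a, L):
    # Under (a)/(b), r - 2 R_k = sqrt(A_k^2 - 8 p Lambda_k) with A_k = 2 a_k - r,
    # hence R_k = (r - sqrt(A_k^2 - 8 p Lambda_k)) / 2.
    A = 2 * a - r
    return (r - math.sqrt(A * A - 8 * p * L)) / 2

R = [R_mode(a, L) for a, L in zip(a_hat, Lam)]
unstable = [k for k in range(N) if R[k] > 0]

# Condition (16) picks out exactly the same modes.
by_16 = [k for k in range(N) if a_hat[k] * (r - a_hat[k]) > -2 * p * Lam[k]]
assert unstable == by_16 == [2, 3, 4]

# Long-run mean pattern (15) at t = 0, with unit amplitudes on unstable modes.
X = [sum(math.cos(2 * math.pi * k * n / N) for k in unstable) for n in range(N)]
```

Note that the two modes with \(\Lambda_k<0\) stay stable, as the discussion above predicts, while the destabilized modes are precisely those with \(\Lambda_k\) large enough relative to \(|\hat a_k|(r+|\hat a_k|)/2p\).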

In conclusion, optimal robustness-induced spatial instability, or hot spot of Type II, is a spatial pattern formation mechanism which has similarities with the celebrated Turing instability mechanism, but with an important difference: the proposed mechanism can be triggered as an effect of control and robustness even when the uncontrolled system displays stable spatially homogeneous patterns. This is important, since it induces a selection procedure on the possible patterns. In fact, as the analysis in the proof of Proposition 5 indicates, only patterns which correspond to temporal growth rates less than \(\frac{r}{2}\) are allowed (by the transversality conditions of the system), a notable difference from the standard Turing instability, in which it is only the spatial structure of the system (i.e., the spectrum of \({\mathsf {A}}\)) that selects the possible unstable patterns. Therefore, temporal discounting plays an important role in hot spot of Type II formation. Furthermore, as noted above, the combined effects of control and robustness play an important role in the phenomenon as well, in contrast to the standard Turing instability, in which such effects are absent. On the technical side, our results report a Turing type instability in a forward–backward system (which derives from the Hamiltonian formulation of the optimal control problem) rather than simply in a forward Cauchy problem, which is the standard framework in the study of pattern formation. Finally, from the point of view of economic theory, even though the possibility of pattern formation has been discussed in a number of economic systems (see e.g., [29] or [11]), to the best of our knowledge this is the first time that this phenomenon has been reported in the combined presence of noise and robustness, highlighting the importance of these factors in the development of patterns.

4.3 The Cost of Robustness

Consider the solution of the problem when \(\theta \rightarrow \infty \), which is the case where the regulator is not concerned about model misspecification and trusts the benchmark model. In this case \(\hat{u}_{k}^{\infty *}=-\frac{\hat{b}_{k}M_{k}}{2q}\hat{x}_{k}^{*}\), \(\hat{v}_{k}^{\infty *}=0\), and the expectation \(\hat{X}_{k}^{\infty *}:={\mathbb {E}}_{{\mathbb {Q}}}[\hat{x} _{k}^{\infty *}]\), \(k\in {\mathbb {Z}}_{N}\), of the optimal path is given by the solution of the differential equation \(\frac{d}{dt} \hat{X}_{k}^{\infty *}(t)=R_{k} \hat{X}_{k}^{\infty *}(t)\), with \(R_{k}:=\hat{a}_{k}+\Lambda _{k}M_{k}\) and \(\Lambda _{k}=-\frac{\hat{b}_{k}^{2}}{2q}\). Let \(X_{n}^{\infty }(t):={\mathbb {E} }_{{\mathbb {Q}}}[x_{n}^{\infty }(t)]\) and \(U_{n}^{\infty }\left( t\right) =(B X^{\infty })_{n}(t)\) be the optimal mean paths for the state and the feedback control in real space, where \(B\) is an abbreviation of the control operator, defined through the appropriate inverse Fourier transforms.

The expected minimum cost of the regulator when the benchmark model is trusted will be
$$\begin{aligned} C^{\infty }=\int \limits _{0}^{\infty }e^{-rt}\sum _{n}\left[ q\left( X_{n}^{\infty }(t)\right) ^{2}+p\left( ( B X^{\infty })_{n}(t)\right) ^{2}\right] dt. \end{aligned}$$
Consider now the solution of the robust control problem for a \(\theta \) such that \(\theta _{cr}<\theta <\infty \). In this case \(\hat{u}_{k}^{\theta *}=-\frac{\hat{b}_{k}M_{k}}{2q}\hat{x}_{k}^{*}\), \(\hat{v}_{k}^{\theta *}=\frac{\hat{c}_{k}\,M_{k}}{2\theta }\hat{x}_{k}^{*}\), and furthermore the expectation of the robust optimal path is given by the solution of \(\frac{d}{dt}X_{k}^{\theta *}(t)=R_{k}\left( \theta \right) X_{k} ^{\theta *}(t)\), with \(R_{k}\left( \theta \right) :=\hat{a}_{k}+\Lambda _{k}M_{k}\) and \(\Lambda _{k}=\frac{\hat{c}_{k}^{2}}{2\theta }-\frac{\hat{b}_{k} ^{2}}{2q}\). Defining again the optimal mean paths in real space, and using the notation \(B^{\theta }\) for the relevant control operator, the expected minimum cost of the regulator under robust control will be
$$\begin{aligned} C\left( \theta \right) =\int \limits _{0}^{\infty }e^{-rt}\sum _{n}\left[ q\left( X_{n}^{\theta }(t)\right) ^{2}+p\left( (B^{\theta }X^{\theta })_{n}(t)\right) ^{2}\right] dt. \end{aligned}$$
The cost of robustness will be defined as \(C_R\left( \theta \right) =C\left( \theta \right) -C^{\infty }\); robust regulation is feasible, since \(\theta _{cr}<\theta \), so that a hot spot of type I does not emerge. Suppose that there exist one or more modes \(k\) such that the cost of robustness, as defined in real space by (18), satisfies \(C_R\left( \theta \right) >0\). These modes will be hot spots of type III, since they imply that there are locations at which robust control increases the cost of the regulator relative to regulation using the benchmark model. The emergence of type III hot spots will become clearer when we study the general linear quadratic model with local entropic constraints.
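For a single Fourier mode, the discounted cost along the mean path can be computed in closed form, giving a concrete illustration of a positive cost of robustness. The helper `mode_cost` and all numbers below are our own assumptions (a stable mode with \(\Lambda_k<0\)); they are not from the paper.

```python
import math

def mode_cost(a, b, p, q, r, Lam, x0=1.0):
    # Discounted per-mode cost  q X^2 + p u^2  along the mean path, with
    # feedback u_k = -(b_hat_k M_k / 2q) x_k and X_k(t) = x0 exp(R_k t).
    # Lam encodes theta: Lam = c^2/(2 theta) - b^2/(2q), with
    # Lam = -b^2/(2q) in the limit theta -> infinity.
    A = 2 * a - r
    disc = A * A - 8 * p * Lam
    M = (-A - math.sqrt(disc)) / (2 * Lam)   # acceptable root rho_{k,-}
    R = (r - math.sqrt(disc)) / 2            # R_k = a + Lam * M
    assert r - 2 * R > 0                     # discounted integral converges
    beta = b * M / (2 * q)                   # feedback gain
    return (q + p * beta**2) * x0**2 / (r - 2 * R)

a, b, c, p, q, r = -1.0, 1.0, 1.0, 1.0, 1.0, 0.05   # illustrative values
C_inf   = mode_cost(a, b, p, q, r, Lam=-b**2 / (2 * q))      # theta -> infinity
C_theta = mode_cost(a, b, p, q, r, Lam=c**2 / 4 - b**2 / 2)  # theta = 2
assert C_theta - C_inf > 0   # positive per-mode cost of robustness
```

With these numbers the robust regulator pays strictly more than the regulator who trusts the benchmark model, the situation the text labels a type III hot spot.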

4.4 Nontranslation Invariant Systems

The methodology employed in this section to provide closed form solutions used the translation invariance property of the dynamical system, which allowed the use of the discrete Fourier transform. This is a symmetry property of the system (commutation of the vector field with the translation operator) which implies that the spatial operators are convolutions; therefore, the discrete Fourier transform may be used to turn these convolutions into products in Fourier space. This situation may be generalized to other symmetry groups and may lead to interesting generalizations for systems which are not translation invariant but are invariant under other, more complicated, symmetries. In this case the tools of harmonic analysis on groups (see e.g., [36]) may be used, and generalized Fourier transforms may be defined in terms of the Haar measure. In terms of this generalized Fourier transform, the system will decouple, thus allowing the use of the proposed method in more general settings (see e.g., [7] for a related discussion).

5 The General Linear Quadratic Control Problem

We now relax the simplifying (and restrictive) assumptions concerning the translation invariance property of the operators \({\mathsf {A}} , {\mathsf {B}}, {\mathsf {C}} \) as well as the overly restrictive assumption that \({\mathsf {P}} =p I\) and \({\mathsf {Q}} =q I\).

We now consider instead the solution of the general linear quadratic robust control problem (4) under the state constraint (3), and comment on the possibility of hot spot formation, working directly in real space rather than in Fourier transform space. The general form of the problem allows the study of a wider range of economic applications (see, e.g., Sect. 7 for an illustration of the applicability of the general problem). The relaxation of translation invariance leads to significant complications and to the inability to derive solutions in closed form. However, as our subsequent analysis shows, the qualitative aspects regarding hot spot formation in general linear quadratic problems persist beyond the translation invariant case.

5.1 Solution in Terms of the Riccati Equation

The problem may be treated using the HJBI equation, which is solvable in terms of a matrix Riccati equation. In what follows \(*\) denotes transposition of a matrix, \(Tr\) denotes the trace and \(\lambda _{min}\), \(\lambda _{max}\) the minimum and maximum eigenvalues, respectively (assuming they are real).

Theorem 1

If the problem (4) has a solution, for arbitrary \(x \in {\mathbb {R}}^{N}\), then the optimal controls are of the feedback control form
$$\begin{aligned} u= -{\mathsf {Q}} ^{-1}{\mathsf {B}}^{*} {\mathsf {H}} x, \,\,\,\,\, v=\frac{1}{\theta } {\mathsf {R}} ^{-1} {\mathsf {C}}^{*} {\mathsf {H}} x, \end{aligned}$$
and the optimal state satisfies the Ornstein–Uhlenbeck equation
$$\begin{aligned} dx=({\mathsf {A}} + \bar{{\mathsf {E}} } {\mathsf {H}})x \, dt + {\mathsf {C}} dW \end{aligned}$$
where \({\mathsf {H}}\) is a symmetric matrix which is the solution of the symmetric matrix Riccati equation
$$\begin{aligned} {\mathsf {A}}^{*} {\mathsf {H}}+ {\mathsf {H}}{\mathsf {A}}+ \frac{1}{2} {\mathsf {H}}{\mathsf {E}} {\mathsf {H}}- r {\mathsf {H}}+ 2 {\mathsf {P}}=0 \end{aligned}$$
and \({\mathsf {E}}:=\frac{1}{2}(\bar{{\mathsf {E}}}+\bar{{\mathsf {E}}}^{*})\) is the symmetric part of \(\bar{{\mathsf {E}}}:=\frac{1}{\theta } {\mathsf {C}} {\mathsf {R}} ^{-1}{\mathsf {C}}^{*}-{\mathsf {B}} {\mathsf {Q}} ^{-1}{\mathsf {B} }^{*} \).


Proof

To obtain the relevant HJBI equation we need to consider the Hamiltonian
$$\begin{aligned} H(V;x,u,v)&= \langle {\mathsf {A}} x + {\mathsf {B}} u + {\mathsf {C}} v, DV \rangle + \frac{1}{2} Tr({\mathsf {C}} {\mathsf {C}}^{*} D^{2}V)+\langle {\mathsf {P}}x,x\rangle \\&+\,\,\langle {\mathsf {Q} }u,u\rangle -\theta \langle {\mathsf {R}}v,v\rangle , \end{aligned}$$
(where \(DV\) and \(D^{2}V\) are the gradient and Hessian matrix of \(V\), respectively) as well as the upper and lower Hamiltonians defined, respectively, as
$$\begin{aligned} \bar{H}:=\sup _{v}\inf _{u}H(V;x,u,v),\,\,\,\underline{H}:=\inf _{u}\sup _{v}H(V;x,u,v). \end{aligned}$$
The quadratic nature of the Hamiltonian allows the explicit calculation of \(\bar{H}\) and \(\underline{H}\). Let us present in some detail the construction of \(\underline{H}\). The first order condition for the minimization over \(u\) yields \(u=-\frac{1}{2} {\mathsf {Q}}^{-1}{\mathsf {B}}^{*}DV\), whereas the first order condition for the maximization over \(v\) yields \(v=\frac{1}{2\theta }{\mathsf {R}}^{-1}{\mathsf {C}}^{*}DV\), where the fact that \({\mathsf {Q}}\) and \({\mathsf {R}}\) are symmetric and invertible has been used. Upon substitution of these into the Hamiltonian we obtain
$$\begin{aligned} \bar{H}=\underline{H}&= \langle {\mathsf {A}}x,DV\rangle +\frac{1}{4}\left\langle \left( \frac{1}{\theta }{\mathsf {C} }{\mathsf {R}}^{-1}{\mathsf {C}}^{*}- {\mathsf {B}}{\mathsf {Q}}^{-1}{\mathsf {B}}^{*}\right) DV,DV\right\rangle \\&+\,\,\left\langle {\mathsf {P}}x,x\right\rangle +\frac{1}{2} Tr({\mathsf {C}}{\mathsf {C} }^{*}D^{2}V), \end{aligned}$$
which allows us to express the HJBI equation in the form of a nonlinear elliptic equation in \({\mathbb {R}}^{N}\) as
$$\begin{aligned} \langle {\mathsf {A}}x,DV\rangle +\frac{1}{4}\left\langle \left( \frac{1}{\theta }{\mathsf {C} }{\mathsf {R}}^{-1}{\mathsf {C}}^{*}- {\mathsf {B}}{\mathsf {Q}}^{-1}{\mathsf {B}}^{*}\right) DV,DV\right\rangle +\left\langle {\mathsf {P}}x,x\right\rangle +\frac{1}{2} Tr({\mathsf {C}}{\mathsf {C} }^{*}D^{2}V)=rV. \end{aligned}$$
We seek a solution of the form \(V(x)=\frac{1}{2}\langle \bar{{\mathsf {H}}} x, x\rangle + \langle j, x\rangle + K\) where \( \bar{{\mathsf {H}}} \in {\mathbb {R}}^{N \times N}\), \(j \in {\mathbb {R}}^{N}\) and \(K \in {\mathbb {R}}\) are to be determined. Noting that \(DV={\mathsf {H}}x\) and \(D^2 V={\mathsf {H}}\) where \({\mathsf {H}}=\frac{1}{2}(\bar{{\mathsf {H}}}+\bar{{\mathsf {H}}}^{*})\) is the symmetric part of \(\bar{{\mathsf {H}}}\), substituting into the HJBI and matching powers of \(x\), we obtain for the quadratic terms the matrix Riccati equation
$$\begin{aligned} {\mathsf {H}}{\mathsf {A}}+ \frac{1}{4} {\mathsf {H}}\bar{{\mathsf {E}} } {\mathsf {H}}+ {\mathsf {P}}- \frac{r}{2} \bar{{\mathsf {H}}} =0. \end{aligned}$$
To obtain an equation involving \({\mathsf {H}}\) only, we take the transpose of (22) and add it to (22), which yields (21). Once \({\mathsf {H}}\) is determined, the remaining terms in the ansatz follow, by matching the remaining orders in \(x\), as \(j=0\) and \(K=\frac{1}{2r} Tr( {\mathsf {C}}{\mathsf {C}}^{*}{\mathsf {H}})\). The proof concludes with the determination of the optimal controls, which are easily shown to be of the form (19). \(\square \)

Remark 3

The matrix Riccati equation (21) is the generalization of the quadratic algebraic equation (9) in the case where the matrices \({\mathsf {A}}\), \({\mathsf {B}}\), and \({\mathsf {C}}\) are not translation invariant, and thus not amenable to analysis using the Fourier transform.

Clearly, by Theorem 1 the solvability and the properties of the solution for the optimal control problem are reduced to the solvability and the properties of the solution of the operator Riccati equation (21).

Proposition 6

Let \(m=||{\mathsf {A}} ||\) be defined as \(m=\sup \{\langle {\mathsf {A}} x,x\rangle \, : \, ||x||_{{\mathbb {R}}^{N}}=1\}\) and assume that \(m < r/2\). Then, for small enough values of \(||{\mathsf {E}}||\) and \(||{\mathsf {P}} ||\), the Riccati equation (21) admits a unique bounded solution.


Proof

By further defining the matrix \(\tilde{{\mathsf {A}} }={\mathsf {A}} -\frac{r}{2}I\), the symmetric matrix Riccati equation simplifies to
$$\begin{aligned} {\mathsf {H}} \tilde{{\mathsf {A}} } + \tilde{{\mathsf {A}} }^{*} {\mathsf {H}}+ \frac{1}{2}{\mathsf {H}} {\mathsf {E}} {\mathsf {H}} + 2{\mathsf {P}} =0 \end{aligned}$$
This is the standard form of the Riccati equation studied in the literature (see e.g., [9] or [16]). The spectrum of \(\tilde{{\mathsf {A}} }\) is in the interval \([-m-\frac{r}{2}, m-\frac{r}{2}]\), whereas the spectrum of \(-\tilde{{\mathsf {A}} }\) is in the interval \([\frac{r}{2} -m,m+\frac{r}{2}]\). If \(m<\frac{r}{2}\), then \(d:=dist(spec( \tilde{{\mathsf {A}} }), spec(- \tilde{{\mathsf {A}} }))>0\). Then, according to Theorem 3.7 in Albeverio et al. [2] (whose proof is based on the Banach contraction theorem), Eq. (23) has a unique solution. \(\square \)
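The contraction argument can be made concrete in the simplest setting where \({\mathsf A}\), \({\mathsf E}\), \({\mathsf P}\) are all diagonal, so that (23) decouples into scalar quadratics and the Lyapunov-inversion step of the fixed-point scheme reduces to a one-dimensional map. The sketch below is illustrative only: the entries are invented, and the scalar iteration merely mirrors the Banach contraction proof rather than implementing the general matrix algorithm.

```python
# With A, E, P diagonal, each diagonal entry h of H solves
#   2 h a_tilde + (1/2) e h^2 + 2 p = 0,   a_tilde = a - r/2 < 0,
# and inverting the (scalar) Lyapunov operator gives the fixed-point map
#   h  <-  (e h^2 / 2 + 2 p) / (2 |a_tilde|).

def riccati_diag(a, e, p, r, tol=1e-12, max_iter=1000):
    a_tilde = a - r / 2
    assert a_tilde < 0, "need spectrum of A below r/2 (m < r/2)"
    h = 0.0
    for _ in range(max_iter):
        h_new = (0.5 * e * h * h + 2 * p) / (2 * abs(a_tilde))
        if abs(h_new - h) < tol:
            return h_new
        h = h_new
    raise RuntimeError("no convergence: 'smallness' condition likely violated")

r = 1.0
data = [(0.2, 0.1, 0.05), (-0.3, 0.05, 0.02)]   # illustrative (a, e, p) entries
H = [riccati_diag(a, e, p, r) for a, e, p in data]
# Residual check against the diagonal Riccati equation itself.
for (a, e, p), h in zip(data, H):
    assert abs(2 * h * (a - r / 2) + 0.5 * e * h * h + 2 * p) < 1e-9
```

For larger `e` or `p` (the "smallness" condition violated) the map stops contracting and the iteration diverges, which is the scalar shadow of the type I breakdown discussed below.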

Remark 4

The “smallness” condition on \(||{\mathsf {E} }||\) and \(||{\mathsf {P}}||\) is made explicit via the Banach contraction argument in the proof of Theorem 3.7 in Albeverio et al. [2]. In particular, for the existence of a solution we need \(||{\mathsf {E} }||+||{\mathsf {P}}||<d\). It can be seen that this condition breaks down for small enough values of \(\theta \), which in fact is the analog of the hot spot of Type I that was obtained before, for the restricted class of models involving translation invariant operators, using the Fourier expansion method (see Proposition 4).

5.2 Hot Spot Formation in General Linear Quadratic Systems

The various hot spots that were obtained explicitly in the translation invariant case can be generalized to the general linear quadratic case.

Concerning hot spots of Type I, these are related to the breakdown of the minimax problem involved, for small values of the parameter \(\theta \). In fact, in McMillan and Triggiani [33] it has been shown rigorously, for a similar deterministic problem, that there does not exist a saddle point and that the functional becomes unbounded if \(\theta <\theta _{cr}\), where \(\theta _{cr}\) is an intrinsic parameter related to the data of the system. In this section we show that this result appears to carry over to the stochastic case as well, and furthermore, we provide detailed conditions on how this phenomenon may occur using the theory of the matrix Riccati equation. Therefore, hot spots of Type I do exist in general linear quadratic systems and are indeed related to model misspecification costs.

Proposition 7

(Hot spot formation in general systems) Consider the robust control problem (4).

(a) If \((\theta _{0},\ldots ,\theta _{N-1})\) are such that \({\mathsf {E}} =(e_{nm})\) is a positive matrix and
$$\begin{aligned} \lambda _{min}({\mathsf {E}} ) Tr({\mathsf {P}}) \ge \Lambda _{cr}:= \frac{ N}{4} \lambda _{\min }^{2}\left( {\mathsf {A}}+{\mathsf {A}}^{*}-r I\right) , \end{aligned}$$
where by \(\lambda _{min}\) we denote the lowest eigenvalue of the relevant matrix, then hot spots of type I occur. If the matrix \({\mathsf {E}} \) is diagonally dominant, then hot spots of type I will occur for values of \((\theta _{0},\ldots ,\theta _{N-1})\) such that
$$\begin{aligned} \min _{k}\{ e_{kk} - \sum _{m \ne k} |e_{km}|\} \ge \Lambda _{cr}. \end{aligned}$$
(b) A hot spot of type II appears if the matrix \(\mathcal {R}:={\mathsf {A} }+\bar{{\mathsf {E}} } {\mathsf {H}}\) has positive eigenvalues. Therefore, a condition for the occurrence of hot spots of type II is
$$\begin{aligned} 0 \le \lambda _{max}\left( {\mathsf {A} }+\bar{{\mathsf {E}} } {\mathsf {H}}\right) < \frac{r}{2}. \end{aligned}$$


Proof

(a) We transform (21) to (23). If \(\theta \) is small enough so that \({\mathsf {E}} \ge 0\), we may apply Theorem 1 of Zhu and Pagilla [47] concerning necessary conditions for the existence of positive solutions for matrix Riccati equations in general form. A necessary condition for the existence of a positive solution to the algebraic Riccati equation is \(\lambda _{min}({\mathsf {E}} ) Tr({\mathsf {P}})-\frac{ N}{4} \lambda _{\min }^{2}\left( \tilde{{\mathsf {A}}}+\tilde{{\mathsf {A}}}^{*}\right) < 0\) and \(\lambda _{\min }\left( \tilde{{\mathsf {A}}}+\tilde{{\mathsf {A}}}^{*}\right) < 0\). Therefore, a solution will not exist if \(\lambda _{min}({\mathsf {E}} ) Tr({\mathsf {P}})- \frac{N}{4} \lambda _{\min }^{2}\left( \tilde{{\mathsf {A}}}+\tilde{{\mathsf {A}}}^{*}\right) \ge 0\), which leads to (24). If \(\lambda _{min}({\mathsf {E}} ) \ge \underline{\lambda }\) and \(\lambda _{min}({\mathsf {A}}+ {\mathsf {A}}^{*}-r I) \le \overline{\lambda }\), then the left hand side of (24) is bounded below by \( \underline{\lambda } Tr({\mathsf {P}}) -\frac{ N}{4} \overline{\lambda }^{2}\); therefore, a hot spot of type I will occur if \( \underline{\lambda } Tr({\mathsf {P}}) - \frac{N}{4} \overline{\lambda }^{2}>0\). It remains to estimate the relevant eigenvalues. If \({\mathsf {E}} \) is diagonally dominant, using a Gershgorin type bound [26] we may estimate \(\underline{\lambda }\) as \(\underline{\lambda }=\min _{k}\{ e_{kk} - \sum _{m \ne k} |e_{km}|\}\), leading to (25).

(b) Assume without loss of generality that the matrix \({\mathcal R}\) has \(N\) discrete real eigenvalues \(\{\lambda _{i}\}\), \(i=1,\ldots , N\), with corresponding eigenvectors \(\{p_{i}\}\), \(i=1,\ldots , N\). The general solution of (20) can be obtained using this spectral decomposition of \({\mathcal R}\) and provides an expression of the form \({\mathbb E}_{{\mathbb {Q}}}[x(t)]=\sum _{n} e^{\lambda _{n} t} C_{n} p_{n}\), where \(C_{n} \in {\mathbb {R}}\) are arbitrary constants. This shows that the long-run behavior is dominated by the positive eigenvalues of the matrix \({\mathcal R}\), and the spatial behavior of the pattern is expressed in terms of the relevant eigenvectors. The lower bound in inequality (26) allows zero eigenvalues, which correspond to patterns related to the kernel of the matrix \({\mathcal R}\) and are marginal modes (related to the center manifold of the linear dynamical system describing the mean optimal path). The upper bound in the inequality is related to the transversality condition. \(\square \)

Note that the condition for existence of hot spots of type I (condition (24)) involves only the data of the problem, so in principle it is easy to verify by exact calculation of the relevant eigenvalues for the specific problem at hand. Moreover, it reveals an interesting pattern, reminiscent of what we obtained explicitly for the special case of translation invariant systems, i.e., that hot spots of type I are likely to appear in the limit as some of the \(\theta _{n} \rightarrow 0\). This is seen from the fact that the left hand side of (24) depends on the \(\theta _{n}\), and in fact the minimal eigenvalue increases as some of the \(\theta _{n} \rightarrow 0\), whereas the right hand side of (24) is independent of the \(\theta _{n}\). In fact, as \(\theta =\max _{n}\{\theta _{n}\} \rightarrow 0\), hot spot formation of type I is guaranteed to appear; however, it is possible that breakdown of solutions occurs as a result of some of the \(\theta _{n} \rightarrow 0\) but in such a way that (24) holds. This corresponds to hot spot creation starting from a nucleus of uncertainty at site \(n\), which of course propagates eventually to the whole system as a result of the spatial connectivity of the system.
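The Gershgorin estimate used in the proof can be illustrated directly. The matrix below is an invented example; for a symmetric \(2\times 2\) matrix the exact smallest eigenvalue is available in closed form, so the bound can be checked against it.

```python
import math

def gershgorin_lower(E):
    # Gershgorin lower bound for the smallest eigenvalue of a square matrix:
    # min_k ( e_kk - sum_{m != k} |e_km| ), as used in Proposition 7(a).
    n = len(E)
    return min(E[k][k] - sum(abs(E[k][m]) for m in range(n) if m != k)
               for k in range(n))

# Illustrative symmetric, diagonally dominant 2x2 matrix (not from the paper).
E = [[2.0, 0.3],
     [0.3, 1.5]]
tr = E[0][0] + E[1][1]
det = E[0][0] * E[1][1] - E[0][1] * E[1][0]
lam_min = (tr - math.sqrt(tr * tr - 4 * det)) / 2   # exact smallest eigenvalue
assert gershgorin_lower(E) <= lam_min
```

Here the bound gives \(\underline{\lambda}=1.2\) against an exact smallest eigenvalue of about \(1.36\), a valid (if conservative) lower estimate of \(\lambda_{min}({\mathsf E})\).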

On the other hand, the condition (26) for creation of hot spots of type II requires knowledge of the solution of the matrix Riccati equation (21), which can be obtained numerically. Alternatively, a priori bounds for the solution of (21) can be employed to provide estimates for \(\lambda _{max}\left( {\mathsf {A} }+\bar{{\mathsf {E}} } {\mathsf {H}}\right) \) in terms of the data of the problem.

6 Nonlinear Systems

6.1 General Form of the Controlled System

Consider now the nonlinear system
$$\begin{aligned} dx=({\mathsf {A}} x + {\mathsf {F}} (x) + {\mathsf {B}} u)dt + {\mathsf {C}} dw \end{aligned}$$
where as before \(x \in {\mathbb {R}}^{N}\), \({\mathsf {A}} : {\mathbb {R}}^{N} \rightarrow {\mathbb {R}}^{N}\) is a linear operator, \({\mathsf {F}} : {\mathbb {R}}^{N} \rightarrow {\mathbb {R}}^{N}\) is in general a nonlinear operator, and \({\mathsf {C}} \) is the covariance operator. The simplest choice for the nonlinear term \({\mathsf {F}} \) may be \({\mathsf {F}} (x)=(f_{1} (x_{1}),f_{2}(x_{2}),\ldots )\), in which case the nonlinear effects are purely local; however, this is by no means a necessary restriction. The functions \(\{f_{i}\}\) will be assumed to be twice differentiable and, together with their derivatives, to satisfy dissipativity conditions. The control acts on the system through the linear operator \({\mathsf {B}} : {\mathbb {R}}^{N} \rightarrow {\mathbb {R}}^{N}\).
The robust form of the system, using the Girsanov theorem, is
$$\begin{aligned} dx=({\mathsf {A}} x+{\mathsf {F}} (x) + {\mathsf {B}} u + {\mathsf {C}} v) dt + {\mathsf {C}} dw. \end{aligned}$$
We now consider a control functional of the form
$$\begin{aligned} J={\mathbb {E}}_{{\mathbb {Q}}}\left[ \int \limits _{0}^{\infty }e^{-rt}({\mathsf {U}} (x(t))+{\mathsf {K}}(u(t))-{\mathsf {T}}(v(t)))dt\right] \end{aligned}$$
where \({\mathsf {U}}:{\mathbb {R}}^{N}\rightarrow {\mathbb {R}}\) is a measure of distance from a desired target, \({\mathsf {K}}:{\mathbb {R}}^{N} \rightarrow {\mathbb {R}}\) is a cost function for the control, and \({\mathsf {T}} :{\mathbb {R}}^{N}\rightarrow {\mathbb {R}}\) is a cost function for the robustness. All three functions are assumed convex. The robust control problem thus becomes
$$\begin{aligned} \min _{u}\max _{v}{\mathbb {E}}_{{\mathbb {Q}}}\left[ \int \limits _{0}^{\infty }e^{-rt}({\mathsf {U} }(x(t))+{\mathsf {K}}(u(t))-{\mathsf {T}}(v(t)))dt\right] \end{aligned}$$
subject to the nonlinear state equation (27). By \({\mathsf {K} }^{\pounds }:{\mathbb {H}}^{*}\rightarrow {\mathbb {R}}\) we denote the Legendre–Fenchel transform of \({\mathsf {K}}\) defined by
$$\begin{aligned} {\mathsf {K}}^{\pounds }(p):=\sup _{x\in {\mathbb {H}}}[\langle p,x\rangle -{\mathsf {K}}(x)], \end{aligned}$$
where by the Riesz representation theorem we assume that the dual space \({\mathbb {H} }^{*}\simeq {\mathbb {H}}\).
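A one-dimensional numerical sketch of the Legendre–Fenchel transform (our own illustration, with an invented quadratic cost): for \({\mathsf K}(u)=\frac{q}{2}u^{2}\) the transform is \({\mathsf K}^{\pounds }(p)=\frac{p^{2}}{2q}\), which a brute-force maximization over a grid reproduces.

```python
def legendre_fenchel(K, p, grid):
    # Numerical Legendre-Fenchel transform K^L(p) = sup_x (p*x - K(x)),
    # evaluated by brute force over a one-dimensional grid.
    return max(p * x - K(x) for x in grid)

q = 2.0
K = lambda u: 0.5 * q * u * u                     # quadratic cost (illustrative)
grid = [i / 1000.0 for i in range(-5000, 5001)]   # x in [-5, 5], step 1e-3
for p in (-1.5, 0.0, 2.0):
    # Closed form: the sup is attained at x = p/q with value p^2 / (2q).
    assert abs(legendre_fenchel(K, p, grid) - p * p / (2 * q)) < 1e-5
```

For the quadratic costs \({\mathsf K}(u)=\langle {\mathsf Q}u,u\rangle\) of the linear quadratic sections, the same computation gives \({\mathsf K}^{\pounds}(p)=\frac14\langle {\mathsf Q}^{-1}p,p\rangle\), which is how the HJBI equation of Theorem 2 specializes to (the analog of) the earlier quadratic HJBI.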

6.2 Solution in Terms of the HJBI Equation

The nonlinear optimal control problem may be treated in terms of a fully nonlinear HJBI equation.

Theorem 2

The Hamilton-Jacobi-Bellman-Isaacs equation associated with the robust control problem (28) subject to the constraint (27) is the nonlinear PDE
$$\begin{aligned} \langle {\mathsf {A}} x + {\mathsf {F}} (x) , DV \rangle + Tr({\mathsf {C}} {\mathsf {C}}^{*} D^{2}V) + {\mathsf {U}}(x) - {\mathsf {K} }^{\pounds }(-{\mathsf {B}}^{*} DV) + {\mathsf {T}}^{\pounds }({\mathsf {C}}^{*} DV)=r V \end{aligned}$$
where \({\mathsf {K}}^{\pounds }\) and \({\mathsf {T}}^{\pounds }\) are the Legendre–Fenchel transforms of \({\mathsf {K}}\) and \({\mathsf {T}}\), respectively. Given a solution \(V:{\mathbb {H}} \rightarrow {\mathbb {R}}\) of this equation of sufficient regularity, the associated closed loop system is the nonlinear Ornstein–Uhlenbeck system
$$\begin{aligned} dx=({\mathsf {A}} x + {\mathsf {F}} (x) + {\mathsf {B}}D{\mathsf {K} }^{\pounds } (-{\mathsf {B}}^{*} DV(x)) + {\mathsf {C}}D{\mathsf {T}}^{\pounds }({\mathsf {C} }^{*} DV(x))) dt + {\mathsf {C}} dw \end{aligned}$$


Proof

Consider the (pre)-Hamiltonian
$$\begin{aligned} H(V;x,u,v)=\langle {\mathsf {A}} x + {\mathsf {F}} (x) + {\mathsf {B}} u + {\mathsf {C}} v, DV \rangle + Tr({\mathsf {C}} {\mathsf {C}}^{*} D^{2} V) + {\mathsf {U}}(x) + {\mathsf {K}}(u) - {\mathsf {T} }(v), \end{aligned}$$
which is separable. On account of the separability and convexity of \({\mathsf {K}}\) and \({\mathsf {T}}\) the upper and the lower Hamiltonian coincide and can be expressed as
$$\begin{aligned} \bar{H}&= \langle {\mathsf {A}} x + {\mathsf {F}} (x) , DV \rangle + Tr({\mathsf {C}} {\mathsf {C}}^{*} D^{2}V) + {\mathsf {U}}(x) + \inf _{u}\left\{ \langle {\mathsf {B}}u, DV \rangle +{\mathsf {K}}(u) \right\} \\&+\,\,\sup _{v}\left\{ \langle {\mathsf {C}}v, DV \rangle -{\mathsf {T}}(v) \right\} \end{aligned}$$
The last term is recognized as \({\mathsf {T}}^{\pounds }({\mathsf {C}}^{*} DV)\), whereas the second term is expressed as
$$\begin{aligned} \inf _{u}\left\{ \langle {\mathsf {B}}u, DV \rangle +{\mathsf {K}}(u) \right\} =-\sup _{u}\left\{ - \langle {\mathsf {B}}u, DV \rangle -{\mathsf {K}}(u) \right\} =-{\mathsf {K}}^{\pounds }(-{\mathsf {B}}^{*} DV). \end{aligned}$$
Thus, the Hamiltonian becomes
$$\begin{aligned} \bar{H}(x,V)=\langle {\mathsf {A}} x + {\mathsf {F}} (x) , DV \rangle + Tr({\mathsf {C}} {\mathsf {C}}^{*} D^{2}V) + {\mathsf {U}}(x) -{\mathsf {K}}^{\pounds }(-{\mathsf {B}}^{*} DV) + {\mathsf {T}}^{\pounds }({\mathsf {C}}^{*} DV), \end{aligned}$$
which leads to the HJBI equation in the form (29). Candidates for the optimal policy are \(u^{*}=\mathrm{argmax} \{\langle u, -{\mathsf {B}}^{*} DV \rangle - {\mathsf {K}}(u)\}\) and \(v^{*}=\mathrm{argmax}\{\langle v, {\mathsf {C}}^{*} DV \rangle - {\mathsf {T}}(v)\}\). By the theory of the Legendre–Fenchel transform (see, e.g., [6]), \(u^{*}\) is a maximizer of \(\langle u, -{\mathsf {B}}^{*} DV \rangle - {\mathsf {K}}(u)\) if and only if \(- {\mathsf {B}}^{*} DV \in \partial {\mathsf {K}}(u^{*})\), which in turn holds if and only if \(u^{*} \in \partial {\mathsf {K}}^{\pounds }(-{\mathsf {B}}^{*} DV)\), where \(\partial \) denotes the subdifferential operator. This provides a feedback formula for \(u\). Similarly we obtain that \(v^{*} \in \partial {\mathsf {T}}^{\pounds }({\mathsf {C}}^{*}DV)\). Assuming smoothness of \({\mathsf {K}}^{\pounds }\) and \({\mathsf {T}}^{\pounds }\), the subdifferential is single valued and coincides with the derivative; therefore, we obtain (30) for the optimal path. \(\square \)

6.3 Hot Spot Formation in Nonlinear Systems

We now briefly consider the possibility of hot spot formation for nonlinear robust control systems, by exploiting the understanding of the phenomenon we have developed from the linear quadratic case.

The occurrence of hot spots of type I is related to the breakdown of solutions of the HJBI equation (29). The solvability of the HJBI equation may follow by generalizing results of either Da Prato and Zabczyk [17] or Cerrai [14], for stochastic control problems, to the case of stochastic differential games. It is expected that \(\theta \rightarrow 0\) will be the relevant limit for the breakdown of the solution to the robust control problem. Indeed, if the Hamiltonian is locally Lipschitz, an extension of the results of Cerrai [14] leads to the existence of a critical value for \(\rho \) above which solutions exist. A scaling argument shows that in our case the relevant parameter becomes \(\theta \rho \), which in the limit \(\theta \rightarrow 0\) fails to satisfy this condition; therefore, hot spot creation of type I is expected in this limit. This intuitive argument can be put into more rigorous form, but a detailed study is outside the scope of the present paper and will be reported elsewhere.

Concerning the occurrence of hot spots of type II, the nonlinear form of the closed loop equation (30) does not allow us to obtain exact results concerning growing modes, but only approximate results obtained by linearization arguments around a chosen state. In particular, we may assume that the decision maker chooses a steady state \(x_{T}\) of the system, considered as a target, and then wishes to study the spatiotemporal deviations of the robust optimally controlled system, as expressed by the closed loop equation (30), around the desired state \(x_{T}\). There can be various choices for \(x_{T}\); one possibility is to assume that \(x_{T}\) is a solution of
$$\begin{aligned} 0={\mathsf {A}}x_{T} +{\mathsf {F}}(x_{T})+ {\mathsf {B}}D{\mathsf {K}}^{\pounds }(-{\mathsf {B}}^{*} DV(x_{T}))+{\mathsf {C}}D{\mathsf {T}}^{\pounds }({\mathsf {C}}^{*}DV(x_{T})). \end{aligned}$$
We then express the optimal path for the nonlinear stochastic system as \(x=x_{T}+\epsilon z\) where \(z\) is a stochastic process to be specified. Substituting this ansatz into (30) and assuming \(V, {\mathsf {K}}^{\pounds },{\mathsf {T}}^{\pounds } \in C^{2}\) allows us to linearize around \(x_{T}\) and obtain a linear equation for the mean deviation \(Z={\mathbb E}_{{\mathbb {Q}}}[z]\) of the form \(Z' = {\mathcal R} Z\) where \({\mathcal R}\) is the matrix defined by
$$\begin{aligned} {\mathcal R}&= {\mathsf {A}}+D{\mathsf {F}}(x_{T})-{\mathsf {B}}D^{2}{\mathsf {K}} ^{\pounds }(-{\mathsf {B}}^{*}DV(x_{T})) \,{\mathsf {B}}^{*}D^{2}V(x_{T})\\&+\,\,{\mathsf {C}}D^{2}{\mathsf {T}}^{\pounds }({\mathsf {C}}^{*}DV(x_{T})) \,{\mathsf {C}}^{*}D^{2}V(x_{T}). \end{aligned}$$
Then, an analysis similar to the one used in Proposition 7 leads to the result that hot spots of type II correspond to the eigenvectors of the matrix \({\mathcal R}\) with positive eigenvalues. Weakly nonlinear analysis may then provide some results concerning the evolution of the unstable modes beyond their onset. Furthermore, since the value functions and the Legendre–Fenchel transforms satisfy convexity properties, by the assumed regularity we have positivity properties for the Hessian matrices involved in the definition of \({\mathcal R}\) which can be used to obtain a priori estimates on the spectrum of \({\mathcal R}\).
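The spectral criterion above can be illustrated numerically. The sketch below builds a hypothetical symmetric circulant linearization matrix \({\mathcal R}\) on a small ring (the coefficients are illustrative and not derived from the model) and counts its growing modes:

```python
import numpy as np

N = 6
# Hypothetical symmetric circulant linearization R around x_T: self-effect a0
# and nearest-neighbour coupling a1 (illustrative values only).
a0, a1 = -0.1, 0.4
R = a0 * np.eye(N)
for n in range(N):
    R[n, (n + 1) % N] = R[n, (n - 1) % N] = a1

eigvals, eigvecs = np.linalg.eigh(R)   # ascending eigenvalues
growing = eigvals > 0                  # eigenvectors with positive eigenvalues:
                                       # candidate type II hot spots
# For this circulant R the eigenvalues are a0 + 2*a1*cos(2*pi*k/N), k = 0,...,N-1,
# so here three modes grow (k = 0, 1, 5) while the rest decay.
```

Mean deviations along the columns of `eigvecs` with `growing` entries amplify under \(Z' = {\mathcal R} Z\); the others relax back to the target state.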

An alternative way to obtain some results concerning hot spot formation in nonlinear systems is to apply the linearization scheme proposed by Magill [32], obtain a local linear quadratic approximation of the full system around a desired target state and then apply our detailed results concerning hot spot formation of the approximate linear quadratic robust control problem.

7 Application: Distance-Dependent Utility and Robust Control of In Situ Consumption

An issue that has been given attention in spatial models of individual behavior is the concept of distance-dependent utility. In models of travel behavior the impact of distance on trip preferences underlies the choice of an individual to consume at locations which are away from his/her current location. Distance-dependent utility relates to the concept of spatial discounting which, similar to time discounting, provides the weights which an individual attaches to utility derived at locations away from the current location (e.g., [1, 35, 40, 41, 46]). For exponential spatial discounting, for example, a spatial discount factor can be defined as \(\alpha (n) =\beta ^{-n}\), \(\beta >1\), \(n \in {\mathbb {Z}}_{N}\), indicating that the individual attaches declining weights to utility accruing at locations further away from his/her present location at \(n=0\).
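For illustration, the exponential spatial discount weights can be computed on a ring, interpreting the distance between sites as the ring distance \(\min (|n-m|, N-|n-m|)\) (our reading of distance on \({\mathbb {Z}}_{N}\)); the values of \(\beta \) and \(N\) below are illustrative:

```python
import numpy as np

# Exponential spatial discounting alpha(n) = beta**(-n), beta > 1, on a ring
# of N sites; beta = 1.5 and N = 8 are illustrative, not the paper's values.
N, beta = 8, 1.5

def ring_distance(n, m, N):
    d = abs(n - m) % N
    return min(d, N - d)           # shortest distance around the ring

# Weights an individual at site 0 attaches to utility at each site m
weights = np.array([beta ** (-ring_distance(0, m, N)) for m in range(N)])
```

The weight is 1 at the individual's own site and declines symmetrically in both directions around the ring.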

Spatial discounting and distance-dependent utility can be interpreted in terms of an individual expressing preferences for consuming at different points in space. This interpretation can be associated with traveling to consume, for example, environmental amenities which take the form of services generated by stocks of natural capital. According to the Millennium Ecosystem Assessment classification, stocks of natural capital accumulated in ecosystems generate supporting, provisioning, regulating, and cultural services. Some of these services can be consumed only in situ, which means that an individual needs to travel a certain distance in order to consume. Recreational or tourism-related services are a classic example of in situ consumption.

The analytical framework developed in this paper can be readily used to study the structure of equilibrium when utility is distance dependent, and an individual consumes in locations away from her own under uncertainty. We formulate this application in the context of an economy located on a discrete lattice defined in terms of the finite group of integers modulo \(N\). That is, in our spatial economy each location or cell belongs to a discrete ring of cells with the property that cell \(N+1\) is the same as cell 1, cell \(N+2\) the same as cell 2, and so on. A representative consumer is located at each cell (location) \(n \in {\mathbb {Z}}_{N}\). Each cell is characterized by a stock of natural capital \(x_{n}(t)\) which generates environmental services that can be consumed only in situ.

Consumption at location \(n\) is the sum of consumption of all individuals or \(u_{n}\left( t\right) =\sum _{m=0}^{N-1}u_{nm}\left( t\right) \), where \(u_{nm}\left( t\right) \) is the consumption of an individual located at location \(m\) of services at location \(n\). Consumption of services implies reduction of natural stocks. The evolution of the natural capital stock at a given location is determined by natural growth at the location and by the impact that stock levels at nearby locations might have on this natural growth rate. This impact might be positive or negative in the context of facilitating or competing growth. The evolution of the natural stock is subject to stochastic shocks. Thus, we write
$$\begin{aligned} dx_{n}\left( t\right) =\sum _{m=0}^{N-1}\left[ \alpha _{nm}x_{m}\left( t\right) -u_{nm}\left( t\right) \right] dt+\sum _{m=0}^{N-1}c_{nm}\left( t\right) dw_{m}, \end{aligned}$$
for the evolution of the stock of natural capital at location \(n\in {\mathbb {Z}}_{N}\). In this formulation \(x_{n}(t)\), for all \(n\), can be interpreted as the deviation from a spatially homogeneous benchmark equilibrium stock \(\bar{x}\), which could be determined historically (e.g., a preindustrial level). The terms \(\alpha _{nm}x_{m}(t)\) of (31) can be regarded as a first order approximation around the benchmark steady state, with \(\alpha _{nn}\) being the value of the derivative of the first order approximation evaluated at the benchmark equilibrium.15 The influence kernel \(\alpha _{nm}\), \(n\ne m\), describes the spatial effects of nearby stocks on the growth of the stock at \(n\), and is assumed symmetric with \(\alpha _{nm}\equiv \alpha _{n-m}=\alpha _{m-n}\equiv \alpha _{mn}\). The stock of amenities at location \(n\) is reduced by aggregate consumption \(u_{n}(t)\).
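A minimal Euler–Maruyama simulation of the stock dynamics (31) on the ring can be sketched as follows; the symmetric influence kernel, diagonal noise loading, and all sizes are hypothetical choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt, steps = 8, 0.01, 1000       # illustrative ring size, step, horizon
# Symmetric influence kernel alpha_{nm} = alpha_{(m-n) mod N}; values are
# hypothetical, with negative self-effect and weak positive neighbour effects.
kern = np.array([-1.0, 0.2, 0.05, 0.0, 0.0, 0.0, 0.05, 0.2])
A = np.array([[kern[(m - n) % N] for m in range(N)] for n in range(N)])
C = 0.1 * np.eye(N)                # diagonal noise loading (illustrative)
u = np.zeros(N)                    # consumption withdrawn at each site

x = np.ones(N)                     # deviation from the benchmark stock x_bar
for _ in range(steps):
    dw = rng.standard_normal(N) * np.sqrt(dt)
    x = x + (A @ x - u) * dt + C @ dw     # Euler-Maruyama step for (31)
```

With this stable kernel the deviations decay toward zero and then fluctuate around it at the scale set by the noise loading.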

7.1 Consumers

A representative individual located at a given location (cell) \(n\) can derive utility by traveling to the other locations of the ring and consuming the corresponding amenity services. Let \((u_{nm}(t), b_{nm}(t))\), \(m,n\in {\mathbb {Z}}_{N}\), denote the consumption at location \(m\) of an individual located at \(n\) and the corresponding bliss point for the same individual. An individual located at point \(n\) can travel to locations \(0,1,\ldots ,n-1,n+1,\ldots ,N-1\) to consume the services there and compare consumption to his/her corresponding bliss point.

We define the individual utility of the representative consumer at site \(n\) in terms of squared deviations of consumption from the bliss point, \(U_{n}=\sum _{m}\beta _{nm}( u_{nm}(t) -b_{nm})^{2}\), with the objective being the minimization of deviations from the corresponding bliss point (e.g., [31]). The influence kernel \(\beta _{nm}\) reflects the weight that the individual located at \(n\) attaches to the utility derived at location \(m\), so it can be interpreted as spatial discounting. We assume that the kernel is symmetric and that the impact depends only on the distance between \(n\) and \(m\), so that \(\beta _{nm}\equiv \beta _{n-m}=\beta _{m-n} \equiv \beta _{mn}\). If \(\beta _{nm}=\beta \) then all locations are treated equally. To consume services at location \(m\) an individual located at \(n\) must pay a known exogenous price \(p_{m}(t) \ge 0\), which could be, for example, an entrance fee.

Assume that the individual has a quasi-linear utility function with respect to a numeraire commodity, and that the price \(p_{m}(t)\) is treated as parametric. The individual's problem is
$$\begin{aligned} \max _{\{u_{nm}\}}~-\sum _{m=0}^{N-1}\beta _{nm}\left( u_{nm}\left( t\right) -b_{nm} \right) ^{2}+I_{n}\left( t\right) -\sum _{m=0}^{N-1}p_{m}\left( t\right) u_{nm}\left( t\right) ~\text {for all }n \end{aligned}$$
where \(I_{n}(t)\) is income at \(n\), and by \(\{u_{nm}\}\) we denote the whole family of processes \(u_{nm}(\cdot )\). The solution results in individual demand curves for consumption at each location \(m\) from a consumer located at \(n\). The first order conditions for the above problem yield that the consumption at site \(m\) of an individual based at \(n\) is \(u_{nm}=b_{nm}-\frac{1}{2\beta _{nm}} p_{m}\). All individuals consume according to this rule, so that the aggregate demand at location \(m\) and time \(t\), \(u_{m}(t):=\sum _{n=0}^{N-1}u_{nm}(t)\), is obtained as:
$$\begin{aligned} u_{m}\left( t\right) =\sum _{n=0}^{N-1}b_{nm}\left( t\right) -\left( \sum _{n=0}^{N-1}\frac{1}{2\beta _{nm}}\right) p_{m}\left( t\right) =:B_{0m} -B_{1m} p_{m}\left( t\right) . \end{aligned}$$
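The demand derivation can be verified numerically: the first order condition gives \(u_{nm}=b_{nm}-p_{m}/(2\beta _{nm})\), and summing over consumers reproduces the intercept \(B_{0m}\) and slope \(B_{1m}\). The bliss points, distance weights, and prices below are hypothetical:

```python
import numpy as np

N = 4
# Hypothetical bliss points b_{nm}, distance weights beta_{nm}, and parametric
# entrance fees p_m (all values illustrative, not calibrated).
b = np.full((N, N), 2.0)
beta = np.array([[1.0 / (1 + min(abs(n - m), N - abs(n - m)))
                  for m in range(N)] for n in range(N)])
p = np.array([0.5, 0.2, 0.0, 0.3])

# FOC of the quasi-linear problem: u_{nm} = b_{nm} - p_m / (2 beta_{nm})
u = b - p / (2 * beta)             # p broadcasts across consumers n

# Aggregate demand at site m: u_m = B0_m - B1_m p_m
B0 = b.sum(axis=0)
B1 = (1 / (2 * beta)).sum(axis=0)
```

Summing the individual demands over \(n\) recovers exactly the linear aggregate demand schedule above.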

7.2 The Regulator

Consider now a regulator who seeks to allocate consumption at each location (cell) of the ring by maximizing utility across the whole spatial domain subject to the dynamics of the natural stocks. The regulator is concerned about possible misspecification of these dynamics. Allowing for uncertainty about the “true” statistical distribution of the stocks, and following the discussion in Sect. 2, the evolution of the natural stocks can be written as
$$\begin{aligned} dx_{n}\left( t\right) =\sum _{m=0}^{N-1}\left[ \alpha _{nm}x_{m}\left( t\right) -u_{nm}\left( t\right) +c_{nm}\upsilon _{m}\left( t\right) \right] dt+\sum _{m=0}^{N-1}c_{nm} dw_{m}. \end{aligned}$$
We will consider two versions of an optimal control problem for the regulator. In the first version (the full problem) the regulator sets the levels of the individual consumption \(U=\{u_{nm}\} \in {\mathbb {R}}^{N^2 \times 1}\), where the vector of control variables is relabeled so that
$$\begin{aligned} U=(U_1,U_2,\ldots , U_{N^2})^{tr}=(u_{00},u_{01},\ldots , u_{N-1,N-1})^{tr}. \end{aligned}$$
In the second version (an approximate problem) the regulator chooses the levels of total consumption at each site \(u=(u_{0},\ldots ,u_{N-1})^{tr} \in {\mathbb {R}}^{N\times 1}\). Depending on which formulation we study, the dynamic law for the evolution takes a different compact form. For the first problem it is of the form
$$\begin{aligned} dx=({\mathsf {A}}x + \bar{{\mathsf {B}}} U + {\mathsf {C}}v)dt + {\mathsf {C}}dw, \end{aligned}$$
where \({\mathsf {A}}=(a_{nm}) \in {\mathbb {R}}^{N\times N}\), \(\bar{{\mathsf {B}}}=-{\mathbf 1} \in {\mathbb {R}}^{N \times N^2}\), and \({\mathsf {C}}=(c_{nm}) \in {\mathbb {R}}^{N\times N}\), whereas for the second problem it is of the form
$$\begin{aligned} dx=({\mathsf {A}}x + {\mathsf {B}}u + {\mathsf {C}}v)dt + {\mathsf {C}}dw, \end{aligned}$$
where \({\mathsf {B}}=-I \in {\mathbb {R}}^{N \times N}\).
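Assuming the controls \(U\) are stacked row-major (an ordering we adopt for illustration), the drift contribution \(-\sum _{m}u_{nm}\) in each \(dx_{n}\) is produced by an \(N\times N^{2}\) block matrix, which can be built as a Kronecker product:

```python
import numpy as np

N = 3
# With U ordered row-major as (u_{00}, ..., u_{0,N-1}, u_{10}, ...), the
# aggregation -(sum_m u_{nm}) in each dx_n is Bbar = -(I_N kron 1_N^T).
# The ordering and small N are our illustrative assumptions.
Bbar = -np.kron(np.eye(N), np.ones((1, N)))

U = np.arange(N * N, dtype=float)   # stand-in control vector (0, 1, ..., 8)
drift_part = Bbar @ U               # row n collects -(u_{n0} + ... + u_{n,N-1})
```

Each row of `Bbar` picks out one consumer block of length \(N\) and negates its sum, matching the \(-u_{nm}\) terms in the state equation.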
Under this notation, the regulator may face two distinct robust control problems with localized entropic constraints,
$$\begin{aligned}&\min _{\left\{ u_{nm}\left( \cdot \right) \right\} }\max _{\left\{ \upsilon _{n}\left( \cdot \right) \right\} }\mathbb {E}_{{\mathbb {Q}}}\left[ \int \limits _{0}^{\infty }e^{-rt}\left[ \sum _{n=0}^{N-1}\sum _{m=0}^{N-1}\beta _{nm}\left( u_{nm}\left( t\right) -b_{nm}\right) ^{2}-\sum _{n=0}^{N-1} \theta _{n} v_{n}^{2}\left( t\right) \right] dt\right] \nonumber \\&\text {subject to } (32) \end{aligned}$$
$$\begin{aligned}&\min _{\{u_{n}(\cdot )\}}\max _{\{v_{n}(\cdot )\}}\mathbb {E}_{{\mathbb {Q}}}\left[ \int \limits _{0}^{\infty }e^{-rt}\left[ \sum _{n=0}^{N-1}\beta _{n} (u_{n}(t)-b_{n} )^{2}-\sum _{n=0}^{N-1}\theta _{n} v^2_{n}(t)\right] dt\right] \nonumber \\&\text{ subject } \text{ to } (33) \end{aligned}$$
The second problem (35) can be considered as a simplification of the first one (34), in which the regulator chooses aggregate consumption at each site rather than individual consumption, chooses an arbitrary bliss point \(b_{n}\) at each location, and imposes a weight \(\beta _{n}\) and an entropic constraint at each site. It is conceivable that the regulator attaches the same weight to all locations, \(\beta _{n}=\beta \), \(n \in {\mathbb {Z}}_{N}\), as well as a uniform constraint at all points, \(\theta _{n}=\theta \), \(n\in {\mathbb {Z}}_{N}\), thus leading to a global entropic constraint.
Defining new control variables \(\bar{U}\) and \(\bar{u}\) by choosing the bliss points as reference points, we observe that both problems are of the general form of the linear quadratic problem (4): in particular, (34) corresponds to the choice \({\mathsf {P}}=0 \in {\mathbb {R}}^{N\times N}\), \(\bar{{\mathsf {Q}}}=diag(\beta _{00},\beta _{01},\ldots , \beta _{N-1,N-1}) \in {\mathbb {R}}^{N^2 \times N^2}\), \({\mathsf {R}}=diag(\theta _{0},\ldots , \theta _{N-1}) \in {\mathbb {R}}^{N \times N}\), while (35) corresponds to the choice \({\mathsf {P}}=0 \in {\mathbb {R}}^{N\times N}\), \({\mathsf {Q}}=diag(\beta _{0},\ldots ,\beta _{N-1}) \in {\mathbb {R}}^{N \times N}\), \({\mathsf {R}}=diag(\theta _{0},\ldots , \theta _{N-1}) \in {\mathbb {R}}^{N \times N}\); hence both can be solved using the general theory of Sect. 5, in terms of the Riccati equation. The Riccati equation (21) for problem (34) becomes
$$\begin{aligned} {\mathsf {H}}_{a} {\mathsf {A}} + {\mathsf {A}}^{*} {\mathsf {H}}_{a}+ \frac{1}{2}{\mathsf {H}}_{a} {\mathsf {E}}_a {\mathsf {H}}_{a} - r {\mathsf {H}}_{a} =0, \end{aligned}$$
where \({\mathsf {E}}^{sym}_{a}\) is the symmetric part of \(\bar{{\mathsf {E}}}_{a}:=\frac{1}{\theta } {\mathsf {C}} {\mathsf {R}} ^{-1}{\mathsf {C}}^{*}-\bar{{\mathsf {B}}}\bar{ {\mathsf {Q}} }^{-1}\bar{{\mathsf {B}}}^{*} \), while for problem (35)
$$\begin{aligned} {\mathsf {H}}_{b} {\mathsf {A}} + {\mathsf {A}}^{*} {\mathsf {H}}_{b}+ {\mathsf {H}}_{b} {\mathsf {E}}_b {\mathsf {H}}_{b} - r {\mathsf {H}}_{b} =0, \end{aligned}$$
where \({\mathsf {E}}^{sym}_{b}\) is the symmetric part of \(\bar{{\mathsf {E}}}_{b}:=\frac{1}{\theta } {\mathsf {C}} {\mathsf {R}} ^{-1}{\mathsf {C}}^{*}-{\mathsf {B}} {\mathsf {Q}} ^{-1}{\mathsf {B}}^{*} \), and the optimal controls are
$$\begin{aligned} U_{a}= -\bar{{\mathsf {Q}}} ^{-1}\bar{{\mathsf {B}}}^{*} {\mathsf {H}}_{a} x_{a}, \,\,\,\,\, v_{a}=\frac{1}{\theta } {\mathsf {R}} ^{-1} {\mathsf {C}}^{*} {\mathsf {H}}_{a} x_{a}, \nonumber \\ dx_{a}=({\mathsf {A}}+ \bar{{\mathsf {E}}}_{a} {\mathsf {H}}_{a}) x_{a} dt + {\mathsf {C}} dw, \end{aligned}$$
$$\begin{aligned} u_{b}= -{\mathsf {Q}} ^{-1}{\mathsf {B}}^{*} {\mathsf {H}}_{b} x_{b}, \,\,\,\,\, v_{b}=\frac{1}{\theta } {\mathsf {R}} ^{-1} {\mathsf {C}}^{*} {\mathsf {H}}_{b} x_{b}, \nonumber \\ dx_{b}=({\mathsf {A}}+ \bar{{\mathsf {E}}}_{b} {\mathsf {H}}_{b}) x_{b} dt + {\mathsf {C}} dw. \end{aligned}$$
This allows us to obtain the prices at each site \(m\) using the market clearing conditions. For the first model, by using \(U\) we may obtain total consumption at any site as \(u_{a}={\mathbf 1} U_{a} = -{\mathbf 1} \bar{{\mathsf {Q}}} ^{-1}\bar{{\mathsf {B}}}^{*} {\mathsf {H}}_{a}x_{a}\), so that the price vector is given by
$$\begin{aligned} p_{a}=-W \bar{{\mathsf {Q}}} ^{-1}\bar{{\mathsf {B}}}^{*} {\mathsf {H}}_{a}x_{a} + W_{0} \end{aligned}$$
where the diagonal matrices \(W\) and \(W_{0}\) depend on \(B_{0m}\) and \(B_{1m}\). In the second case we obtain the total consumption \(u_{b}\) directly from the solution of the robust control problem, and the price vector is given in terms of the market clearing conditions as
$$\begin{aligned} p_{b}=-W {\mathsf {Q}} ^{-1}{\mathsf {B}}^{*} {\mathsf {H}}_{b} x_{b} + W_{0}. \end{aligned}$$
Then, the individual consumption \(u_{nm}\) is recovered by \(u_{nm}=b_{nm}-\frac{1}{2\beta _{nm}}p_{b,m}\), \(n,m \in {\mathbb {Z}}_{N}\). In both cases the prices are subject to spatiotemporal fluctuations, introduced through their explicit dependence on the optimal paths \(x_{a}\) and \(x_{b}\), respectively.
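To make the structure of these formulas concrete, consider the one-site (scalar) case of problem (35), where the Riccati equation reduces to a quadratic with an explicit nonzero root. All parameter values below are hypothetical, chosen only for illustration:

```python
# One-site sketch of problem (35): with N = 1 the Riccati equation
# H A + A* H + H E H - r H = 0 reduces to 2 a H + E H^2 - r H = 0,
# whose nonzero root is H = (r - 2a) / E.  Parameters are hypothetical.
a, r = 0.2, 0.05             # drift and discount rate
q, c, theta = 1.0, 0.3, 2.0  # utility weight, noise loading, robustness weight
B = -1.0                     # B = -I in the scalar case

E = c**2 / theta**2 - B * (1.0 / q) * B   # (1/theta) C R^{-1} C* - B Q^{-1} B*
H = (r - 2 * a) / E                       # nonzero root of the scalar Riccati

residual = 2 * a * H + E * H**2 - r * H   # should vanish at the root
u_gain = -(1.0 / q) * B * H               # feedback gain in u_b = -Q^{-1} B* H x_b
closed_loop = a + E * H                   # drift of dx_b; equals r - a here
```

A small identity falls out of the scalar case: the closed loop drift is \(a+EH=a+(r-2a)=r-a\), so the controlled state is mean-reverting whenever \(a>r\).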

Problem (35) in the special case \(\beta _{n}=\beta \) and \(\theta _{n}=\theta \) can be treated in Fourier space, and an exact solution can be obtained. In this case \(M_{k}=-\frac{A_{k}}{\Lambda _{k}}\) and \(R_{k}=\hat{a}_{k}\), so that robust control has no effect on the possibility of formation of hot spots of type II. On the other hand, hot spots of type I are expected to appear in the limit of small \(\theta \).
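The Fourier-space treatment rests on the fact that translation invariant (circulant) matrices on the ring are diagonalized by the discrete Fourier transform, so each mode \(k\) decouples. A small numerical check with an illustrative kernel:

```python
import numpy as np

N = 8
# A translation invariant (circulant) matrix on the ring is diagonalized by
# the DFT: its eigenvalues are the DFT of its first row.  The kernel values
# below are illustrative, not taken from the paper.
first_row = np.array([-1.0, 0.3, 0.1, 0.0, 0.0, 0.0, 0.1, 0.3])
A = np.array([[first_row[(m - n) % N] for m in range(N)] for n in range(N)])

A_hat = np.fft.fft(first_row)     # Fourier multipliers; real here since the
                                  # kernel is symmetric on the ring
eig = np.linalg.eigvalsh(A)       # eigenvalues of the symmetric matrix A
```

The sorted Fourier multipliers coincide with the eigenvalues of the full matrix, which is what lets each spatial mode be controlled independently in the translation invariant case.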

7.3 Hot Spot Interpretation

The model presented in this section may, for appropriate parameter values, allow for the generation of hot spots of types I and II according to the general theory developed here. The following economic interpretation of these hot spots is possible:
  • \(\triangleright \) Hot spot of type I: Regulation breaks down for small \(\theta \). This means that, because the regulator has very strong concerns about possible model misspecifications at specific site(s), the regulator cannot set up markets for consumption of in situ services where the supplied quantities satisfy the regulator's criterion.

  • \(\triangleright \) Hot spot of type II: The regulator, due to misspecification concerns, allows a nonhomogeneous spatial pattern of the stocks to emerge. There exists a system of local prices that supports the spatial pattern.

The parameter \(\theta _{n}\), expressing misspecification concerns at site \(n\), can, for certain problems, be related to the physical characteristics of the site.16 Thus, if a hot spot is emerging from a given site, this might signal the need for additional scientific evidence that might reduce the maximum misspecification error, and thus the entropy constraint \(H_{n}\). Reduction of the entropy constraint will increase \(\theta _{n}\) and prevent the emergence of a hot spot.

8 Concluding Remarks

We study robust control methods in a spatial domain where explicit spatial interactions are modeled by kernels and where concerns about model misspecification may differ across locations. We analyze linear quadratic problems. We derive closed form solutions for translation invariant systems, but we also extend our results to general nontranslation invariant linear quadratic problems as well as to fully nonlinear systems. We show that misspecification concerns about specific sites could induce the emergence of hot spots which cause regulation to break down for the whole spatial domain. We also identify conditions for two more types of hot spots, where location specific concerns could induce the emergence of spatial patterns, or could render regulation very costly. We apply our methods to a problem of regulating in situ consumption when consumers are characterized by distance-dependent utility. We examine the emergence of local markets for in situ consumption and cases where location specific concerns could break down regulation for the whole area, or could induce spatial clustering.

Our results provide tools for studying optimal regulation of spatially interconnected systems when there are concerns about the specification of the model describing the local processes that govern the evolution of the system's states. Given the increasing interconnections and the localized uncertainties in the real world, our approach could be appropriate for a wide class of economic problems characterized by connectivity (not necessarily spatial, since connectivity can be defined with respect to other attributes) and by local uncertainties.


  1.

    See also [22, 23, 25, 37].

  2.

    In the context of automatic control systems with spatially distributed parameter aspects see, for example, [7, 15] for the control of infinite platoons of vehicles over time.

  3.

    Although we choose to interpret the characteristics associated with the distributed parameter aspect as physical space, the notion of “space” does not have to be physical. It can be used to model characteristics that are associated with economic, sociological, cultural, or other factors. Since the notion of “space” may be broadly interpreted, our methods can be used for the analysis of a wide range of problems.

  4.

    This choice is for simplicity of presentation. Most of the arguments and results presented here can be extended to infinite dimensional systems, admittedly with considerable technical effort employing techniques beyond the scope of the present paper, or, when explicitly stated, to a weighted version of this space.

  5.

    The generalization to vector valued state and control variables \(x_{n}\in {\mathbb {R}}^{d_{1}}\) and \(u_{n}\in {\mathbb {R}}^{d_{2}}\) on each site \(n\in {\mathbb {Z}}\) requires the use of the sequence spaces \(\ell ^{2}({\mathbb {R}}^{d_{i}})\), \(i=1,2\), rather than \(\ell ^{2}:=\ell ^{2}({\mathbb {R}})\), and is straightforward. Furthermore, the generalization to infinite dimensional lattices is feasible but becomes technical from the mathematical point of view and is beyond the scope of the present paper.

  6.

    There is uncertainty concerning the economy, which is represented in terms of the vector valued stochastic process \(w\). These common factors affect the state of the economy \(x\) at the different sites. Each factor has a different effect on the state of the economy at each particular site; this will be modeled by a suitable correlation matrix. It is of course not necessary that the number of factors equals the number of sites in the system; however, without loss of generality we will assume that there is one factor or source of uncertainty related to each site. This assumption can easily be relaxed.

  7.

    In the limiting case where \(v=\{v_{n}\}\) is a constant vector this leads to \({\mathbb {Q}}( w(t) \in A)=\int _{A} (2 \pi t)^{-\frac{N}{2}} \exp \left( -\frac{||x-v t||^2}{2 t} \right) dx\), i.e., \(w\) is distributed according to the normal law \({\mathcal {N}}(v \, t,{\mathsf {I}} \,t)\).

  8.

    Girsanov's theorem is a very powerful result in stochastic analysis describing how the law of a Wiener process is transformed when viewed under an alternative probability measure; it is essentially a change of drift argument, finding important applications in mathematical finance, stochastic control theory, mathematical economics, etc. The validity of the theorem requires some technical assumptions: (a) the filtration \(\{\mathcal{F}_{t}\}_{t \in [0,T]}\) satisfies what are often quoted as the usual conditions, i.e., right continuity and the property that \(\mathcal{F}_{0}\) contains all the null sets of \({\mathbb {P}}\); and (b) the information drift process \(v\) is such that the stochastic exponential \(\mathcal{E}_{t}(v)\) (defined through (2) by setting \(T=t\)) is a martingale, rather than just a local martingale. The Novikov condition is a general condition that guarantees (b). For an excellent, clear, and detailed exposition of Girsanov's theorem see [27].

  9.

    A matrix \({\mathsf {A}}\) is translation invariant if \({\mathsf {A}}{\mathsf {T}}_{k} = {\mathsf {T}}_{k} {\mathsf {A}}\), where \({\mathsf {T}}_{k}\) is the translation by \(k\) on \(\ell ^{2}({\mathbb {Z}}_{N}) \simeq {\mathbb {R}}^{N}\).

  10.

    That correspond to the first column of the matrices \({\mathsf {A}}\), \({\mathsf {B}}\), and \({\mathsf {C}}\).

  11.

    One might consider as a breakdown of solutions the absence of positive real roots, which would correspond to a convex value function for the game (see Remark 2). However, the absence of any real roots corresponds to an even worse situation, which implies that the HJBI equation is ill posed and admits no solutions at all, at least of the quadratic type.

  12.

    The Haar measure is a generalization of the Lebesgue measure, which is invariant under the symmetry group.

  13.

    The general form \({\mathsf {F}} (x)=(f_{1}(x),f_{2}(x),\ldots ) \), where at each site the nonlinear effects depend on the state of the system at all lattice sites, is easily handled within the framework presented here.

  14.

    In this particular case \({\mathbb {H}}=\ell ^{2}:=\ell ^{2}({\mathbb {Z}}_{N})\simeq {\mathbb {R}}^{N}\), and by the finite dimensionality of the Hilbert space involved \({\mathbb {H}}={\mathbb {H}}^{*}\).

  15.

    The full nonlinear model can be considered in the context of Sect. 6 with similar qualitative results. The linear approach provides, however, better tractability.

  16.

    See, for example, [5] for calibrating the parameter \(\theta \) using scientific information related to climate change.



This research has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program “Education and Lifelong Learning” of the National Strategic Reference Framework (NSRF) - Research Funding Program: “Thalis - Athens University of Economics and Business - Optimal Management of Dynamical Systems of the Economy and the Environment: The Use of Complex Adaptive Systems”. W. Brock is grateful for financial and scientific support received from The Center for Robust Decision Making on Climate and Energy Policy (RDCEP) which is funded by a grant from the National Science Foundation (NSF) through the Decision Making Under Uncertainty (DMUU) program. He is also grateful to the Vilas Trust for financial support. None of the above is responsible for any errors, opinions, or shortcomings in this article. The authors are indebted to one of the anonymous referees whose thorough and critical reading of the manuscript and insightful comments highly contributed to the improvement of the current version of this work.


  1. Akamatsu T, Takayama Y, Ikeda K (2012) Spatial discounting, Fourier, and racetrack economy: a recipe for the analysis of spatial agglomeration models. J Econ Dyn Control 36(11):1729–1759
  2. Albeverio S, Makarov K, Motovilov A (2003) Graph subspaces and the spectral shift function. Canad J Math 55(3):449–503
  3. Anderson E, Hansen L, Sargent T (2003) A quartet of semigroups for model specification, robustness, prices of risk, and model detection. J Eur Econ Assoc 1(1):68–123
  4. Armaou A, Christofides PD (2001) Robust control of parabolic PDE systems with time-dependent spatial domains. Automatica 37(1):61–69
  5. Athanassoglou S, Xepapadeas A (2011) Pollution control: when, and how, to be precautious. Fondazione Eni Enrico Mattei working papers, 569
  6. Aubin J, Ekeland I (1984) Applied nonlinear analysis. Wiley, New York
  7. Bamieh B, Paganini F, Dahleh M (2002) Distributed control of spatially invariant systems. IEEE Trans Automat Contr 47(7):1091–1107
  8. Başar T, Bernhard P (2008) H\(^{\infty }\)-optimal control and related minimax design problems: a dynamic game approach. Birkhäuser, Boston
  9. Bensoussan A, Da Prato G, Delfour M, Mitter S (1992) Representation and control of infinite dimensional systems, vol 2 (Systems & Control: Foundations & Applications). Birkhäuser, Boston
  10. Boucekkine R, Camacho C, Zou B (2009) Bridging the gap between growth theory and the new economic geography: the spatial Ramsey model. Macroecon Dyn 13:20–45
  11. Brito P (2004) The dynamics of growth and distribution in a spatially heterogeneous world. Instituto Superior de Economia e Gestão - DE working papers n\(^\circ \) 14-2004/DE/UECE
  12. Brock W, Xepapadeas A (2008) Diffusion-induced instability and pattern formation in infinite horizon recursive optimal control. J Econ Dyn Control 32(9):2745–2787
  13. Brock W, Xepapadeas A (2010) Pattern formation, spatial externalities and regulation in coupled economic-ecological systems. J Environ Econ Manage 59:149–164
  14. Cerrai S (2001) Second order PDE’s in finite and infinite dimension: a probabilistic approach. Springer, Berlin
  15. Curtain R, Iftime O, Zwart H (2008) System theoretic properties of platoon-type systems. In: 47th IEEE Conference on Decision and Control (CDC 2008), pp 1442–1447. IEEE
  16. Da Prato G (2002) Linear quadratic control theory for infinite dimensional systems. In: Mathematical control theory. ICTP Lecture Notes, vol 8, pp 59–105
  17. Da Prato G, Zabczyk J (2002) Second order partial differential equations in Hilbert spaces. London Mathematical Society Lecture Notes. Cambridge University Press, Cambridge
  18. Desmet K, Rossi-Hansberg E (2010) On spatial dynamics. J Reg Sci 50(1):43–63
  19. Gilboa I, Schmeidler D (1989) Maxmin expected utility with non-unique prior. J Math Econ 18:141–153
  20. Haldane A (2009) Rethinking the financial network. Speech delivered at the Financial Student Association, Amsterdam
  21. Hansen L, Sargent T (2001) Robust control and model uncertainty. Am Econ Rev 91:60–66
  22. Hansen L, Sargent T (2008) Robustness in economic dynamics. Princeton University Press, Princeton
  23. Hansen L, Sargent T, Turmuhambetova G, Williams N (2006) Robust control and model misspecification. J Econ Theory 128:45–90
  24. Isaacs R (1965) Differential games: a mathematical theory with applications to warfare and pursuit, control and optimization. Wiley, New York
  25. JET (2006) Symposium on model uncertainty and robustness. J Econ Theory 128:1–163
  26. Johnson CR (1989) A Geršgorin-type lower bound for the smallest singular value. Linear Algebra Appl 112:1–7
  27. Karatzas I, Shreve S (1991) Brownian motion and stochastic calculus. Springer, New York
  28. Karatzas I, Shreve SE (1998) Methods of mathematical finance. Springer, New York
  29. Krugman P (1996) The self-organizing economy. Blackwell, Oxford
  30. Leizarowitz A (2008) Turnpike properties of a class of aquifer control problems. Automatica 44(6):1460–1470
  31. Ljungqvist L, Sargent T (2004) Recursive macroeconomic theory. The MIT Press, Cambridge
  32. Magill M (1977) A local analysis of n-sector capital accumulation under uncertainty. J Econ Theory 15(1):211–219
  33. McMillan C, Triggiani R (1994) Min-max game theory and algebraic Riccati equations for boundary control problems with continuous input-solution map. Part II: the general case. Appl Math Optim 29(1):1–65
  34. Murray J (2003) Mathematical biology, vols I and II, 3rd edn. Springer, New York
  35. Perrings C, Hannon B (2001) An introduction to spatial discounting. J Reg Sci 41(1):23–38
  36. Rudin W (1990) Fourier analysis on groups. Wiley-Interscience, New York
  37. Salmon M (2002) Special issue on robust and risk sensitive decision theory. Macroecon Dyn 6(1):19–39
  38. Sanchirico J, Wilen J (1999) Bioeconomics of spatial exploitation in a patchy environment. J Environ Econ Manage 37(2):129–150
  39. Smith M, Sanchirico J, Wilen J (2009) The economics of spatial-dynamic processes: applications to renewable resources. J Environ Econ Manage 57:104–121
  40. Smith T (1975) An axiomatic theory of spatial discounting behavior. Pap Reg Sci 35(1):31–44
  41. Smith T (1976) Spatial discounting and the gravity hypothesis. Reg Sci Urban Econ 6(4):331–356
  42. Turing A (1952) The chemical basis of morphogenesis. Philos Trans R Soc Lond 237:37–72
  43. Whittle P (1996) Optimal control: basics and beyond. Wiley, New York
  44. Wilen J (2007) Economics of spatial-dynamic processes. Am J Agric Econ 89(5):1134–1144
  45. Wong M (2011) Discrete Fourier analysis. Birkhäuser, Boston
  46. Wu J, Plantinga A (2003) The influence of public open space on urban spatial structure. J Environ Econ Manage 46(2):288–309
  47. Zhu Y, Pagilla PR (2005) A note on the necessary conditions for the algebraic Riccati equation. IMA J Math Control Inf 22(2):181–186

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  • W. A. Brock (1, 2)
  • A. Xepapadeas (3)
  • A. N. Yannacopoulos (4)
  1. Department of Economics, University of Wisconsin, Madison, USA
  2. Department of Economics, University of Missouri, Columbia, USA
  3. Department of International and European Economic Studies, Athens University of Economics and Business, Athens, Greece
  4. Department of Statistics, Athens University of Economics and Business, Athens, Greece
