## Introduction

As a collection of mathematical models of conflict and cooperation between rational decision makers, game theory [1] has proven useful in many fields such as economics, management science, and sociology. Depending on whether the players move simultaneously or over a number of time periods, games fall into two categories: static games and dynamic games. The extensive game [1, 2], which gives an explicit description of the sequential structure of game actions, is essentially a discrete dynamic game. When the system state is continuous over time and the system dynamics can be described by a differential equation, the dynamic game evolves into the differential game.

The origin of the differential game can be traced to the pioneering work of Rufus Isaacs, who modeled missile-versus-enemy-aircraft pursuit schemes in terms of state and control variables. His famous book Differential Games [3] marked the birth of the field. Soon afterwards, much further work sprang up in the field of differential games. In 1964, Berkovitz [4] proposed a variational approach to differential games. In 1966, Pontryagin [5] solved the differential game in open-loop form by means of the maximum principle. In 1967, Leitmann and Mon [6] investigated some geometric aspects of differential games. In 1971, Friedman [7] introduced discrete approximation sequence methods to establish the value of a differential game and the existence of saddle points. His work laid a solid mathematical foundation for differential game theory.

As an effective means of representing conflict situations, differential game theory is widely applied, particularly in the area of military confrontations. However, in real-world applications, the state evolution is often affected by the interference of noise. The noise may be added to the players’ observations of the system state or to the state equation itself. There exist two mathematical systems for modeling noise. One is probability theory (Kolmogorov [8]), and the other is uncertainty theory (Liu [9]). Probability is interpreted as a frequency and requires enough historical data for probabilistic reasoning, while uncertainty is interpreted as a personal belief degree obtained from domain experts when samples are lacking.

When noise is modeled by the Wiener process and the system evolution can be described by a stochastic differential equation, the differential game evolves into the stochastic differential game. Thanks to Fleming’s [10] important work in stochastic control, a framework of stochastic differential games was proposed for analyzing differential games in stochastic situations. Since then, stochastic differential games have been discussed by many researchers. For instance, Basar [11] considered quadratic stochastic differential games. Clemhout and Wan [12] studied dynamic common-property resource and environmental problems. Kaitala [13] gave equilibrium solutions in stochastic resource management games under imperfect information. Jørgensen and Yeung [14] investigated a common-property fishery problem in a stochastic differential game model.

Sometimes, no samples are available to estimate the probability distribution. In such situations, we have no choice but to invite some domain experts to evaluate the belief degree that each event will occur. Perhaps some people would like to regard the belief degree as subjective probability or as a fuzzy concept, but this is usually inappropriate because both probability theory and fuzzy set theory may lead to counter-intuitive results in this case (see [15]). In order to deal rationally with belief degrees, uncertainty theory was founded in 2007 by Liu [9] and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling human uncertainty.

In 2008, Liu [16] proposed the concept of an uncertain process for describing dynamic uncertain systems. Moreover, Liu [17] designed a Lipschitz continuous uncertain process with stationary and independent normal uncertain increments, which is now called the canonical Liu process. Chen and Ralescu [18] defined and discussed the Liu process and uncertain calculus. When the noise in the evolution of the system state can be described by the Liu process, Liu [16] introduced the uncertain differential equation. Chen and Liu [19] gave an existence and uniqueness theorem for an uncertain differential equation under linear growth and Lipschitz continuity conditions. Yao and Chen [20] proposed a numerical method to solve uncertain differential equations. Besides, Yao [21] studied the integral of the solution to an uncertain differential equation. As an application of uncertain differential equations, Liu [22] presented a paradox of stochastic finance theory showing that a real stock price cannot follow any Itô stochastic differential equation, and suggested a new uncertain finance theory described by an uncertain differential equation. Based on uncertainty theory and uncertain differential equations, Zhu [23] introduced uncertain optimal control and gave Zhu’s equation of optimality by means of uncertain differential equations. Building on this pioneering work, this paper aims to initiate the study of uncertain differential games in which the system dynamics are described by an uncertain differential equation.

The rest of this paper is organized as follows: Firstly, we briefly review some basic results of uncertainty theory in the section ‘Preliminaries.’ Secondly, we use an uncertain differential equation to describe the system dynamics of a differential game in the section ‘Uncertain differential games’ and present an uncertain differential game model. Moreover, we define feedback Nash equilibrium strategies as the solution of the uncertain differential game and establish a sufficient condition for a feedback Nash equilibrium. Thirdly, we apply the uncertain differential game to capitalism in the section ‘An uncertain differential game of capitalism.’ In this game, the government taxes less than the full amount of the payoff accrued to the firm, and the firm maintains a positive rate of investment. By comparison with the deterministic case, we show that the uncertain differential game is an efficient means of solving the problem of capitalism.

## Preliminaries

In this section, we will introduce some basic results in uncertainty theory.

Definition 1. (Liu [9]) Let ℒ be a σ-algebra on a nonempty set Γ. A set function ℳ : ℒ → [ 0,1] is called an uncertain measure if it satisfies the following axioms:

Axiom 1. (Normality Axiom) ℳ {Γ} = 1 for the universal set Γ.

Axiom 2. (Duality Axiom) ℳ {Λ} + ℳ {Λc} = 1 for any event Λ.

Axiom 3. (Subadditivity Axiom) For every countable sequence of events Λ 1,Λ 2, …, we have

$\text{ℳ}\left\{\bigcup _{i=1}^{\infty }{\Lambda }_{i}\right\}\le \sum _{i=1}^{\infty }\text{ℳ}\left\{{\Lambda }_{i}\right\}.$

Besides, in order to provide the operational law, Liu [17] defined the product uncertain measure on the product σ-algebra ℒ as follows.

Axiom 4. (Product Axiom) Let (Γ k ,ℒ k ,ℳ k ) be uncertainty spaces for k = 1,2,…. The product uncertain measure ℳ is an uncertain measure satisfying

$\text{ℳ}\left\{\prod _{k=1}^{\infty }{\Lambda }_{k}\right\}=\underset{k=1}{\overset{\infty }{\wedge }}{\text{ℳ}}_{k}\left\{{\Lambda }_{k}\right\},$

where Λ k are arbitrarily chosen events from ℒ k for k = 1,2,…, respectively.

Definition 2. (Liu [9]) An uncertain variable is a function from an uncertainty space (Γ,ℒ,ℳ) to the set of real numbers such that for any Borel set B of real numbers, the set

$\left\{\xi \in B\right\}=\left\{\gamma \in \Gamma |\xi \left(\gamma \right)\in B\right\}$

is an event.

In order to describe an uncertain variable in practice, the uncertainty distribution Φ:ℜ→[ 0,1] of an uncertain variable ξ is defined as

$\Phi \left(x\right)=\text{ℳ}\left\{\xi \le x\right\}.$

An uncertainty distribution Φ (·) is said to be regular if its inverse function Φ-1 (α) exists and is unique for each α ∈ (0,1), and Φ-1 (·) is called the inverse uncertainty distribution of ξ.

Definition 3. (Liu [17]) The uncertain variables ξ 1,ξ 2,…,ξ m  are said to be independent if

$\text{ℳ}\left\{\bigcap _{i=1}^{m}\left\{{\xi }_{i}\in {B}_{i}\right\}\right\}=\underset{i=1}{\overset{m}{\wedge }}\text{ℳ}\left\{{\xi }_{i}\in {B}_{i}\right\}$

for any Borel sets B 1,B 2,…,B m  of real numbers.

Definition 4. (Liu [9]) Let ξ be an uncertain variable. Then, the expected value of ξ is defined by

$\mathrm{E}\left[\phantom{\rule{0.3em}{0ex}}\xi \right]=\underset{0}{\overset{+\infty }{\int }}\text{ℳ}\left\{\xi \ge r\right\}\mathrm{d}r-\underset{-\infty }{\overset{0}{\int }}\text{ℳ}\left\{\xi \le r\right\}\mathrm{d}r$

provided that at least one of the two integrals is finite.

If ξ is a regular uncertain variable with uncertainty distribution Φ (x), then the expected value may be calculated by

$\mathrm{E}\left[\xi \right]=\underset{0}{\overset{+\infty }{\int }}\left(1-\Phi \left(x\right)\right)\mathrm{d}x-\underset{-\infty }{\overset{0}{\int }}\Phi \left(x\right)\mathrm{d}x=\underset{0}{\overset{1}{\int }}{\Phi }^{-1}\left(\alpha \right)\mathrm{d}\alpha .$
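For instance, the normal uncertain variable N(e,σ) (whose distribution reappears in Definition 8 below) has inverse uncertainty distribution Φ-1 (α) = e + (σ√3/π) ln(α/(1 - α)), so integrating Φ-1 over (0,1) should recover E[ ξ] = e. A minimal numerical sketch (Python; the values e = 2 and σ = 1 are illustrative choices of ours):

```python
import math

def inv_normal(alpha, e=2.0, sigma=1.0):
    """Inverse uncertainty distribution of a normal uncertain variable N(e, sigma)."""
    return e + (sigma * math.sqrt(3) / math.pi) * math.log(alpha / (1 - alpha))

# E[xi] = integral over (0,1) of Phi^{-1}(alpha), approximated by a midpoint rule
n = 100000
expected = sum(inv_normal((i + 0.5) / n) for i in range(n)) / n
print(round(expected, 6))  # ≈ 2.0, i.e., the expected value e is recovered
```

The midpoint rule is exact here up to floating-point error because the logarithmic term is antisymmetric about α = 1/2.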

Based on the uncertainty space, Liu introduced the concepts of uncertain process, canonical Liu process, uncertain differential equation, etc.

Definition 5. (Liu [16]) Let T be an index set and let (Γ,ℒ,ℳ) be an uncertainty space. An uncertain process is a measurable function from T × (Γ,ℒ,ℳ) to the set of real numbers, i.e., for each t ∈ T and any Borel set B,

$\left\{{X}_{t}\in B\right\}=\left\{\gamma \in \Gamma |{X}_{t}\left(\gamma \right)\in B\right\}$

is an event.

Definition 6. (Liu [16]) An uncertain process X t  is said to have independent increments if

${X}_{{t}_{0}},\phantom{\rule{0.3em}{0ex}}{X}_{{t}_{1}}-{X}_{{t}_{0}},\phantom{\rule{0.3em}{0ex}}{X}_{{t}_{2}}-{X}_{{t}_{1}},\phantom{\rule{0.3em}{0ex}}\dots ,\phantom{\rule{0.3em}{0ex}}{X}_{{t}_{k}}-{X}_{{t}_{k-1}}$

are independent uncertain variables where t 0 is the initial time and t 1,t 2,…,t k  are any times with t 0 < t 1 < … < t k .

Definition 7. (Liu [16]) An uncertain process X t  is said to have stationary increments if, for any given t > 0, the increments X s+t - X s  are identically distributed uncertain variables for all s > 0.

Definition 8. (Liu [17]) An uncertain process C t  is said to be a canonical Liu process if

1. (i)

C 0 = 0 and almost all sample paths are Lipschitz continuous.

2. (ii)

C t  has stationary and independent increments.

3. (iii)

Every increment C s+t  - C s  is a normal uncertain variable with expected value 0 and variance t 2, whose uncertainty distribution is

$\Phi \left(x\right)={\left(1+\text{exp}\left(\frac{-\pi x}{\sqrt{3}t}\right)\right)}^{-1},\phantom{\rule{1em}{0ex}}x\in \mathfrak{R.}$

Definition 9. (Chen and Ralescu [18]) Let C t be a canonical Liu process and let Z t be an uncertain process. If there exist two uncertain processes μ t and σ t such that

${Z}_{t}={Z}_{0}+\underset{0}{\overset{t}{\int }}{\mu }_{s}\mathrm{d}s+\underset{0}{\overset{t}{\int }}{\sigma }_{s}\mathrm{d}{C}_{s}$

for any t ≥ 0, then Z t is called a Liu process with drift μ t and diffusion σ t . Furthermore, Z t has an uncertain differential

$\mathrm{d}{Z}_{t}={\mu }_{t}\mathrm{d}t+{\sigma }_{t}\mathrm{d}{C}_{t}.$

Theorem 1. (Liu [17]) (Fundamental Theorem of Uncertain Calculus) Let h(t,c) be a continuously differentiable function. Then, Z t  = h(t,C t ) is a Liu process and has an uncertain differential

$\mathrm{d}{Z}_{t}=\frac{\partial h}{\partial t}\left(t,{C}_{t}\right)\mathrm{d}t+\frac{\partial h}{\partial c}\left(t,{C}_{t}\right)\mathrm{d}{C}_{t}.$

Based on the Liu process, the following concept of uncertain differential equation was introduced by Liu [16].

Definition 10. (Liu [16]) Suppose C t  is a canonical Liu process, and f and g are some given functions. Then,

$\mathrm{d}{X}_{t}=f\left(t,{X}_{t}\right)\mathrm{d}t+g\left(t,{X}_{t}\right)\mathrm{d}{C}_{t}$
(1)

is called an uncertain differential equation. A solution is a Liu process X t  that satisfies (1) identically in t.

Chen and Liu [19] presented an existence and uniqueness theorem for an uncertain differential equation.

Theorem (Chen and Liu [19]) (Existence and Uniqueness Theorem) The uncertain differential equation

$\mathrm{d}{X}_{t}=f\left(t,{X}_{t}\right)\mathrm{d}t+g\left(t,{X}_{t}\right)\mathrm{d}{C}_{t}$

has a unique solution if the coefficients f (t,x) and g (t,x) satisfy the linear growth condition

$|\phantom{\rule{0.3em}{0ex}}f\left(t,x\right)|+|g\left(t,x\right)|\le L\left(1+|x|\right),\phantom{\rule{1em}{0ex}}\forall x\in \Re ,t\ge 0$

and Lipschitz condition

$|\phantom{\rule{0.3em}{0ex}}f\left(t,x\right)-f\left(t,y\right)|+|g\left(t,x\right)-g\left(t,y\right)|\le L|x-y|,\phantom{\rule{1em}{0ex}}\forall x,y\in \Re ,t\ge 0$

for some constant L. Moreover, the solution is sample continuous.
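The numerical method of Yao and Chen [20] mentioned in the introduction solves an uncertain differential equation through its α-paths: for each α ∈ (0,1), the α-path is the solution of the ordinary differential equation dX t α = f(t,X t α)dt + |g(t,X t α)| Φ-1 (α)dt, where Φ-1 (α) = (√3/π) ln(α/(1 - α)), and ℳ{X t ≤ X t α} = α. A sketch under these assumptions (Python; the Euler step size and the linear test equation dX t = μX t dt + σX t dC t with μ = 0.1, σ = 0.2 are our illustrative choices):

```python
import math

def alpha_path(f, g, x0, alpha, T=1.0, steps=10000):
    """Euler scheme for the alpha-path ODE dX = [f(t,X) + |g(t,X)| * Phi_inv(alpha)] dt."""
    phi_inv = (math.sqrt(3) / math.pi) * math.log(alpha / (1 - alpha))
    dt = T / steps
    x, t = x0, 0.0
    for _ in range(steps):
        x += (f(t, x) + abs(g(t, x)) * phi_inv) * dt
        t += dt
    return x

mu, sigma = 0.1, 0.2
f = lambda t, x: mu * x
g = lambda t, x: sigma * x

# For this linear equation with x0 > 0 the alpha-path is known in closed form:
# X_T^alpha = x0 * exp((mu + sigma * Phi_inv(alpha)) * T)
x_num = alpha_path(f, g, 1.0, 0.8)
phi_inv = (math.sqrt(3) / math.pi) * math.log(0.8 / 0.2)
x_exact = math.exp((mu + sigma * phi_inv) * 1.0)
print(x_num, x_exact)  # the Euler value agrees with the closed form
```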

By means of uncertain differential equation, Zhu [23] introduced uncertain optimal control and gave an equation of optimality.

Definition 12. (Zhu [23]) Assume that R is the return function and W is the function of terminal reward. If we want to maximize the expected return on [ 0,T] using an optimal control, then we have the following optimal control model:

$\left\{\begin{array}{l}J\left(0,{x}_{0}\right)=\underset{u}{\text{sup}}E\left[\underset{0}{\overset{T}{\int }}R\left(t,x\left(t\right),u\left(t\right)\right)\mathrm{d}t+W\left(T,x\left(T\right)\right)\right]\\ \text{subject to}\\ \phantom{\rule{2em}{0ex}}\mathrm{d}x\left(t\right)=f\left(t,x\left(t\right),u\left(t\right)\right)\mathrm{d}t+g\left(t,x\left(t\right),u\left(t\right)\right)\mathrm{d}{C}_{t}\\ \phantom{\rule{2em}{0ex}}x\left(0\right)={x}_{0}.\end{array}\right.$

In order to find the optimal control, we write

$J\left(t,x\right)=\underset{u}{\text{sup}}E\left[\underset{t}{\overset{T}{\int }}R\left(s,x\left(s\right),u\left(s\right)\right)\mathrm{d}s+W\left(T,x\left(T\right)\right)\right]$

where t ∈ [ 0,T] and x (t) = x.

Theorem 2. (Zhu [23]) (Zhu’s Equation of Optimality) If J(t,x) is twice differentiable on [ 0,T] × ℜ, then we have

$-{J}_{t}\left(t,x\right)=\underset{u}{\text{sup}}\left\{R\left(t,x,u\right)+{J}_{x}\left(t,x\right)f\left(t,x,u\right)\right\},$

where J t (t,x) and J x (t,x) are the partial derivatives of J (t,x) with respect to t and x, respectively. Note that the boundary condition is

$J\left(T,x\right)=W\left(T,x\right).$
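To illustrate how Theorem 2 is applied, consider a simple one-player example of our own construction (not from [23]): maximize $E\left[\underset{t}{\overset{T}{\int }}\left(x\left(s\right)-u{\left(s\right)}^{2}\right)\mathrm{d}s\right]$ subject to $\mathrm{d}x\left(s\right)=u\left(s\right)\mathrm{d}s+\sigma \mathrm{d}{C}_{s}$ with zero terminal reward. Zhu’s equation of optimality reads

$-{J}_{t}\left(t,x\right)=\underset{u}{\text{sup}}\left\{x-{u}^{2}+{J}_{x}\left(t,x\right)u\right\},$

whose supremum is attained at ${u}^{\ast }={J}_{x}/2$, so $-{J}_{t}=x+{J}_{x}^{2}/4$. Trying $J\left(t,x\right)=a\left(t\right)x+b\left(t\right)$ with $a\left(T\right)=b\left(T\right)=0$ gives ${a}^{\prime }\left(t\right)=-1$ and ${b}^{\prime }\left(t\right)=-a{\left(t\right)}^{2}/4$, hence

$J\left(t,x\right)=\left(T-t\right)x+\frac{{\left(T-t\right)}^{3}}{12},\phantom{\rule{1em}{0ex}}{u}^{\ast }\left(t,x\right)=\frac{T-t}{2}.$

Note that the diffusion σ does not enter the equation of optimality: under the expected-value criterion, uncertain optimal control contains no second-order term, in contrast with the stochastic Hamilton-Jacobi-Bellman equation.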

## Uncertain differential games

A differential game is a class of decision problems in which the evolution of the state is described by a differential equation and the players act throughout a time interval [ t 0,T], each seeking to maximize his own payoff.

In the general n-person differential game model, player i optimizes his objective

$\underset{{u}_{i}}{\text{sup}}\underset{{t}_{0}}{\overset{T}{\int }}{R}_{i}\left(t,x\left(t\right),{u}_{1}\left(t\right),{u}_{2}\left(t\right),\dots ,{u}_{n}\left(t\right)\right)\mathrm{d}t+{W}_{i}\left(x\left(T\right)\right)\phantom{\rule{0.3em}{0ex}}\text{for}\phantom{\rule{0.3em}{0ex}}i\in N=\left\{1,2,\dots ,n\right\}$
(2)

subject to the deterministic dynamics (vector-valued differential equation)

$\mathrm{d}x\left(t\right)=f\left(t,x\left(t\right),{u}_{1}\left(t\right),{u}_{2}\left(t\right),\dots ,{u}_{n}\left(t\right)\right)\mathrm{d}t,\phantom{\rule{1em}{0ex}}x\left({t}_{0}\right)={x}_{0},$
(3)

where T > t 0 ≥0, x (t) ∈ ℜm denotes the state variable of the game, u i  ∈ U i (U i is a compact metric space) is the control variable of player i, and the initial state x 0 is given.

The function R i (t,x,u 1,u 2,…,u n ) is a transient payoff function of player i at time t, W i (·) is a terminal reward function of player i at terminal time T, and f (t,x,u 1,u 2,…,u n ) is a vector function. All functions mentioned are differentiable.

In a real game, the state evolution is often affected by the interference of noise added to the players’ observations of the system state or to the state equation itself. One way to incorporate noise in a differential game is to introduce uncertain dynamics. An uncertain formulation of a quantitative differential game of prescribed duration involves a vector-valued uncertain differential equation driven by the Liu process:

$\begin{array}{ll}\mathrm{d}x\left(t\right)& =f\left(t,x\left(t\right),{u}_{1}\left(t\right),{u}_{2}\left(t\right),\dots ,{u}_{n}\left(t\right)\right)\mathrm{d}t\\ \phantom{\rule{1em}{0ex}}+\sigma \left(t,x\left(t\right),{u}_{1}\left(t\right),{u}_{2}\left(t\right),\dots ,{u}_{n}\left(t\right)\right){\mathit{\text{dC}}}_{t},\phantom{\rule{1em}{0ex}}x\left({t}_{0}\right)={x}_{0}\end{array}$
(4)

which describes the evolution of the state and n objective functions

$\underset{{u}_{i}}{\text{sup}}{E}_{{t}_{0}}\left[\underset{{t}_{0}}{\overset{T}{\int }}{R}_{i}\left(t,x\left(t\right),{u}_{1}\left(t\right),{u}_{2}\left(t\right),\dots ,{u}_{n}\left(t\right)\right)\mathrm{d}t+{W}_{i}\left(x\left(T\right)\right)\right]\text{for}\phantom{\rule{0.3em}{0ex}}i\in N,$
(5)

where ${E}_{{t}_{0}}$ denotes the expectation operator performed at time t 0, and the canonical Liu process C t  defined on an uncertainty space (Γ,ℒ,ℳ) is Θ-dimensional.

Now, let us make the following assumptions about the functions f (·) and σ (·). Suppose

$f:\left[\phantom{\rule{0.3em}{0ex}}{t}_{0},T\right]×\phantom{\rule{0.3em}{0ex}}{\Re }^{m}×{U}_{1}×\cdots ×{U}_{n}\to {\Re }^{m}$

and

$\sigma :\left[\phantom{\rule{0.3em}{0ex}}{t}_{0},T\right]×\phantom{\rule{0.3em}{0ex}}{\Re }^{m}×{U}_{1}×\cdots ×{U}_{n}\to {\Re }^{m}×{\Re }^{\Theta }$

have continuous partial derivatives and satisfy the linear growth condition and Lipschitz condition.

Each player has perfect observations of the state vector x (t) at every moment t ∈ [ t 0,T] and constructs his strategy in the game (4)-(5) as an admissible feedback control of the following type:

${u}_{i}\left(t\right)={u}_{i}\left(t,x\left(t\right)\right)$

where

${u}_{i}\left(·,·\right):\left[\phantom{\rule{0.3em}{0ex}}{t}_{0},T\right]×{\Re }^{m}\to {U}_{i}.$

Denote

${u}_{-i}\left(t,x\right)=\left\{{u}_{1}\left(t,x\right),{u}_{2}\left(t,x\right),\dots ,{u}_{i-1}\left(t,x\right),{u}_{i+1}\left(t,x\right),\dots ,{u}_{n}\left(t,x\right)\right\}.$

A feedback Nash equilibrium of the uncertain differential game (4)-(5) can be defined as follows.

Definition 13. A set of strategies $\left\{{u}_{1}^{\ast }\left(s,x\right),{u}_{2}^{\ast }\left(s,x\right),\dots ,{u}_{n}^{\ast }\left(s,x\right)\right\}$ is called a feedback Nash equilibrium for the n-person uncertain differential game (4)-(5), with corresponding state trajectory {x ∗ (s),t ≤ s ≤ T}, if there exist real-valued functions Vi(t,x) : [ t 0,T] × ℜm → ℜ satisfying the following relations for each i ∈ N :

$\begin{array}{ll}{V}^{i}\left(t,x\right)& ={E}_{{t}_{0}}\left[\underset{t}{\overset{T}{\int }}{R}_{i}\left(s,{x}^{\ast }\left(s\right),{u}_{i}^{\ast }\left(s,{x}^{\ast }\right),{u}_{-i}^{\ast }\left(s,{x}^{\ast }\right)\right)\mathrm{d}s+{W}_{i}\left({x}^{\ast }\left(T\right)\right)\right]\\ \ge {E}_{{t}_{0}}\left[\underset{t}{\overset{T}{\int }}{R}_{i}\left(s,{x}^{\left[i\right]}\left(s\right),{u}_{i}\left(s,{x}^{\left[i\right]}\right),{u}_{-i}^{\ast }\left(s,{x}^{\left[i\right]}\right)\right)\mathrm{d}s+{W}_{i}\left({x}^{\left[i\right]}\left(T\right)\right)\right],\\ \phantom{\rule{1em}{0ex}}\forall \phantom{\rule{0.3em}{0ex}}{u}_{i}\left(·,·\right)\in \left[\phantom{\rule{0.3em}{0ex}}{t}_{0},T\right]×{\Re }^{m},x\left(·\right)\in {\Re }^{m};\\ {V}^{i}\left(T,x\right)& ={W}^{i}\left(x\left(T\right)\right)\end{array}$

where on the time interval [ t,T] :

$\begin{array}{l}\mathrm{d}{x}^{\ast }\left(s\right)=f\left(s,{x}^{\ast }\left(s\right),{u}_{i}^{\ast }\left(s,{x}^{\ast }\right),{u}_{-i}^{\ast }\left(s,{x}^{\ast }\right)\right)\mathrm{d}s+\sigma \left(s,{x}^{\ast }\left(s\right),{u}_{i}^{\ast }\left(s,{x}^{\ast }\right),{u}_{-i}^{\ast }\left(s,{x}^{\ast }\right)\right){\mathit{\text{dC}}}_{s},\\ {x}^{\ast }\left(t\right)=x;\\ \mathrm{d}{x}^{\left[i\right]}\left(s\right)=f\left(s,{x}^{\left[i\right]}\left(s\right),{u}_{i}\left(s,{x}^{\left[i\right]}\right),{u}_{-i}^{\ast }\left(s,{x}^{\left[i\right]}\right)\right)\mathrm{d}s+\sigma \left(s,{x}^{\left[i\right]}\left(s\right),{u}_{i}\left(s,{x}^{\left[i\right]}\right),{u}_{-i}^{\ast }\left(s,{x}^{\left[i\right]}\right)\right){\mathit{\text{dC}}}_{s},\\ {x}^{\left[i\right]}\left(t\right)=x.\end{array}$

The feature of this definition of feedback Nash equilibrium is that if an n-tuple $\left\{{u}_{i}^{\ast }\left(s,x\right);i\in N\right\}$ provides a feedback Nash equilibrium to the n-person uncertain differential game (4)-(5) with duration [t 0,T], then its restriction to the time interval [ t,T] provides a feedback Nash equilibrium to the same game (4)-(5) defined on the shorter time interval [ t,T] with initial state x (t), and this holds for all t ∈ [ t 0,T]. A feedback Nash equilibrium depends only on the time variable t and the current value of the state x(t), not on memory (including the initial state x 0).

Next, we give sufficient conditions guaranteeing that $\left\{{u}_{i}^{\ast }\left(t,x\right);i\in N\right\}$ is a feedback Nash equilibrium for the game (4)-(5).

Theorem 3. An n-tuple of strategies $\left\{{u}_{i}^{\ast }\left(t,x\right);i\in N\right\}$ provides a feedback Nash equilibrium to the n-person uncertain differential game (4)-(5) if there exist real-valued functions Vi(t,x) : [ t 0,T] × ℜm→ ℜ, i ∈ N, satisfying the partial differential equations

$\begin{array}{ll}-{V}_{t}^{i}\left(t,x\right)& =\underset{{u}_{i}}{\text{sup}}\left\{{R}_{i}\left(t,x,{u}_{i}\left(t,x\right),{u}_{-i}^{\ast }\left(t,x\right)\right)+{V}_{x}^{i}\left(t,x\right)f\left(t,x,{u}_{i}\left(t,x\right),{u}_{-i}^{\ast }\left(t,x\right)\right)\right\}\\ & ={R}_{i}\left(t,x,{u}_{i}^{\ast }\left(t,x\right),{u}_{-i}^{\ast }\left(t,x\right)\right)+{V}_{x}^{i}\left(t,x\right)f\left(t,x,{u}_{i}^{\ast }\left(t,x\right),{u}_{-i}^{\ast }\left(t,x\right)\right),\\ {V}^{i}\left(T,x\right)& ={W}_{i}\left(x\left(T\right)\right).\end{array}$

Proof. This result follows readily from the definition of feedback Nash equilibrium and from Theorem 2, since by fixing all players’ strategies, except the i th one’s, at their equilibrium choices, we arrive at an uncertain optimal control problem of the type covered by Theorem 2.

## An uncertain differential game of capitalism

Let us return to a classical problem: capitalism. We consider an uncertain differential game of capitalism in which the capital accumulation process may be subject to uncertainties such as environmental disasters or a global financial crisis. In brief, the state trajectory is represented by an uncertain differential equation.

Consider an uncertain differential game of capitalism with two players, a government and a representative firm. The government represents the workers, while the firm represents the capitalist.

The economy has a neoclassical production function y = f (k) satisfying f ′(k) > 0, f ′′(k) < 0, $\underset{k\to 0}{\text{lim}}{f}^{\prime }\left(k\right)=\infty ,\underset{k\to \infty }{\text{lim}}{f}^{\prime }\left(k\right)=0$. The labor force receives an income equal to its marginal product f (k) - kf ′(k), while the firm derives a rent equal to its marginal product f ′(k), where k is capital per labor.

The capital accumulation equation is given by

$\mathit{\text{dK}}=\phantom{\rule{0.3em}{0ex}}\left[\phantom{\rule{0.3em}{0ex}}s\left(\phantom{\rule{0.3em}{0ex}}{f}^{\prime }\left(k\right)L-X\right)-\mathrm{\delta K}\right]\mathit{\text{dt}},$
(6)

where t ∈ [ 0,T], K is the capital, L is the labor, s is the investment rate controlled by the firm, X denotes the total social transfer within the government’s control, and δ is the depreciation rate.

Assume that labor dynamics follows an uncertain differential equation

$\mathit{\text{dL}}=\mathit{\text{nLdt}}+\sigma \text{LdC}$
(7)

where n is the expected growth rate of labor, σ is the instantaneous variance parameter, and C is a one-dimensional canonical Liu process.

Treating capital per labor as $k=K/L\triangleq h\left(K,L\right)$ and applying the fundamental theorem of uncertain calculus (Theorem 1), we obtain

$\begin{array}{ll}\mathit{\text{dk}}& ={h}_{K}\mathit{\text{dK}}+{h}_{L}\mathit{\text{dL}}\\ =\phantom{\rule{0.3em}{0ex}}\left[\phantom{\rule{0.3em}{0ex}}s\left(\phantom{\rule{0.3em}{0ex}}{f}^{\prime }\left(k\right)-x\right)-\left(\delta +n\right)k\right]\mathit{\text{dt}}-\mathrm{\sigma kdC},\end{array}$
(8)

where x = X/L represents the social transfer per labor under the government’s control. This is the state trajectory uncertain differential equation, and the initial state k (0) is fixed, equal to k 0.
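The algebra behind (8) can be checked directly: with h(K,L) = K/L we have h K  = 1/L and h L  = -K/L2, and substituting dK from (6) and dL from (7) must reproduce the stated drift and diffusion, since the Liu chain rule carries no second-order term. A quick numerical verification sketch (Python; all parameter values, and the value 0.7 standing in for f ′(k), are arbitrary illustrative choices):

```python
import math

# Sample values (illustrative): capital K, labor L, controls, and parameters
K, L, s, X = 5.0, 2.0, 0.3, 0.4
delta, n, sigma = 0.04, 0.02, 0.2
fp = 0.7                      # stands for f'(k) at the current state
k, x = K / L, X / L

# dt and dC coefficients of dK (Eq. 6) and dL (Eq. 7)
dK_dt, dK_dC = s * (fp * L - X) - delta * K, 0.0
dL_dt, dL_dC = n * L, sigma * L

# Liu chain rule for k = h(K, L) = K/L: dk = h_K dK + h_L dL
h_K, h_L = 1 / L, -K / L**2
dk_dt = h_K * dK_dt + h_L * dL_dt
dk_dC = h_K * dK_dC + h_L * dL_dC

print(math.isclose(dk_dt, s * (fp - x) - (delta + n) * k, abs_tol=1e-12))  # True
print(math.isclose(dk_dC, -sigma * k, abs_tol=1e-12))                      # True
```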

The government in this model is a vote maximizer with vote function

${R}^{G}\left(k,x\left(·\right),s\left(·\right)\right)=f\left(k\right)-k{f}^{\prime }\left(k\right)+x.$

It is straightforward that $-f\left(k\right)+k{f}^{\prime }\left(k\right)\le x\le {f}^{\prime }\left(k\right)$. Then, the government’s objective functional can be written as

${J}^{G}\left({k}_{0},x\left(·\right),s\left(·\right)\right)=\underset{x\left(·\right)}{\text{sup}}E\left[\underset{0}{\overset{T}{\int }}{e}^{-\rho t}\left[f\left(k\right)-k{f}^{\prime }\left(k\right)+x\right]\mathrm{d}t\right],$
(9)

where ρ is a positive discount rate and E denotes the expectation operator performed at time 0.

Assume that the firm owns the capital in the production process and controls investment. The objective of the firm is to maximize the payment for the shareholders

${R}^{F}\left(k,x\left(·\right),s\left(·\right)\right)=\left(1-s\right)\left(\phantom{\rule{0.3em}{0ex}}{f}^{\prime }\left(k\right)-x\right),\phantom{\rule{2em}{0ex}}0\le s\le 1,$

where x is the social transfer (the government may tax or subsidize the firm) and s is the investment rate controlled by the firm.

The firm’s objective functional is

${J}^{F}\left({k}_{0},x\left(·\right),s\left(·\right)\right)=\underset{s\left(·\right)}{\text{sup}}E\left[\underset{0}{\overset{T}{\int }}{e}^{-\rho t}\left(1-s\right)\left[{f}^{\prime }\left(k\right)-x\right]\mathrm{d}t\right].$
(10)

To summarize, we have defined an uncertain differential game model of capitalism by (8)-(10).

By Theorem 3, the following partial differential equations are satisfied.

$-{J}_{t}^{G}\left(t,k\right)=\underset{x\left(·\right)}{\text{sup}}\left\{{e}^{-\rho t}\left[f\left(k\right)-k{f}^{\prime }\left(k\right)+x\right]+{J}_{k}^{G}\left(t,k\right)\left[\stackrel{̄}{s}\left({f}^{\prime }\left(k\right)-x\right)-\left(\delta +n\right)k\right]\right\},$
(11)
$-{J}_{t}^{F}\left(t,k\right)=\underset{s\left(·\right)}{\text{sup}}\left\{{e}^{-\rho t}\left(1-s\right)\left[{f}^{\prime }\left(k\right)-\stackrel{̄}{x}\right]+{J}_{k}^{F}\left(t,k\right)\left[s\left({f}^{\prime }\left(k\right)-\stackrel{̄}{x}\right)-\left(\delta +n\right)k\right]\right\}.$
(12)

The control pair $\left(\stackrel{̄}{x},\stackrel{̄}{s}\right)$ in the uncertain differential game of capitalism is a feedback Nash equilibrium in the sense that both are functions of the current time t and the current state k.

Since the right-hand side of Equation 11 is linear in x and that of Equation 12 is linear in s, an interior maximum requires the coefficient of the control variable to vanish. Maximizing Equation 11 with respect to x (·) and Equation 12 with respect to s (·) thus yields

${J}_{k}^{G}\left(t,k\right)={e}^{-\rho t}\frac{1}{\stackrel{̄}{s}}\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}{J}_{k}^{F}\left(t,k\right)={e}^{-\rho t}.$

These values can be substituted back into (11) and (12), respectively, to obtain

$\begin{array}{l}-{J}_{t}^{G}\left(t,k\right)={e}^{-\rho t}\left[f\left(k\right)-k{f}^{\prime }\left(k\right)+{f}^{\prime }\left(k\right)-\frac{k}{\stackrel{̄}{s}}\left(\delta +n\right)\right],\phantom{\rule{2em}{0ex}}\end{array}$
(13)
$\begin{array}{l}-{J}_{t}^{F}\left(t,k\right)={e}^{-\rho t}\left[{f}^{\prime }\left(k\right)-\stackrel{̄}{x}-\left(\delta +n\right)k\right].\phantom{\rule{2em}{0ex}}\end{array}$
(14)
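The substitutions leading to (13) and (14) can be verified mechanically: plugging ${J}_{k}^{G}={e}^{-\rho t}/\stackrel{̄}{s}$ into the maximized right-hand side of (11), and ${J}_{k}^{F}={e}^{-\rho t}$ into that of (12), must reproduce the bracketed expressions above. A sketch of the check (Python; all sample values, including stand-ins for f(k) and f ′(k), are arbitrary illustrative choices):

```python
import math

# Sample values (illustrative): any positive numbers will do
t, k, rho, delta, n = 0.5, 2.5, 0.05, 0.04, 0.02
f_k, fp_k = 1.3, 0.45          # stand for f(k) and f'(k) at the current state
s_bar, x_bar = 0.3, 0.2        # stand for the equilibrium controls
disc = math.exp(-rho * t)

# Right-hand side of (11) at the maximizing x = x_bar, with J_k^G = disc / s_bar
rhs_G = disc * (f_k - k * fp_k + x_bar) \
        + (disc / s_bar) * (s_bar * (fp_k - x_bar) - (delta + n) * k)
# Right-hand side of (12) at the maximizing s = s_bar, with J_k^F = disc
rhs_F = disc * (1 - s_bar) * (fp_k - x_bar) \
        + disc * (s_bar * (fp_k - x_bar) - (delta + n) * k)

eq13 = disc * (f_k - k * fp_k + fp_k - (k / s_bar) * (delta + n))
eq14 = disc * (fp_k - x_bar - (delta + n) * k)
print(math.isclose(rhs_G, eq13), math.isclose(rhs_F, eq14))  # True True
```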

For the government, let ${J}^{G}\left(t,k\right)={e}^{-\rho t}A{k}^{2}$. Hence, ${J}_{t}^{G}\left(t,k\right)=-\rho {e}^{-\rho t}A{k}^{2}$ and ${J}_{k}^{G}\left(t,k\right)=2\mathit{\text{Ak}}{e}^{-\rho t}$.

Substituting these into (13) and solving for A, we obtain

$A=\frac{f\left(k\right)-k{f}^{\prime }\left(k\right)+{f}^{\prime }\left(k\right)-\left(k/\stackrel{̄}{s}\right)\left(\delta +n\right)}{\rho {k}^{2}}.$

Thus,

${J}^{G}\left(t,k\right)={e}^{-\mathrm{\rho t}}\frac{f\left(k\right)-k{f}^{\prime }\left(k\right)+{f}^{\prime }\left(k\right)-\left(k/\stackrel{̄}{s}\right)\left(\delta +n\right)}{\rho }.$

Similarly for the firm, let ${J}^{F}\left(t,k\right)={e}^{-\rho t}B{k}^{2}$. Thus, ${J}_{t}^{F}\left(t,k\right)=-\rho {e}^{-\rho t}B{k}^{2}$ and ${J}_{k}^{F}\left(t,k\right)=2\mathit{\text{Bk}}{e}^{-\rho t}$.

Solving for B, we obtain

$B=\frac{{f}^{\prime }\left(k\right)-\stackrel{̄}{x}-\left(\delta +n\right)k}{\rho {k}^{2}}.$

It follows that

${J}^{F}\left(t,k\right)={e}^{-\rho t}\frac{{f}^{\prime }\left(k\right)-\stackrel{̄}{x}-\left(\delta +n\right)k}{\rho }.$

Proposition 1. The feedback Nash equilibrium for capitalism is given by

$\left(\stackrel{̄}{s},\stackrel{̄}{x}\right)=\left(\frac{k\left(\rho +2\delta +2n\right)}{2\left[f\left(k\right)-k{f}^{\prime }\left(k\right)+{f}^{\prime }\left(k\right)\right]},{f}^{\prime }\left(k\right)-\left(\frac{\rho }{2}+\delta +n\right)k\right).$

Proof. Combining ${J}_{k}^{G}\left(t,k\right)=2\mathit{\text{Ak}}{e}^{-\rho t}$ with ${J}_{k}^{G}\left(t,k\right)={e}^{-\rho t}/\stackrel{̄}{s}$, and ${J}_{k}^{F}\left(t,k\right)=2\mathit{\text{Bk}}{e}^{-\rho t}$ with ${J}_{k}^{F}\left(t,k\right)={e}^{-\rho t}$, gives $2\mathit{\text{Ak}}=1/\stackrel{̄}{s}$ and $2\mathit{\text{Bk}}=1$. Substituting the expressions for A and B and solving for $\stackrel{̄}{s}$ and $\stackrel{̄}{x}$ yields the stated pair.
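The two first-order conditions $2\mathit{\text{Ak}}=1/\stackrel{̄}{s}$ and $2\mathit{\text{Bk}}=1$ can be checked numerically at the claimed equilibrium pair. A sketch (Python; all sample values, including stand-ins for f(k) and f ′(k), are arbitrary illustrative choices):

```python
import math

# Sample values (illustrative)
k, rho, delta, n = 2.0, 0.05, 0.04, 0.02
f_k, fp_k = 1.2, 0.4   # stand for f(k) and f'(k) at the current state

# Claimed equilibrium from Proposition 1
s_bar = k * (rho + 2 * delta + 2 * n) / (2 * (f_k - k * fp_k + fp_k))
x_bar = fp_k - (rho / 2 + delta + n) * k

# A and B from the text, evaluated at the claimed pair
A = (f_k - k * fp_k + fp_k - (k / s_bar) * (delta + n)) / (rho * k**2)
B = (fp_k - x_bar - (delta + n) * k) / (rho * k**2)

# First-order conditions reduce to 2*A*k = 1/s_bar and 2*B*k = 1
print(math.isclose(2 * A * k, 1 / s_bar), math.isclose(2 * B * k, 1.0))  # True True
```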

Lemma 1. If the following inequality holds:

$-\rho -\delta -n\le \delta +n\le \frac{2}{k}\left[f\left(k\right)-k{f}^{\prime }\left(k\right)+{f}^{\prime }\left(k\right)\right]-\left(\rho +\delta +n\right),$

then $\stackrel{̄}{s}$ satisfies $0\le \stackrel{̄}{s}\le 1$.

Proof. Because the investment rate $\stackrel{̄}{s}$ must satisfy the constraint $0\le \stackrel{̄}{s}\le 1$, substituting the expression for $\stackrel{̄}{s}$ from Proposition 1 into these two inequalities and rearranging yields the condition.
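As a numeric sanity check of Lemma 1 (ours, with an illustrative Cobb-Douglas technology f(k) = k0.3 and arbitrarily chosen parameter values), the two-sided inequality should hold exactly when $0\le \stackrel{̄}{s}\le 1$:

```python
def s_bar(k, rho, delta, n, f, fprime):
    """Equilibrium investment rate from Proposition 1."""
    return k * (rho + 2 * delta + 2 * n) / (2 * (f(k) - k * fprime(k) + fprime(k)))

# Illustrative Cobb-Douglas technology and parameters
f = lambda k: k ** 0.3
fprime = lambda k: 0.3 * k ** (-0.7)
k, rho, delta, n = 1.5, 0.05, 0.04, 0.02

s = s_bar(k, rho, delta, n, f, fprime)
# Lemma 1: -rho - delta - n <= delta + n <= (2/k)[f - k f' + f'] - (rho + delta + n)
lhs = -rho - delta - n <= delta + n
rhs = delta + n <= (2 / k) * (f(k) - k * fprime(k) + fprime(k)) - (rho + delta + n)
print(s, lhs and rhs, 0 <= s <= 1)  # both conditions hold for these parameters
```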

We call δ + n the effective depreciation rate of capital per labor; Lemma 1 gives the upper and lower bounds of this effective depreciation rate.

In the deterministic game of capitalism, the payoff of the firm is completely taxed away and the firm stops investing entirely, which is a very extreme and unrealistic solution. However, in the uncertain differential game of capitalism, the government taxes less than the full amount of the payoff accrued to the firm, and the firm maintains a positive rate of investment.

## Concluding remarks

In this paper, we investigated an uncertain differential game based on uncertain optimal control. The state dynamics are given by an uncertain differential equation involving a Liu process. Moreover, we proposed the definition of feedback Nash equilibrium strategies as the solution of the uncertain differential game and constructed a sufficient condition that provides a way to find a feedback Nash equilibrium. Finally, we applied the uncertain differential game to the capitalism problem, in which the government taxes less than the full amount of the payoff accrued to the firm and the firm maintains a positive rate of investment.

Meanwhile, this work initiates the study of uncertain differential games, so there is much room for further research. One may consider zero-sum uncertain differential games, linear-quadratic uncertain differential games, and uncertain differential games with asymmetric information. Furthermore, one may study applications of uncertain differential games such as the common-property fishery problem, natural resource extraction, and military confrontation.