## Introduction

Many systems can be viewed as, and are called, networks, although their physical appearances differ considerably. Most of these systems are products of human civilization and are of great importance to its functioning. Examples include transportation networks such as road, rail, and airline networks; pipeline networks for transporting water, waste water, oil, and natural gas; and communication networks of telephones and computers. Even natural systems such as caves and rivers can be viewed as networks. To see the common structure behind these different systems, they must be abstracted from their physical appearance. Their underlying structure can then be seen as a collection of vertices, which might be road crossings, railway stations, airports, pumping stations, and so on, together with a collection of arcs, which might be roads, canals, telephone cables, etc., connecting some or all of the vertices, where every arc has a weight or capacity. In a deterministic network, the vertices, arcs, and arc capacities are all deterministic. The deterministic network was developed and widely applied in the last century. In practice, however, various types of indeterminacy must be taken into account.

Random networks were first investigated by Frank and Hakimi [1] in 1965 for modeling communication networks with random capacities. Since then, random networks have been well developed and widely applied. For example, Frank [2], Mirchandani [3], and Sigal et al. [4] studied the probability distribution of the shortest path when the network arc weights are random variables. Frank and Frisch [5] considered how to determine the maximum flow probability distribution in networks where each capacity is a continuous random variable. Doulliez [6] studied multi-terminal networks with discrete probabilistic branch capacities. In addition, some researchers have tried to give lower and upper bounds on the expected maximum flow of a network: Carey and Hendrickson [7] and Onaga [8] presented efficient methods to find a lower bound in general directed networks. Furthermore, Fishman [9], Goldberg and Tarjan [10], and Nawathe and Rao [11] mainly used stochastic optimization to solve the maximum flow problem in a random network. Other researchers, such as Fu and Rillet [12], Hall [13], and Lou [14], have also contributed substantially to the study of random networks.

Uncertain networks were first explored by Liu [15] in 2009. Gao [16] investigated the α-shortest path and the shortest path in an uncertain network in 2011. In addition, the maximum flow problem was discussed by Han et al. [17], the uncertain minimum cost flow problem was dealt with by Ding [18], and the Chinese postman problem was explored by Zhang and Peng [19] for an uncertain network.

In many cases, uncertainty and randomness appear simultaneously in a complex system. In order to describe such a system, Liu [20] proposed the concept of an uncertain random network, in which some weights are random variables and others are uncertain variables, and at the same time studied the chance distribution of the shortest path of an uncertain random network.

In this paper, we will give the chance distribution of the maximum flow of an uncertain random network. The remainder of this paper is organized as follows. In the section ‘Preliminaries’, some basic concepts and properties of uncertainty theory and chance theory used throughout this paper are introduced. In the section ‘Uncertain random network’, the uncertain random network is recalled. In the section ‘Maximum flow of uncertain random network’, the chance distribution of the maximum flow is derived. The section ‘Expected value of maximum flow’ proposes the expected value of the maximum flow in an uncertain random network. The section ‘Conclusions’ gives a brief summary of this paper.

## Preliminaries

In this section, we first review some concepts of uncertainty theory, including uncertain measure, uncertain variable, uncertainty distribution, and operational law. Then, we introduce some useful definitions and properties about uncertain random variable, chance measure, chance distribution, and operational law.

### Uncertainty theory

This subsection reviews some basic concepts of uncertainty theory. In reality, a lack of information often means that no samples are available to estimate a probability distribution. In this situation, we have to invite domain experts to evaluate the belief degree that each event will happen, and such belief degrees should be modeled by uncertain variables rather than random variables. If we insist on using probability theory to deal with this kind of indeterminacy, counterintuitive results will occur [21]. In order to deal with such indeterministic phenomena, Liu [21] founded uncertainty theory in 2007 and first presented the uncertain measure as a set function satisfying four axioms. As a fundamental concept in uncertainty theory, the uncertain variable was also presented by Liu [21] in 2007. Liu [22] proposed the concepts of uncertainty distribution and inverse uncertainty distribution. After that, many researchers studied uncertainty theory widely and made significant progress. Gao [23] studied the properties of continuous uncertain measures. Peng and Iwamura [24] proved a sufficient and necessary condition for uncertainty distributions. In addition, the concept of independence was proposed by Liu [25]. Later, Liu [22] presented the operational law of uncertain variables. In order to rank uncertain variables, Liu [21] proposed the concept of the expected value of uncertain variables. The linearity of the expected value operator was verified by Liu [22]. As an important contribution, Liu and Ha [26] derived a useful formula for calculating the expected values of strictly monotone functions of independent uncertain variables. Up to now, theory and practice have shown that uncertainty theory is an efficient tool for dealing with indeterministic phenomena, especially expert belief degrees. Liu [27] presented the counterexample of a truck crossing over a bridge to show that it is inappropriate to model belief degrees by probability theory; this is one reason why uncertainty theory is needed.
The similarities and differences between the uncertainty concept and the standard probabilistic concept, as well as other concepts of uncertainty, can be found in [21].

#### Definition 1.

(Liu [21]) Let $\mathcal{ℒ}$ be a σ-algebra on a nonempty set Γ. A set function $\mathcal{ℳ}:\mathcal{ℒ}\to \left[0,1\right]$ is called an uncertain measure if it satisfies the following axioms:

Axiom 1: (Normality Axiom) $\mathcal{ℳ}\left\{\Gamma \right\}=1$ for the universal set Γ.

Axiom 2: (Duality Axiom) $\mathcal{ℳ}\left\{\Lambda \right\}+\mathcal{ℳ}\left\{{\Lambda }^{c}\right\}=1$ for any event Λ.

Axiom 3: (Subadditivity Axiom) For every countable sequence of events Λ1, Λ2,⋯, we have

$\mathcal{ℳ}\left\{\bigcup _{i=1}^{\infty }{\Lambda }_{i}\right\}\le \sum _{i=1}^{\infty }\mathcal{ℳ}\left\{{\Lambda }_{i}\right\}.$

Besides, the product uncertain measure on the product σ-algebra is defined by the following product axiom.

Axiom 4: (Product Axiom) (Liu [25]) Let $\left({\Gamma }_{k},{\mathcal{ℒ}}_{k},{\mathcal{ℳ}}_{k}\right)$ be uncertainty spaces for k=1,2,⋯. The product uncertain measure $\mathcal{ℳ}$ is an uncertain measure satisfying

$\mathcal{ℳ}\left\{\prod _{k=1}^{\infty }{\Lambda }_{k}\right\}=\underset{k=1}{\overset{\infty }{\wedge }}{\mathcal{ℳ}}_{k}\left\{{\Lambda }_{k}\right\}$

where Λ k are arbitrarily chosen events from ${\mathcal{ℒ}}_{k}$ for k=1,2,⋯, respectively.

#### Definition 2.

(Liu [21]) An uncertain variable ξ is a measurable function from an uncertainty space $\left(\Gamma ,\mathcal{ℒ},\mathcal{ℳ}\right)$ to the set of real numbers, i.e., for any Borel set B of real numbers, the set

$\left\{\xi \in B\right\}=\left\{\gamma \in \Gamma |\xi \left(\gamma \right)\in B\right\}$

is an event.

An uncertain variable is essentially a measurable function from an uncertainty space to the set of real numbers. In order to describe an uncertain variable, the concept of uncertainty distribution is defined as follows.

#### Definition 3.

(Liu [21]) The uncertainty distribution of an uncertain variable ξ is defined by

$\Phi \left(x\right)=\mathcal{ℳ}\left\{\xi \le x\right\}$

for any x∈ℜ.

#### Definition 4.

(Liu [21]) Let ξ be an uncertain variable with regular uncertainty distribution Φ. Then, the inverse function Φ−1 is called the inverse uncertainty distribution of ξ.

The distribution of a monotone function of uncertain variables can be obtained by the following theorem.

#### Theorem 1.

(Liu [22]) Let ξ1, ξ2,⋯, ξ n be independent uncertain variables with uncertainty distributions Φ1, Φ2,⋯, Φ n , respectively. If f(ξ1, ξ2,⋯, ξ n ) is strictly increasing with respect to ξ1, ξ2,⋯, ξ m and strictly decreasing with respect to ξm+1, ξm+2,⋯, ξ n , then ξ=f(ξ1, ξ2,⋯, ξ n ) is an uncertain variable with an inverse uncertainty distribution

${\Phi }^{-1}\left(\alpha \right)=f\left({\Phi }_{1}^{-1}\left(\alpha \right),\cdots \phantom{\rule{0.3em}{0ex}},{\Phi }_{m}^{-1}\left(\alpha \right),{\Phi }_{m+1}^{-1}\left(1-\alpha \right),\cdots \phantom{\rule{0.3em}{0ex}},{\Phi }_{n}^{-1}\left(1-\alpha \right)\right).$
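As a quick numerical illustration of Theorem 1, the sketch below computes the inverse uncertainty distribution of f(ξ1, ξ2) = ξ1 − ξ2, which is increasing in ξ1 and decreasing in ξ2. The linear uncertainty distributions L(2, 6) and L(1, 3) are hypothetical choices made only for this illustration.

```python
# Sketch of Theorem 1 with assumed linear uncertain variables:
# the inverse distribution of f(xi1, xi2) = xi1 - xi2 is
# f(Phi1^{-1}(alpha), Phi2^{-1}(1 - alpha)).

def linear_inv(a, b):
    """Inverse uncertainty distribution of a linear uncertain variable L(a, b)."""
    return lambda alpha: a + (b - a) * alpha

phi1_inv = linear_inv(2.0, 6.0)   # xi1 ~ L(2, 6), assumed
phi2_inv = linear_inv(1.0, 3.0)   # xi2 ~ L(1, 3), assumed

def f_inv(alpha):
    # f is increasing in xi1 and decreasing in xi2,
    # so evaluate at alpha and at 1 - alpha respectively
    return phi1_inv(alpha) - phi2_inv(1.0 - alpha)

print(f_inv(0.5))  # -> 2.0, the median of xi1 - xi2
```

Note how the decreasing argument receives 1 − α, exactly as in the theorem.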

#### Definition 5.

(Liu [21]) The expected value of an uncertain variable ξ is defined by

$E\left[\phantom{\rule{0.3em}{0ex}}\xi \right]=\underset{0}{\overset{+\infty }{\int }}\mathcal{ℳ}\left\{\xi \ge x\right\}\mathrm{d}x-\underset{-\infty }{\overset{0}{\int }}\mathcal{ℳ}\left\{\xi \le x\right\}\mathrm{d}x$

provided that at least one of the two integrals is finite.

#### Theorem 2.

(Liu [21]) Let ξ be an uncertain variable with uncertainty distribution Φ. If the expected value exists, then

$E\left[\phantom{\rule{0.3em}{0ex}}\xi \right]=\underset{0}{\overset{+\infty }{\int }}\left(1-\Phi \left(x\right)\right)\mathrm{d}x-\underset{-\infty }{\overset{0}{\int }}\Phi \left(x\right)\mathrm{d}\mathrm{x.}$

Based on this result, Liu [25] proved the linearity property of the expected value operator. For two independent uncertain variables ξ and η, we have E[aξ+bη]=aE[ξ]+bE[η], where a and b are real numbers.

In 2010, Liu [22] first introduced a formula for the expected value in terms of the inverse uncertainty distribution, that is,

$E\left[\phantom{\rule{0.3em}{0ex}}\xi \right]=\underset{0}{\overset{1}{\int }}{\Phi }^{-1}\left(\alpha \right)\mathrm{d}\mathrm{\alpha .}$

Liu and Ha [26] proposed a generalized formula for the expected value in terms of inverse uncertainty distributions.

#### Theorem 3.

(Liu and Ha [26]) Let ξ1, ξ2,⋯, ξ n be independent uncertain variables with uncertainty distributions Φ1, Φ2,⋯, Φ n , respectively. If f(ξ1, ξ2,⋯, ξ n ) is strictly increasing with respect to ξ1,⋯, ξ m and strictly decreasing with respect to ξm+1,⋯, ξ n , then the uncertain variable ξ=f(ξ1, ξ2,⋯, ξ n ) has an expected value

$E\left[\phantom{\rule{0.3em}{0ex}}\xi \right]=\underset{0}{\overset{1}{\int }}f\left({\Phi }_{1}^{-1}\left(\alpha \right),\cdots \phantom{\rule{0.3em}{0ex}},{\Phi }_{m}^{-1}\left(\alpha \right),{\Phi }_{m+1}^{-1}\left(1-\alpha \right),\cdots \phantom{\rule{0.3em}{0ex}},{\Phi }_{n}^{-1}\left(1-\alpha \right)\right)\mathrm{d}\mathrm{\alpha .}$
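The expected value formula of Theorem 3 can be evaluated numerically by discretizing the integral over α. The sketch below reuses the hypothetical linear uncertain variables ξ1 ~ L(2, 6) and ξ2 ~ L(1, 3) (illustrative assumptions, not from the paper) and integrates with the midpoint rule.

```python
# Numerical evaluation of E[f(xi1, xi2)] for f = xi1 - xi2 via Theorem 3,
# with assumed linear uncertain variables xi1 ~ L(2, 6), xi2 ~ L(1, 3).

def linear_inv(a, b):
    return lambda alpha: a + (b - a) * alpha

phi1_inv = linear_inv(2.0, 6.0)
phi2_inv = linear_inv(1.0, 3.0)

def expected_value(f_inv, n=10000):
    # midpoint rule for the integral of f_inv over the unit interval
    return sum(f_inv((k + 0.5) / n) for k in range(n)) / n

# f increasing in xi1, decreasing in xi2: alpha for xi1, 1 - alpha for xi2
e = expected_value(lambda a: phi1_inv(a) - phi2_inv(1.0 - a))
print(e)  # -> 2.0, matching E[xi1] - E[xi2] = 4.0 - 2.0
```

For linear distributions the midpoint rule is exact, so the printed value agrees with the linearity property stated above.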

Meanwhile, Liu [21] presented the concept of the variance of uncertain variables and also proposed some formulas to calculate the variance through the uncertainty distribution. Recently, Yao [28] proposed a formula to calculate the variance using the inverse uncertainty distribution. Sheng and Kar [29] verified some results on the moments of uncertain variables through inverse uncertainty distributions.

### Chance theory

Probability theory was developed by Kolmogorov [30] in 1933. Since then, probability theory has become an important branch of mathematics for modeling frequencies, while uncertainty theory was founded by Liu [21] in 2007 and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling belief degrees. However, in many cases, uncertainty and randomness appear simultaneously in a complex system. In order to describe this phenomenon, Liu [31] first proposed chance theory in 2013, a mathematical methodology for modeling complex systems with both uncertainty and randomness, including chance measure, uncertain random variable, chance distribution, operational law, expected value, and so on. As an important contribution to chance theory, Liu [32] presented an operational law of uncertain random variables. Meanwhile, Guo and Wang [33] proposed a formula to calculate the variance through the chance distribution, and Sheng and Yao [34] verified some results on the variance through the inverse chance distribution. Furthermore, in order to deal with uncertain random phenomena evolving in time, Gao and Yao [35] presented an uncertain random process and an uncertain random renewal process in the light of chance theory.

Let $\left(\Gamma ,\mathcal{ℒ},\mathcal{ℳ}\right)$ be an uncertainty space and $\left(\Omega ,\mathcal{A},\text{Pr}\right)$ be a probability space. Then, the chance space refers to the product $\left(\Gamma ,\mathcal{ℒ},\mathcal{ℳ}\right)×\left(\Omega ,\mathcal{A},\text{Pr}\right)$.

#### Definition 6.

(Liu [31]) Let $\left(\Gamma ,\mathcal{ℒ},\mathcal{ℳ}\right)×\left(\Omega ,\mathcal{A},\text{Pr}\right)$ be a chance space, and let $\Theta \in \mathcal{ℒ}×\mathcal{A}$ be an uncertain random event. Then, the chance measure of Θ is defined as

$\text{Ch}\left\{\Theta \right\}=\underset{0}{\overset{1}{\int }}\text{Pr}\left\{\omega \in \Omega \mid \mathcal{ℳ}\left\{\gamma \in \Gamma |\left(\gamma ,\omega \right)\in \Theta \right\}\ge r\right\}\mathrm{d}\mathrm{r.}$

Liu [31] proved that a chance measure satisfies normality, duality, and monotonicity, that is,

1. (i)

Ch{Γ×Ω}=1.

2. (ii)

Ch{Θ}+Ch{Θ c}=1 for any event Θ.

3. (iii)

Ch{Θ 1}≤Ch{Θ 2} for any events Θ 1⊂Θ 2.

Besides, Hou [36] proved the subadditivity of chance measure, that is,

$\text{Ch}\left\{\bigcup _{i=1}^{\infty }{\Theta }_{i}\right\}\le \sum _{i=1}^{\infty }\text{Ch}\left\{{\Theta }_{i}\right\}$

for a sequence of events Θ1, Θ2,⋯.

#### Definition 7.

(Liu [31]) An uncertain random variable is a measurable function ξ from a chance space $\left(\Gamma ,\mathcal{ℒ},\mathcal{ℳ}\right)×\left(\Omega ,\mathcal{A},\text{Pr}\right)$ to the set of real numbers, i.e., {ξB} is an event for any Borel set B.

Random variables and uncertain variables can be regarded as special cases of uncertain random variables. Let η be a random variable, and let τ be an uncertain variable. Then, η+τ and η×τ are both uncertain random variables.

To calculate the chance measure, Liu [31] presented a definition of chance distribution.

#### Definition 8.

(Liu [31]) Let ξ be an uncertain random variable. Then, its chance distribution is defined by

$\Phi \left(x\right)=\text{Ch}\left\{\xi \le x\right\}$

for any x∈ℜ.

The chance distribution of a random variable is just its probability distribution, and the chance distribution of an uncertain variable is just its uncertainty distribution.

#### Theorem 4.

(Liu [32]) Let η1, η2,⋯, η m be independent random variables with probability distributions Ψ1, Ψ2,⋯, Ψ m , respectively, and let τ1, τ2,⋯, τ n be uncertain variables. Then, the uncertain random variable

$\xi =f\left({\eta }_{1},{\eta }_{2},\cdots \phantom{\rule{0.3em}{0ex}},{\eta }_{m},{\tau }_{1},{\tau }_{2},\cdots \phantom{\rule{0.3em}{0ex}},{\tau }_{n}\right)$

has a chance distribution

$\Phi \left(x\right)=\underset{{\Re}^{m}}{\int }F\left(x,{y}_{1},\cdots \phantom{\rule{0.3em}{0ex}},{y}_{m}\right)\mathrm{d}{\Psi }_{1}\left({y}_{1}\right)\cdots \mathrm{d}{\Psi }_{m}\left({y}_{m}\right)$

where F(x, y1,⋯, y m ) is the uncertainty distribution of uncertain variable

$f\left(\phantom{\rule{0.3em}{0ex}}{y}_{1},{y}_{2},\cdots \phantom{\rule{0.3em}{0ex}},{y}_{m},{\tau }_{1},{\tau }_{2},\cdots \phantom{\rule{0.3em}{0ex}},{\tau }_{n}\right)$

for any real numbers y1, y2,⋯, y m .
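The operational law of Theorem 4 can be seen at work on the simplest mixed sum ξ = η + τ. The sketch below assumes, purely for illustration, η ~ Uniform(0, 1) (random) and τ ~ L(0, 2) (an uncertain variable with linear distribution); neither is an example from the paper. Here F(x, y) = Υ(x − y), and the chance distribution is the integral of F over the probability distribution of η.

```python
# Chance distribution of xi = eta + tau by Theorem 4, under the
# assumptions eta ~ Uniform(0, 1) and tau ~ L(0, 2).

def lin_cdf(a, b):
    """Uncertainty distribution of a linear uncertain variable L(a, b)."""
    def phi(x):
        if x <= a:
            return 0.0
        if x >= b:
            return 1.0
        return (x - a) / (b - a)
    return phi

upsilon = lin_cdf(0.0, 2.0)   # distribution of tau

def chance_cdf(x, n=2000):
    # Phi(x) = integral_0^1 Upsilon(x - y) dy, midpoint rule over y
    return sum(upsilon(x - (k + 0.5) / n) for k in range(n)) / n

print(chance_cdf(1.5))  # -> 0.5
```

The value 0.5 agrees with the closed form (1/2)∫₀¹(1.5 − y) dy under these assumed distributions.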

#### Definition 9.

(Liu [32]) Let ξ be an uncertain random variable. Then, its expected value is defined by

$E\left[\phantom{\rule{0.3em}{0ex}}\xi \right]=\underset{0}{\overset{+\infty }{\int }}\text{Ch}\left\{\xi \ge r\right\}\mathrm{d}r-\underset{-\infty }{\overset{0}{\int }}\text{Ch}\left\{\xi \le r\right\}\mathrm{d}r$

provided that at least one of the two integrals is finite.

Let Φ denote the chance distribution of ξ. Liu [32] proved the following formula to calculate the expected value of an uncertain random variable from its chance distribution: if E[ξ] exists, then

$E\left[\phantom{\rule{0.3em}{0ex}}\xi \right]=\underset{0}{\overset{+\infty }{\int }}\left(1-\Phi \left(x\right)\right)\mathrm{d}x-\underset{-\infty }{\overset{0}{\int }}\Phi \left(x\right)\mathrm{d}\mathrm{x.}$

Let ξ be an uncertain random variable with regular chance distribution Φ. Liu [32] proved the following formula to calculate the expected value of an uncertain random variable from its inverse chance distribution: if E[ξ] exists, then

$E\left[\phantom{\rule{0.3em}{0ex}}\xi \right]=\underset{0}{\overset{1}{\int }}{\Phi }^{-1}\left(\alpha \right)\mathrm{d}\mathrm{\alpha .}$

#### Theorem 5.

(Liu [32]) Let η1, η2,⋯, η m be independent random variables with probability distributions Ψ1, Ψ2,⋯, Ψ m , respectively, and let τ1, τ2,⋯, τ n be uncertain variables. Then, the uncertain random variable

$\xi =f\left({\eta }_{1},{\eta }_{2},\cdots \phantom{\rule{0.3em}{0ex}},{\eta }_{m},{\tau }_{1},{\tau }_{2},\cdots \phantom{\rule{0.3em}{0ex}},{\tau }_{n}\right)$

has an expected value

$E\left[\phantom{\rule{0.3em}{0ex}}\xi \right]=\underset{{\mathcal{R}}^{m}}{\int }E\left[\phantom{\rule{0.3em}{0ex}}\phantom{\rule{0.3em}{0ex}}f\left(\phantom{\rule{0.3em}{0ex}}{y}_{1},\cdots \phantom{\rule{0.3em}{0ex}},{y}_{m},{\tau }_{1},\cdots \phantom{\rule{0.3em}{0ex}},{\tau }_{n}\right)\right]\mathrm{d}{\Psi }_{1}\left(\phantom{\rule{0.3em}{0ex}}{y}_{1}\right)\cdots \mathrm{d}{\Psi }_{m}\left(\phantom{\rule{0.3em}{0ex}}{y}_{m}\right)$

where E[ f(y1,⋯, y m , τ1,⋯, τ n )] is the expected value of the uncertain variable f(y1,⋯, y m , τ1,⋯, τ n ) for any real numbers y1, y2,⋯, y m .

Meanwhile, Liu [32] proved the linearity of the expected value operator, that is,

$E\left[\phantom{\rule{0.3em}{0ex}}\eta +\tau \right]=E\left[\phantom{\rule{0.3em}{0ex}}\eta \right]+E\left[\phantom{\rule{0.3em}{0ex}}\tau \right]$

where η is a random variable and τ is an uncertain variable.
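Theorem 5 and this linearity property can be checked numerically for ξ = η + τ. The sketch below again assumes the illustrative distributions η ~ Uniform(0, 1) and τ ~ L(0, 2), so that E[η] = 0.5 and E[τ] = 1; these are not the paper's data.

```python
# Check of E[eta + tau] = E[eta] + E[tau] via Theorem 5, assuming
# eta ~ Uniform(0, 1) (random) and tau ~ L(0, 2) (uncertain).

def expected_linear(a, b):
    # expected value of a linear uncertain variable L(a, b)
    return (a + b) / 2.0

def e_xi(n=1000):
    # Theorem 5: E[xi] = integral_0^1 E[y + tau] dy, with
    # E[y + tau] = y + E[tau]; midpoint rule over y
    return sum(((k + 0.5) / n) + expected_linear(0.0, 2.0) for k in range(n)) / n

print(e_xi())        # -> 1.5
print(0.5 + 1.0)     # E[eta] + E[tau] = 1.5, confirming linearity
```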

In application, Liu [32] founded uncertain random programming in 2013. As an extension, Zhou et al. [37] proposed uncertain random multi-objective programming for optimizing multiple, incommensurable, and conflicting objectives. After that, uncertain random programming developed steadily and was applied widely: Qin [38] proposed uncertain random goal programming in order to satisfy as many goals as possible in the order specified, and Ke [39] proposed uncertain random multilevel programming for studying decentralized decision systems in which the leader and followers may have their own decision variables and objective functions. In order to quantify the risk of uncertain random systems, Liu and Ralescu [40] invented the tool of uncertain random risk analysis. Wen et al. [41] presented the tool of uncertain random reliability analysis for dealing with uncertain random systems.

## Uncertain random network

In this section, we introduce the definition of an uncertain random network and the chance distribution of the shortest path of an uncertain random network.

### Definition 10.

(Liu [20]) Assume $\mathcal{N}$ is the collection of nodes, $\mathcal{U}$ is the collection of uncertain arcs, $\mathcal{R}$ is the collection of random arcs, and $\mathcal{C}$ is the collection of uncertain and random arc capacities. Then, the quartette $\left(\mathcal{N},\mathcal{U},\mathcal{R},\mathcal{C}\right)$ is said to be an uncertain random network.

In this paper, we assume that the uncertain random network is of order n with a collection of nodes $\mathcal{N}=\left\{1,2,\cdots \phantom{\rule{0.3em}{0ex}},n\right\}$, where ‘1’ is the source node and ‘n’ is the destination node. We define two collections of arcs,

$\mathcal{U}=\left\{\left(i,j\right)|\left(i,j\right)\text{are uncertain arcs}\right\},$
$\mathcal{R}=\left\{\left(i,j\right)|\left(i,j\right)\text{are random arcs}\right\}.$

Note that all deterministic arcs are regarded as special uncertain ones. Let C i j denote the capacity of arc (i, j) for $\left(i,j\right)\in \mathcal{U}\cup \mathcal{R}$. Then, C i j is an uncertain variable if $\left(i,j\right)\in \mathcal{U}$ and a random variable if $\left(i,j\right)\in \mathcal{R}$. Write $\mathcal{C}=\left\{{C}_{\mathit{\text{ij}}}|\left(i,j\right)\in \mathcal{U}\cup \mathcal{R}\right\}$.

Figure 1 shows an uncertain random network $\left(\mathcal{N},\mathcal{U},\mathcal{R},\mathcal{C}\right)$ of order 6 in which

$\mathcal{N}=\left\{1,2,3,4,5,6\right\},$
$\mathcal{U}=\left\{\left(1,2\right),\left(2,5\right),\left(3,5\right),\left(5,6\right)\right\},$
$\mathcal{R}=\left\{\left(1,3\right),\left(3,4\right),\left(2,4\right),\left(4,6\right)\right\},$
$\mathcal{C}=\left\{{C}_{12},{C}_{13},{C}_{24},{C}_{25},{C}_{34},{C}_{35},{C}_{46},{C}_{56}\right\}.$

The uncertain random network degenerates to a random network (Frank and Hakimi [1]) if all capacities are random variables and degenerates to an uncertain network (Liu [22]) if all capacities are uncertain variables.

### Theorem 6.

(Shortest path chance distribution, Liu [20]) Let $\left(\mathcal{N},\mathcal{U},\mathcal{R},\mathcal{C}\right)$ be an uncertain random network. Assume that the uncertain capacities ξ i j have regular uncertainty distributions Υ i j for $\left(i,j\right)\in \mathcal{U}$ and the random capacities ξ i j have probability distributions Ψ i j for $\left(i,j\right)\in \mathcal{R}$, respectively. Then, the shortest path from a source node to a destination node has a chance distribution

$\Phi \left(x\right)=\underset{0}{\overset{+\infty }{\int }}\cdots \underset{0}{\overset{+\infty }{\int }}F\left(x;{y}_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{R}\right)\prod _{\left(i,j\right)\in \mathcal{R}}\mathrm{d}{\Psi }_{\mathit{\text{ij}}}\left({y}_{\mathit{\text{ij}}}\right)$

where $F\left(x;{y}_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{R}\right)$ is the uncertainty distribution of uncertain variable $f\left(\phantom{\rule{0.3em}{0ex}}{y}_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{R},{\xi }_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{U}\right)$ and it is determined by its inverse uncertainty distribution

${F}^{-1}\left(\alpha ;{y}_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{R}\right)=f\left({Υ}_{\mathit{\text{ij}}}^{-1}\left(\alpha \right),\left(i,j\right)\in \mathcal{U},{y}_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{R}\right),$

and f can be calculated by Dijkstra’s algorithm.

## Maximum flow of uncertain random network

In this section, we will introduce some algorithms for the maximum flow problem and apply chance theory to the maximum flow problem in an uncertain random network. This is a distinct contribution compared with other optimization methods. We will illustrate that chance theory can serve as a powerful tool for dealing with the maximum flow in an uncertain random network.

### Maximum flow problem

The maximum flow problem is one of the most important problems in the study of networks. In the past five decades, many efficient algorithms for the maximum flow problem in a fixed network have emerged. The maximum flow algorithm was first investigated by Fulkerson and Dantzig [42] in 1955. Representative maximum flow algorithms are based on either augmenting paths or preflows. Augmenting path algorithms push flow along a path from the source to the sink in the residual network; they include the Ford-Fulkerson labeling algorithm [43] and Dinic’s blocking flow algorithm [44]. Edmonds and Karp [45] independently proposed augmenting flow along shortest paths in the Ford-Fulkerson algorithm. In order to reduce the number of augmentations, preflow-based algorithms, which push flow along individual edges in the residual network, were investigated; they include Karzanov’s blocking flow algorithm [46], which introduced the first preflow-push algorithm on layered networks, and the Goldberg-Tarjan push-relabel algorithm [10], which constructed distance labels instead of layered networks to improve the running time of preflow-push algorithms and described a very flexible generic preflow-push algorithm that performs push and relabel operations at active nodes. In short, for a classical network, the maximum flow can be calculated by any of the above algorithms.
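To fix ideas, here is a compact sketch of the Edmonds-Karp variant of the Ford-Fulkerson method (shortest augmenting paths found by breadth-first search); the capacity matrix and node labels are illustrative, not tied to any figure in this paper.

```python
from collections import deque

def max_flow(n, cap, s, t):
    """Edmonds-Karp: repeatedly augment along a shortest augmenting path
    in the residual network. cap is an n x n capacity matrix (mutated
    in place to the residual capacities)."""
    flow = 0
    while True:
        # breadth-first search for a shortest s-t path in the residual network
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:          # no augmenting path left: flow is maximum
            return flow
        # bottleneck capacity along the path found
        bottleneck, v = float('inf'), t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v])
            v = u
        # push the bottleneck, updating forward and reverse residual arcs
        v = t
        while v != s:
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
            v = u
        flow += bottleneck

cap = [
    [0, 3, 2, 0],   # arcs 0->1 (cap 3), 0->2 (cap 2)
    [0, 0, 1, 2],   # arcs 1->2 (cap 1), 1->3 (cap 2)
    [0, 0, 0, 3],   # arc  2->3 (cap 3)
    [0, 0, 0, 0],
]
print(max_flow(4, cap, 0, 3))  # -> 5
```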

How do we consider the maximum flow of an indeterminate network? For a random network, Fishman [9], Goldberg and Tarjan [10], and Nawathe and Rao [11] mainly used stochastic optimization to solve the maximum flow problem. For an uncertain network, Han et al. [17] gave the inverse uncertainty distribution of the maximum flow. In this paper, using chance theory, we will study the chance distribution of the maximum flow of an uncertain random network.

Here, we assume that the networks are directed with only one source and one sink. If the arc capacities of a network are given, then we can calculate the maximum flow f of the network using any of the above algorithms. Different capacities yield different but unique maximum flows; in other words, the maximum flow f is a function of the arc capacities. In [17], the authors proved that the maximum flow f is a continuous and strictly increasing function with respect to C i j , where C i j denotes the capacity of arc (i, j). For a random network, the maximum flow f is a random variable as a function of the arc capacities. For an uncertain network, it is an uncertain variable. Similarly, for an uncertain random network, the maximum flow f is an uncertain random variable as a function of the arc capacities.

### Chance distribution of maximum flow

Now, we employ chance theory to deal with this indeterministic factor. Define $\xi =\left\{{\xi }_{\mathit{\text{ij}}}|\left(i,j\right)\in \mathcal{U}\cup \mathcal{R}\right\}$. We denote the network with uncertain random arc capacities as $\left(\mathcal{N},\mathcal{U},\mathcal{R},\mathcal{C}\right)$; its maximum flow is f(ξ). Obviously, f(ξ) is an uncertain random variable. By Theorem 4, we can then obtain the chance distribution of the maximum flow from the source node to the sink node, as stated in the following theorem.

#### Theorem 7.

Let $\left(\mathcal{N},\mathcal{U},\mathcal{R},\mathcal{C}\right)$ be an uncertain random network. Assume that the uncertain capacities ξ i j have regular uncertainty distributions Υ i j for $\left(i,j\right)\in \mathcal{U}$ and the random capacities ξ i j have probability distributions Ψ i j for $\left(i,j\right)\in \mathcal{R}$, respectively. Then, the maximum flow $f\left({\xi }_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{U}\cup \mathcal{R}\right)$ has a chance distribution

$\Phi \left(x\right)=\underset{0}{\overset{+\infty }{\int }}\cdots \underset{0}{\overset{+\infty }{\int }}F\left(x;{y}_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{R}\right)\prod _{\left(i,j\right)\in \mathcal{R}}\mathrm{d}{\Psi }_{\mathit{\text{ij}}}\left({y}_{\mathit{\text{ij}}}\right)$

where $F\left(x;{y}_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{R}\right)$ is the uncertainty distribution of uncertain variable $f\left({y}_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{R},{\xi }_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{U}\right)$, and it is determined by its inverse uncertainty distribution

${F}^{-1}\left(\alpha ;{y}_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{R}\right)=f\left({Υ}_{\mathit{\text{ij}}}^{-1}\left(\alpha \right),\left(i,j\right)\in \mathcal{U},{y}_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{R}\right),$

and f may be calculated by the Ford-Fulkerson algorithm.

#### Proof.

By Definitions 6 and 8 and Theorem 4, we have

$\begin{array}{ll}\Phi \left(x\right)& =\text{Ch}\left\{\phantom{\rule{0.3em}{0ex}}f\left({\xi }_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{U}\cup \mathcal{R}\right)\le x\right\}\phantom{\rule{2em}{0ex}}\\ =\underset{0}{\overset{1}{\int }}\text{Pr}\left\{\omega \in \Omega |\mathcal{ℳ}\left\{\phantom{\rule{0.3em}{0ex}}f\left({\xi }_{\mathit{\text{ij}}}\left(\omega \right),\left(i,j\right)\in \mathcal{R},{\xi }_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{U}\right)\le x\right\}\ge r\right\}\mathrm{d}r\phantom{\rule{2em}{0ex}}\\ =\underset{0}{\overset{+\infty }{\int }}\cdots \underset{0}{\overset{+\infty }{\int }}\mathcal{ℳ}\left\{\phantom{\rule{0.3em}{0ex}}f\left({y}_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{R},{\xi }_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{U}\right)\le x\right\}\prod _{\left(i,j\right)\in \mathcal{R}}\mathrm{d}{\Psi }_{\mathit{\text{ij}}}\left({y}_{\mathit{\text{ij}}}\right)\phantom{\rule{2em}{0ex}}\\ =\underset{0}{\overset{+\infty }{\int }}\cdots \underset{0}{\overset{+\infty }{\int }}F\left(x;{y}_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{R}\right)\prod _{\left(i,j\right)\in \mathcal{R}}\mathrm{d}{\Psi }_{\mathit{\text{ij}}}\left({y}_{\mathit{\text{ij}}}\right)\phantom{\rule{2em}{0ex}}\end{array}$

where $F\left(x;{y}_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{R}\right)$ is the uncertainty distribution of uncertain variable $f\left({y}_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{R},{\xi }_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{U}\right)$ for any real numbers ${y}_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{R}$ and it is determined by its inverse uncertainty distribution

${F}^{-1}\left(\alpha ;{y}_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{R}\right)=f\left({Υ}_{\mathit{\text{ij}}}^{-1}\left(\alpha \right),\left(i,j\right)\in \mathcal{U},{y}_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{R}\right),$

and $f\left({Υ}_{\mathit{\text{ij}}}^{-1}\left(\alpha \right),\left(i,j\right)\in \mathcal{U},{y}_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{R}\right)$ is just the maximum flow of a deterministic network, and f is a strictly increasing function with respect to ξ i j , where ξ i j denote the capacities of the arcs (i, j). For each given α, f may be calculated by the Ford-Fulkerson algorithm or Dinic’s algorithm. The theorem is verified.
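As a minimal numerical illustration of this result, consider a two-arc series network with one uncertain arc and one random arc, so that the maximum flow is the minimum of the two capacities. The distributions below (τ ~ L(0, 2) uncertain, η ~ Uniform(0, 1) random) are illustrative assumptions, not the paper's examples.

```python
# Chance distribution of the maximum flow of a two-arc series network:
# max flow = min(tau, eta), with assumed tau ~ L(0, 2) and eta ~ Uniform(0, 1).

def upsilon(x):
    # uncertainty distribution of tau ~ L(0, 2)
    return min(max(x / 2.0, 0.0), 1.0)

def F(x, y):
    # uncertainty distribution of min(tau, y) for a fixed realization y of eta:
    # if y <= x the minimum is surely <= x; otherwise it equals Upsilon(x)
    return 1.0 if y <= x else upsilon(x)

def chance_cdf(x, n=2000):
    # Phi(x) = integral_0^1 F(x; y) dy, midpoint rule over y
    return sum(F(x, (k + 0.5) / n) for k in range(n)) / n

print(chance_cdf(0.5))   # -> 0.625 = 0.5 + 0.5 * Upsilon(0.5)
```

The printed value matches the closed form x + (1 − x)Υ(x) for x in [0, 1] under these assumptions.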

#### Remark 1.

If the uncertain random network becomes a random network, then the probability distribution of maximum flow is

$\Phi \left(x\right)=\underset{f\left({y}_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{R}\right)\le x}{\int }\prod _{\left(i,j\right)\in \mathcal{R}}\mathrm{d}{\Psi }_{\mathit{\text{ij}}}\left({y}_{\mathit{\text{ij}}}\right).$

#### Remark 2.

(Han et al. [17]) If the uncertain random network becomes an uncertain network, then the inverse uncertainty distribution of the maximum flow is

${\Phi }^{-1}\left(\alpha \right)=f\left({Υ}_{\mathit{\text{ij}}}^{-1}\left(\alpha \right),\left(i,j\right)\in \mathcal{U}\right).$

#### Example 1.

Consider a series uncertain random network with n arcs, as shown in Figure 2. Assume that the uncertain capacities ξ i j have regular uncertainty distributions Υ i j for $\left(i,j\right)\in \mathcal{U}$, and the random capacities ξ i j have probability distributions Ψ i j for $\left(i,j\right)\in \mathcal{R}$, respectively. Then, by Theorem 7, the maximum flow of the network $\left(\mathcal{N},\mathcal{U},\mathcal{R},\mathcal{C}\right)$ is an uncertain random variable and its chance distribution is

$\Phi \left(x\right)=\underset{0}{\overset{+\infty }{\int }}\cdots \underset{0}{\overset{+\infty }{\int }}F\left(x;{y}_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{R}\right)\prod _{\left(i,j\right)\in \mathcal{R}}\mathrm{d}{\Psi }_{\mathit{\text{ij}}}\left({y}_{\mathit{\text{ij}}}\right)$

where $F\left(x;{y}_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{R}\right)$ is determined by its inverse uncertainty distribution

${F}^{-1}\left(\alpha ;{y}_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{R}\right)=min\left\{\underset{\left(i,j\right)\in \mathcal{U}}{min}\phantom{\rule{0.3em}{0ex}}{Υ}_{\mathit{\text{ij}}}^{-1}\left(\alpha \right),\underset{\left(i,j\right)\in \mathcal{R}}{min}\phantom{\rule{0.3em}{0ex}}{y}_{\mathit{\text{ij}}}\right\}.$
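
In a series network, every unit of flow must traverse every arc, so the maximum flow is simply the smallest capacity. The formula above can be transcribed directly; the sketch below uses linear uncertainty distributions L(a, b), whose inverse is a + α(b − a), purely as an illustrative choice.

```python
def series_inverse(alpha, inv_upsilons, ys):
    """Inverse uncertainty distribution of the maximum flow of a series
    network: the minimum over Upsilon_ij^{-1}(alpha) on uncertain arcs
    and the realized capacities y_ij on random arcs."""
    return min(min(g(alpha) for g in inv_upsilons), min(ys))

# Linear uncertainty distribution L(a, b): inverse is a + alpha*(b - a)
lin = lambda a, b: (lambda alpha: a + alpha * (b - a))

# Two uncertain arcs L(2, 6) and L(3, 5), two realized random capacities
print(series_inverse(0.5, [lin(2, 6), lin(3, 5)], [4.5, 3.8]))  # -> 3.8
```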

#### Example 2.

Consider an uncertain random network with four arcs, as defined by Figure 3. Assume that the uncertain capacities τ1, τ2 have regular uncertainty distributions Υ1, Υ2 and the random capacities ξ1, ξ2 have probability distributions Ψ1, Ψ2, respectively. Then, by Theorem 6, the maximum flow of the network $\left(\mathcal{N},\mathcal{U},\mathcal{R},\mathcal{C}\right)$ is an uncertain random variable and its chance distribution is

$\Phi \left(x\right)=\underset{0}{\overset{+\infty }{\int }}\underset{0}{\overset{+\infty }{\int }}F\left(x;{y}_{1},{y}_{2}\right)\mathrm{d}{\Psi }_{1}\left({y}_{1}\right)\mathrm{d}{\Psi }_{2}\left({y}_{2}\right)$

where F(x;y1, y2) is determined by its inverse uncertainty distribution

${F}^{-1}\left(\alpha ;{y}_{1},{y}_{2}\right)={y}_{1}\wedge {Υ}_{1}^{-1}\left(\alpha \right)+{y}_{2}\wedge {Υ}_{2}^{-1}\left(\alpha \right).$
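
Reading the formula (Figure 3 is not reproduced here), the topology appears to be two parallel source-sink paths, each containing one random and one uncertain arc, so each path's capacity is a minimum and the two paths add. A direct transcription, again with illustrative linear uncertainty distributions:

```python
def F_inv_example2(alpha, y1, y2, ups1_inv, ups2_inv):
    """F^{-1}(alpha; y1, y2) = (y1 ^ Upsilon1^{-1}(alpha))
                             + (y2 ^ Upsilon2^{-1}(alpha)),
    where ^ denotes minimum: each parallel path is limited by its
    tighter arc, and the paths' capacities add."""
    return min(y1, ups1_inv(alpha)) + min(y2, ups2_inv(alpha))

# Linear uncertainty distribution L(a, b): inverse is a + alpha*(b - a)
lin = lambda a, b: (lambda alpha: a + alpha * (b - a))

print(F_inv_example2(0.5, 3.0, 8.0, lin(0, 10), lin(0, 10)))  # -> 8.0
```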

#### Example 3.

Consider an uncertain random network with five arcs, as defined by Figure 4. Assume that the uncertain capacities τ1, τ2 have regular uncertainty distributions Υ1, Υ2 and the random capacities ξ1, ξ2, ξ3 have probability distributions Ψ1, Ψ2, Ψ3, respectively. Then, by Theorem 6, the maximum flow of the network $\left(\mathcal{N},\mathcal{U},\mathcal{R},\mathcal{C}\right)$ is an uncertain random variable and its chance distribution is

$\Phi \left(x\right)=\underset{0}{\overset{+\infty }{\int }}\underset{0}{\overset{+\infty }{\int }}\underset{0}{\overset{+\infty }{\int }}F\left(x;{y}_{1},{y}_{2},{y}_{3}\right)\mathrm{d}{\Psi }_{1}\left({y}_{1}\right)\mathrm{d}{\Psi }_{2}\left({y}_{2}\right)\mathrm{d}{\Psi }_{3}\left({y}_{3}\right)$

where F(x;y1, y2, y3) is determined by its inverse uncertainty distribution

${F}^{-1}\left(\alpha ;{y}_{1},{y}_{2},{y}_{3}\right)=\left(\left(\left({y}_{1}-{Υ}_{1}^{-1}\left(\alpha \right)\right)\vee 0\right)\wedge {y}_{3}+{y}_{2}\right)\wedge {Υ}_{2}^{-1}\left(\alpha \right)+\left({y}_{1}\wedge {Υ}_{1}^{-1}\left(\alpha \right)\right).$
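
The nested minima and maxima in this formula can be transcribed mechanically, with x ∧ y read as min and x ∨ y as max. The sketch below does so; the degenerate (constant) distributions and the specific capacities are made up solely to give a checkable value.

```python
def F_inv_example3(alpha, y1, y2, y3, u1_inv, u2_inv):
    """Direct transcription of the Example 3 inverse uncertainty
    distribution, with x ^ y = min(x, y) and x v y = max(x, y)."""
    u1, u2 = u1_inv(alpha), u2_inv(alpha)
    # Excess of y1 over Upsilon1^{-1}(alpha), rerouted through arc y3,
    # joins y2 and is capped by Upsilon2^{-1}(alpha)
    upper = min(min(max(y1 - u1, 0.0), y3) + y2, u2)
    # Flow through the arc with uncertain capacity Upsilon1
    lower = min(y1, u1)
    return upper + lower

const = lambda c: (lambda alpha: c)   # degenerate distribution, for checking
print(F_inv_example3(0.5, 8.0, 4.0, 2.0, const(5.0), const(6.0)))  # -> 11.0
```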

## Expected value of maximum flow

In an uncertain random network, we can obtain the chance distribution of the maximum flow, but the distribution itself is difficult to calculate. Moreover, we often do not need the whole distribution of the maximum flow; the average value of the flow is enough, so it suffices to calculate the expected value of the maximum flow. By Theorem 5, we have the following theorem.

### Theorem 7.

Let $\left(\mathcal{N},\mathcal{U},\mathcal{R},\mathcal{C}\right)$ be an uncertain random network. Assume that the uncertain capacities ξ i j have regular uncertainty distributions Υ i j for $\left(i,j\right)\in \mathcal{U}$ and the random capacities ξ i j have probability distributions Ψ i j for $\left(i,j\right)\in \mathcal{R}$, respectively. Then, the maximum flow $\xi =f\left({\xi }_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{U}\cup \mathcal{R}\right)$ has an expected value

$E\left[\phantom{\rule{0.3em}{0ex}}\xi \right]=\underset{0}{\overset{+\infty }{\int }}\cdots \underset{0}{\overset{+\infty }{\int }}\underset{0}{\overset{1}{\int }}f\left({Υ}_{\mathit{\text{ij}}}^{-1}\left(\alpha \right),\left(i,j\right)\in \mathcal{U},{y}_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{R}\right)\mathrm{d}\alpha \prod _{\left(i,j\right)\in \mathcal{R}}\mathrm{d}{\Psi }_{\mathit{\text{ij}}}\left({y}_{\mathit{\text{ij}}}\right).$

### Proof.

Since the maximum flow $\xi =f\left({\xi }_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{U}\cup \mathcal{R}\right)$ is a strictly increasing function with respect to ${\xi }_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{U}$, we have

$\begin{array}{ll}E\left[f\left({y}_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{R},{\xi }_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{U}\right)\right]& =\underset{0}{\overset{1}{\int }}f\left({Υ}_{\mathit{\text{ij}}}^{-1}\left(\alpha \right),\left(i,j\right)\in \mathcal{U},{y}_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{R}\right)\mathrm{d}\alpha .\phantom{\rule{2em}{0ex}}\end{array}$

It follows from Theorem 5 that

$\begin{array}{ll}E\left[\phantom{\rule{0.3em}{0ex}}\xi \right]& =\underset{0}{\overset{+\infty }{\int }}\cdots \underset{0}{\overset{+\infty }{\int }}E\left[f\left({y}_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{R},{\xi }_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{U}\right)\right]\prod _{\left(i,j\right)\in \mathcal{R}}\mathrm{d}{\Psi }_{\mathit{\text{ij}}}\left({y}_{\mathit{\text{ij}}}\right)\phantom{\rule{2em}{0ex}}\\ =\underset{0}{\overset{+\infty }{\int }}\cdots \underset{0}{\overset{+\infty }{\int }}\underset{0}{\overset{1}{\int }}f\left({Υ}_{\mathit{\text{ij}}}^{-1}\left(\alpha \right),\left(i,j\right)\in \mathcal{U},{y}_{\mathit{\text{ij}}},\left(i,j\right)\in \mathcal{R}\right)\mathrm{d}\alpha \prod _{\left(i,j\right)\in \mathcal{R}}\mathrm{d}{\Psi }_{\mathit{\text{ij}}}\left({y}_{\mathit{\text{ij}}}\right).\phantom{\rule{2em}{0ex}}\end{array}$

The theorem is verified.
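
The expected-value formula can be evaluated numerically: the outer integrals over the Ψ i j amount to averaging over samples of the random capacities, and the inner integral over α can be handled by quadrature. Below is a minimal sketch, not from the paper; the Example 2-style flow function, the uniform random capacities, and the linear uncertainty distributions are all illustrative assumptions.

```python
import random

def expected_max_flow(f, inv_upsilons, samplers, n_samples=2000, n_alpha=64):
    """Estimate E[xi]: Monte Carlo sampling of the random capacities y
    (the outer integrals over the Psi_ij) combined with a midpoint rule
    on (0, 1) (the inner integral over alpha)."""
    total = 0.0
    for _ in range(n_samples):
        y = [draw() for draw in samplers]
        # Inner integral: average f(Upsilon^{-1}(alpha), y) over alpha
        inner = sum(f([g((k + 0.5) / n_alpha) for g in inv_upsilons], y)
                    for k in range(n_alpha)) / n_alpha
        total += inner
    return total / n_samples

# Example 2-style flow function: f(u, y) = min(y1, u1) + min(y2, u2).
# Uniform(0, 5) random capacities and linear uncertain capacities L(1, 4)
# are illustrative choices; L(a, b) has inverse a + alpha*(b - a).
lin = lambda a, b: (lambda alpha: a + alpha * (b - a))
rng = random.Random(0)
f2 = lambda u, y: min(y[0], u[0]) + min(y[1], u[1])
est = expected_max_flow(f2, [lin(1, 4), lin(1, 4)],
                        [lambda: rng.uniform(0, 5)] * 2)
```

As a sanity check, with degenerate (point-mass) distributions every integral collapses and the estimate reduces to f evaluated at the constants.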

In Example 1, by Theorem 7, the expected value of the maximum flow is

$E\left[\phantom{\rule{0.3em}{0ex}}\xi \right]=\underset{0}{\overset{+\infty }{\int }}\cdots \underset{0}{\overset{+\infty }{\int }}\underset{0}{\overset{1}{\int }}min\left\{\underset{\left(i,j\right)\in \mathcal{U}}{min}\phantom{\rule{0.3em}{0ex}}{Υ}_{\mathit{\text{ij}}}^{-1}\left(\alpha \right),\underset{\left(i,j\right)\in \mathcal{R}}{min}\phantom{\rule{0.3em}{0ex}}{y}_{\mathit{\text{ij}}}\right\}\mathrm{d}\alpha \prod _{\left(i,j\right)\in \mathcal{R}}\mathrm{d}{\Psi }_{\mathit{\text{ij}}}\left({y}_{\mathit{\text{ij}}}\right).$

In Example 2, by Theorem 7, the expected value of the maximum flow is

$E\left[\phantom{\rule{0.3em}{0ex}}\xi \right]=\underset{0}{\overset{+\infty }{\int }}\underset{0}{\overset{+\infty }{\int }}\underset{0}{\overset{1}{\int }}\left({y}_{1}\wedge {Υ}_{1}^{-1}\left(\alpha \right)+{y}_{2}\wedge {Υ}_{2}^{-1}\left(\alpha \right)\right)\mathrm{d}\alpha \mathrm{d}{\Psi }_{1}\left({y}_{1}\right)\mathrm{d}{\Psi }_{2}\left({y}_{2}\right).$

In Example 3, by Theorem 7, the expected value of the maximum flow is

$E\left[\phantom{\rule{0.3em}{0ex}}\xi \right]=\underset{0}{\overset{+\infty }{\int }}\underset{0}{\overset{+\infty }{\int }}\underset{0}{\overset{+\infty }{\int }}\underset{0}{\overset{1}{\int }}\left(\left(\left(\left({y}_{1}-{Υ}_{1}^{-1}\left(\alpha \right)\right)\vee 0\right)\wedge {y}_{3}+{y}_{2}\right)\wedge {Υ}_{2}^{-1}\left(\alpha \right)+\left({y}_{1}\wedge {Υ}_{1}^{-1}\left(\alpha \right)\right)\right)\mathrm{d}\alpha \mathrm{d}{\Psi }_{1}\left({y}_{1}\right)\mathrm{d}{\Psi }_{2}\left({y}_{2}\right)\mathrm{d}{\Psi }_{3}\left({y}_{3}\right).$

## Conclusions

Indeterministic factors often appear in network flow problems. In the past, probability theory and uncertainty theory have been employed separately to deal with such factors; chance theory provides a new approach for complex networks in which both kinds of indeterminacy appear together. In this paper, we investigated the maximum flow problem of a network in an uncertain random environment. Under the framework of chance theory, we derived the chance distribution of the maximum flow of an uncertain random network as well as its expected value. Several examples were presented to illustrate the theoretical results.