Introduction

Many systems can be viewed as, and are called, networks, although their physical appearances differ considerably. Most of these systems are products of human civilization and are of great importance for its functioning: transportation networks such as road, rail, and airline networks; pipeline and canal networks for transporting water, waste water, oil, and natural gas; communication networks of telephones and computers; and even natural systems such as caves and rivers can be viewed as networks. To see the common structure behind these different systems, they must be abstracted from their physical appearance. Their underlying structure may then be seen as a collection of vertices, which might be road crossings, railway stations, airports, pumping stations, and so on, together with a collection of arcs, which might be roads, canals, telephone cables, etc., connecting all or some of the vertices, where every arc carries a weight or capacity. In a deterministic network, the vertices, arcs, and arc capacities are all deterministic. The deterministic network was developed and widely applied in the last century. In practice, however, different types of indeterminacy must be taken into account for a variety of reasons.

The random network was first investigated by Frank and Hakimi [1] in 1965 for modeling communication networks with random capacities. From then on, the random network was well developed and widely applied. For example, Frank [2], Mirchandani [3], and Sigal et al. [4] studied the probability distribution of the shortest path when the arc weights of a network are random variables. Frank and Frisch [5] considered how to determine the maximum flow probability distribution in networks where each capacity is a continuous random variable. Doulliez [6] studied multi-terminal networks with discrete probabilistic branch capacities. In addition, some researchers have tried to give lower and upper bounds on the expected maximum flow of a network. Carey and Hendrickson [7] and Onaga [8] presented efficient methods to find a lower bound in general directed networks. Furthermore, Fishman [9], Goldberg and Tarjan [10], and Nawathe and Rao [11] mainly used stochastic optimization to solve the maximum flow problem in a random network. Other researchers, such as Fu and Rillet [12], Hall [13], and Lou [14], have done a lot of work in the field of random networks.

The uncertain network was first explored by Liu [15] in 2009. Gao [16] investigated solutions to the α-shortest path and the most shortest path in an uncertain network in 2011. Besides, the maximum flow problem was discussed by Han et al. [17], the uncertain minimum cost flow problem was dealt with by Ding [18], and the Chinese postman problem was explored by Zhang and Peng [19] for an uncertain network.

In many cases, uncertainty and randomness simultaneously appear in a complex system. In order to describe such a system, Liu [20] gave the concept of an uncertain random network, in which some weights are random variables and others are uncertain variables, and at the same time studied the chance distribution of the shortest path of an uncertain random network.

In this paper, we will give the chance distribution of the maximum flow of an uncertain random network. The remainder of this paper is organized as follows. In the section ‘Preliminaries’, some basic concepts and properties of uncertainty theory and chance theory used throughout this paper are introduced. In the section ‘Uncertain random network’, the uncertain random network is recalled. In the section ‘Maximum flow of uncertain random network’, the chance distribution of the maximum flow is proved. The section ‘Expected value of maximum flow’ proposes the expected value of the maximum flow in an uncertain random network. The section ‘Conclusions’ gives a brief summary of this paper.

Preliminaries

In this section, we first review some concepts of uncertainty theory, including uncertain measure, uncertain variable, uncertainty distribution, and operational law. Then, we introduce some useful definitions and properties about uncertain random variable, chance measure, chance distribution, and operational law.

Uncertainty theory

This subsection reviews some basic concepts of uncertainty theory. In reality, because of a lack of information, no samples may be available for estimating a probability distribution. In this situation, we have to invite some domain experts to evaluate the belief degree that each event will happen, and these belief degrees are modeled by uncertain variables rather than random variables. If we insist on using probability theory to deal with this kind of indeterminacy, counterintuitive results will occur [21]. In order to deal with such indeterministic phenomena, Liu [21] founded uncertainty theory in 2007 and first presented the uncertain measure as a set function satisfying four axioms. As a fundamental concept in uncertainty theory, the uncertain variable was presented by Liu [21] in 2007. Liu [22] proposed the concepts of uncertainty distribution and inverse uncertainty distribution. After that, many researchers studied uncertainty theory widely and made significant progress. Gao [23] studied the properties of continuous uncertain measures. Peng and Iwamura [24] proved a sufficient and necessary condition for uncertainty distributions. In addition, the concept of independence was proposed by Liu [25]. Afterwards, Liu [22] presented the operational law of uncertain variables. In order to rank uncertain variables, Liu [21] proposed the concept of the expected value of an uncertain variable. The linearity of the expected value operator was verified by Liu [22]. As an important contribution, Liu and Ha [26] derived a useful formula for calculating the expected values of strictly monotone functions of independent uncertain variables. Up to now, theory and practice have shown that uncertainty theory is an efficient tool to deal with indeterministic phenomena, especially expert belief degrees. Liu [27] presented the counterexample of a truck crossing over a bridge, showing that it is inappropriate to model belief degrees by probability theory; this is one reason why uncertainty theory is needed. The similarities and differences between the uncertainty concept and the standard probabilistic concept, as well as other concepts of uncertainty, can be found in [21].

Definition 1.

(Liu [21]) Let ℒ be a σ-algebra on a nonempty set Γ. A set function ℳ: ℒ → [0,1] is called an uncertain measure if it satisfies the following axioms:

Axiom 1: (Normality Axiom) ℳ{Γ} = 1 for the universal set Γ.

Axiom 2: (Duality Axiom) ℳ{Λ} + ℳ{Λ^c} = 1 for any event Λ.

Axiom 3: (Subadditivity Axiom) For every countable sequence of events Λ1, Λ2,⋯, we have

$$\mathcal{M}\left\{\bigcup_{i=1}^{\infty}\Lambda_i\right\} \le \sum_{i=1}^{\infty}\mathcal{M}\{\Lambda_i\}.$$

Besides, the product uncertain measure on the product σ-algebra is defined by the following product axiom.

Axiom 4: (Product Axiom) (Liu [25]) Let (Γ_k, ℒ_k, ℳ_k) be uncertainty spaces for k = 1, 2,⋯. The product uncertain measure ℳ is an uncertain measure satisfying

$$\mathcal{M}\left\{\prod_{k=1}^{\infty}\Lambda_k\right\} = \bigwedge_{k=1}^{\infty}\mathcal{M}_k\{\Lambda_k\}$$

where Λ_k are arbitrarily chosen events from ℒ_k for k = 1, 2,⋯, respectively.

Definition 2.

(Liu [21]) An uncertain variable ξ is a measurable function from an uncertainty space (Γ, ℒ, ℳ) to the set of real numbers, i.e., for any Borel set B of real numbers, the set

$$\{\xi \in B\} = \{\gamma \in \Gamma \mid \xi(\gamma) \in B\}$$

is an event.

An uncertain variable is essentially a measurable function from an uncertainty space to the set of real numbers. In order to describe an uncertain variable, the concept of uncertainty distribution is defined as follows.

Definition 3.

(Liu [21]) The uncertainty distribution of an uncertain variable ξ is defined by

$$\Phi(x) = \mathcal{M}\{\xi \le x\}$$

for any x∈ℜ.

Definition 4.

(Liu [21]) Let ξ be an uncertain variable with regular uncertainty distribution Φ. Then, the inverse function Φ−1 is called the inverse uncertainty distribution of ξ.

The distribution of a monotone function of uncertain variables can be obtained by the following theorem.

Theorem 1.

(Liu [22]) Let ξ1, ξ2,⋯, ξ n be independent uncertain variables with uncertainty distributions Φ1, Φ2,⋯, Φ n , respectively. If f(ξ1, ξ2,⋯, ξ n ) is strictly increasing with respect to ξ1, ξ2,⋯, ξ m and strictly decreasing with respect to ξm+1, ξm+2,⋯, ξ n , then ξ=f(ξ1, ξ2,⋯, ξ n ) is an uncertain variable with an inverse uncertainty distribution

$$\Phi^{-1}(\alpha) = f\left(\Phi_1^{-1}(\alpha), \cdots, \Phi_m^{-1}(\alpha), \Phi_{m+1}^{-1}(1-\alpha), \cdots, \Phi_n^{-1}(1-\alpha)\right).$$
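As a small numerical illustration (not part of the original paper), the following sketch applies this operational law to ξ = ξ1 ∧ ξ2, the minimum of two independent uncertain variables with linear uncertainty distributions; the particular distributions L(2, 6) and L(3, 5) are illustrative assumptions.

```python
# A minimal sketch of Theorem 1 for f(x1, x2) = min(x1, x2), which is
# strictly increasing in both arguments.  The linear uncertainty
# distribution L(a, b) has inverse Phi^{-1}(alpha) = a + alpha * (b - a).
# All numeric parameters below are illustrative, not from the paper.

def linear_inv(a, b):
    """Inverse uncertainty distribution of a linear uncertain variable L(a, b)."""
    return lambda alpha: a + alpha * (b - a)

def min_inv(inv1, inv2):
    """Inverse distribution of xi = min(xi1, xi2) via the operational law."""
    return lambda alpha: min(inv1(alpha), inv2(alpha))

inv1 = linear_inv(2.0, 6.0)   # xi1 ~ L(2, 6)
inv2 = linear_inv(3.0, 5.0)   # xi2 ~ L(3, 5)
inv = min_inv(inv1, inv2)

for alpha in (0.1, 0.5, 0.9):
    print(alpha, inv(alpha))
```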

Definition 5.

(Liu [21]) The expected value of an uncertain variable ξ is defined by

$$E[\xi] = \int_0^{+\infty}\mathcal{M}\{\xi \ge x\}\,dx - \int_{-\infty}^{0}\mathcal{M}\{\xi \le x\}\,dx$$

provided that at least one of the two integrals is finite.

Theorem 2.

(Liu [21]) Let ξ be an uncertain variable with uncertainty distribution Φ. If the expected value exists, then

$$E[\xi] = \int_0^{+\infty}\left(1-\Phi(x)\right)dx - \int_{-\infty}^{0}\Phi(x)\,dx.$$

Based on this result, Liu [25] proved the linearity property of the expected value operator: for two independent uncertain variables ξ and η, we have E[aξ + bη] = aE[ξ] + bE[η], where a and b are real numbers.

In 2010, Liu [22] first gave a formula for the expected value in terms of the inverse uncertainty distribution, that is,

$$E[\xi] = \int_0^1 \Phi^{-1}(\alpha)\,d\alpha.$$

Liu and Ha [26] proposed a generalized formula for expected value by inverse uncertainty distribution.

Theorem 3.

(Liu and Ha [26]) Let ξ1, ξ2,⋯, ξ_n be independent uncertain variables with uncertainty distributions Φ1, Φ2,⋯, Φ_n, respectively. If f(ξ1, ξ2,⋯, ξ_n) is strictly increasing with respect to ξ1,⋯, ξ_m and strictly decreasing with respect to ξ_{m+1},⋯, ξ_n, then the uncertain variable ξ = f(ξ1, ξ2,⋯, ξ_n) has an expected value

$$E[\xi] = \int_0^1 f\left(\Phi_1^{-1}(\alpha), \cdots, \Phi_m^{-1}(\alpha), \Phi_{m+1}^{-1}(1-\alpha), \cdots, \Phi_n^{-1}(1-\alpha)\right)d\alpha.$$
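To make this formula concrete, here is a minimal numerical sketch (an illustration, not from the paper) that approximates the integral in Theorem 3 with a midpoint rule for f(x1, x2) = min(x1, x2) and the same illustrative linear distributions as above.

```python
# A minimal sketch of Theorem 3: approximate
#   E[xi] = integral over alpha in (0, 1) of
#           f(Phi_1^{-1}(alpha), ..., Phi_m^{-1}(alpha),
#             Phi_{m+1}^{-1}(1 - alpha), ..., Phi_n^{-1}(1 - alpha))
# with a midpoint rule.  Here f(x1, x2) = min(x1, x2) (increasing in both),
# and the linear distributions are illustrative choices.

def expected_value(inv_increasing, inv_decreasing, f, n_grid=10_000):
    """Midpoint-rule approximation of the expected value in Theorem 3."""
    total = 0.0
    for k in range(n_grid):
        alpha = (k + 0.5) / n_grid
        args = [inv(alpha) for inv in inv_increasing] + \
               [inv(1.0 - alpha) for inv in inv_decreasing]
        total += f(*args)
    return total / n_grid

linear_inv = lambda a, b: (lambda alpha: a + alpha * (b - a))
inv1, inv2 = linear_inv(2.0, 6.0), linear_inv(3.0, 5.0)

# Expected value of min(L(2, 6), L(3, 5)); the exact value is 3.75.
print(expected_value([inv1, inv2], [], lambda x1, x2: min(x1, x2)))
```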

Meanwhile, Liu [21] presented the concept of the variance of uncertain variables and also proposed some formulas to calculate the variance through the uncertainty distribution. Recently, Yao [28] proposed a formula to calculate the variance using the inverse uncertainty distribution. Sheng and Kar [29] verified some results on the moments of uncertain variables through the inverse uncertainty distribution.

Chance theory

Probability theory was developed by Kolmogorov [30] in 1933. Since then, probability theory has become an important branch of mathematics for modeling frequencies, while uncertainty theory was founded by Liu [21] in 2007 and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling belief degrees. However, in many cases, uncertainty and randomness simultaneously appear in a complex system. In order to describe this phenomenon, Liu [31] first proposed chance theory in 2013, a mathematical methodology for modeling complex systems with both uncertainty and randomness, including chance measure, uncertain random variable, chance distribution, operational law, expected value, and so on. As an important contribution to chance theory, Liu [32] presented an operational law of uncertain random variables. Meanwhile, Guo and Wang [33] proposed a formula to calculate the variance through the chance distribution, and Sheng and Yao [34] verified some results on variance through the inverse chance distribution. Furthermore, in order to deal with uncertain random phenomena evolving in time, Gao and Yao [35] presented an uncertain random process and an uncertain random renewal process in the light of chance theory.

Let (Γ, ℒ, ℳ) be an uncertainty space and (Ω, A, Pr) be a probability space. Then, the chance space refers to the product (Γ, ℒ, ℳ) × (Ω, A, Pr).

Definition 6.

(Liu [31]) Let (Γ, ℒ, ℳ) × (Ω, A, Pr) be a chance space, and let Θ ∈ ℒ × A be an uncertain random event. Then, the chance measure of Θ is defined as

$$\mathrm{Ch}\{\Theta\} = \int_0^1 \Pr\left\{\omega \in \Omega \mid \mathcal{M}\{\gamma \in \Gamma \mid (\gamma,\omega) \in \Theta\} \ge r\right\}dr.$$

Liu [31] proved that a chance measure satisfies normality, duality, and monotonicity properties, that is

(i) Ch{Γ × Ω} = 1;

(ii) Ch{Θ} + Ch{Θ^c} = 1 for any event Θ;

(iii) Ch{Θ1} ≤ Ch{Θ2} for any events Θ1 ⊂ Θ2.

Besides, Hou [36] proved the subadditivity of chance measure, that is,

$$\mathrm{Ch}\left\{\bigcup_{i=1}^{\infty}\Theta_i\right\} \le \sum_{i=1}^{\infty}\mathrm{Ch}\{\Theta_i\}$$

for a sequence of events Θ1, Θ2,⋯.

Definition 7.

(Liu [31]) An uncertain random variable is a measurable function ξ from a chance space (Γ, ℒ, ℳ) × (Ω, A, Pr) to the set of real numbers, i.e., {ξ ∈ B} is an event for any Borel set B.

Random variables and uncertain variables can be regarded as special cases of uncertain random variables. Let η be a random variable, and let τ be an uncertain variable. Then, η+τ and η×τ are both uncertain random variables.

To calculate the chance measure, Liu [31] presented a definition of chance distribution.

Definition 8.

(Liu [31]) Let ξ be an uncertain random variable. Then, its chance distribution is defined by

$$\Phi(x) = \mathrm{Ch}\{\xi \le x\}$$

for any x ∈ ℜ.

The chance distribution of a random variable is just its probability distribution, and the chance distribution of an uncertain variable is just its uncertainty distribution.

Theorem 4.

(Liu [32]) Let η1, η2,⋯, η_m be independent random variables with probability distributions Ψ1, Ψ2,⋯, Ψ_m, respectively, and let τ1, τ2,⋯, τ_n be uncertain variables. Then, the uncertain random variable

$$\xi = f(\eta_1, \eta_2, \cdots, \eta_m, \tau_1, \tau_2, \cdots, \tau_n)$$

has a chance distribution

$$\Phi(x) = \int_{\Re^m} F(x, y_1, \cdots, y_m)\,d\Psi_1(y_1)\cdots d\Psi_m(y_m)$$

where F(x, y1,⋯, y m ) is the uncertainty distribution of uncertain variable

$$f(y_1, y_2, \cdots, y_m, \tau_1, \tau_2, \cdots, \tau_n)$$

for any real numbers y1, y2,⋯, y m .
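For intuition, the following sketch (an assumption-laden toy case, not from the paper) evaluates the chance distribution of ξ = η + τ per Theorem 4, where η is uniform on [0, 1] and τ is a linear uncertain variable L(1, 3), so that Φ(x) reduces to the integral of Υ(x − y) over y in [0, 1].

```python
# A minimal numerical sketch of Theorem 4 for an illustrative toy case:
# xi = eta + tau, where eta is uniform on [0, 1] (probability
# distribution Psi) and tau is a linear uncertain variable L(1, 3) with
# uncertainty distribution Upsilon.  Then
#   Phi(x) = integral over y in [0, 1] of F(x, y) dPsi(y),
# with F(x, y) = Upsilon(x - y), approximated by a midpoint rule.

def upsilon(t, a=1.0, b=3.0):
    """Uncertainty distribution of a linear uncertain variable L(a, b)."""
    if t <= a:
        return 0.0
    if t >= b:
        return 1.0
    return (t - a) / (b - a)

def chance_distribution(x, n_grid=10_000):
    """Phi(x) = integral over y in [0, 1] of Upsilon(x - y) dy."""
    return sum(upsilon(x - (k + 0.5) / n_grid) for k in range(n_grid)) / n_grid

for x in (1.5, 2.5, 3.5):
    print(x, chance_distribution(x))
```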

Definition 9.

(Liu [32]) Let ξ be an uncertain random variable. Then, its expected value is defined by

$$E[\xi] = \int_0^{+\infty}\mathrm{Ch}\{\xi \ge r\}\,dr - \int_{-\infty}^{0}\mathrm{Ch}\{\xi \le r\}\,dr$$

provided that at least one of the two integrals is finite.

Let Φ denote the chance distribution of ξ. Liu [32] proved a formula to calculate the expected value of an uncertain random variable through its chance distribution: if E[ξ] exists, then

$$E[\xi] = \int_0^{+\infty}\left(1-\Phi(x)\right)dx - \int_{-\infty}^{0}\Phi(x)\,dx.$$

Let ξ be an uncertain random variable with regular chance distribution Φ. Liu [32] proved a formula to calculate the expected value of an uncertain random variable through its inverse chance distribution: if E[ξ] exists, then

$$E[\xi] = \int_0^1 \Phi^{-1}(\alpha)\,d\alpha.$$

Theorem 5.

(Liu [32]) Let η1, η2,⋯, η_m be independent random variables with probability distributions Ψ1, Ψ2,⋯, Ψ_m, respectively, and let τ1, τ2,⋯, τ_n be uncertain variables. Then, the uncertain random variable

$$\xi = f(\eta_1, \eta_2, \cdots, \eta_m, \tau_1, \tau_2, \cdots, \tau_n)$$

has an expected value

$$E[\xi] = \int_{\Re^m} E\left[f(y_1, \cdots, y_m, \tau_1, \cdots, \tau_n)\right]d\Psi_1(y_1)\cdots d\Psi_m(y_m)$$

where E[f(y1,⋯, y_m, τ1,⋯, τ_n)] is the expected value of the uncertain variable f(y1,⋯, y_m, τ1,⋯, τ_n) for any real numbers y1, y2,⋯, y_m.
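Continuing the toy case above (η uniform on [0, 1], τ ~ L(1, 3); illustrative parameters only), this sketch evaluates the expected value in Theorem 5: for fixed y, E[y + τ] = y + E[τ], and the outer integral averages this conditional expected value over the distribution of η. The result also agrees with the linearity property stated below.

```python
# A minimal sketch of Theorem 5 for the illustrative toy case
# eta uniform on [0, 1] and tau ~ L(1, 3).  For fixed y,
# E[f(y, tau)] = E[y + tau] = y + E[tau]; Theorem 5 integrates this
# against the probability distribution of eta.

def linear_expected(a=1.0, b=3.0):
    """E[tau] for a linear uncertain variable L(a, b)."""
    return (a + b) / 2.0

def expected_sum(n_grid=10_000):
    """E[eta + tau] = integral over y in [0, 1] of (y + E[tau]) dy."""
    e_tau = linear_expected()
    return sum(((k + 0.5) / n_grid + e_tau) for k in range(n_grid)) / n_grid

print(expected_sum())            # approximately 2.5
print(0.5 + linear_expected())   # linearity check: E[eta] + E[tau] = 2.5
```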

Meanwhile, Liu [32] proved the linearity of expected value operator, that is

$$E[\eta + \tau] = E[\eta] + E[\tau]$$

where η is a random variable and τ is an uncertain variable.

In application, Liu [32] founded uncertain random programming in 2013. As extensions, Zhou et al. [37] proposed uncertain random multi-objective programming for optimizing multiple, incommensurable, and conflicting objectives. After that, uncertain random programming was developed steadily and applied widely: Qin [38] proposed uncertain random goal programming in order to satisfy as many goals as possible in the order specified, and Ke [39] proposed uncertain random multilevel programming for studying decentralized decision systems in which the leader and followers may have their own decision variables and objective functions. In order to quantify the risk of uncertain random systems, Liu and Ralescu [40] invented the tool of uncertain random risk analysis. Wen et al. [41] presented the tool of uncertain random reliability analysis for dealing with uncertain random systems.

Uncertain random network

In this section, we recall the definition of an uncertain random network and the chance distribution of the shortest path of an uncertain random network.

Definition 10.

(Liu [20]) Assume N is the collection of nodes, U is the collection of uncertain arcs, R is the collection of random arcs, and C is the collection of uncertain and random arc capacities. Then, the quartette (N, U, R, C) is said to be an uncertain random network.

In this paper, we assume that the uncertain random network is of order n with a collection of nodes N = {1, 2,⋯, n}, where ‘1’ is the source node and ‘n’ is the destination node. We define two collections of arcs,

U = {(i, j) | (i, j) are uncertain arcs},
R = {(i, j) | (i, j) are random arcs}.

Note that all deterministic arcs are regarded as special uncertain ones. Let C_ij denote the capacity of arc (i, j) for (i, j) ∈ U ∪ R. Then, C_ij are uncertain variables if (i, j) ∈ U and random variables if (i, j) ∈ R. Write C = {C_ij | (i, j) ∈ U ∪ R}.

Figure 1 shows an uncertain random network (N,U,R,C) of order 6 in which

N = {1, 2, 3, 4, 5, 6},
U = {(1, 2), (2, 5), (3, 5), (5, 6)},
R = {(1, 3), (3, 4), (2, 4), (4, 6)},
C = {C12, C13, C24, C25, C34, C35, C46, C56}.
Figure 1. An uncertain random network.
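As a possible in-code representation (an illustrative sketch, not from the paper), the order-6 network of Figure 1 could be encoded as follows; the capacity entries are placeholders for the uncertainty or probability distributions that would be attached to each arc.

```python
# An illustrative encoding of the uncertain random network of Figure 1.
# The split of arcs into U (uncertain) and R (random) follows the text;
# the capacity descriptors are placeholders showing where distributions
# would be attached.

nodes = {1, 2, 3, 4, 5, 6}                  # node 1: source, node 6: sink
U = {(1, 2), (2, 5), (3, 5), (5, 6)}        # arcs with uncertain capacities
R = {(1, 3), (3, 4), (2, 4), (4, 6)}        # arcs with random capacities

# C maps each arc to a description of its capacity; in a full model the
# second entry would be an uncertainty distribution (for arcs in U) or a
# probability distribution (for arcs in R).
C = {arc: ("uncertain", None) for arc in U}
C.update({arc: ("random", None) for arc in R})

print(len(nodes), len(U) + len(R), len(C))
```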

The uncertain random network degenerates to a random network (Frank and Hakimi [1]) if all capacities are random variables and degenerates to an uncertain network (Liu [22]) if all capacities are uncertain variables.

Theorem 6.

(Shortest path chance distribution, Liu [20]) Let (N, U, R, C) be an uncertain random network. Assume that the uncertain capacities ξ_ij have regular uncertainty distributions Υ_ij for (i, j) ∈ U and the random capacities ξ_ij have probability distributions Ψ_ij for (i, j) ∈ R, respectively. Then, the shortest path from the source node to the destination node has a chance distribution

$$\Phi(x) = \int_0^{+\infty}\cdots\int_0^{+\infty} F(x;\, y_{ij}, (i,j)\in R)\prod_{(i,j)\in R} d\Psi_{ij}(y_{ij})$$

where F(x; y_ij, (i, j) ∈ R) is the uncertainty distribution of the uncertain variable f(y_ij, (i, j) ∈ R, ξ_ij, (i, j) ∈ U), and it is determined by its inverse uncertainty distribution

$$F^{-1}(\alpha;\, y_{ij}, (i,j)\in R) = f\left(\Upsilon_{ij}^{-1}(\alpha), (i,j)\in U,\ y_{ij}, (i,j)\in R\right),$$

and f can be calculated by Dijkstra’s algorithm.

Maximum flow of uncertain random network

In this section, we introduce some algorithms for the maximum flow problem and apply chance theory to the maximum flow problem in an uncertain random network. This is a distinct contribution compared with other optimization methods. We will illustrate that chance theory can serve as a powerful tool to deal with the maximum flow in an uncertain random network.

Maximum flow problem

The maximum flow problem is one of the most important problems in the study of networks. In the past five decades, many efficient algorithms for the maximum flow problem in a fixed network have emerged. The maximum flow algorithm was first investigated by Fulkerson and Dantzig [42] in 1955. Representative maximum flow algorithms are based on either augmenting paths or preflows. Augmenting path algorithms push flow along a path from the source to the sink in the residual network; they include Ford-Fulkerson’s labeling algorithm [43] and Dinic’s blocking flow algorithm [44]. Edmonds and Karp [45] independently proposed augmenting flow along shortest paths in the Ford-Fulkerson algorithm. In order to reduce the number of augmentations, preflow-based algorithms, which push flow along individual edges of the residual network, were investigated; they include Karzanov’s blocking flow algorithm [46], which introduced the first preflow-push algorithm on layered networks, and Goldberg-Tarjan’s push-relabel algorithm [10], which constructs distance labels instead of layered networks to improve the running time of the preflow-push approach and gives a very flexible generic preflow-push algorithm that performs push and relabel operations at active nodes. In short, for a classical network, the maximum flow can be calculated by any of the above algorithms, for example as in the sketch below.
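The following is a compact Edmonds-Karp-style sketch (BFS-based augmenting paths) for a deterministic network described by a capacity dictionary. It is a generic textbook implementation provided for illustration, not code from any of the cited papers, and the example network at the end is an arbitrary toy instance.

```python
# Edmonds-Karp: repeatedly find a shortest augmenting path by BFS in the
# residual network and push the bottleneck flow along it.
from collections import defaultdict, deque

def max_flow(capacities, source, sink):
    residual = defaultdict(float)
    adj = defaultdict(set)
    for (i, j), c in capacities.items():
        residual[(i, j)] += c
        adj[i].add(j)
        adj[j].add(i)            # reverse edge for the residual network
    flow = 0.0
    while True:
        # BFS for a shortest augmenting path in the residual network.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and residual[(u, v)] > 1e-12:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # Find the bottleneck capacity and push flow along the path.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[e] for e in path)
        for (u, v) in path:
            residual[(u, v)] -= bottleneck
            residual[(v, u)] += bottleneck
        flow += bottleneck

# Toy instance: source 1, sink 4; the maximum flow is 4.5.
print(max_flow({(1, 2): 3.0, (1, 3): 2.0, (2, 4): 2.5, (3, 4): 4.0}, 1, 4))
```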

How do we consider the maximum flow of an indeterminate network? For a random network, Fishman [9], Goldberg and Tarjan [10], and Nawathe and Rao [11] mainly used stochastic optimization to solve the maximum flow problem. For an uncertain network, Han et al. [17] gave the inverse uncertainty distribution of the maximum flow. In this paper, according to chance theory, we will study the chance distribution of the maximum flow of an uncertain random network.

Here, we assume that the networks are directed with only one source and one sink. If the arc capacities of a network are given, then we can calculate the maximum flow f of the network using the above algorithms. Different capacities yield different but unique maximum flows f. In other words, the maximum flow f is a function of the arc capacities. In [17], the authors proved that the maximum flow f is a continuous and strictly increasing function with respect to C_ij, where C_ij denotes the capacity of arc (i, j). For a random network, the maximum flow f is a random variable as a function of the arc capacities. For an uncertain network, the maximum flow f is an uncertain variable as a function of the arc capacities. Similarly, for an uncertain random network, the maximum flow f is an uncertain random variable as a function of the arc capacities.

Chance distribution of maximum flow

Now, we employ chance theory to deal with this indeterministic factor. Define ξ = {ξ_ij | (i, j) ∈ U ∪ R}. We denote the network with uncertain random arc capacities as (N, U, R, C); its maximum flow is f(ξ). Obviously, f(ξ) is an uncertain random variable. Then, we can obtain the chance distribution of the maximum flow from the source node to the sink node by Theorem 4, which gives the following theorem.

Theorem 7.

Let (N, U, R, C) be an uncertain random network. Assume that the uncertain capacities ξ_ij have regular uncertainty distributions Υ_ij for (i, j) ∈ U and the random capacities ξ_ij have probability distributions Ψ_ij for (i, j) ∈ R, respectively. Then, the maximum flow f(ξ_ij, (i, j) ∈ U ∪ R) has a chance distribution

$$\Phi(x) = \int_0^{+\infty}\cdots\int_0^{+\infty} F(x;\, y_{ij}, (i,j)\in R)\prod_{(i,j)\in R} d\Psi_{ij}(y_{ij})$$

where F(x; y_ij, (i, j) ∈ R) is the uncertainty distribution of the uncertain variable f(y_ij, (i, j) ∈ R, ξ_ij, (i, j) ∈ U), and it is determined by its inverse uncertainty distribution

$$F^{-1}(\alpha;\, y_{ij}, (i,j)\in R) = f\left(\Upsilon_{ij}^{-1}(\alpha), (i,j)\in U,\ y_{ij}, (i,j)\in R\right),$$

and f may be calculated by the Ford-Fulkerson algorithm.

Proof.

By Definitions 6 and 8 and Theorem 4, we have

$$\begin{aligned}
\Phi(x) &= \mathrm{Ch}\left\{f(\xi_{ij}, (i,j)\in U\cup R) \le x\right\}\\
&= \int_0^1 \Pr\left\{\omega\in\Omega \mid \mathcal{M}\left\{f(\xi_{ij}(\omega), (i,j)\in R,\ \xi_{ij}, (i,j)\in U) \le x\right\} \ge r\right\}dr\\
&= \int_0^{+\infty}\cdots\int_0^{+\infty} \mathcal{M}\left\{f(y_{ij}, (i,j)\in R,\ \xi_{ij}, (i,j)\in U) \le x\right\}\prod_{(i,j)\in R} d\Psi_{ij}(y_{ij})\\
&= \int_0^{+\infty}\cdots\int_0^{+\infty} F(x;\, y_{ij}, (i,j)\in R)\prod_{(i,j)\in R} d\Psi_{ij}(y_{ij})
\end{aligned}$$

where F(x; y_ij, (i, j) ∈ R) is the uncertainty distribution of the uncertain variable f(y_ij, (i, j) ∈ R, ξ_ij, (i, j) ∈ U) for any real numbers y_ij, (i, j) ∈ R, and it is determined by its inverse uncertainty distribution

$$F^{-1}(\alpha;\, y_{ij}, (i,j)\in R) = f\left(\Upsilon_{ij}^{-1}(\alpha), (i,j)\in U,\ y_{ij}, (i,j)\in R\right),$$

and f(Υ_ij^{-1}(α), (i, j) ∈ U, y_ij, (i, j) ∈ R) is just the maximum flow of a deterministic network, where f is a strictly increasing function with respect to the capacities ξ_ij of the arcs (i, j). For each given α, f may be calculated by the Ford-Fulkerson algorithm or Dinic’s algorithm. The theorem is verified.
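As a numerical companion to Theorem 7 (a sketch under illustrative assumptions, not the paper's algorithm), Φ(x) can be estimated by sampling the random capacities, recovering F(x; y) from its inverse by bisection (using the fact that α ↦ f(Υ⁻¹(α), y) is non-decreasing), and averaging. The toy case at the end is a two-arc series network with one uncertain capacity L(2, 6) and one random capacity uniform on [3, 5]; for x = 4 the exact value is 0.75.

```python
# Monte Carlo over the random capacities plus bisection in alpha to
# recover F(x; y) = sup{alpha : f(Upsilon^{-1}(alpha), y) <= x}.
import random

def F_of_x(x, f_inv_alpha, tol=1e-6):
    """Uncertainty distribution F(x; y) recovered from its inverse by bisection."""
    lo, hi = 0.0, 1.0
    if f_inv_alpha(lo) > x:
        return 0.0
    if f_inv_alpha(hi) <= x:
        return 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f_inv_alpha(mid) <= x:
            lo = mid
        else:
            hi = mid
    return lo

def chance_distribution(x, sample_random_caps, f_inv_alpha_given, n_samples=5_000):
    """Phi(x) estimated as the average of F(x; y) over sampled random capacities y."""
    total = 0.0
    for _ in range(n_samples):
        y = sample_random_caps()
        total += F_of_x(x, lambda a: f_inv_alpha_given(a, y))
    return total / n_samples

# Toy series network: one uncertain arc with capacity L(2, 6)
# (inverse 2 + 4*alpha) and one random arc uniform on [3, 5].
f_inv = lambda alpha, y: min(2.0 + 4.0 * alpha, y)   # max flow at level alpha
sample = lambda: random.uniform(3.0, 5.0)
print(chance_distribution(4.0, sample, f_inv))
```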

Remark 1.

If the uncertain random network becomes a random network, then the probability distribution of maximum flow is

$$\Phi(x) = \int_{f(y_{ij},\,(i,j)\in R)\le x}\ \prod_{(i,j)\in R} d\Psi_{ij}(y_{ij}).$$

Remark 2.

(Han et al. [17]) If the uncertain random network becomes an uncertain network, then the inverse uncertainty distribution of the maximum flow is

$$\Phi^{-1}(\alpha) = f\left(\Upsilon_{ij}^{-1}(\alpha), (i,j)\in U\right).$$

Example 1.

Consider a series uncertain random network with n arcs as defined by Figure 2. Assume that the uncertain capacities ξ_ij have regular uncertainty distributions Υ_ij for (i, j) ∈ U, and the random capacities ξ_ij have probability distributions Ψ_ij for (i, j) ∈ R, respectively. Then, by Theorem 7, the maximum flow of the network (N, U, R, C) is an uncertain random variable and its chance distribution is

$$\Phi(x) = \int_0^{+\infty}\cdots\int_0^{+\infty} F(x;\, y_{ij}, (i,j)\in R)\prod_{(i,j)\in R} d\Psi_{ij}(y_{ij})$$

where F(x; y_ij, (i, j) ∈ R) is determined by its inverse uncertainty distribution

$$F^{-1}(\alpha;\, y_{ij}, (i,j)\in R) = \left(\min_{(i,j)\in U}\Upsilon_{ij}^{-1}(\alpha)\right) \wedge \left(\min_{(i,j)\in R} y_{ij}\right).$$
Figure 2. Network (N, U, R, C) for Example 1.
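As a complement to the formula above, here is a closed-form sketch for a series network (an illustration assuming the random capacities are independent, as the product form of the chance distribution suggests; the concrete distributions are made up). Since the maximum flow of a series network is the minimum capacity, F(x; y) equals 1 when the smallest random capacity is at most x and equals the maximum of Υ_ij(x) over (i, j) ∈ U otherwise; integrating over the random capacities gives the expression evaluated below.

```python
# Phi(x) for a series uncertain random network:
#   Phi(x) = [1 - prod_R (1 - Psi_ij(x))] + prod_R (1 - Psi_ij(x)) * max_U Upsilon_ij(x),
# under the independence assumption stated in the lead-in.

def series_chance_distribution(x, uncertain_cdfs, random_cdfs):
    """Chance distribution of the maximum flow of a series network."""
    p_all_random_above_x = 1.0
    for psi in random_cdfs:
        p_all_random_above_x *= 1.0 - psi(x)
    m_uncertain = max(ups(x) for ups in uncertain_cdfs)
    return (1.0 - p_all_random_above_x) + p_all_random_above_x * m_uncertain

# Illustrative distributions: uncertain capacities L(2, 6) and L(3, 5),
# random capacities uniform on [3, 5] and [4, 6].
linear = lambda a, b: (lambda t: 0.0 if t <= a else 1.0 if t >= b else (t - a) / (b - a))
print(series_chance_distribution(4.0,
                                 [linear(2, 6), linear(3, 5)],
                                 [linear(3, 5), linear(4, 6)]))
```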

Example 2.

Consider an uncertain random network with four arcs as defined by Figure 3. Assume that the uncertain capacities τ1, τ2 have regular uncertainty distributions Υ1, Υ2, and the random capacities ξ1, ξ2 have probability distributions Ψ1, Ψ2, respectively. Then, by Theorem 7, the maximum flow of the network (N, U, R, C) is an uncertain random variable and its chance distribution is

$$\Phi(x) = \int_0^{+\infty}\int_0^{+\infty} F(x;\, y_1, y_2)\,d\Psi_1(y_1)\,d\Psi_2(y_2)$$

where F(x;y1, y2) is determined by its inverse uncertainty distribution

$$F^{-1}(\alpha;\, y_1, y_2) = y_1 \wedge \Upsilon_1^{-1}(\alpha) + y_2 \wedge \Upsilon_2^{-1}(\alpha).$$
Figure 3. Network (N, U, R, C) for Example 2.
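The same bisection-plus-sampling idea from the sketch after Theorem 7 applies here. In this standalone toy version, the distributions τ1 ~ L(1, 3), τ2 ~ L(2, 4), ξ1 uniform on [1, 4], and ξ2 uniform on [2, 5] are illustrative assumptions, not data from the paper.

```python
# Estimating Phi(x) for Example 2: sample (y1, y2), recover F(x; y1, y2)
# by bisection from the non-decreasing map
#   alpha -> y1 ∧ Upsilon_1^{-1}(alpha) + y2 ∧ Upsilon_2^{-1}(alpha),
# and average over the samples.
import random

def F_of_x(x, g, tol=1e-6):
    lo, hi = 0.0, 1.0
    if g(lo) > x:
        return 0.0
    if g(hi) <= x:
        return 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if g(mid) <= x:
            lo = mid
        else:
            hi = mid
    return lo

ups1_inv = lambda a: 1.0 + 2.0 * a     # inverse of L(1, 3)
ups2_inv = lambda a: 2.0 + 2.0 * a     # inverse of L(2, 4)

def phi(x, n_samples=5_000):
    total = 0.0
    for _ in range(n_samples):
        y1, y2 = random.uniform(1.0, 4.0), random.uniform(2.0, 5.0)
        total += F_of_x(x, lambda a: min(y1, ups1_inv(a)) + min(y2, ups2_inv(a)))
    return total / n_samples

print(phi(4.0))
```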

Example 3.

Consider an uncertain random network with five arcs as defined by Figure 4. Assume that the uncertain capacities τ1, τ2 have regular uncertainty distributions Υ1, Υ2, and the random capacities ξ1, ξ2, ξ3 have probability distributions Ψ1, Ψ2, Ψ3, respectively. Then, by Theorem 7, the maximum flow of the network (N, U, R, C) is an uncertain random variable and its chance distribution is

$$\Phi(x) = \int_0^{+\infty}\int_0^{+\infty}\int_0^{+\infty} F(x;\, y_1, y_2, y_3)\,d\Psi_1(y_1)\,d\Psi_2(y_2)\,d\Psi_3(y_3)$$

where F(x;y1, y2, y3) is determined by its inverse uncertainty distribution

$$F^{-1}(\alpha;\, y_1, y_2, y_3) = y_1 \wedge \Upsilon_1^{-1}(\alpha) \wedge y_3 + y_2 \wedge \Upsilon_2^{-1}(\alpha).$$
Figure 4. Network (N, U, R, C) for Example 3.

Expected value of maximum flow

For an uncertain random network, we can obtain the chance distribution of the maximum flow, but it is often difficult to compute. Sometimes, we do not need the whole distribution of the maximum flow of an uncertain random network; the average value of the flow is enough, so we only need to calculate the expected value of the maximum flow. By Theorem 5, we have the following theorem.

Theorem 8.

Let (N, U, R, C) be an uncertain random network. Assume that the uncertain capacities ξ_ij have regular uncertainty distributions Υ_ij for (i, j) ∈ U and the random capacities ξ_ij have probability distributions Ψ_ij for (i, j) ∈ R, respectively. Then, the maximum flow ξ = f(ξ_ij, (i, j) ∈ U ∪ R) has an expected value

$$E[\xi] = \int_0^{+\infty}\cdots\int_0^{+\infty}\left(\int_0^1 f\left(\Upsilon_{ij}^{-1}(\alpha), (i,j)\in U,\ y_{ij}, (i,j)\in R\right)d\alpha\right)\prod_{(i,j)\in R} d\Psi_{ij}(y_{ij}).$$

Proof.

Since the maximum flow ξ = f(ξ_ij, (i, j) ∈ U ∪ R) is a strictly increasing function with respect to ξ_ij, (i, j) ∈ U, we have

$$E\left[f(y_{ij}, (i,j)\in R,\ \xi_{ij}, (i,j)\in U)\right] = \int_0^1 f\left(\Upsilon_{ij}^{-1}(\alpha), (i,j)\in U,\ y_{ij}, (i,j)\in R\right)d\alpha.$$

It then follows from Theorem 5 that

$$\begin{aligned}
E[\xi] &= \int_0^{+\infty}\cdots\int_0^{+\infty} E\left[f(y_{ij}, (i,j)\in R,\ \xi_{ij}, (i,j)\in U)\right]\prod_{(i,j)\in R} d\Psi_{ij}(y_{ij})\\
&= \int_0^{+\infty}\cdots\int_0^{+\infty}\left(\int_0^1 f\left(\Upsilon_{ij}^{-1}(\alpha), (i,j)\in U,\ y_{ij}, (i,j)\in R\right)d\alpha\right)\prod_{(i,j)\in R} d\Psi_{ij}(y_{ij}).
\end{aligned}$$

The theorem is verified.
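As a numerical companion to Theorem 8 (a sketch with illustrative assumptions, not the paper's method), the expected value can be estimated by sampling the random capacities and applying a midpoint rule in α to the inner integral. The toy instance reuses the parallel-of-series structure of Example 2 with made-up distributions.

```python
# Monte Carlo over the random capacities plus a midpoint rule in alpha.
# f_inv(alpha, y) returns the maximum flow when every uncertain capacity
# is replaced by its inverse distribution at level alpha.  Illustrative
# distributions: tau_1 ~ L(1, 3), tau_2 ~ L(2, 4), xi_1 uniform on [1, 4],
# xi_2 uniform on [2, 5].
import random

def expected_max_flow(f_inv, sample_random_caps, n_samples=5_000, n_alpha=200):
    total = 0.0
    for _ in range(n_samples):
        y = sample_random_caps()
        inner = sum(f_inv((k + 0.5) / n_alpha, y) for k in range(n_alpha)) / n_alpha
        total += inner
    return total / n_samples

ups1_inv = lambda a: 1.0 + 2.0 * a          # inverse of L(1, 3)
ups2_inv = lambda a: 2.0 + 2.0 * a          # inverse of L(2, 4)
f_inv = lambda a, y: min(y[0], ups1_inv(a)) + min(y[1], ups2_inv(a))
sample = lambda: (random.uniform(1.0, 4.0), random.uniform(2.0, 5.0))

print(expected_max_flow(f_inv, sample))
```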

In Example 1, by Theorem 8, we can obtain the expected value of the maximum flow

$$E[\xi] = \int_0^{+\infty}\cdots\int_0^{+\infty}\left(\int_0^1 \left(\min_{(i,j)\in U}\Upsilon_{ij}^{-1}(\alpha)\right)\wedge\left(\min_{(i,j)\in R} y_{ij}\right)d\alpha\right)\prod_{(i,j)\in R} d\Psi_{ij}(y_{ij}).$$

In Example 2, by Theorem 8, we can obtain the expected value of the maximum flow

$$E[\xi] = \int_0^{+\infty}\int_0^{+\infty}\left(\int_0^1 \left(y_1\wedge\Upsilon_1^{-1}(\alpha) + y_2\wedge\Upsilon_2^{-1}(\alpha)\right)d\alpha\right)d\Psi_1(y_1)\,d\Psi_2(y_2).$$

In Example 3, by Theorem 8, we can obtain the expected value of the maximum flow

$$E[\xi] = \int_0^{+\infty}\int_0^{+\infty}\int_0^{+\infty}\left(\int_0^1 \left(y_1\wedge\Upsilon_1^{-1}(\alpha)\wedge y_3 + y_2\wedge\Upsilon_2^{-1}(\alpha)\right)d\alpha\right)d\Psi_1(y_1)\,d\Psi_2(y_2)\,d\Psi_3(y_3).$$

Conclusions

Indeterministic factors often appear in network flow problems. In the past, probability theory and uncertainty theory have been employed to deal with these factors. Chance theory provides a new approach for dealing with indeterministic factors in a complex network. In this paper, we investigated the maximum flow problem of a network in an uncertain random environment. Under the framework of chance theory, we gave the chance distribution of the maximum flow, and the expected value of the maximum flow of an uncertain random network was derived. Some examples were given to illustrate the theoretical considerations.