Introduction

Network flow problems (NFPs) appear in many areas such as electrical and power networks, highway systems, manufacturing and distribution networks, computer networks, neural networks, and water supply systems. Based on the methodologies used to solve them, NFPs can be categorized into three classes: static, dynamic, and uncertain. For a good overview of static NFPs, see Ahuja [1]. Dynamic NFPs were introduced by Ford and Fulkerson [14] and developed by other researchers [2, 12, 13, 37]. Nondeterministic NFPs are usually studied by means of probability theory [2, 8, 11, 17, 31, 32, 34, 35]. Probability theory, however, requires sufficient sample data, which is not available in many real-life situations such as problems arising from urban traffic networks. Although fuzzy theory is applicable to these problems, some theoretical drawbacks prevent it from handling them efficiently [26]. Liu [26] introduced the uncertainty theory to solve some kinds of uncertain problems where probability theory and fuzzy theory do not work properly. This theory is well suited to many problems, in particular uncertain static NFPs [10, 15, 18, 25, 30, 44]. These results motivate us to use the uncertainty theory to solve uncertain dynamic network flow problems (UDNFPs).

This section recalls some material about dynamic network flow problems and stochastic network flow optimization. Following [13, 18, 33, 36, 37], we extend the formulations of simple cases to more complicated ones.

Dynamic Network Flow Problems

Ford and Fulkerson [14] introduced the maximal dynamic flow problem. The problem is defined on a dynamic network N=(V,E,u,τ,{s,t}): a set of nodes V, a set of directed edges E with a non-negative integral capacity function u and transit times τ, and a subset {s,t} of terminal nodes consisting of a source s and a sink t. The goal is to find a dynamic flow that sends as much flow as possible from the source to the sink within a given time horizon T. In the Ford-Fulkerson model, time is measured in discrete steps, so that if one unit of flow leaves node i at time θ on arc (i,j), one unit of flow arrives at node j at time θ+τ_ij, where τ_ij is the transit time of arc (i,j). The model also allows storage at any node in the network.

Discrete and Continuous Time Dynamic Network Flows

To handle dynamic network flow problems, we can either (1) model time in discrete time steps or (2) model time continuously. In case (1), the time-expanded network is used, either explicitly in algorithms or implicitly in proofs, to produce theoretically or practically efficient algorithms. In case (2), networks with time-varying capacities and costs are considered in order to prove the existence of optimal solutions and to generalize the model.

A discrete dynamic flow is a function g that assigns a flow value to each arc at each time step and obeys the capacity constraints 0 ≤ g(θ) ≤ u for all time steps θ.

A continuous dynamic flow is a function x that defines the rate of flow (per unit time) entering each arc at each moment of time. The capacity constraints are now flow rate constraints.

Fleischer and Tardos [13] extended discrete-time dynamic flow algorithms to solve the analogous continuous-time dynamic flow problems. Based on [13], we study UDNFPs only in the discrete-time mode.

Dynamic Network Flows and Network Flows Over Time

Most dynamic network flow problems are treated in a purely static environment in which the input data is given in advance and assumed to be independent of time; only the flow changes over time. Fleischer and Skutella [12] point out that the term dynamic is more appropriate for problems in which the data itself changes over time. Thus, they use the term network flows over time instead of dynamic network flows to express that only the movement of flow through the network over time is considered.

In the UDNFPs considered here, the arc capacities are uncertain and may vary with time, and the flows vary with time as well; by the above discussion, the term dynamic is therefore appropriate for them.

Stochastic Network Optimization Problem

Many real-life networks behave stochastically, for example in communication systems, production systems, and logistics systems. In practical problems, different types of uncertainty arise which should be taken into account.

Some researchers treat the nondeterministic variables as random variables [2, 8, 11, 17, 31, 32, 34, 35] or fuzzy variables [3, 4, 19, 20, 38]. They use the probability theory developed by Kolmogorov [21] or the fuzzy mathematics introduced by Zadeh [43] to model frequencies or fuzzy quantities, and mainly apply stochastic optimization, chance-constrained programming, robust optimization, and fuzzy techniques to solve flow problems in uncertain networks. Others consider the nondeterministic variables under uncertainty theory [10, 15, 18, 25, 30, 44], in which the concept of belief degree introduced by Liu [25] is used.

To deal with such uncertain phenomena, Liu proposed the uncertainty theory in 2007 [25] and refined it in 2010. Uncertainty theory, which has become a branch of mathematics for modeling human uncertainty, is different from both probability theory and fuzzy mathematics. Details about the similarities and differences between Liu's uncertainty concept and the standard probabilistic and fuzzy concepts can be found in [26, 27]. It has been shown, theoretically and practically, that uncertainty theory is an efficient tool for dealing with nondeterministic information, especially expert data and subjective estimation. From a theoretical point of view, uncertain processes [22, 41], uncertain differential equations [5, 42], and uncertain logic [29] have been established. From a practical point of view, uncertain programming [15, 16, 24, 40], uncertain calculus [7, 23], and uncertain risk analysis [28] have developed quickly.

It is natural to ask when uncertainty theory should be used for network problems. When the sample data corresponding to the nondeterministic variables of a network are insufficient, we ask experts to express their belief degrees; based on these belief degrees, uncertainty theory is used to solve the problems.

In this paper, we study UDNFPs, where the word uncertainty refers to nondeterministic situations with poor sample data. In these problems, the arc capacities are uncertain and independent of each other, and the flow varies through the network over time. Our contribution is to solve some kinds of UDNFPs by means of uncertainty theory. In fact, under some conditions UDNFPs can be transformed into DNFPs by using uncertainty theory; in particular, the uncertain network underlying such a problem can be transformed into an equivalent certain network by an algorithm.

The rest of the paper is organized as follows. In the “Preliminaries” section, some basic information about the dynamic network flow problems and the uncertainty theory is reviewed. In the “Formulations” section, the formulation of UDNFPs is surveyed. The main algorithm is presented in the “The Main Algorithm of Solving the UDNFPs” section to solve the UDNFPs, and applied to a simple example in the “Numerical Example” section. Finally, the conclusion and future works are given in the “Conclusion and Future Works” section, and declarations are given in the last section.

Preliminaries

Dynamic Network Flow Problems

Let G=(V,E) be a network (directed graph) with a source node s ∈ V and a sink node t ∈ V. Each arc e ∈ E has an associated capacity u_e and a transit time (or length) τ_e ≥ 0. In the setting with costs, each arc e also has a cost coefficient c_e, which determines the cost of sending one unit of flow through the arc. An arc e from node v to node w is sometimes also denoted (v,w); in this case, we write head(e)=w and tail(e)=v. To avoid confusion, we assume without loss of generality that there is at most one arc between any pair of nodes in G and that there are no loops.

Since the UDNFPs comprise different problems with different shapes, it seems better to survey each of them individually. Fortunately, uncertainty theory is applicable to all of them in the same manner (in the case that only the capacities are uncertain). Therefore, we study only the uncertain maximum dynamic network flow problem (UMDNFP), in which the flow varies with time and the capacities are uncertain (and may or may not vary with time). In addition, this problem is considered only in the discrete-time mode because the results can be adapted to the continuous-time mode according to [13].

Maximum Flows Over Time Problem (MFOTP)

The UMDNFP with no uncertain factors reduces to the MFOTP; thus, we recall some properties of the MFOTP. Here, we consider flows over time with a fixed time horizon T ≥ 0.

Definition 1

(Flow over time). A flow over time f with time horizon T consists of a Lebesgue-integrable function f_e : [0,T) → \(\mathbb {R}_{\geq 0}\) for each arc e ∈ E; moreover, f_e(θ)=0 must hold for θ ≥ T − τ_e. To simplify notation, we sometimes consider f_e as a function with domain \(\mathbb {R}\). In this case, we set f_e(θ) := 0 for all θ ∉ [0,T).

We say that f_e(θ) is the rate of flow (i.e., the amount of flow per time unit) entering arc e at time θ. The flow particles entering arc e at its tail at time θ arrive at the head of e exactly τ_e time units later. In particular, the outflow rate at the head of arc e at time θ is equal to f_e(θ − τ_e). Definition 1 ensures that all flow has left arc e by time T, since f_e(θ)=0 for θ ≥ T − τ_e.

Definition 2

(Capacity, excess, flow conservation, st-flow over time). Let f be a flow over time with time horizon T.

  (a)

    The flow over time f fulfills the capacity constraints (and is called feasible) if f_e(θ) ≤ u_e for each e ∈ E and all θ ∈ [0,T).

  (b)

    For v ∈ V, the excess at node v at time θ is the net amount of flow that has entered node v up to time θ, that is,

    $$ \text{ex}_{f}(v, \theta):= \sum \limits_{e \in \delta^-(v)} \int_{0}^{\theta-\tau_{e}} f_{e}(\xi)d\xi -\sum \limits_{e \in \delta^+(v)} \int_{0}^{\theta} f_{e}(\xi)d\xi. $$
  (c)

    The flow over time f fulfills the weak flow conservation constraints if ex_f(v, θ) ≥ 0 for each v ∈ V∖{s,t} and all θ ∈ [0,T). Moreover, ex_f(v, T)=0 must hold for each v ∈ V∖{s,t}.

  (d)

    A flow over time satisfying the weak flow conservation constraints is an st-flow over time. The value of an st-flow over time with time horizon T is |f| := ex_f(t, T).

  (e)

    An st-flow over time f fulfills the strict flow conservation constraints if ex_f(v, θ)=0 for all v ∈ V∖{s,t} and θ ∈ [0,T]. The strict flow conservation constraints say that flow must not be stored at intermediate nodes.

By the definition of ex_f(v, θ), we can formulate the MFOTP as:

$$ \left\{\begin{array}{l} \max \limits_{f_{e}(\theta)} |f| :=|\text{ex}_{f} (t, T)|\\ \text{subject to:} \\ \qquad f_{e}(\theta) \leq u_{e}, \ \forall e \in E, \theta \in\, [\!0, T),\\ \qquad \text{ex}_{f} (v, \theta)=0, \ \forall v \in V \backslash \{s, t\}, \theta \in\, [\!0, T). \end{array}\right. $$
(1)
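For readers who prefer to see the discrete-time counterpart of these definitions in executable form, the following minimal Python sketch (the data structures and names are illustrative, not taken from the paper) computes the excess ex_f(v, θ) from per-period inflow rates and checks the constraints of model (1).

```python
# A minimal discrete-time sketch of Definition 2 and model (1).
# f[e][theta] is the flow entering arc e = (v, w) in period theta (0 <= theta < T),
# tau[e] its transit time, u[e] its capacity; all names here are illustrative.

def excess(v, theta, f, tau):
    """ex_f(v, theta): flow that has reached v minus flow that has left v."""
    inflow = sum(f[e][xi] for e in f if e[1] == v for xi in range(theta - tau[e]))
    outflow = sum(f[e][xi] for e in f if e[0] == v for xi in range(theta))
    return inflow - outflow

def check_flow_over_time(f, tau, u, nodes, s, t, T):
    """Feasibility and strict flow conservation as in model (1); value = ex_f(t, T)."""
    feasible = all(f[e][th] <= u[e] for e in f for th in range(T))
    conserving = all(excess(v, th, f, tau) == 0
                     for v in nodes - {s, t} for th in range(T + 1))
    return feasible, conserving, excess(t, T, f, tau)
```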

Uncertainty

Here, some basic concepts required in the sequel are recalled from [26].

Definition 3

Let \(\mathcal {L}\) be a σ-algebra on a nonempty set Γ. A set function \(\mathcal {M} : \mathcal {L} \rightarrow [\!0, 1]\) is called an uncertain measure if it satisfies the following axioms:

Axiom 1 (Normality Axiom) \(\mathcal {M}\{\Gamma \}=1\) for the universal set Γ; Axiom 2 (Duality Axiom) \(\mathcal {M}\{\Lambda \}+\mathcal {M}\{\Lambda ^{c}\}=1\) for any event \(\Lambda \in \mathcal {L}\); Axiom 3 (Subadditivity Axiom) For every countable sequence of events Λ_1, Λ_2, …, we have

$$ \mathcal{M}\left\{\bigcup \limits_{i=1}^{\infty} \Lambda_{i}\right\}\leq \sum \limits_{i=1}^{\infty} \mathcal{M}\{\Lambda_{i}\}. $$

In the uncertainty theory, the triplet \((\Gamma, \mathcal {L}, \mathcal {M})\) is called an uncertainty space, and the product uncertain measure on the product σ-algebra was defined by Liu [25] via the following product axiom: Axiom 4 (Product Axiom) Let \((\Gamma _{k}, \mathcal {L}_{k}, \mathcal {M}_{k})\) be uncertainty spaces for k=1,2,…. The product uncertain measure \(\mathcal {M}\) is an uncertain measure satisfying

$$ \mathcal{M}\left\{\prod \limits_{k=1}^{\infty} \Lambda_{k}\right\}= \bigwedge \limits_{k=1}^{\infty} \mathcal{M}\{\Lambda_{k}\} $$

where Λ_k are arbitrarily chosen events from \(\mathcal {L}_{k}\) for k=1,2,…, respectively.

An uncertain variable ξ is a measurable function from an uncertainty space to the set of real numbers. In order to describe an uncertain variable in practice, Liu [25] defined the concept of an uncertainty distribution as follows.

Definition 4

The uncertainty distribution Φ of an uncertain variable ξ is defined by

$$ \Phi(x)=\mathcal{M} \{\xi \leq x\} $$

for any real number x.

Definition 5

An uncertainty distribution Φ(x) is said to be regular if its inverse function Φ^{-1}(α) exists for each α ∈ (0,1).

In this paper, we always assume that uncertainty distributions are regular. Otherwise, we may give the uncertainty distribution a small perturbation to become regular [26].

Definition 6

The uncertain variables ξ_1, ξ_2, …, ξ_n are said to be independent if

$$ \mathcal{M} \left\{ \bigcap \limits_{i=1}^{n} (\xi_{i} \in B_{i}) \right\}= \bigwedge \limits_{i=1}^{n} \mathcal{M}\{\xi_{i} \in B_{i}\} $$

for any Borel sets B_1, B_2, …, B_n of real numbers.

Definition 7

Let ξ be an uncertain variable. Then the expected value of ξ is defined by

$$ E\,[\!\xi]= \int_{0}^{+\infty} \mathcal{M}\{\xi \geq x\}dx - \int_{-\infty}^{0} \mathcal{M}\{\xi \leq x\}dx $$

provided that at least one of the two integrals is finite.

Theorem 1

(Liu [27]) Let ξ be an uncertain variable with regular uncertainty distribution Φ. Then

$$ E[\xi]= \int_{0}^{1} \Phi^{-1} (\alpha)d\alpha. $$
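As a quick illustration of Theorem 1, take a linear uncertain variable ξ ∼ \(\mathcal{L}(a, b)\) (recalled in the "Uncertainty Distributions" section), whose inverse distribution is Φ^{-1}(α)=a+(b−a)α. Then

$$ E[\xi]= \int_{0}^{1} \big(a+(b-a)\alpha\big)\, d\alpha = a+\frac{b-a}{2}=\frac{a+b}{2}. $$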

Theorem 2

(Liu [27]) Let ξ_1, ξ_2, …, ξ_n be independent uncertain variables with regular uncertainty distributions Φ_1, Φ_2, …, Φ_n, respectively. If the function f(ξ_1, ξ_2, …, ξ_n) is strictly increasing with respect to ξ_1, ξ_2, …, ξ_m and strictly decreasing with respect to ξ_{m+1}, ξ_{m+2}, …, ξ_n, then

$$ \pmb{\xi}=f(\xi_{1}, \xi_{2}, \ldots, \xi_{n}) $$

is an uncertain variable with inverse uncertainty distribution

$$ \Psi^{-1}(\alpha)=f(\Phi_{1}^{-1}(\alpha), \ldots, \Phi_{m}^{-1}(\alpha), \Phi_{m+1}^{-1}(1-\alpha), \ldots, \Phi_{n}^{-1}(1-\alpha)) $$

where Ψ is the uncertainty distribution of ξ.
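For example, if ξ_1 and ξ_2 are independent with linear distributions \(\mathcal{L}(a_1, b_1)\) and \(\mathcal{L}(a_2, b_2)\), and f(ξ_1, ξ_2)=ξ_1+ξ_2, which is strictly increasing in both arguments, then Theorem 2 gives

$$ \Psi^{-1}(\alpha)=\Phi_{1}^{-1}(\alpha)+\Phi_{2}^{-1}(\alpha)=(a_{1}+a_{2})+\big((b_{1}+b_{2})-(a_{1}+a_{2})\big)\alpha, $$

so ξ_1+ξ_2 is again linear, namely \(\mathcal{L}(a_1+a_2, b_1+b_2)\).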

Uncertain Programming

Uncertain programming is a type of mathematical programming involving uncertain variables. Assume that x is a decision vector, and ξ is an uncertain vector. Since an uncertain objective function f(x,ξ) cannot be minimized directly, we may minimize its expected value, i.e.,

$$ \min \limits_{x} E[f(\pmb{x}, \pmb{\xi})]. $$

In addition, since the uncertain constraints g_j(x,ξ) ≤ 0, j=1,2,…,p, do not define a deterministic feasible set, it is natural to require that the uncertain constraints hold with confidence levels α_1, α_2, …, α_p. Then we have a set of chance constraints,

$$ \mathcal{M} \{g_{j}(\pmb{x}, \pmb{\xi}) \leq 0\} \geq \alpha_{j}, \ j=1, 2, \ldots, p. $$
(2)

Theorem 3

(Liu [26]) Let ξ_1, ξ_2, …, ξ_n be independent uncertain variables with regular uncertainty distributions Φ_1, Φ_2, …, Φ_n, respectively. If the constraint function g(x, ξ_1, ξ_2, …, ξ_n) is strictly increasing with respect to ξ_1, ξ_2, …, ξ_k and strictly decreasing with respect to ξ_{k+1}, ξ_{k+2}, …, ξ_n, then the chance constraint

$$ \mathcal{M} \{g(\pmb{x},\xi_{1}, \xi_{2}, \ldots, \xi_{n}) \leq 0\} \geq \alpha $$

holds if and only if

$$ g(\pmb{x}, \Phi_{1}^{-1}(\alpha), \ldots, \Phi_{k}^{-1}(\alpha), \Phi_{k+1}^{-1}(1-\alpha), \ldots, \Phi_{n}^{-1}(1-\alpha)) \leq 0. $$

In order to make a decision with minimum expected objective value subject to a set of chance constraints, Liu [24] proposed the following uncertain programming model,

$$ \left\{\begin{array}{l} \min \limits_{\pmb{x}} E[f(\pmb{x}, \pmb{\xi})] \\ \text{subject to:} \\ \qquad \mathcal{M} \{g_{j}(\pmb{x}, \pmb{\xi}) \leq 0\} \geq \alpha_{j}, \ j=1, 2, \ldots, p, \\ \qquad \pmb{x} \in D. \end{array}\right. $$

Since there is no algorithm to solve this uncertain model directly, we transform it to an equivalent certain model that can be solved by existing algorithms of certain programming. By Theorem 2, the equivalent certain model becomes

$$ \left\{\begin{array}{l} \min \limits_{\pmb{x}} \int_{0}^{1} \Phi^{-1}_{f}(\pmb{x}, \alpha)d\alpha \\ \text{subject to:} \\ \qquad \Psi^{-1}_{g_{j}}(\pmb{x}, \alpha) \leq 0, \ j=1, 2, \ldots, p, \\ \qquad \pmb{x} \in D. \end{array}\right. $$

where \(\Phi ^{-1}_{f}(\pmb{x}, \cdot)\) and \(\Psi ^{-1}_{g_{j}}(\pmb{x}, \cdot)\) are the inverse uncertainty distributions of f(x,ξ) and g_j(x,ξ), respectively.
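The following small Python sketch illustrates this transformation on a toy uncertain program with invented data (it is not a problem taken from this paper); the deterministic equivalent is an ordinary linear program, solved here with SciPy.

```python
# Toy uncertain program (illustrative data only):
#   min  E[c1*x1 + c2*x2],  c1 ~ L(1, 3), c2 ~ L(2, 6), independent
#   s.t. M{ d - x1 - x2 <= 0 } >= 0.9,    d ~ Z(8, 10, 12),  x1, x2 >= 0.
# For x >= 0, Theorems 1-2 give E[c1*x1 + c2*x2] = E[c1]*x1 + E[c2]*x2 = 2*x1 + 4*x2.
# g(x, d) = d - x1 - x2 is strictly increasing in d, so by Theorem 3 the chance
# constraint is equivalent to x1 + x2 >= Phi_d^{-1}(0.9) = 0.2*10 + 0.8*12 = 11.6.
from scipy.optimize import linprog

res = linprog(c=[2, 4],                       # deterministic objective E[c1], E[c2]
              A_ub=[[-1, -1]], b_ub=[-11.6],  # x1 + x2 >= 11.6
              bounds=[(0, None), (0, None)])
print(res.x, res.fun)                         # e.g. x = (11.6, 0), expected cost 23.2
```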

Formulations

Applying different criteria (e.g., expected values, belief degrees) leads to different formulations of a given problem, each of which may have a different solution. Therefore, a suitable criterion is needed to obtain the best possible solution. For more details, see [26].

Now, we review the chance constraint, which is the keystone of uncertain programming. For the uncertain capacity u_e of an arc e ∈ E with uncertainty distribution Φ_e, and the amount of flow f_e(θ) in e at time θ ∈ [0,T), the capacity constraint reads f_e(θ) ≤ u_e. Let g(f, u_e) = f_e(θ) − u_e. Then, in the standard formulation of a problem, we have

$$ g(f, u_{e}) \leq 0, \ \forall e \in E, \theta \in\, [\!0, T). $$
(3)

Since there is no algorithm to deal with this uncertain constraint directly, we have to use a chance constraint to transform (3) into a certain constraint (a constraint including no uncertain variable). From the chance constraint (2) for g(f, u_e) we have \(\mathcal {M} \{f_{e}(\theta) \leq u_{e}\} \geq \alpha _{e}\). This implies that whenever f_e(θ) takes lower values, we have higher levels of confidence that f_e(θ) ≤ u_e is satisfied, and vice versa. In other words, the value of f_e(θ) is proportional to 1−α_e. This chance constraint is suitable for problems that require lower f_e(θ), such as the uncertain minimum cost flow problem (see [9]). According to Theorem 3, by applying the chance constraint (2) for g(f, u_e) we obtain

$$ f_{e}(\theta) \leq \Phi^{-1}_{e}(1-\alpha_{e}); $$

that is, whenever lower amounts of flow are desirable, we can replace the uncertain capacities with \(\Phi ^{-1}_{e}(1-\alpha_{e})\) to obtain a certain model of the problem.

On the other hand, in many problems we need to reach higher amounts of flow. Consequently, the chance constraint (2) is not suitable for these problems, and we need another chance constraint to assure higher values of f_e(θ). An example of such a problem is the uncertain maximum flow problem, where we want to send as much flow through the arcs as possible. In this case, we can apply the following chance constraint

$$ \mathcal{M} \{g(f, u_{e}) \leq 0\} \geq 1- \alpha_{e} $$
(4)

or

$$ \mathcal{M} \{g(f, u_{e}) \geq 0\} \leq \alpha_{e}. $$
(5)

Let us check the suitability of this chance constraint for these kinds of problems. By applying it to g(f, u_e) = f_e(θ) − u_e, we obtain

$$ \mathcal{M} \{f_{e}(\theta) \leq u_{e}\} \geq 1- \alpha_{e}. $$

By continuing the above discussion with 1−α_e in place of α_e, we see that the amount of flow f_e(θ) is proportional to α_e. In other words, with confidence level α_e we know that f_e(θ) reaches its highest value, which shows why this chance constraint is suitable for the latter case.

Lemma 1

Let ξ_1, ξ_2, …, ξ_n be independent uncertain variables with regular uncertainty distributions Φ_1, Φ_2, …, Φ_n, respectively. If the constraint function g(x, ξ_1, ξ_2, …, ξ_n) is strictly increasing with respect to ξ_1, ξ_2, …, ξ_k and strictly decreasing with respect to ξ_{k+1}, ξ_{k+2}, …, ξ_n, then we will have

$$ \mathcal{M} \{g(\pmb{x},\xi_{1}, \xi_{2}, \ldots, \xi_{n}) \leq 0\} \geq 1-\alpha $$

if and only if

$$ g(\pmb{x}, \Phi_{1}^{-1}(1-\alpha), \ldots, \Phi_{k}^{-1}(1-\alpha), \Phi_{k+1}^{-1}(\alpha), \ldots, \Phi_{n}^{-1}(\alpha)) \leq 0. $$

Proof

It is enough to replace α by 1−α in Theorem 3. □

According to Lemma 1, for the second kind of problems we will have

$$ f_{e}(\theta) \leq \Phi^{-1}_{e}(\alpha_{e}). $$

Thus, we can replace the uncertain capacities with \(\Phi ^{-1}_{e}(\alpha _{e})\) to obtain certain models for these kinds of problems (see [10]).
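As a small worked contrast, with an illustrative linear capacity (not one appearing in the paper): if u_e ∼ \(\mathcal{L}(a, b)\) and α_e = 0.9, the conservative replacement used for the first kind of problem is

$$ \Phi_{e}^{-1}(1-\alpha_{e})=\Phi_{e}^{-1}(0.1)=a+0.1(b-a), $$

close to the lower end of the capacity range, whereas the replacement used for the second kind is \(\Phi_{e}^{-1}(0.9)=a+0.9(b-a)\), close to the upper end, which permits much larger flows on the arc.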

Now, we are ready to formulate the main problem. Suppose that the capacities u_e, e ∈ E, in the model (1) are uncertain variables with uncertainty distribution functions Φ_e. By the uncertainty theory, the UMDNFP can be formulated as follows.

UMDNFP (Discrete Time Mode)

$$ \left\{\begin{array}{l} \max \limits_{f_{e}(\theta)} E(F) \\ \text{subject to:} \\ \qquad F=|\text{ex}_{f}(t, T)|\\ \qquad \text{ex}_{f}(v, \theta)=0, \ \forall v \in V\backslash\{s, t\}, \theta \in\, [\!0, T)\\ \qquad \mathcal{M} \{f_{e}(\theta)\leq u_{e} \} \geq 1-\alpha_{e}, \ \forall e \in E, \theta \in\, [\!0, T). \end{array}\right. $$
(6)

By Lemma 1, we can write \( \mathcal {M} \{f_{e}(\theta)\leq u_{e} \} \geq 1-\alpha _{e}\) as \(f_{e}(\theta) \leq \Phi _{e}^{-1}(\alpha _{e})\). Also, since the uncertain variables do not appear in the objective, we simply maximize F itself (for the case where uncertain variables appear in the objective function, see [36]). Hence, the model (6) becomes

$$ \left\{\begin{array}{l} \max \limits_{f_{e}(\theta)} F \\ \text{subject to:} \\ \qquad F=|\text{ex}_{f}(t, T)|\\ \qquad \text{ex}_{f}(v, \theta)=0, \ \forall v \in V\backslash\{s, t\}, \theta \in\, [\!0, T)\\ \qquad f_{e}(\theta) \leq \Phi_{e}^{-1}(\alpha_{e}), \ \forall e \in E, \theta \in\, [\!0, T) \end{array}\right. $$
(7)

which is the certain model of (6) and has the same shape as the MFOTP. In other words, this is a deterministic MFOTP with certain capacities \(\Phi _{e}^{-1}(\alpha _{e})\).

The uncertainty distribution functions and their inverses play an essential role in uncertain programming. Thus, we recall some material about them in the following.

Uncertainty Distributions

In [26], one can find how belief degrees are obtained to define distribution functions. Liu [27] suggested the principle of least squares to estimate the unknown parameters of an uncertainty distribution. Wang et al. [39] applied the Delphi method to determine the uncertainty distribution based on expert knowledge. Chen and Ralescu [6] used the B-spline method to estimate the uncertainty distribution.

Some uncertainty distributions are:

Linear Uncertainty Distribution

$$ \Phi(x)= \left\{\begin{array}{ll} 0, & \text{if}\ \ x \leq a \\ (x-a)/(b-a), & \text{if}\ \ a \leq x \leq b \\ 1, & \text{if}\ \ x\ge b \end{array}\right. $$

denoted by \(\mathcal {L}(a, b)\) where a and b are real numbers with a<b.

Zigzag Uncertainty Distribution

$$ \Phi(x)= \left\{\begin{array}{ll} 0, & \text{if}\ \ x \leq a \\ (x-a)/[2(b-a)], & \text{if}\ \ a \leq x \leq b \\ (x+c-2b)/[2(c-b)], & \text{if}\ \ b \leq x \leq c \\ 1, & \text{if}\ \ x\ge c \end{array}\right. $$

denoted by \(\mathcal {Z}(a, b, c)\) where a, b, c are real numbers with a<b<c.
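Since the zigzag distribution and its inverse are all that the certain models in this paper need, a short Python helper may be useful (a sketch using the standard closed-form inverse of the zigzag distribution); it reproduces, for instance, the value Φ^{-1}(0.9)=2 used in the "Numerical Example" section.

```python
# Zigzag uncertainty distribution Z(a, b, c) and its inverse (a < b < c).

def zigzag_cdf(x, a, b, c):
    if x <= a:
        return 0.0
    if x <= b:
        return (x - a) / (2 * (b - a))
    if x <= c:
        return (x + c - 2 * b) / (2 * (c - b))
    return 1.0

def zigzag_inv(alpha, a, b, c):
    """Closed-form inverse of Z(a, b, c) for alpha in (0, 1)."""
    if alpha < 0.5:
        return (1 - 2 * alpha) * a + 2 * alpha * b
    return (2 - 2 * alpha) * b + (2 * alpha - 1) * c

# zigzag_inv(0.9, 0.5, 1, 2.25) == 2.0, matching the numerical example below
```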

Computing Inverse Uncertainty Distribution Functions

Computing the inverse uncertainty distribution functions that appear in the certain models may be impractical; thus, we may have to apply some approximations. Liu [27] proposed the 99-method for this purpose. Combined with search algorithms such as binary search or genetic algorithms to obtain the optimal value of α, this leads to hybrid algorithms [23, 36].
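A hedged sketch of the underlying idea, for distributions without a closed-form inverse: tabulate Φ^{-1}(α) at α = 0.01, 0.02, …, 0.99 by numerically inverting Φ (plain bisection is used here; the function and parameter names are illustrative).

```python
# Numerical inversion of a (nondecreasing) uncertainty distribution by bisection,
# and a 99-point table in the spirit of Liu's 99-method.

def inverse_by_bisection(cdf, alpha, lo, hi, tol=1e-9):
    """Approximate the smallest x in [lo, hi] with cdf(x) >= alpha."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if cdf(mid) < alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def ninety_nine_table(cdf, lo, hi):
    return [inverse_by_bisection(cdf, k / 100, lo, hi) for k in range(1, 100)]
```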

Remark 1

The uncertain capacities of the maximum dynamic flow problem considered here may vary with time, but their corresponding uncertainty distributions are assumed to be independent of time. Thus, this problem can be transformed into a certain maximum flow over time model. The formulation in this case of uncertainty is therefore also straightforward for all of the dynamic network flow problems, such as earliest arrival flows, minimum cost flows, multi-commodity flows, and so on. For other problems that cannot be formulated in this straightforward manner, more care should be taken in using uncertainty theory. Complicated cases are left as future work.

The Main Algorithm of Solving the UDNFPs

As noted above, when only the capacities are uncertain in a UDNFP, the uncertain problem can be transformed into a certain one with the same shape as the original problem. That is, an uncertain problem such as the UMDNFP with uncertain capacities is transformable into a certain problem such as the MDNFP with capacities equal to \(\Phi ^{-1}_{e}(\alpha _{e})\).

It is stressed again that to transform an uncertain problem into a certain equivalent one under the circumstances considered here, the uncertain capacities must be replaced with suitable values. If higher amounts of flow are desirable, the uncertain capacities can be replaced by \(\Phi ^{-1}_{e}(\alpha _{e})\); if lower amounts of flow are desirable, they can be replaced by \(\Phi ^{-1}_{e}(1-\alpha _{e})\).

Note that α_e, e ∈ E, guarantees the satisfaction of the corresponding constraint. The value of each α_e is determined by the importance of the corresponding constraint; thus, if all constraints have the same importance, we set all these values equal to a fixed value α [10].

The algorithm of transforming the UDNFPs to the DNFPs

Given a UDNFP:

Step 1. Detect the suitable uncertain variables;

Step 2. Define the corresponding uncertainty distribution functions;

Step 3. Model the main problem with appropriate uncertainty criteria;

Step 4. Transform the uncertain model into an equivalent certain model;

Step 5. Solve the final model by an appropriate algorithm.

This algorithm implies that to solve the UDNFPs considered here, it is sufficient to replace the uncertain capacities with \(\Phi ^{-1}_{e}(\alpha _{e})\) or \(\Phi ^{-1}_{e}(1-\alpha _{e})\) appropriately, and then treat the UDNFPs as certain problems.
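A minimal Python sketch of Steps 4 and 5 for the capacity-only case treated here (all names are illustrative; any certain dynamic-flow solver can be plugged in at the end):

```python
# Step 4: replace every uncertain capacity by its deterministic counterpart.
# inv_dist maps each arc e to the inverse distribution Phi_e^{-1} (a callable),
# alpha maps each arc e to its confidence level alpha_e.

def certain_capacities(inv_dist, alpha, higher_flows_desirable=True):
    """Phi_e^{-1}(alpha_e) for max-flow-type problems,
    Phi_e^{-1}(1 - alpha_e) for min-cost-type problems."""
    return {e: inv(alpha[e] if higher_flows_desirable else 1 - alpha[e])
            for e, inv in inv_dist.items()}

# Step 5: hand the resulting deterministic network to a certain DNFP algorithm,
# e.g. the temporally repeated flow algorithm for the maximum dynamic flow problem.
```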

Numerical Example

With the simple example presented here, we illustrate how to deal with UDNFPs via the proposed algorithm.

Consider the network in Fig. 1. We want to find a feasible st-flow over time f with time horizon T=9 and maximum value ∣f∣.

Fig. 1

A network consisting of arcs with uncertain capacities C_{ij}, i, j ∈ {s,1,2,3,4,t}, i ≠ j. The arcs are all directed from left to right towards the sink t. The numbers at the arcs indicate transit times

Now, suppose that, based on expert information, we have obtained the same uncertainty distribution function \(\Phi (x)=\mathcal {Z}(0.5, 1, 2.25)\) for all arc capacities.

For a given confidence level α=0.9 (determined by the decision maker), we have Φ^{-1}(0.9)=2.
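Indeed, using the middle branch of the zigzag distribution given above,

$$ \Phi(2)=\frac{2+2.25-2(1)}{2(2.25-1)}=\frac{2.25}{2.5}=0.9, \quad \text{so } \Phi^{-1}(0.9)=2. $$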

By replacing the uncertain capacities in the original network with Φ^{-1}(0.9), we obtain the equivalent certain network (for this problem) with two-unit capacities. Thus, we will have

$$ \left\{\begin{array}{l} \max \limits_{f_{e}(\theta)} F \\ \text{subject to:} \\ \qquad F=|\text{ex}_{f}(t, 9)|\\ \qquad \text{ex}_{f}(v, \theta)=0, \ \forall v \in \{1, 2, 3, 4\}, \theta \in [0, 9)\\ \qquad f_{e}(\theta) \leq 2, \ \forall e \in \left\{ (i, j)\, |\, i, j \in \{s, 1, 2, 3, 4, t\}, i \neq j\right\}, \theta \in [0, 9) \end{array}\right. $$
(8)

This problem can be solved easily by the Ford-Fulkerson temporally repeated flow algorithm [14, 37], which returns the optimal value |f|=6. The schematic solution of this problem on the equivalent certain network, obtained with the Ford-Fulkerson algorithm, is shown in Fig. 2.
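For completeness, the following Python sketch solves a discrete-time instance of model (8) via the time-expanded network. Since the transit times of Fig. 1 are not reproduced in the text, the arc list below is hypothetical; only the horizon T=9 and the deterministic capacity 2 are taken from the example.

```python
# Maximum dynamic flow via a static max flow in the time-expanded network.
import networkx as nx

T, CAP = 9, 2                              # horizon and Phi^{-1}(0.9) from above
arcs = [("s", "1", 1), ("s", "2", 2),      # hypothetical (tail, head, transit time);
        ("1", "2", 1), ("1", "t", 4),      # the actual transit times are in Fig. 1
        ("2", "t", 3)]

G = nx.DiGraph()
for v, w, tau in arcs:
    for theta in range(T - tau + 1):       # movement arcs shifted by the transit time
        G.add_edge((v, theta), (w, theta + tau), capacity=CAP)
for v in {x for a in arcs for x in a[:2]}:
    for theta in range(T):                 # uncapacitated holdover (storage) arcs
        G.add_edge((v, theta), (v, theta + 1))

value, _ = nx.maximum_flow(G, ("s", 0), ("t", T))
print("maximum dynamic flow value for T =", T, ":", value)
```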

Fig. 2

Snapshots of a feasible st-flow over time with time horizon T=9 and value 6 in the equivalent certain network. In order to distinguish flow units traveling one after another along an arc, different shadings are used for the flow units

Conclusion and Future Works

Uncertain dynamic network flow problems have been studied here. The UMDNFP has been formulated and then transformed into a certain MFOTP. The problems have been studied only in the discrete-time mode, but the continuous-time problems can be solved easily by adapting the results according to [13]. Moreover, using [26, 27], an algorithm has been presented to solve UDNFPs under some conditions.

The presented algorithm can be applied to any UDNFP with independent uncertain factors. It is worth noting that this algorithm cannot be applied easily to problems with correlated uncertain factors or time-dependent distribution functions. These complicated cases, as well as the following, are left for future work:

  • Applying this algorithm to real world problems,

  • Defining appropriate distribution functions for uncertain factors,

  • Finding efficient algorithms to solve uncertain problems directly (without transforming uncertain models to certain ones), and

  • Designing some software and providing some libraries for these problems.