1. Introduction

In many fields of industrial engineering and the management sciences, uncertainties often arise, such as the return rate of a security and the amounts of demand and supply. Recently, much attention has been paid to constructing optimization models with uncertain parameters for decision problems in management science and to designing efficient solution methods for these models. In this connection, see [19] and the references therein.

The following model, which arises in optimal network design and in the economic and management sciences, is frequently studied (see, e.g., [7]):

(1.1)

where is continuously differentiable, is a -dimensional stochastic vector, and are given vectors, and are given stochastic matrices. So, Problem (1.1) is a stochastic bi-criteria optimization problem. The main difficulties in solving this kind of problem are twofold. First, decisions must be made prior to observing the stochastic parameters; in this situation, one can hardly find a decision that avoids all constraint violations caused by unexpected random effects. Second, no single decision can optimize both objective functions simultaneously.
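To fix ideas, a stochastic bi-criteria program of this general type may be sketched as follows; the symbols $f$, $c(\omega)$, $A(\omega)$, and $b(\omega)$ are illustrative placeholders rather than the actual notation of (1.1):

```latex
% Illustrative sketch only: the symbols are hypothetical, not those of (1.1).
\begin{aligned}
\min_{x \in \mathbb{R}^n} \quad & f(x)               \\
\max_{x \in \mathbb{R}^n} \quad & c(\omega)^{\top} x \\
\text{s.t.} \quad & A(\omega)\, x \le b(\omega), \qquad x \ge 0,
\end{aligned}
```

where $\omega$ denotes the random element; both objectives depend on a single decision $x$ that must be chosen before $\omega$ is observed.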

By the expectation method, the authors of [9] transformed Problem (1.1) into the following deterministic model:

(1.2)

and developed an algorithm to obtain an approximate solution of the original problem.

Although the expectation method is a convenient way of dealing with stochastic programs [9, 10], in general it ensures neither that the optimal solution is robust nor that it attains optimal objective values. For this reason, we propose a hybrid method for solving Problem (1.1). The basic idea is as follows.

For the bi-criteria problem, we introduce an expectation-level parameter for the second objective and transform the original problem into a single-objective problem. For the stochastic parameters, we minimize an appropriate combination of the mean and variance of the cost subject to chance constraints. The variance term in the cost function can be interpreted as a risk measure, which makes the solution more robust. A chance constraint ensures that the probability of satisfying the constraints is at least some prescribed value; the larger this value, the higher the probability that the constraints are satisfied. In other words, the chance-constraint approach guarantees that the obtained solution has a smaller degree of constraint violation (see [4, 11]). Based on this reformulation of the original problem, an interactive algorithm is developed to find a solution with a given satisfaction degree.

The remainder of this paper is organized as follows. In Section 2, we deduce the new robust deterministic formulation of the original stochastic model. In Section 3, an interactive algorithm is developed to solve this deterministic problem with three parameters reflecting the preferences of the decision maker. Numerical experiments are carried out in Section 4 to show the advantage of the proposed method. Final remarks are given in the last section.

2. Reformulation of Stochastic Bi-Criteria Model by Hybrid Approach

In this section, we are going to reformulate the original stochastic bi-criteria problem into a deterministic problem.

Note that there are various ways to deal with multiple-objective problems; for details, see, for example, [6, 10, 12]. In this paper, Problem (1.1) is converted into a single-objective model by introducing a parameter called the expectation level of the decision maker.

Let denote the expectation level of the decision maker for the second objective. Then, (1.1) is relaxed into the following model:

(2.1)

Notice that, for a suitable choice of this parameter, the solution of the above problem is a compromise solution of Problem (1.1). Actually, , where is the maximum of the second objective function. When , the solution of (2.1) ensures that the second objective achieves its maximal value.

Next, taking into account that the expectation value represents the average level and the variance indicates the deviation of the stochastic variable, the stochastic objective function in (2.1) is transformed into

(2.2)

where and denote, respectively, the expectation and the variance of the stochastic matrix, and is introduced to describe the preference of the decision maker between the average level and the robustness of the objective value, and is called the preference level of the decision maker. The variance term in the cost function can be interpreted as a risk measure, which makes the obtained solution more robust.
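As a hedged illustration of this expectation-variance combination (the symbols below are generic: $c(\omega)$ stands for a random cost vector and $\lambda \ge 0$ for the preference level of the decision maker), the transformed objective takes the form:

```latex
% Illustrative: lambda = 0 recovers the pure expectation method;
% larger lambda penalizes variability, i.e., buys robustness.
\min_{x} \;\; \mathbb{E}\!\left[c(\omega)^{\top} x\right]
      \;+\; \lambda \,\operatorname{Var}\!\left[c(\omega)^{\top} x\right]
```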

For the first stochastic inequality in (2.1), we introduce the so-called chance-constraint method to convert it into a deterministic inequality constraint, which guarantees that the stochastic constraint is satisfied with a probability as high as possible. For the general stochastic constraints , we obtain their deterministic formulations by the expectation method, as in [9].

Specifically, Problem (2.1) is reformulated as

(2.3)

where is the probability (or confidence) level for the first stochastic constraint to be satisfied.

Denote by and , respectively, the expectation and the variance of the stochastic matrix , that is,

(2.4)

If all components of the stochastic matrix are statistically independent, then (2.3) reads

(2.5)

where .

Furthermore, suppose that all components of the stochastic vector are normally distributed and statistically independent. Then, Model (2.5) can be equivalently written as:

(2.6)

where , . So, Model (2.6) has the following deterministic form:

(2.7)

Denote

(2.8)

Then, (2.7) yields

(2.9)

where is the inverse of the cumulative distribution function of the standard normal distribution.
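As a concrete sketch of this conversion, consider a single chance constraint $\Pr\{a^{\top}x \le b\} \ge \alpha$ with independent normal components $a_i \sim N(\mu_i, \sigma_i^2)$; its deterministic equivalent is $\mu^{\top}x + \Phi^{-1}(\alpha)\sqrt{\sum_i \sigma_i^2 x_i^2} \le b$. The following Python snippet (all names and data are illustrative, not taken from the paper) checks this inequality using the standard-normal inverse CDF:

```python
# Deterministic equivalent of a single chance constraint
# Pr(a^T x <= b) >= alpha, where a_i ~ N(mu_i, sigma_i^2) independently.
# All names (mu, sigma, b, alpha) are illustrative placeholders.
from math import sqrt
from statistics import NormalDist

def deterministic_lhs(x, mu, sigma, alpha):
    """mu^T x + Phi^{-1}(alpha) * sqrt(sum_i sigma_i^2 x_i^2)."""
    mean_term = sum(m * xi for m, xi in zip(mu, x))
    std_term = sqrt(sum((s * xi) ** 2 for s, xi in zip(sigma, x)))
    return mean_term + NormalDist().inv_cdf(alpha) * std_term

def chance_constraint_holds(x, mu, sigma, b, alpha):
    """The chance constraint holds iff the deterministic inequality holds."""
    return deterministic_lhs(x, mu, sigma, alpha) <= b

# Example: two-dimensional x with confidence level alpha = 0.95
x = [1.0, 2.0]
mu = [1.0, 1.0]
sigma = [0.5, 0.5]
print(chance_constraint_holds(x, mu, sigma, b=6.0, alpha=0.95))  # prints True
```

Note that the chance constraint remains a convex (second-order cone) constraint in $x$ when $\alpha \ge 0.5$, since $\Phi^{-1}(\alpha) \ge 0$ in that case.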

From the above deduction, we obtain the new relaxed deterministic formulation (2.9) of the original problem (1.1). Based on this model, an efficient solution method is developed in the next section.

3. Interactive Algorithm

In this section, based on Model (2.9), we develop an interactive algorithm to obtain an optimal solution of the original problem (1.1) with a smaller degree of constraint violation; the solution is thus more robust, while taking into account the satisfaction degree of the decision maker. The basic idea of the algorithm is to adjust the three level parameters of the decision maker until a satisfactory solution is obtained.

Note that, for given , , and , we solve a subproblem that turns out to be a minimization problem of a quartic polynomial with one quadratic constraint and several linear constraints [7]. Then, by comparing the features of the solutions of a series of such subproblems, we decide whether the algorithm should terminate. The overall algorithm is as follows.

Algorithm 3.1 (Interactive Algorithm for Stochastic Bi-criteria Problems).

Step 1.

Choose , , and , where , , . Here, and , and denote, respectively, the minimum and the maximum of and given by the decision maker.

Let , , and be three positive constants; for example, fix , , and . Take , , and . Set , , , .

Step 2.

Compute a solution of the following subproblem:

(3.1)

The optimal solution is denoted by , and the corresponding value of the objective function by . Let , .

Step 3.

If , then go to Step 5. Otherwise, go to Step 4.

Step 4.

Ask the decision maker whether and are satisfactory. If they are, then go to Step 9; otherwise, ask the decision maker whether needs to be changed. If it does not, then go to Step 2. Otherwise, ask the decision maker to update by , and go to Step 2.

Step 5.

Let , , and . If , then go to Step 7. Otherwise, go to Step 6.

Step 6.

Ask the decision maker whether needs to be changed. If it does not, then go to Step 2. Otherwise, update by , and go to Step 2.

Step 7.

Let , , , and . If , the algorithm stops, and are the desired results. Otherwise, go to Step 8.

Step 8.

Ask the decision maker whether needs to be changed. If it does not, then go to Step 2. Otherwise, update by , and go to Step 2.

Step 9.

and are the desired results. The algorithm terminates.
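The control flow of Steps 1-9 can be condensed into the following loop. This is a minimal sketch in which `solve_subproblem` and `is_satisfactory` are hypothetical callbacks standing in for Subproblem (3.1) and the decision maker's answers, and the sequential parameter sweep is a simplification of the three nested adjustments:

```python
# Skeleton of the interactive adjustment loop of Algorithm 3.1.
# solve_subproblem(alpha, lam, ell) -> (solution, objective value);
# is_satisfactory(x, val) -> bool plays the role of the decision maker.
# All names and default increments here are illustrative.
def interactive_solve(solve_subproblem, is_satisfactory,
                      alpha, lam, ell,
                      d_alpha=0.01, d_lam=0.1, d_ell=1.0,
                      alpha_max=0.99, lam_max=1.0, ell_max=10.0):
    while True:
        x, val = solve_subproblem(alpha, lam, ell)
        if is_satisfactory(x, val):
            return x, val, (alpha, lam, ell)
        # Adjust the three level parameters in turn (cf. Steps 3-8).
        if alpha + d_alpha <= alpha_max:
            alpha += d_alpha
        elif lam + d_lam <= lam_max:
            lam += d_lam
        elif ell + d_ell <= ell_max:
            ell += d_ell
        else:
            # Parameter ranges exhausted: return the last solution found.
            return x, val, (alpha, lam, ell)
```

In practice, `is_satisfactory` would pose the questions of Steps 4, 6, and 8 to the decision maker, and `solve_subproblem` would invoke a nonlinear solver on the quartic subproblem.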

4. Numerical Experiments

In this section, we study the numerical performance of Algorithm 3.1. To this end, suppose that all components of the stochastic vectors and , and of the matrices and , are normally distributed. These stochastic elements are statistically independent, that is,

(4.1)

where denotes the normal distribution with mean and variance .

First, we implement Algorithm 3.1 in Lingo 9.0 to investigate how the parameters , , and affect the optimal solution. Here, we take , , and . For example, we take

(4.2)

Then, the subproblem in Algorithm 3.1 to be solved is as follows:

(4.3)

where

(4.4)

In Lingo 9.0, we obtain the optimal solution of Model (4.3): , , , and the value of the objective function is 105.682. In the same setting, from Model (1.2) in [9], we obtain an optimal solution , , and , .

With different choices of the level parameters , and , it can be seen how these parameters affect the optimal solution. The numerical results are reported in Table 1.

Table 1 Effects of the three-level parameters on solutions.

From Table 1, it can be seen that adjusting , , and helps the decision maker choose a preferred solution.

At the end of this section, we investigate the degree of constraint violation for the proposed method. By simulation in MATLAB 6.5, 48 samples of all stochastic parameters are generated, yielding 48 optimization problems. We then compare the degree of constraint violation of the method proposed in this paper with that of the expectation method presented in [9].

Let and denote, respectively, the optimal solutions of the expectation model and the new hybrid model, and let and denote the corresponding degrees of violation of all the constraints. Take , , and . Table 2 reports the numerical results.

Table 2 Comparison between expectation method and hybrid method.

Table 2 shows that, at the 0.95 probability level, the optimal solution of the hybrid method violates no constraints in any of the 48 samples, whereas the solution of the expectation method violates the constraints 19 times.
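The sampling experiment can be mimicked with a small Monte Carlo check. Everything below (distributions, candidate solutions, bounds) is a made-up illustration of the procedure, not the paper's data:

```python
# Count, over sampled realizations of a random constraint a^T x <= b with
# independent a_i ~ N(mu_i, sigma_i^2), how often a fixed solution x
# violates it. Mirrors the 48-sample comparison in spirit only.
import random

def count_violations(x, mu, sigma, b, n_samples=48, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducibility
    violations = 0
    for _ in range(n_samples):
        a = [rng.gauss(m, s) for m, s in zip(mu, sigma)]
        if sum(ai * xi for ai, xi in zip(a, x)) > b:
            violations += 1
    return violations

# A conservative (chance-constrained) solution should rarely violate the
# constraint, an aggressive one far more often; compare two hypothetical
# solutions under the same random data:
x_conservative = [0.5, 0.5]
x_aggressive = [2.0, 2.0]
mu, sigma, b = [1.0, 1.0], [0.3, 0.3], 3.0
print(count_violations(x_conservative, mu, sigma, b),
      count_violations(x_aggressive, mu, sigma, b))
```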

5. Final Remarks

In this paper, a class of stochastic bi-criteria optimization problems was studied by a new hybrid method in which chance-constrained programming (CCP) is combined with the variance-expectation (VE) method. An interactive algorithm was then developed to find an optimal solution of the original problem, reflecting the satisfaction degree of the decision maker.

Following the proposed hybrid method, if all the stochastic inequalities were handled by the chance-constraint method, the optimal solution would have a smaller degree of constraint violation than that obtained by the method proposed in this paper. In this situation, however, the following joint chance constraint is generated:

(5.1)

Even under strong assumptions, it is difficult to obtain, for such stochastic constraints, explicit deterministic inequality constraints involving only the decision variable. This calls for the investigation of other, more efficient approaches.