Introduction

Since the 1970s, numerous studies have addressed how to solve multiple-objective linear programming problems (MOLPPs) (see Ignizio 1985; Lai and Hwang 1994). There are three approaches to solving an MOLPP (Lotfi et al. 1997):

  1. Vector maximization.

  2. Utility maximization.

  3. Aspiration level approach.

Aspiration levels for the objective functions are specified by a decision maker (DM) in solving the problem. Multi-objective programming (MOP) entails mathematical optimization problems involving more than one objective function to be optimized simultaneously, and is closely related to multiple criteria decision making (MCDM). The most popular technique for solving MCDM problems, especially MODM (multiple-objective decision making) problems, is goal programming (GP). GP was first introduced by Charnes et al. (1955) and further developed by Lee (1972), Ignizio (1985), Tamiz et al. (1998), and Romero (2001), among others (see Chang 2004, 2007, 2008). In a GP approach, a set of satisfying solutions is found by solving MODM problems, enabling a DM to set her aspiration level for each objective function. The intention is to minimize the deviations of the achievement functions from their aspiration levels. This can be done by various methods such as lexicographic GP, weighted GP (WGP), and min–max (Chebyshev) GP (see Lee 1972; Romero 2001; Vitoriano and Romero 1999; Arenas-Parra et al. 2010; Aouni and Kettani 2001; Ignizio 1976; Ijiri 1965). The most widely used achievement function model for GP, namely weighted goal programming (WGP), can be expressed as follows:

$$\begin{aligned} \text{(WGP)}\quad \text{Min}\;\sum\limits_{i = 1}^{n} {w_{i} (d_{i}^{ + } + d_{i}^{ - } )} \hfill \\ \text{s.t.}\quad \;\quad f_{i} (x) - d_{i}^{ + } + d_{i}^{ - } = g_{i} ,\quad i = 1, \ldots ,n, \hfill \\ \quad \quad \quad d_{i}^{ + } ,d_{i}^{ - } \ge 0,\quad i = 1, \ldots ,n, \hfill \\ \quad \quad \quad X \in F\quad \;\;\quad (F,\;{\text{a feasible}}\;{\text{set}}), \hfill \\ \end{aligned}$$

where the \(w_{i}\) are the weight factors associated with the ith objective function (goal), the \(f_{i}(x)\) and \(g_{i}\) are, respectively, the linear functions and aspiration levels associated with the goals, and \(d_{i}^{+}\) and \(d_{i}^{-}\) are, respectively, the positive and negative deviations from the aspiration level of the ith goal.

If we are faced with problems such as a lack of available resources and information, then we may not be able to specify the actual aspiration levels. In such situations, the achievements actually obtained may be higher than the aspiration levels defined by a DM; we then say that the DM underestimated the initial aspiration levels, meaning that higher aspiration levels could have been reached under the available resources and information. Methods for solving a GP problem usually consider only one aspiration level for the right-hand side of a constraint. Chang (2007, 2008) proposed a new concept, namely multi-choice goal programming (MCGP), in order to handle goals with multiple aspiration levels. This formulation helps a DM to find the optimal aspiration level, under the given constraints, for each goal.

The achievement function of MCGP is defined to be (Chang 2007)

$$\begin{aligned} \text{Min}\quad \sum\limits_{i = 1}^{n} {w_{i} } \left| {f_{i} (X) - g_{i1} \;\text{or}\;\;g_{i2} \;\text{or} \ldots \text{or}\;g_{im} } \right|, \hfill \\ \text{s.t.}\quad \;X \in F\quad \;(F,\;{\text{a feasible}}\;{\text{set}}) \hfill \\ \end{aligned}$$

where \(g_{ij}\) is the jth aspiration level (for j = 1, 2, …, m) corresponding to the ith goal (for i = 1, 2, …, n). The other variables are as defined in WGP.

To facilitate formulation of the model above, Chang (2007) introduced \(\left\lceil {(\ln \;n/\ln \;2)} \right\rceil\) binary variables for the MCGP model with n aspiration levels.
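
As a quick illustration (a Python sketch, not part of the original text), the number of binary variables needed to encode n aspiration levels is \(\left\lceil \ln n/\ln 2 \right\rceil\):

```python
import math

def num_binary_vars(n: int) -> int:
    """Number of binary variables needed to encode n aspiration
    levels in an MCGP model: ceil(ln n / ln 2)."""
    return math.ceil(math.log(n) / math.log(2))

# e.g. three aspiration levels can be encoded with two binary variables
counts = {n: num_binary_vars(n) for n in (2, 3, 4, 5)}
```

For instance, a goal with five aspiration levels needs three binary variables, since the eight combinations of three binaries suffice to select among five levels.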

The corresponding MCGP model is shown as follows (see Aouni and Kettani 2001):

$$\begin{aligned} \text{Min}\quad \sum\limits_{i = 1}^{n} {w_{i} (d_{i}^{ + } + d_{i}^{ - } )} \hfill \\ \text{s.t.}\quad \;f_{i} (x) - d_{i}^{ + } + d_{i}^{ - } = \sum\limits_{j = 1}^{m} {g_{ij} s_{ij} (B),} \quad i = 1, \ldots ,n, \hfill \\ \quad \quad d_{i}^{ + } ,d_{i}^{ - } \ge 0,\quad i = 1, \ldots ,n, \hfill \\ \quad \quad s_{ij} (B) \in R_{i} (x),\;\;i = 1, \ldots ,n,\quad j = 1, \ldots ,m, \hfill \\ \quad \quad X \in F\quad \;(F,\;{\text{a feasible}}\;{\text{set}}) \hfill \\ \end{aligned}$$

where \(s_{ij}(B)\) is the jth function of binary serial numbers corresponding to the ith goal, \(R_{i}(x)\) is a function of resource limitations, and the other notations are as defined in WGP.

In recent years, researchers have applied the concept of MCGP in their studies. For example, Francisco da Silva et al. (2013) introduced a multi-choice mixed integer goal programming model, compared it with the WGP model, and showed its usefulness in multi-choice aspiration level (MCAL) problems. Moreover, Patro et al. (2015) presented an equivalent model for MCGP problems.

Chang (2015) presented an MCGP model to avoid the underestimation of aspiration levels that commonly occurs in GP problems. Also, Jadidi et al. (2015) proposed a new MCGP model for situations in which there is an interval aspiration level on the right-hand side of the equation, allowing DMs to choose a level in compliance with their preferences.

Chang (2011) proposed that, in order to consider a DM's preferences in MCGP problems, we need to add utility functions as a decision aid for the DM. A utility function represents the DM's preferences and offers her more flexibility about the goal or attribute (see Al-Nowaihi et al. 2008; Yu et al. 2009; Podinovski 2010). According to available studies (see Licalzi and Sorato 2006), four possible utility functions are considered (concave, convex, S-shaped and reverse S-shaped). Our aim here is to propose a new model for solving MODM problems in which each goal has multiple utility functions. To our knowledge, solutions of such problems have not been considered in the literature. Here, we make use of Bayesian theory to deduce the probabilities of the utility functions. Abbasian et al. (2015) recently proposed a new approach for solving unconstrained multi-objective problems having multiple utility functions using Bayesian theory, obtaining the probabilities as aspiration levels of the objective functions.

The remainder of our work is organized as follows. In Sect. 2, an optimization model is presented for MODM problems with multiple utility functions. To depict the usefulness and effectiveness of the suggested model, examples are worked through in Sect. 3. Finally, conclusions are made in Sect. 4.

Modeling constrained MOLP problems

Here, we use Bayesian theory to calculate the probabilities of the utility functions corresponding to the objective functions (see Olshausen 2004). Bayesian theory provides a way to assess events based on their probability of occurrence or non-occurrence. Since it is often difficult to calculate the probability of an event directly, we can use Bayesian theory to work with the conditional probability of the event. Our approach makes use of Bayes' rule as follows:

Bayes’ rule:

If \(B_{1}, \ldots, B_{k}\) partition a sample space S, and A is an arbitrary event with \(p(A) > 0\), then

$$p(B_{r} |A) = \frac{{p(B_{r} \cap A)}}{p(A)} = \frac{{p(B_{r} ) \cdot p(A|B_{r} )}}{{\sum\limits_{i} {p(B_{i} ) \cdot p(A|B_{i} )} }},\quad r = 1, \ldots ,k,$$

where \(p(B_{r} |A)\) denotes the conditional probability of the occurrence of event \(B_{r}\), given that event A has occurred (see Olshausen 2004). A constrained multi-objective problem can be expressed as follows:

$$\begin{aligned} \text{Min}\quad F(X) = \left\{ {f_{1} (x), \ldots ,f_{n} (x)} \right\} \hfill \\ \text{s.t.}\quad \;g_{j} (x)\left[ \begin{aligned} \le \hfill \\ \ge \hfill \\ = \hfill \\ \end{aligned} \right]\quad 0,\quad j = 1, \ldots ,m \hfill \\ \quad \quad x \in F. \hfill \\ \end{aligned}$$

Here, we consider utilities \(u_{i1}, \ldots, u_{i,p_{i}}\) for each objective function \(f_{i}\).

Assuming that the utilities \(u_{i1}, \ldots, u_{i,p_{i}}\) corresponding to the ith objective function are mutually exclusive, the probability of the utilities is considered separately under the conditions of dependence and independence of the variables, and then the total probability and sum-of-probabilities rules are used. Here, we show the effect of independence of the variables on the utility functions, and consider both dependence and independence of the variables in the examples of Sect. 3.

The probability of the utilities using Bayesian theory is calculated as follows:

$$p(u_{ir} |x_{j} ) = \frac{{p(u_{ir} \cap x_{j} )}}{{p(x_{j} )}} = \frac{{p(x_{j} |u_{ir} )p(u_{ir} )}}{{\sum\limits_{k = 1}^{{p_{i} }} {p(x_{j} |u_{ik} )p(u_{ik} )} }},\quad \forall i,j,r,\quad \quad i = 1, \ldots ,n,\quad j = 1, \ldots ,n,\quad r = 1, \ldots ,p_{i} .$$
(1)
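
Equation (1) is an instance of Bayes' rule over the partition of utilities. A minimal Python helper (a sketch, not from the original text) that returns the posteriors given priors and likelihoods is:

```python
def bayes_posteriors(priors, likelihoods):
    """Posteriors p(u_r | x) from priors p(u_r) and likelihoods p(x | u_r)
    over mutually exclusive utilities u_1, ..., u_k."""
    evidence = sum(p * l for p, l in zip(priors, likelihoods))  # p(x)
    return [p * l / evidence for p, l in zip(priors, likelihoods)]

# two equally likely utilities; x is three times as likely under the first
post = bayes_posteriors([0.5, 0.5], [0.6, 0.2])  # ~ [0.75, 0.25]
```

By construction the posteriors sum to one, which is what allows them to be recombined by the total probability rule in the next step.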

Regarding the definition of prior and posterior probabilities, a DM can set prior probabilities for the utility functions in order to compute the conditional probabilities above. The new conditional probability of \(u_{ir}\) is called the posterior probability. This way, we can use the total probability rule for the objective function \(f_{i}\) to reach the final probabilities as follows:

$$p(u_{ir} ) = \sum\limits_{j = 1}^{n} {p(u_{ir} |x_{j} )\;p(x_{j} ),\quad \forall i,\;r,\quad i = 1, \ldots ,n,\quad r = 1, \ldots ,p_{i} .}$$
(2)

Now, for each \(f_{i}\), let \(p(u_{i1}), \ldots, p(u_{i,p_{i}})\) be the achieved aspiration levels. We use the definition of the sum of probabilities to combine all the final probabilities, Eq. (2), for each objective separately, so that each objective has only one probability. Taking this probability value as the aspiration level of the objective function, we can solve the problem with only one aspiration level using a GP formulation. The aim of GP is to minimize the deviations of the achievements of the goals from their aspiration levels. This problem can be solved by weighted GP (WGP) and can be expressed as the following program:

$$\begin{aligned} \text{Min}\quad \sum\limits_{i = 1}^{n} {w_{i} (d_{i}^{ + } + d_{i}^{ - } )} \hfill \\ \text{s.t.}\quad \;f_{i} (x) - d_{i}^{ + } + d_{i}^{ - } = g_{i} ,\quad i = 1, \ldots ,n, \hfill \\ \quad \quad d_{i}^{ + } ,d_{i}^{ - } ,x \ge 0,\quad \hfill \\ \quad \quad x \in F, \hfill \\ \end{aligned}$$

where \(g_{i}\) is the sum of the probabilities of the utility functions corresponding to the ith goal.

If we do not use the sum of probabilities, then we have multiple probability values for each objective function, which we take as aspiration levels of the objective function. To the best of our knowledge, the problem with multiple aspirations cannot be solved by current GP approaches, because in traditional GP with multiple aspiration levels the aspirations are input values, while in reality some conditions may influence the desired deviation of the goals. Thus, an approach is required to handle different conditions for computing the aspiration levels of the objective functions (see Lotfi et al. 1997; Chang 2011). This problem can be solved by an MCGP approach and can be expressed as follows:

$$\begin{aligned} \text{Min}\quad \sum\limits_{i = 1}^{n} {w_{i} (d_{i}^{ + } + d_{i}^{ - } )} \hfill \\ \text{s.t.}\quad \;f_{i} (x) - d_{i}^{ + } + d_{i}^{ - } = \sum\limits_{j = 1}^{m} {g_{ij} s_{ij} (B),} \quad i = 1, \ldots ,n, \hfill \\ \quad \quad d_{i}^{ + } ,d_{i}^{ - } \ge 0,\quad i = 1, \ldots ,n, \hfill \\ \quad \quad s_{ij} (B) \in R_{i} (x),\;\;i = 1, \ldots ,n,\quad j = 1, \ldots ,m, \hfill \\ \quad \quad X \in F\quad \hfill \\ \end{aligned}$$

where \(g_{ij}\) is the jth aspiration level corresponding to the ith goal and \(s_{ij}(B)\) is the jth function of binary serial numbers corresponding to the ith goal.

Illustrative examples

Here, we consider a constrained MOLP problem with multiple utility functions for each goal, and then consider the effects of independence and dependence of the variables on the utility functions. The goals and constraints are:

$$\begin{aligned} \begin{array}{ll} {\text{Goal}}\;1: & {\text{max}}\;f_{1} (x) = 3x_{1} + 2x_{2} + x_{3} \\ {\text{Goal}}\;2: & {\text{max}}\;f_{2} (x) = 3x_{2} + 2x_{3} \\ {\text{Goal}}\;3: & {\text{max}}\;f_{3} (x) = 3.5x_{1} + 5x_{2} + 3x_{3} \\ \end{array} \hfill \\ \begin{array}{ll} & {\text{s.t.}} \\ & x_{2} + x_{3} \ge 10 \\ & x_{2} \ge 4 \\ & x_{1} + x_{2} + x_{3} \ge 15. \\ \end{array} \hfill \\ \end{aligned}$$

The utility functions corresponding to the goals are:

$$\left\{ \begin{aligned} u_{11} (x) = 2x_{1} + 3x_{2} + 2x_{3} \hfill \\ u_{12} (x) = x_{1} + 2x_{2} \hfill \\ u_{13} (x) = x_{1} + x_{2} + x_{3} \hfill \\ \end{aligned} \right.$$
$$\left\{ \begin{aligned} u_{21} = x_{2} + x_{3} \hfill \\ u_{22} = 2x_{2} + 2x_{3} \hfill \\ \end{aligned} \right.$$
$$\left\{ \begin{aligned} u_{31} = 2x_{1} + 3x_{3} \hfill \\ u_{32} = x_{1} + 5x_{2} + 3x_{3} . \hfill \\ \end{aligned} \right.$$

Starting with Goal 1, the inputs are \(p(u_{11}) = 0.7\), \(p(u_{12}) = 0.02\) and \(p(u_{13}) = 0.28\).

The required conditional probabilities are:

$$\begin{aligned} p(x_{1} |u_{11} ) = 0.2 \hfill \\ p(x_{2} |u_{11} ) = 0.1 \hfill \\ p(x_{3} |u_{11} ) = 0.3 \hfill \\ p(x_{1} |u_{12} ) = 0.4 \hfill \\ p(x_{2} |u_{12} ) = 0.6 \hfill \\ p(x_{1} |u_{13} ) = 0.1 \hfill \\ p(x_{2} |u_{13} ) = 0.6 \hfill \\ p(x_{3} |u_{13} ) = 0.3. \hfill \\ \end{aligned}$$

If \(x_{1}\), \(x_{2}\) and \(x_{3}\) are assumed to affect the utility functions independently, then we arrive at the following probabilities:

$$\begin{aligned} p(x_{1} ) = p(x_{1} |u_{11} )p(u_{11} ) + p(x_{1} |u_{12} )p(u_{12} ) + p(x_{1} |u_{13} )p(u_{13} ) \hfill \\ \quad \;\quad \, = (0.2)(0.7) + (0.4)(0.02) + (0.1)(0.28) \cong 0.18 \hfill \\ p(x_{2} ) = p(x_{2} |u_{11} )p(u_{11} ) + p(x_{2} |u_{12} )p(u_{12} ) + p(x_{2} |u_{13} )p(u_{13} ) \hfill \\ \quad \quad \;\, = (0.1)(0.7) + (0.6)(0.02) + (0.6)(0.28) = 0.25 \hfill \\ p(x_{3} ) = p(x_{3} |u_{11} )p(u_{11} ) + p(x_{3} |u_{13} )p(u_{13} ) \hfill \\ \quad \quad \;\, = (0.3)(0.7) + (0.3)(0.28) = 0.29 \hfill \\ \end{aligned}$$
$$\begin{aligned} p(u_{11} |x_{1} ) = \frac{{p(x_{1} |u_{11} )p(u_{11} )}}{{p(x_{1} )}} = \frac{(0.2)(0.7)}{0.18} \cong 0.77 \hfill \\ p(u_{11} |x_{2} ) = \frac{{p(x_{2} |u_{11} )p(u_{11} )}}{{p(x_{2} )}} = \frac{(0.1)(0.7)}{0.25} \cong 0.28 \hfill \\ p(u_{11} |x_{3} ) = \frac{{p(x_{3} |u_{11} )p(u_{11} )}}{{p(x_{3} )}} = \frac{(0.3)(0.7)}{0.29} \cong 0.72. \hfill \\ \end{aligned}$$

Using the total probability rule, we obtain:

$$p(u_{11} ) = p(u_{11} |x_{1} )p(x_{1} ) + p(u_{11} |x_{2} )p(x_{2} ) + p(u_{11} |x_{3} )p(x_{3} )$$
$$p(u_{11} ) = (0.77)(0.18) + (0.28)(0.25) + (0.72)(0.29) \cong 0.42.$$
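
The independent-case numbers for Goal 1 can be reproduced directly (a Python check, not part of the original text; tiny differences from the printed values come from intermediate rounding):

```python
# priors for the three utilities of Goal 1
prior = {"u11": 0.7, "u12": 0.02, "u13": 0.28}
# likelihoods p(x_j | u_1r); missing entries are treated as zero
lik = {
    "u11": {"x1": 0.2, "x2": 0.1, "x3": 0.3},
    "u12": {"x1": 0.4, "x2": 0.6},
    "u13": {"x1": 0.1, "x2": 0.6, "x3": 0.3},
}

# marginals p(x_j) by the total probability rule
p_x = {x: sum(prior[u] * lik[u].get(x, 0.0) for u in prior)
       for x in ("x1", "x2", "x3")}

# posteriors p(u11 | x_j) by Bayes' rule (Eq. 1) ...
post = {x: lik["u11"].get(x, 0.0) * prior["u11"] / p_x[x] for x in p_x}
# ... recombined by the total probability rule (Eq. 2)
p_u11 = sum(post[x] * p_x[x] for x in p_x)
```

This gives p(x1) ≈ 0.18, p(x2) = 0.25, p(x3) ≈ 0.29 and an updated p(u11) ≈ 0.42, as in the text.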

On the other hand, if \(x_{1}\), \(x_{2}\) and \(x_{3}\) are assumed to affect the utility functions dependently, then with the following data,

$$\begin{gathered} p(x_{1} |u_{{11}} ) = 0.2 \hfill \\ p(x_{2} |u_{{11}} ) = 0.1 \hfill \\ p(x_{3} |u_{{11}} ) = 0.3 \hfill \\ p(x_{1} |u_{{12}} ) = 0.4 \hfill \\ p(x_{2} |u_{{12}} ) = 0.6 \hfill \\ p(x_{1} |u_{{13}} ) = 0.1 \hfill \\ p(x_{2} |u_{{13}} ) = 0.6 \hfill \\ p(x_{3} |u_{{13}} ) = 0.3. \hfill \\ \end{gathered}$$

we get the following probabilities:

$$\begin{aligned} p(u_{11} |x_{1} x_{2} ) = \frac{{p(x_{1} |u_{11} )p(u_{11} )p(x_{2} |u_{11} x_{1} )}}{{p(x_{1} )p(x_{2} |x_{1} )}} = \frac{(0.2)(0.7)(0.5)}{(0.18)(0.4)} \cong 0.97 \hfill \\ p(u_{11} |x_{1} x_{3} ) = \frac{{p(x_{1} |u_{11} )p(u_{11} )p(x_{3} |x_{1} u_{11} )}}{{p(x_{1} )p(x_{3} |x_{1} )}} = \frac{(0.2)(0.7)(0.45)}{(0.18)(0.4)} = 0.87 \hfill \\ p(u_{11} |x_{2} x_{3} ) = \frac{{p(x_{2} |u_{11} )p(u_{11} )p(x_{3} |x_{2} u_{11} )}}{{p(x_{2} )p(x_{3} |x_{2} )}} = \frac{(0.1)(0.7)(0.3)}{(0.25)(0.2)} \cong 0.42 \hfill \\ p(u_{11} |x_{1} x_{2} x_{3} ) = \frac{{p(x_{3} |u_{11} x_{1} x_{2} )p(x_{2} |u_{11} x_{1} )p(x_{1} |u_{11} )p(u_{11} )}}{{p(x_{3} |x_{1} x_{2} )p(x_{2} |x_{1} )p(x_{1} )}} \hfill \\ \quad \quad \quad \quad \quad \quad = \frac{(0.1)(0.5)(0.2)(0.7)}{(0.8)(0.4)(0.18)} \cong 0.12. \hfill \\ \end{aligned}$$

We then have:

$$\begin{aligned} p(x_{1} x_{2} ) = p(x_{2} |x_{1} )p(x_{1} ) = (0.4)(0.18) \cong 0.07 \hfill \\ p(x_{1} x_{3} ) = p(x_{3} |x_{1} )p(x_{1} ) = (0.4)(0.18) \cong 0.07 \hfill \\ p(x_{2} x_{3} ) = p(x_{3} |x_{2} )p(x_{2} ) = (0.2)(0.25) = 0.05 \hfill \\ p(x_{1} x_{2} x_{3} ) = p(x_{3} |x_{1} x_{2} )p(x_{1} x_{2} ) = (0.8)(0.07) \cong 0.06. \hfill \\ \end{aligned}$$

Using the total probability rule, we obtain:

$$\begin{aligned} p(u_{11} ) = p(u_{11} |x_{1} x_{2} )p(x_{1} x_{2} ) + p(u_{11} |x_{1} x_{3} )p(x_{1} x_{3} ) + p(u_{11} |x_{2} x_{3} )p(x_{2} x_{3} ) + p(u_{11} |x_{1} x_{2} x_{3} )p(x_{1} x_{2} x_{3} ) \hfill \\ p(u_{11} ) = (0.97)(0.07) + (0.87)(0.07) + (0.42)(0.05) + (0.12)(0.06) \cong 0.16. \hfill \\ \end{aligned}$$
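
The dependent-case total for u11 can be checked in the same way (a Python sketch, not from the original text). Note that the denominators cancel when each posterior is multiplied by the corresponding joint probability, so the sum can be formed from the conditional inputs alone:

```python
p_u11 = 0.7  # prior for u11

# chain-rule numerators p(x's, u11): posterior times joint, denominators cancel
joint_x1x2   = 0.2 * p_u11 * 0.5        # p(x1|u11) p(u11) p(x2|u11 x1)
joint_x1x3   = 0.2 * p_u11 * 0.45       # p(x1|u11) p(u11) p(x3|u11 x1)
joint_x2x3   = 0.1 * p_u11 * 0.3        # p(x2|u11) p(u11) p(x3|u11 x2)
joint_x1x2x3 = 0.1 * 0.5 * 0.2 * p_u11  # p(x3|u11 x1 x2) p(x2|u11 x1) p(x1|u11) p(u11)

# total probability rule over the conditioning sets
p_u11_updated = joint_x1x2 + joint_x1x3 + joint_x2x3 + joint_x1x2x3  # ~ 0.16
```

The exact total is 0.161, which rounds to the 0.16 obtained in the text from the rounded intermediate values.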

The same calculations can be carried out for the other two utility functions. Using the given inputs,

$$\begin{aligned} p(x_{2} |u_{12} x_{1} ) = 0.7 \hfill \\ p(x_{2} |x_{1} u_{13} ) = 0.35 \hfill \\ p(x_{3} |x_{1} u_{13} ) = 0.24 \hfill \\ p(x_{3} |x_{2} u_{13} ) = 0.02 \hfill \\ p(x_{3} |u_{13} x_{1} x_{2} ) = 0.77 \hfill \\ p(x_{3} |x_{1} x_{2} ) = 0.8, \hfill \\ \end{aligned}$$

we get

$$p(u_{12} ) = p(u_{12} |x_{1} )p(x{}_{1}) + p(u_{12} |x_{2} )p(x_{2} ) = (0.02)(0.18) + (0.06)(0.25) \cong 0.02\;$$
$$p(u_{12} |x_{1} x_{2} ) = \frac{{p(x_{1} |u_{12} )p(u_{12} )p(x_{2} |u_{12} x_{1} )}}{{p(x_{1} )p(x_{2} |x_{1} )}} = \frac{(0.4)(0.02)(0.7)}{(0.18)(0.4)} = 0.08$$
$$p(u_{12} ) = p(u_{12} |x_{1} x_{2} )p(x_{1} x_{2} ) = (0.08)(0.07) = 0.01$$
$$\begin{aligned} p(u_{13} ) = p(u_{13} |x_{1} )p(x_{1} ) + p(u_{13} |x_{2} )p(x_{2} ) + p(u_{13} |x_{3} )p(x_{3} ) \hfill \\ \quad \quad \;\;\, = (0.16)(0.18) + (0.67)(0.25) + (0.29)(0.29) = 0.28 \hfill \\ \end{aligned}$$
$$\begin{aligned} p(u_{13} ) = p(u_{13} |x_{1} x_{2} )p(x_{1} x{}_{2}) + p(u_{13} |x_{1} x_{3} )p(x_{1} x_{3} ) + p(u_{13} |x_{2} x_{3} )p(x_{2} x_{3} ) \hfill \\ \quad \quad \quad \, + p(u_{13} |x_{1} x_{2} x_{3} )p(x_{1} x_{2} x_{3} ) \hfill \\ \quad \quad \;\, = (0.14)(0.07) + (0.09)(0.07) + (0.07)(0.05) + (0.13)(0.06) \cong 0.03. \hfill \\ \end{aligned}$$

We continue with Goal 2 and Goal 3.

Next, assume the following data for Goal 2:

$$\begin{aligned} p(u_{21} ) = 0.5 \hfill \\ p(u_{22} ) = 0.5 \hfill \\ \end{aligned}$$
$$\begin{aligned} p(x_{2} |u_{21} ) = 0.6 \hfill \\ p(x_{3} |u_{21} ) = 0.2 \hfill \\ p(x_{2} |u_{22} ) = 0.3 \hfill \\ p(x_{3} |u_{22} ) = 0.6 \hfill \\ p(x_{3} |x_{2} ) = 0.8 \hfill \\ p(x_{3} |u_{21} x_{2} ) = 0.6.\quad {\kern 1pt} \hfill \\ \end{aligned}$$

Doing similar computations as before, we arrive at the final probabilities as follows:

$$p(u_{21} ) = p(u_{21} |x_{2} )p(x_{2} ) + p(u_{21} |x_{3} )p(x_{3} ) = (0.66)(0.45) + (0.25)(0.4) \cong 0.4$$
$$p(u_{21} |x_{2} x_{3} ) = \frac{{p(x_{2} |u_{21} )p(u_{21} )p(x_{3} |u_{21} x_{2} )}}{{p(x_{2} )p(x_{3} |x_{2} )}} = \frac{(0.6)(0.5)(0.6)}{(0.45)(0.8)} = 0.5$$
$$p(u_{21} ) = p(u_{21} |x_{2} x_{3} )p(x_{2} x_{3} ) = (0.5)(0.36) = 0.18$$
$$p(u_{22} ) = p(u_{22} |x_{2} )p(x_{2} ) + p(u_{22} |x_{3} )p(x_{3} ) = (0.33)(0.45) + (0.75)(0.4) \cong 0.45$$
$$p(u_{22} |x_{2} x_{3} ) = \frac{{p(x_{2} |u_{22} )p(u_{22} )p(x_{3} |u_{22} x_{2} )}}{{p(x_{2} )p(x_{3} |x_{2} )}} = \frac{(0.3)(0.5)(0.6)}{(0.45)(0.8)} = 0.25$$
$$p(u_{22} ) = p(u_{22} |x_{2} x_{3} )p(x_{2} x_{3} ) = (0.25)(0.36) = 0.09.$$

Using the following inputs for Goal 3,

$$\begin{aligned} p(u_{31} ) = 0.4 \hfill \\ p(u_{32} ) = 0.2 \hfill \\ p(x_{1} |u_{31} ) = 0.4 \hfill \\ p(x_{3} |u_{31} ) = 0.2 \hfill \\ p(x_{1} |u_{32} ) = 0.2 \hfill \\ p(x_{2} |u_{32} ) = 0.5 \hfill \\ p(x_{3} |u_{32} ) = 0.1 \hfill \\ p(x_{3} |x_{1} ) = 0.9 \hfill \\ p(x_{3} |u_{31} x_{1} ) = 0.8 \hfill \\ \end{aligned}$$
$$\begin{aligned} p(x_{2} |u_{32} x_{1} ) = 0.43 \hfill \\ p(x_{2} |x_{1} ) = 0.6 \hfill \\ p(x_{3} |u_{32} x_{1} ) = 0.3 \hfill \\ \end{aligned}$$
$$\begin{aligned} p(x_{3} |x_{2} ) = 0.4 \hfill \\ p(x_{3} |u_{32} x_{2} ) = 0.32 \hfill \\ p(x_{3} |u_{32} x_{1} x_{2} ) = 0.65 \hfill \\ p(x_{3} |x_{1} x_{2} ) = 0.75, \hfill \\ \end{aligned}$$

we get

$$p(u_{31} ) = p(u_{31} |x_{1} )p(x_{1} ) + p(u_{31} |x_{3} )p(x_{3} ) = (0.57)(0.28) + (0.57)(0.14) \cong 0.24$$
$$p(u_{31} |x_{1} x_{3} ) = \frac{{p(x_{1} |u_{31} )p(u_{31} )p(x_{3} |u_{31} x_{1} )}}{{p(x_{1} )p(x_{3} |x_{1} )}} = \frac{(0.4)(0.4)(0.8)}{(0.28)(0.9)} \cong 0.51$$
$$p(u_{31} ) = p(u_{31} |x_{1} x_{3} )p(x_{1} x_{3} ) = (0.51)(0.25) \cong 0.13$$
$$\begin{aligned} p(u_{32} ) = p(u_{32} |x_{1} )p(x_{1} ) + p(u_{32} |x_{2} )p(x_{2} ) + p(u_{32} |x_{3} )p(x_{3} ) \hfill \\ = (0.43)(0.28) + (1)(0.3) + (0.43)(0.14) \cong 0.48 \hfill \\ \end{aligned}$$
$$\begin{aligned} p(u_{32} ) = p(u_{32} |x_{1} x_{2} )p(x_{1} x_{2} ) + p(u_{32} |x_{1} x_{3} )p(x_{1} x_{3} ) + p(u_{32} |x_{2} x_{3} )p(x_{2} x_{3} ) \hfill \\ \quad \quad \quad + p(u_{32} |x_{1} x_{2} x_{3} )p(x_{1} x_{2} x_{3} ) = (0.31)(0.17) + (0.14)(0.25) + (0.8)(0.12) + (0.27)(0.13) \cong 0.22. \hfill \\ \end{aligned}$$

We are now able to solve the problem in the two cases considered.

Case I: Independence of variables

Here, we need to consider two models. First, we solve the following GP model:

$$\begin{aligned} {\text{Min}}\;\begin{array}{*{20}c} {(d_{1}^{ + } + d_{1}^{ - } ) + (d_{2}^{ + } + d_{2}^{ - } ) + (d_{3}^{ + } + d_{3}^{ - } )} & {} \\ \end{array} \hfill \\ \text{s.t.}\begin{array}{*{20}c} {} & {3x_{1} + 2x_{2} + x_{3} - d_{1}^{ + } + d_{1}^{ - } = 72} \\ \end{array} \hfill \\ \begin{array}{*{20}c} {\begin{array}{*{20}c} {} & {} \\ \end{array} } & \begin{aligned} 3x_{2} + 2x_{3} - d_{2}^{ + } + d_{2}^{ - } = 85 \hfill \\ 3.5x_{1} + 5x_{2} + 3x_{3} - d_{3}^{ + } + d_{3}^{ - } = 72 \hfill \\ x_{2} + x_{3} \ge 10 \hfill \\ x_{2} \ge 4 \hfill \\ x_{1} + x_{2} + x_{3} \ge 15. \hfill \\ \end{aligned} \\ \end{array} \quad \;\;\; \hfill \\ \end{aligned}$$

We solved the program using LINGO software (see Schrage 2008) and obtained the optimal solution \((x_{1}, x_{2}, x_{3}) = (0, 28.33, 0)\) shown in Table 1, under the column entitled GP corresponding to case I.
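
The reported GP solution can be verified without LINGO (a Python check, not part of the original text), by evaluating the three goals at \(x = (0, 85/3, 0) \approx (0, 28.33, 0)\) and summing the deviations:

```python
x1, x2, x3 = 0.0, 85.0 / 3.0, 0.0  # reported solution, x2 = 28.33 kept exact

f1 = 3 * x1 + 2 * x2 + x3        # Goal 1, aspiration level 72
f2 = 3 * x2 + 2 * x3             # Goal 2, aspiration level 85
f3 = 3.5 * x1 + 5 * x2 + 3 * x3  # Goal 3, aspiration level 72

total_deviation = abs(f1 - 72) + abs(f2 - 85) + abs(f3 - 72)  # ~ 85

# feasibility of the original constraints
feasible = (x2 + x3 >= 10) and (x2 >= 4) and (x1 + x2 + x3 >= 15)
```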

Table 1 Solutions corresponding to cases I and II

Second, based on the proposed MCGP approach (see Chang 2007), the problem is formulated as follows:

$$\begin{aligned} {\text{Min}}\begin{array}{*{20}c} {} & {} \\ \end{array} \begin{array}{*{20}c} {d_{1}^{ + } + d_{1}^{ - } + d_{2}^{ + } + d_{2}^{ - } + d_{3}^{ + } + d_{3}^{ - } } & {} \\ \end{array} \hfill \\ {\text{s}} . {\text{t}} .\begin{array}{*{20}c} {} & {} & {3x_{1} + 2x_{2} + x_{3} } \\ \end{array} - d_{1}^{ + } + d_{1}^{ - } = 42z_{1} z_{2} + 2z_{1} (1 - z_{2} ) + 28(1 - z_{1} )z_{2} , \hfill \\ \begin{array}{*{20}c} {} & {} & {} & \begin{aligned} 3x_{2} + 2x_{3} - d_{2}^{ + } + d_{2}^{ - } = 40z_{3} + 45(1 - z_{3} ) \hfill \\ 3.5x_{1} + 5x_{2} + 3x_{3} - d_{3}^{ + } + d_{3}^{ - } = 24z_{4} + 48(1 - z_{4} ) \hfill \\ d_{i}^{ + } ,d_{i}^{ - } \ge 0,\begin{array}{*{20}c} {} & {i = 1,2,3,} \\ \end{array} \hfill \\ x_{2} + x_{3} \ge 10 \hfill \\ x_{2} \ge 4 \hfill \\ x_{1} + x_{2} + x_{3} \ge 15, \hfill \\ \end{aligned} \\ \end{array} \hfill \\ \end{aligned}$$

where \(z_{1}\), \(z_{2}\), \(z_{3}\) and \(z_{4}\) are binary variables, and \(d_{i}^{+}\) and \(d_{i}^{-}\) are the positive and negative deviation variables, respectively.

We solved this problem using LINGO (see Schrage 2008) again to obtain the optimal solution \((x_{1}, x_{2}, x_{3}, z_{1}, z_{2}, z_{3}, z_{4}) = (0, 10, 5, 0, 1, 1, 0)\) shown in Table 1, under the column entitled MCGP corresponding to case I.
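
Substituting the reported MCGP solution (a Python check, not from the original text) confirms which aspiration levels the binary variables select and a total deviation of 20:

```python
x1, x2, x3 = 0, 10, 5
z1, z2, z3, z4 = 0, 1, 1, 0

# multi-choice right-hand sides selected by the binary variables
g1 = 42 * z1 * z2 + 2 * z1 * (1 - z2) + 28 * (1 - z1) * z2  # selects 28
g2 = 40 * z3 + 45 * (1 - z3)                                # selects 40
g3 = 24 * z4 + 48 * (1 - z4)                                # selects 48

total_deviation = (abs(3 * x1 + 2 * x2 + x3 - g1)
                   + abs(3 * x2 + 2 * x3 - g2)
                   + abs(3.5 * x1 + 5 * x2 + 3 * x3 - g3))  # 3 + 0 + 17 = 20
```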

Case II: Dependence of variables

Here, we formulate the problem using the WGP model. Suppose that all the weights attached to the deviations are equal to one. We then have the following problem:

$$\begin{aligned} {\text{Min}}\begin{array}{*{20}c} {\quad w_{1} (d_{1}^{ + } + d_{1}^{ - } ) + w_{2} (d_{2}^{ + } + d_{2}^{ - } ) + w_{3} (d_{3}^{ + } + d_{3}^{ - } )} & {} \\ \end{array} \hfill \\ \text{s.t.}\begin{array}{*{20}c} {} & {3x_{1} + 2x_{2} + x_{3} - d_{1}^{ + } + d_{1}^{ - } = 20} \\ \end{array} \hfill \\ \begin{array}{*{20}c} {\begin{array}{*{20}c} {} & {} \\ \end{array} } & \begin{aligned} 3x_{2} + 2x_{3} - d_{2}^{ + } + d_{2}^{ - } = 27 \hfill \\ 3.5x_{1} + 5x_{2} + 3x_{3} - d_{3}^{ + } + d_{3}^{ - } = 35 \hfill \\ x_{2} + x_{3} \ge 10 \hfill \\ x_{2} \ge 4 \hfill \\ x_{1} + x_{2} + x_{3} \ge 15. \hfill \\ \end{aligned} \\ \end{array} \hfill \\ \end{aligned}$$

We solved the program using LINGO (see Schrage 2008) to obtain the optimal solution \((x_{1}, x_{2}, x_{3}) = (0.5, 4, 10.5)\) shown in Table 1, under the column entitled WGP corresponding to case II.
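
As before, the reported WGP solution for case II can be checked directly (a Python sketch, not part of the original text; unit weights as assumed above):

```python
x1, x2, x3 = 0.5, 4.0, 10.5  # reported solution for case II

deviations = [
    abs(3 * x1 + 2 * x2 + x3 - 20),        # Goal 1, aspiration level 20
    abs(3 * x2 + 2 * x3 - 27),             # Goal 2, aspiration level 27
    abs(3.5 * x1 + 5 * x2 + 3 * x3 - 35),  # Goal 3, aspiration level 35
]
total_deviation = sum(deviations)  # unit weights: 0 + 6 + 18.25 = 24.25
```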

Next, the problem is formulated as an MCGP problem as follows:

$$\begin{aligned} {\text{Min}}\begin{array}{*{20}c} {} & {} \\ \end{array} \begin{array}{*{20}c} {d_{1}^{ + } + d_{1}^{ - } + d_{2}^{ + } + d_{2}^{ - } + d_{3}^{ + } + d_{3}^{ - } } & {} \\ \end{array} \hfill \\ \text{s.t.}\begin{array}{*{20}c} {} & {} & {3x_{1} + 2x_{2} + x_{3} } \\ \end{array} - d_{1}^{ + } + d_{1}^{ - } = 16z_{1} z_{2} + 1z_{1} (1 - z_{2} ) + 3(1 - z_{1} )z_{2} , \hfill \\ \begin{array}{*{20}c} {} & {} & {} & \begin{aligned} 3x_{2} + 2x_{3} - d_{2}^{ + } + d_{2}^{ - } = 18z_{3} + 9(1 - z_{3} ) \hfill \\ 3.5x_{1} + 5x_{2} + 3x_{3} - d_{3}^{ + } + d_{3}^{ - } = 13z_{4} + 22(1 - z_{4} ) \hfill \\ d_{i}^{ + } ,d_{i}^{ - } \ge 0,\begin{array}{*{20}c} {} & {i = 1,2,3,} \\ \end{array} \hfill \\ x_{2} + x_{3} \ge 10 \hfill \\ x_{2} \ge 4 \hfill \\ x_{1} + x_{2} + x_{3} \ge 15. \hfill \\ \end{aligned} \\ \end{array} \hfill \\ \end{aligned}$$

The program was solved using LINGO (see Schrage 2008) to obtain the optimal solution \((x_{1}, x_{2}, x_{3}, z_{1}, z_{2}, z_{3}, z_{4}) = (0, 4, 11, 1, 1, 1, 0)\) shown in Table 1, under the column entitled MCGP corresponding to case II.
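
A final check of the case II MCGP solution (a Python sketch, not from the original text) gives a total deviation of 50:

```python
x1, x2, x3 = 0, 4, 11
z1, z2, z3, z4 = 1, 1, 1, 0

# multi-choice right-hand sides selected by the binary variables
g1 = 16 * z1 * z2 + 1 * z1 * (1 - z2) + 3 * (1 - z1) * z2  # selects 16
g2 = 18 * z3 + 9 * (1 - z3)                                # selects 18
g3 = 13 * z4 + 22 * (1 - z4)                               # selects 22

total_deviation = (abs(3 * x1 + 2 * x2 + x3 - g1)
                   + abs(3 * x2 + 2 * x3 - g2)
                   + abs(3.5 * x1 + 5 * x2 + 3 * x3 - g3))  # 3 + 16 + 31 = 50
```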

Tables 2 and 3 summarize the results obtained from the GP and MCGP models, respectively. As seen in Table 2, the MCGP model has a total deviation of 20 units, which is clearly better than the total deviation of 85 units obtained by the GP model. From Table 2, we see that for the GP model of case I, Goal 1 achieved 78.7% of its aspiration level of 72 (i.e., an achievement of 56.66), Goal 2 reached its aspiration level of 85 exactly, and Goal 3 achieved 51% of its aspiration level of 72. However, these values change when the MCGP model is applied, and the solutions obtained by the MCGP model are better than those of GP, because the percentages of goal achievement obtained from the MCGP model are higher than those obtained by the GP model.

Table 2 Comparison of GP and MCGP models in case I
Table 3 Comparison of GP and MCGP models in case II

We see in Table 3 that the MCGP model has a total deviation of 50, which is larger than the 24.25 obtained by the GP model in case II. As seen from Table 3, for the GP model of case II, Goal 1 reached 100% of its aspiration level of 20, Goal 2 reached 81.8% of its aspiration level of 27, and Goal 3 reached 65.7% of its aspiration level of 35. Also, the percentages of goal achievement by the MCGP model are lower than those obtained by the GP model. However, if we solve the GP model by selecting the aspiration levels from the MCGP model for each goal separately, then we reach larger deviation values for the GP model than for the MCGP model. In this case, the MCGP model outperforms the GP model.

Conclusions

We proposed a new approach for solving multiple-objective linear programming problems (MOLPPs) having multiple utility functions for each goal. Available studies have considered MOLP problems in which each objective function has only one utility function. Here, we allowed multiple utility functions for each goal and calculated the probabilities of the utility functions. These probabilities were taken as aspiration levels for the goal programming (GP) or multi-choice goal programming (MCGP) models. The usefulness of the approach was illustrated by working through an example in the two cases of dependence and independence of the variables. The results showed that a decision maker (DM) can reach her ideal solutions by solving the GP or MCGP models. We observed that the MCGP model obtained better solutions than the GP model.