## Background

The earliest goal programming formulation was introduced by Charnes et al. (1955). Subsequent contributions by Charnes and Cooper (1977), Ijiri (1965), Lee (1972), and Ignizio (1976) established goal programming as a useful tool for multi-criteria decision-making (MCDM) problems. Updated presentations of goal programming have been given by Tamiz et al. (1998), Lee and Olson (2000), Jones and Tamiz (2002), and Ignizio and Romero (2003). Goal programming methodologies such as weighted goal programming, min-max goal programming, and lexicographic goal programming are discussed in the study of Romero (2004). Besides these three methods, another method, logarithmic goal programming, was introduced by Wang et al. (2005).

Our proposed method is goal geometric programming with logarithmic deviational variables. In this formulation, we solve by geometric programming because many real-life situations and engineering applications involve nonlinear equations, and geometric programming is a very useful tool for this special type of nonlinear programming problem. Since we use the geometric programming method to solve a nonlinear goal programming problem, the degree of difficulty plays a great role in this context. The degree of difficulty of the proposed method is lower than that of other methods, such as goal geometric programming using the weighted sum method.

The concept of taking multiplicative deviational variables as an objective function is not new. Previously, Verma (1990) and the paper entitled 'Goal geometric programming problem (G2P2) with product method' by Ghosh and Roy (2012) used this concept. In this paper, we start with additive deviational variables as the objective function, which are then converted into multiplicative deviational variables using the logarithmic concept. The method of conversion is given as 'Result 1'.

The arrangement of the paper is as follows: the background of the study and the goal programming model are presented first. A result (Result 1) is presented together with its proof, followed by the model of weighted goal programming with logarithmic deviational variables. The sections on the goal geometric programming model with logarithmic deviational variables and its solution procedure are followed by a theorem on the weighted goal programming model with logarithmic deviational variables and its proof (Result 2). Next, a numerical example and applications to a lightly loaded bearing problem and to optimal production and marketing planning are presented. Finally, the conclusions of the study are presented.

## Goal programming

A multi-objective programming problem can be written as follows:

Find $X = (x_1, x_2, \ldots, x_n)^T$
(1)
so as to minimize $f_{10}(X) = \sum_{i=1}^{P_{10}} C_{10i} \prod_{k=1}^{n} x_k^{a_{k0i}}$ with target $C_{10}$,

minimize $f_{20}(X) = \sum_{i=1}^{P_{20}} C_{20i} \prod_{k=1}^{n} x_k^{a_{k0i}}$ with target $C_{20}$,

$\vdots$

minimize $f_{m0}(X) = \sum_{i=1}^{P_{m0}} C_{m0i} \prod_{k=1}^{n} x_k^{a_{k0i}}$ with target $C_{m0}$,

subject to $f_r(X) = \sum_{i=1}^{P_r} C_{ri} \prod_{k=1}^{n} x_k^{a_{ki}} \le C_r; \quad r = 1, 2, \ldots, q,$

$x_k > 0; \quad k = 1, 2, \ldots, n,$

where $C_{j0i}$ and $C_{ri}$ are positive real numbers $\forall\, j, r, i$, and $a_{k0i}$, $a_{ki}$ are real numbers $\forall\, k, i$;

$P_{j0}$ = number of terms present in the $j0$th objective function,

$P_r$ = number of terms present in the $r$th constraint,

$C_r$ = boundary value of the $r$th constraint.

The multi-objective programming model contains m minimizing objective functions, q inequality-type constraints, and n strictly positive decision variables.

Result 1. The goal programming model (1) may be reduced to the following form:

Minimize $\prod_{j=1}^{m} u_{j0}^{+} \prod_{r=1}^{q} v_{r}^{+}$

subject to

$f_{j0}(X)/u_{j0}^{+} \le C_{j0}, \quad j = 1, 2, \ldots, m,$

$f_{r}(X)/v_{r}^{+} \le C_{r}, \quad r = 1, 2, \ldots, q,$

$x_k > 0, \quad k = 1, 2, \ldots, n; \quad u_{j0}^{+}, v_{r}^{+} \ge 1,$

with the conditions

$f_{j0}(X) > 0, \quad C_{j0} > 0, \quad C_{r} > 0.$

Proof. In the multi-objective programming model (1), each objective function is minimized with a target value: minimize $f_{j0}(X)$ with target value $C_{j0}$, or equivalently (since $f_{j0}(X) > 0$ and $C_{j0} > 0$), minimize $\log(f_{j0}(X))$ with target value $\log(C_{j0})$. According to the method of goal formulation, the positive deviation should be minimized.

Similarly, in model (1), constraints are of ≤ type. Thus, positive deviations should also be minimized. Therefore, when

$f_{r}(X) \le C_{r},$

then

$\log(f_{r}(X)) \le \log(C_{r}).$

The goal formulation is as follows:

Minimize $\sum_{j=1}^{m} d_{j0}^{+} + \sum_{r=1}^{q} d_{r}^{+}$
(2)

subject to

$\log(f_{j0}(X)) + d_{j0}^{+} - d_{j0}^{-} = \log(C_{j0}); \quad j = 1, 2, \ldots, m,$

$\log(f_{r}(X)) + d_{r}^{+} - d_{r}^{-} = \log(C_{r}); \quad r = 1, 2, \ldots, q,$

$x_k > 0, \quad k = 1, 2, \ldots, n; \quad d_{j0}^{+}, d_{j0}^{-}, d_{r}^{+}, d_{r}^{-} \ge 0,$

$d_{j0}^{+} \times d_{j0}^{-} = 0; \quad d_{r}^{+} \times d_{r}^{-} = 0,$

where

$d_{j0}^{+}$ = positive deviation of the $j$th objective function,

$d_{j0}^{-}$ = negative deviation of the $j$th objective function,

$d_{r}^{+}$ = positive deviation of the $r$th constraint,

$d_{r}^{-}$ = negative deviation of the $r$th constraint.

Note that the deviational variables are nonnegative rather than strictly positive; otherwise the complementarity conditions $d^{+} \times d^{-} = 0$ could not hold. With the logarithmic change of deviational variables $d_{j0}^{+} = \log(u_{j0}^{+})$, $d_{j0}^{-} = \log(u_{j0}^{-})$, $d_{r}^{+} = \log(v_{r}^{+})$, $d_{r}^{-} = \log(v_{r}^{-})$, we can turn model (2) into the following problem:
Minimize $\log\left(\prod_{j=1}^{m} u_{j0}^{+} \prod_{r=1}^{q} v_{r}^{+}\right)$
(3)

subject to

$\log\left(f_{j0}(X) \cdot u_{j0}^{-}/u_{j0}^{+}\right) = \log(C_{j0}), \quad j = 1, 2, \ldots, m,$

$\log\left(f_{r}(X) \cdot v_{r}^{-}/v_{r}^{+}\right) = \log(C_{r}), \quad r = 1, 2, \ldots, q,$

$x_k > 0, \quad k = 1, 2, \ldots, n; \quad u_{j0}^{+}, u_{j0}^{-}, v_{r}^{+}, v_{r}^{-} \ge 1,$

which is obviously equivalent to the following goal programming form with logarithmic deviational variables:

Minimize $\prod_{j=1}^{m} u_{j0}^{+} \prod_{r=1}^{q} v_{r}^{+}$
(4)

subject to

$f_{j0}(X) \cdot u_{j0}^{-}/u_{j0}^{+} = C_{j0}, \quad j = 1, 2, \ldots, m,$

$f_{r}(X) \cdot v_{r}^{-}/v_{r}^{+} = C_{r}, \quad r = 1, 2, \ldots, q,$

$x_k > 0, \quad k = 1, 2, \ldots, n; \quad u_{j0}^{+}, u_{j0}^{-}, v_{r}^{+}, v_{r}^{-} \ge 1.$

Since only the positive deviations are penalized, the negative deviational variables may be dropped, and the goal programming formulation with the constraints in inequality form becomes the following:

Minimize $\prod_{j=1}^{m} u_{j0}^{+} \prod_{r=1}^{q} v_{r}^{+}$
(5)

subject to

$f_{j0}(X)/u_{j0}^{+} \le C_{j0}, \quad j = 1, 2, \ldots, m,$

$f_{r}(X)/v_{r}^{+} \le C_{r}, \quad r = 1, 2, \ldots, q,$

$x_k > 0, \quad k = 1, 2, \ldots, n; \quad u_{j0}^{+}, v_{r}^{+} \ge 1,$

hence the result. □
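Result 1 can be sanity-checked numerically. The sketch below is our own illustration, not part of the paper: the one-variable instance (minimize $f(x) = 1/x$ with target $C = 0.5$ on $S = (0, 1]$) and the grid are arbitrary choices. It evaluates the additive formulation (2) and the multiplicative formulation (5) over the same grid and confirms that both select the same point, with $\log(u^{+}) = d^{+}$ at the optimum.

```python
import numpy as np

# Toy instance (our own choice): minimize f(x) = 1/x with target C = 0.5 on S = (0, 1].
C = 0.5
x = np.linspace(0.05, 1.0, 2000)          # grid over the feasible set S

# Model (2): additive logarithmic deviation, d+ = max(0, log f(x) - log C).
d_plus = np.maximum(0.0, np.log(1.0 / x) - np.log(C))
x_add = x[np.argmin(d_plus)]

# Model (5): multiplicative deviation, u = max(1, f(x)/C); minimize u.
u = np.maximum(1.0, (1.0 / x) / C)
x_mul = x[np.argmin(u)]

# Both formulations pick the same x, and log(u*) equals d+* as Result 1 asserts.
print(x_add, x_mul, np.log(u.min()), d_plus.min())
```

Both searches return $x = 1$, where $d^{+} = \log 2$ and $u^{+} = 2$, so the logarithmic change of variables maps one optimum onto the other.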

## Results and discussion

### Weighted goal programming with logarithmic deviational variables

According to model (1), all of the objective functions are minimized. If the decision maker wants a particular objective function to be reduced further, or wants the constraints to be satisfied strictly, then weight factors (priorities) are introduced. In the goal programming formulation with logarithmic deviational variables, the weights (priorities) are attached to the deviational variables. Hence, the weighted goal programming formulation becomes the following:

Minimize $\prod_{j=1}^{m} \left(u_{j0}^{+}\right)^{W_{j0}} \prod_{r=1}^{q} \left(v_{r}^{+}\right)^{W_{r}}$
(6)

subject to

$f_{j0}(X)/u_{j0}^{+} \le C_{j0}, \quad j = 1, 2, \ldots, m,$

$f_{r}(X)/v_{r}^{+} \le C_{r}, \quad r = 1, 2, \ldots, q,$

$x_k > 0, \quad k = 1, 2, \ldots, n; \quad u_{j0}^{+}, v_{r}^{+} \ge 1.$

Here, the $W_{j0}$ values are the weights for the objective functions and the $W_{r}$ values are the weights for the constraints.

Solutions of goal programming (Romero 1991), including those of weighted goal programming and lexicographic goal programming (Miettinen 1999), are Pareto optimal. Here, we prove a result which shows that goal programming with logarithmic deviations also gives Pareto optimal solutions.

Result 2. The solution of the following weighted goal programming problem with logarithmic deviational variables:

Minimize $\prod_{i=1}^{k} \left(u_{i}^{+}\right)^{w_{i}}$

subject to

$\left(\sum_{r=1}^{p} C_{mr} \prod_{l=1}^{n} x_{l}^{a_{lr}}\right)_{i} \left(u_{i}^{+}\right)^{-1} \le \overline{C}_{i}, \quad i = 1, 2, \ldots, k,$

$X \in S, \quad u_{i}^{+} \ge 1, \quad i = 1, 2, \ldots, k,$

which comes from the following goal programming model:

Minimize $f_{i}(X) = \left(\sum_{r=1}^{P} C_{mr} \prod_{l=1}^{n} x_{l}^{a_{lr}}\right)_{i}$ with target $\overline{C}_{i}$, $\quad i = 1, 2, \ldots, k; \quad X \in S,$

is Pareto optimal if $u_{i}^{+}$ for each function $f_{i}(X)$ to be minimized has a value greater than 1 at the optimum.

Proof. Let $x^{*} \in S$, with positive deviation vector $\left(u_{i}^{+}\right)^{*}\ (>1)$, be the solution of the following weighted goal programming problem:

Minimize $\prod_{i=1}^{k} \left(u_{i}^{+}\right)^{w_{i}}$
(7)

subject to

$\left(\sum_{r=1}^{p} C_{mr} \prod_{l=1}^{n} x_{l}^{a_{lr}}\right)_{i} \left(u_{i}^{+}\right)^{-1} \le \overline{C}_{i}, \quad i = 1, 2, \ldots, k,$

$X \in S, \quad u_{i}^{+} \ge 1, \quad i = 1, 2, \ldots, k.$

If possible, let $x^{*}$ not be Pareto optimal. Then there exists a vector $x^{0}$ with positive deviational variables $\left(u_{i}^{+}\right)^{0}\ (\ge 1)$ such that

$\left(\sum_{r=1}^{p} C_{mr} \prod_{l=1}^{n} \left(x_{l}^{0}\right)^{a_{lr}}\right)_{i} \le \left(\sum_{r=1}^{p} C_{mr} \prod_{l=1}^{n} \left(x_{l}^{*}\right)^{a_{lr}}\right)_{i}, \quad \forall\, i = 1, 2, \ldots, k,$
(7.1)

and

$\left(\sum_{r=1}^{p} C_{mr} \prod_{l=1}^{n} \left(x_{l}^{0}\right)^{a_{lr}}\right)_{j} < \left(\sum_{r=1}^{p} C_{mr} \prod_{l=1}^{n} \left(x_{l}^{*}\right)^{a_{lr}}\right)_{j} \quad \text{for at least one } j,$
(7.2)

that is,

$\frac{\left(\sum_{r=1}^{p} C_{mr} \prod_{l=1}^{n} \left(x_{l}^{*}\right)^{a_{lr}}\right)_{j}}{\left(\sum_{r=1}^{p} C_{mr} \prod_{l=1}^{n} \left(x_{l}^{0}\right)^{a_{lr}}\right)_{j}} > 1.$

Let

$\frac{\left(\sum_{r=1}^{p} C_{mr} \prod_{l=1}^{n} \left(x_{l}^{*}\right)^{a_{lr}}\right)_{j}}{\left(\sum_{r=1}^{p} C_{mr} \prod_{l=1}^{n} \left(x_{l}^{0}\right)^{a_{lr}}\right)_{j}} = \beta > 1.$
(7.3)

We set

$\left(u_{i}^{+}\right)^{0} = \left(u_{i}^{+}\right)^{*}\ (>1) \quad \text{for } i = 1, 2, \ldots, k,\ i \ne j,$
(7.4)

and

$\left(u_{j}^{+}\right)^{0} = \max\left(1,\ \left(u_{j}^{+}\right)^{*}/\beta\right) \ge 1.$
(7.5)

Here, $\left(u_{i}^{+}\right)^{0}$ is the positive deviational variable corresponding to $x^{0}$, $i = 1, 2, \ldots, k$. From (7.1),

$\left(\sum_{r=1}^{p} C_{mr} \prod_{l=1}^{n} \left(x_{l}^{0}\right)^{a_{lr}}\right)_{i} \left(\left(u_{i}^{+}\right)^{0}\right)^{-1} \le \left(\sum_{r=1}^{p} C_{mr} \prod_{l=1}^{n} \left(x_{l}^{*}\right)^{a_{lr}}\right)_{i} \left(\left(u_{i}^{+}\right)^{0}\right)^{-1}$

$= \left(\sum_{r=1}^{p} C_{mr} \prod_{l=1}^{n} \left(x_{l}^{*}\right)^{a_{lr}}\right)_{i} \left(\left(u_{i}^{+}\right)^{*}\right)^{-1} \quad \text{using (7.4)}$

$\le \overline{C}_{i}, \quad \text{since } x^{*} \text{ is the solution of (7); i.e.,}$

$\left(\sum_{r=1}^{p} C_{mr} \prod_{l=1}^{n} \left(x_{l}^{0}\right)^{a_{lr}}\right)_{i} \left(\left(u_{i}^{+}\right)^{0}\right)^{-1} \le \overline{C}_{i} \quad \text{for } i = 1, 2, \ldots, k,\ i \ne j.$
(7.6)

From (7.5),

$\left(u_{j}^{+}\right)^{0} = \max\left(1,\ \frac{\left(u_{j}^{+}\right)^{*}}{\beta}\right).$

Thus,

$\left(u_{j}^{+}\right)^{0} = \frac{\left(u_{j}^{+}\right)^{*}}{\beta} \quad \text{if } \frac{\left(u_{j}^{+}\right)^{*}}{\beta} > 1,$
(7.7)
$\left(u_{j}^{+}\right)^{0} = 1 \quad \text{if } \frac{\left(u_{j}^{+}\right)^{*}}{\beta} \le 1.$
(7.8)

Case 1

If $\frac{\left(u_{j}^{+}\right)^{*}}{\beta} > 1$, then, using (7.7),

$\left(\sum_{r=1}^{p} C_{mr} \prod_{l=1}^{n} \left(x_{l}^{0}\right)^{a_{lr}}\right)_{j} \left(\left(u_{j}^{+}\right)^{0}\right)^{-1} = \left(\sum_{r=1}^{p} C_{mr} \prod_{l=1}^{n} \left(x_{l}^{0}\right)^{a_{lr}}\right)_{j} \left(\left(u_{j}^{+}\right)^{*}\right)^{-1} \beta$

$= \left(\sum_{r=1}^{p} C_{mr} \prod_{l=1}^{n} \left(x_{l}^{*}\right)^{a_{lr}}\right)_{j} \left(\left(u_{j}^{+}\right)^{*}\right)^{-1} \quad \text{using (7.3)}$

$\le \overline{C}_{j}, \quad \text{since } x^{*} \text{ is the solution of (7).}$

Therefore,

$\left(\sum_{r=1}^{p} C_{mr} \prod_{l=1}^{n} \left(x_{l}^{0}\right)^{a_{lr}}\right)_{j} \left(\left(u_{j}^{+}\right)^{0}\right)^{-1} \le \overline{C}_{j}.$
(7.9)

Thus, $x^{0}$ satisfies the constraints of (7). From (7.7), $\left(u_{j}^{+}\right)^{0} = \frac{\left(u_{j}^{+}\right)^{*}}{\beta} < \left(u_{j}^{+}\right)^{*}.$

This inequality holds since $\beta > 1$ and $\left(u_{j}^{+}\right)^{*} > 1$. Hence, using (7.4),

$\left(u_{i}^{+}\right)^{0} \le \left(u_{i}^{+}\right)^{*} \quad \forall\, i = 1, 2, \ldots, k.$

Case 2

If $\frac{\left(u_{j}^{+}\right)^{*}}{\beta} \le 1$, then, using (7.8), $\left(u_{j}^{+}\right)^{0} = 1$, so

$\left(\sum_{r=1}^{p} C_{mr} \prod_{l=1}^{n} \left(x_{l}^{0}\right)^{a_{lr}}\right)_{j} \left(\left(u_{j}^{+}\right)^{0}\right)^{-1} = \left(\sum_{r=1}^{p} C_{mr} \prod_{l=1}^{n} \left(x_{l}^{0}\right)^{a_{lr}}\right)_{j} = \frac{\left(\sum_{r=1}^{p} C_{mr} \prod_{l=1}^{n} \left(x_{l}^{*}\right)^{a_{lr}}\right)_{j}}{\beta} \quad \text{using (7.3)}$

$\le \frac{\left(\sum_{r=1}^{p} C_{mr} \prod_{l=1}^{n} \left(x_{l}^{*}\right)^{a_{lr}}\right)_{j}}{\left(u_{j}^{+}\right)^{*}} \quad \text{since } \left(u_{j}^{+}\right)^{*} \le \beta \text{ by (7.8)}$

$\le \overline{C}_{j}, \quad \text{since } x^{*} \text{ is the solution of (7).}$
(7.10)

Thus, $x^{0}$ satisfies the constraints of (7), and from (7.8),

$\left(u_{j}^{+}\right)^{0} = 1 < \left(u_{j}^{+}\right)^{*}.$
(7.11)

Therefore, from (7.4) and (7.11), $\left(u_{i}^{+}\right)^{0} \le \left(u_{i}^{+}\right)^{*}\ \forall\, i = 1, 2, \ldots, k.$ Thus, for all positive weights $W_{i}$ $(i = 1, 2, \ldots, k)$,

$\left(\left(u_{i}^{+}\right)^{0}\right)^{W_{i}} \le \left(\left(u_{i}^{+}\right)^{*}\right)^{W_{i}}, \quad \text{or} \quad \prod_{i=1}^{k} \left(\left(u_{i}^{+}\right)^{0}\right)^{W_{i}} \le \prod_{i=1}^{k} \left(\left(u_{i}^{+}\right)^{*}\right)^{W_{i}}.$
(7.12)

Thus, from (7.6), (7.9), (7.10), and (7.12), $x^{0}$ is feasible for (7) with an objective value no larger than that of $x^{*}$, and strictly smaller in the $j$th factor, which contradicts the fact that $x^{*}$ is the solution of (7). Hence, $x^{*}$ is Pareto optimal. □

### Goal geometric programming model with logarithmic deviational variables and its solution procedure

Linear goal programming is a very commonly used tool for MCDM problems, but nonlinear goal programming is rarely treated in this context. Many problems in engineering and science require optimizing nonlinear equations. To solve this type of nonlinear goal programming problem, the geometric programming method can be used. Hence, we can turn model (6) into a goal geometric programming form as follows:

Minimize $\prod_{j=1}^{m} \left(u_{j0}^{+}\right)^{W_{j0}} \prod_{r=1}^{q} \left(v_{r}^{+}\right)^{W_{r}}$
(8)

subject to

$f_{j0}(X) \left(u_{j0}^{+}\right)^{-1}/C_{j0} \le 1, \quad j = 1, 2, \ldots, m,$

$f_{r}(X) \left(v_{r}^{+}\right)^{-1}/C_{r} \le 1, \quad r = 1, 2, \ldots, q,$

$x_k > 0, \quad k = 1, 2, \ldots, n; \quad u_{j0}^{+}, v_{r}^{+} \ge 1.$

The corresponding dual geometric programming of model (8) can be written as follows:

Maximize $d(\delta) = \left(\frac{1}{\delta_{10}}\right)^{\delta_{10}} \prod_{j=1}^{m} \prod_{i=1}^{P_{j0}} \left(\frac{C_{j0i}}{C_{j0}\,\delta_{ji}}\right)^{\delta_{ji}} \prod_{r=1}^{q} \prod_{i=1}^{P_{r}} \left(\frac{C_{ri}}{C_{r}\,\delta_{ri}}\right)^{\delta_{ri}} \prod_{j=1}^{m} \lambda_{j}(\delta)^{\lambda_{j}(\delta)} \prod_{r=1}^{q} \lambda_{r}(\delta)^{\lambda_{r}(\delta)}$

such that

$\delta_{10} = 1; \quad W_{j0}\,\delta_{10} - \sum_{i=1}^{P_{j0}} \delta_{ji} = 0, \quad j = 1, 2, \ldots, m; \quad W_{r}\,\delta_{10} - \sum_{i=1}^{P_{r}} \delta_{ri} = 0, \quad r = 1, 2, \ldots, q;$

$\sum_{j=1}^{m} \sum_{i=1}^{P_{j0}} a_{k0i}\,\delta_{ji} + \sum_{r=1}^{q} \sum_{i=1}^{P_{r}} a_{ki}\,\delta_{ri} = 0, \quad k = 1, 2, \ldots, n;$

$\lambda_{j}(\delta) = \sum_{i=1}^{P_{j0}} \delta_{ji}, \quad j = 1, 2, \ldots, m; \quad \lambda_{r}(\delta) = \sum_{i=1}^{P_{r}} \delta_{ri}, \quad r = 1, 2, \ldots, q.$

Here the orthogonality conditions for the $x_k$ sum the signed exponents of all terms, as illustrated by (10.4) and (10.5) below.

## Numerical example

A multi-objective goal programming problem is:

Minimize $f_{1}(x_{1}, x_{2}) = x_{1}^{-1} x_{2}^{-2}$ with target value 4,
(9)
Minimize $f_{2}(x_{1}, x_{2}) = 2 x_{1}^{-2} x_{2}^{-3}$ with target value 50,

subject to $x_{1} + x_{2} \le 1, \quad x_{1}, x_{2} > 0.$

In the goal geometric programming model with logarithmic deviational variables, i.e., model (8), problem (9) can be written as follows:

Minimize $u^{W_{1}} v^{W_{2}}$
(10)

subject to

$x_{1}^{-1} x_{2}^{-2} u^{-1} \le 4,$

$2 x_{1}^{-2} x_{2}^{-3} v^{-1} \le 50,$

$x_{1} + x_{2} \le 1, \quad x_{1}, x_{2} > 0, \quad u, v \ge 1.$

### Illustration

The degree of difficulty is 5 - (4 + 1) = 0 (five posynomial terms, four variables). The dual of (10) is given by the following:

Maximize $d(\delta) = \left(\frac{1}{\delta_{10}}\right)^{\delta_{10}} \left(\frac{1}{4\,\delta_{11}}\right)^{\delta_{11}} \left(\frac{2}{50\,\delta_{21}}\right)^{\delta_{21}} \left(\frac{1}{\delta_{31}}\right)^{\delta_{31}} \left(\frac{1}{\delta_{32}}\right)^{\delta_{32}} \lambda_{1}(\delta)^{\lambda_{1}(\delta)}\, \lambda_{2}(\delta)^{\lambda_{2}(\delta)}\, \lambda_{3}(\delta)^{\lambda_{3}(\delta)}$

such that

$\delta_{10} = 1,$
(10.1)
$W_{1}\,\delta_{10} - \delta_{11} = 0,$
(10.2)
$W_{2}\,\delta_{10} - \delta_{21} = 0,$
(10.3)
$-{\delta }_{11}-2{\delta }_{21}+{\delta }_{31}=0,$
(10.4)
$-2{\delta }_{11}-3{\delta }_{21}+{\delta }_{32}=0,$
(10.5)
${\lambda }_{1}\left(\delta \right)={\delta }_{11},\phantom{\rule{1em}{0ex}}{\lambda }_{2}\left(\delta \right)={\delta }_{21},\phantom{\rule{1em}{0ex}}{\lambda }_{3}\left(\delta \right)={\delta }_{31}+{\delta }_{32}.$

Solving (10.1) to (10.5), we get the following:

$\delta_{10} = 1, \quad \delta_{11} = W_{1}, \quad \delta_{21} = W_{2}, \quad \delta_{31} = W_{1} + 2W_{2}, \quad \delta_{32} = 2W_{1} + 3W_{2},$

$\lambda_{1}(\delta) = W_{1}, \quad \lambda_{2}(\delta) = W_{2}, \quad \lambda_{3}(\delta) = 3W_{1} + 5W_{2}.$
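The dual weights can also be recovered mechanically: fixing $\delta_{10} = 1$ by (10.1), conditions (10.2) to (10.5) form a square linear system in $(\delta_{11}, \delta_{21}, \delta_{31}, \delta_{32})$. The following sketch (our own check; the choice $W_1 = W_2 = 1$ is illustrative) confirms the closed-form expressions:

```python
import numpy as np

W1, W2 = 1.0, 1.0                 # equal weights, as an illustration
delta10 = 1.0                     # normality condition (10.1)

# Rows encode (10.2): delta11 = W1*delta10, (10.3): delta21 = W2*delta10,
# (10.4): -delta11 - 2*delta21 + delta31 = 0,
# (10.5): -2*delta11 - 3*delta21 + delta32 = 0.
A = np.array([[ 1.0,  0.0, 0.0, 0.0],
              [ 0.0,  1.0, 0.0, 0.0],
              [-1.0, -2.0, 1.0, 0.0],
              [-2.0, -3.0, 0.0, 1.0]])
b = np.array([W1 * delta10, W2 * delta10, 0.0, 0.0])

d11, d21, d31, d32 = np.linalg.solve(A, b)
print(d11, d21, d31, d32)   # matches W1, W2, W1 + 2*W2, 2*W1 + 3*W2
```

Because the degree of difficulty is zero, the system is square and has the unique solution given above.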

From the primal-dual relation,

$\frac{x_{1}^{-1} x_{2}^{-2} u^{-1}}{4} = \frac{\delta_{11}}{\lambda_{1}(\delta)} = 1, \quad \text{or} \quad u = \frac{1}{4 x_{1} x_{2}^{2}},$

$\frac{2 x_{1}^{-2} x_{2}^{-3} v^{-1}}{50} = \frac{\delta_{21}}{\lambda_{2}(\delta)} = 1, \quad \text{or} \quad v = \frac{2}{50\, x_{1}^{2} x_{2}^{3}},$

$x_{1} = \frac{W_{1} + 2W_{2}}{3W_{1} + 5W_{2}}, \quad x_{2} = \frac{2W_{1} + 3W_{2}}{3W_{1} + 5W_{2}}.$
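The closed-form expressions for $x_1$ and $x_2$ can be sanity-checked by brute force. For $W_1 = W_2 = 1$ the formulas give $x_1 = 3/8$ and $x_2 = 5/8$. The sketch below is our own verification, not part of the derivation; it assumes the constraint $x_1 + x_2 \le 1$ is active at the optimum (both goals improve as $x_1, x_2$ grow) and sets each deviational variable to its smallest feasible value.

```python
import numpy as np

# Equal weights W1 = W2 = 1. Search the boundary x2 = 1 - x1 (assumed active).
x1 = np.linspace(0.01, 0.99, 9801)
x2 = 1.0 - x1

f1 = x1**-1 * x2**-2                 # first goal, target 4
f2 = 2.0 * x1**-2 * x2**-3           # second goal, target 50

# Smallest feasible deviational variables at each candidate point.
u = np.maximum(1.0, f1 / 4.0)
v = np.maximum(1.0, f2 / 50.0)

i = np.argmin(u * v)                 # minimize the objective of (10)
print(x1[i], x2[i], u[i], v[i])      # ~ 0.375, 0.625, 1.71, 1.17
```

The grid minimizer agrees with the closed form, and both deviational variables exceed 1 at the optimum, consistent with the Pareto optimality condition of Result 2.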

Solving the primal-dual relations for different values of the weights, we obtain the optimal values of the decision variables, which are given in Table 1.

From the table, we see that each deviational variable ($u_{i}$, $v_{i}$) has a value greater than 1 when minimized. Thus, according to our theorem, the solutions are Pareto optimal.

We have also solved the above example by goal geometric programming with the weighted sum method. Table 2 compares the results of the two methods under equal weights: goal geometric programming with the weighted sum method and goal geometric programming with logarithmic deviational variables.

From the comparison, it is clear that both methods give almost the same optimum values for the first and second objectives. In both approaches, geometric programming is used to solve the nonlinear goal programming problem. The advantage of the proposed method lies in its solution procedure: the degree of difficulty of its geometric program is lower than that of the previous process (goal geometric programming with the weighted sum method), so the solution procedure becomes easier.

### Application on lightly loaded bearing problem

A lightly loaded bearing is to be designed to minimize a linear combination of the frictional moment, the angle of twist of the shaft, and the temperature rise of the oil while carrying a load of 1,000 lb, with the angular velocity of the shaft greater than 100 rad s$^{-1}$. Assume that 1 in.-lb of frictional moment in the bearing is equivalent to 0.0025 rad of the angle of twist. The goals are as follows:

Priority 1: The linear combination of the frictional moment, angle of twist of the shaft, and temperature rise of the oil should be minimized and near 10.

Priority 2: The angular velocity of the shaft per 100 rad s$^{-1}$ should be minimized and near 0.2.

The goal programming problem is formulated to find the dimensions of the bearing so that it can carry the maximum load.

Solution Let R (in.) be the radius of the journal, L (in.) the half length of the bearing, and T the temperature rise of the oil. The frictional moment of the bearing is $M=\frac{8\pi \mu \omega {R}^{2}L}{c\sqrt{1-{n}^{2}}}$, where ω is the angular velocity of the shaft, μ is the viscosity of the oil (lubricant), n is the eccentricity ratio, and c is the radial clearance.

The angle of twist of the shaft is $\phi =\frac{{S}_{e}l}{GR}$, where $S_e$ is the shear stress, l is the length between the driving point and the rotating mass, and G is the shear modulus. The temperature rise of the oil in the bearing is given by $T=\frac{0.045\mu \omega {R}^{2}}{{c}^{2}n\sqrt{1-{n}^{2}}}$. For the given data, $\frac{c}{R}=0.0015$, $n=0.9$, $\mu =10^{-6}$ lb s in.$^{-2}$, $l=10$ in., $S_e=30{,}000$ psi, and $G=12\times 10^{6}$ psi. Hence, the linear combination of the frictional moment, angle of twist of the shaft, and temperature rise of the oil equals

$\phantom{\rule{-14.0pt}{0ex}}0.038\omega {R}^{2}L\phantom{\rule{1em}{0ex}}+\phantom{\rule{1em}{0ex}}0.025{R}^{-1}\phantom{\rule{1em}{0ex}}+\phantom{\rule{1em}{0ex}}0.592R{L}^{-3}\phantom{\rule{0.3em}{0ex}}\text{with target value 10}$
(11.1)

and angular velocity

$\omega \ge 100\phantom{\rule{1em}{0ex}}{\mathrm{\text{rad}}\phantom{\rule{1em}{0ex}}\mathrm{s}}^{-1}.$
(11.2)

From the given data in the chart of ‘Dimensionless performance parameters for full journal bearing’, $\omega {R}^{-1}{L}^{3}=11.6$, i.e., $\omega =11.6R/{L}^{3}$.

As per the assumption that 1 in.-lb of frictional moment in the bearing is equal to 0.0025 rad of the angle of twist, Equation 11.1 becomes ${Z}_{1}=0.44{R}^{3}{L}^{-2}+10{R}^{-1}+0.592R{L}^{-3}$ with the target value of 10.

Equation 11.2 becomes ${Z}_{2}=8.62{R}^{-1}{L}^{3}$ with the target value of 0.2. Hence, the model of the lightly loaded bearing problem in G2P2 with logarithmic deviational variables is as follows:
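The coefficients of $Z_1$ and $Z_2$ follow from the problem data by straightforward arithmetic; a brief numerical check (all values taken from the data above):

```python
# Check the coefficients of Z1 and Z2 against the problem data.

# Friction term: 0.038*omega*R^2*L with omega = 11.6*R/L^3
friction_coeff = 0.038 * 11.6                 # ~0.44, coefficient of R^3 L^-2

# Twist term: S_e*l/(G*R) rad, converted via 1 in.-lb = 0.0025 rad
twist_coeff = (30000 * 10 / 12e6) / 0.0025    # 10, coefficient of R^-1

# Z2: omega >= 100 becomes 100/omega = (100/11.6) * R^-1 * L^3
z2_coeff = 100 / 11.6                         # ~8.62

print(round(friction_coeff, 2), twist_coeff, round(z2_coeff, 2))
```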

$\text{Minimize}\phantom{\rule{1em}{0ex}}{u}^{{W}_{1}}{v}^{{W}_{2}}$
(11.3)

subject to

$0.44{R}^{3}{L}^{-2}{u}^{-1}\phantom{\rule{1em}{0ex}}+\phantom{\rule{1em}{0ex}}10{R}^{-1}{u}^{-1}\phantom{\rule{1em}{0ex}}+\phantom{\rule{1em}{0ex}}0.592R{L}^{-3}{u}^{-1}\phantom{\rule{1em}{0ex}}\le \phantom{\rule{1em}{0ex}}10,$
$8.62{R}^{-1}{L}^{3}{v}^{-1}\phantom{\rule{1em}{0ex}}\le \phantom{\rule{1em}{0ex}}0.2,$
$u,v>1,R,L>0.$

Solving by the geometric programming method, where the degree of difficulty is 5−(4+1)=0, we get the optimal values of the radius of the journal (R) and the half length of the bearing (L), which are given in Table 3. From the table, we see that each deviational variable (u, v) has a value greater than 1. Thus, the solution is Pareto optimal.
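Because the degree of difficulty is zero, the dual weights of the geometric program are determined by a linear system: normality (the single objective term gets dual weight 1) plus one orthogonality equation per primal variable. A sketch of that system for the bearing model, with the goal constraints normalized by their targets; the weights $W_1=0.7$, $W_2=0.3$ are illustrative assumptions, and the weights behind Table 3 may differ:

```python
import numpy as np

# Dual weights: d11, d12, d13 for the three terms of the normalized goal
# constraint Z1/10 <= 1, and d21 for the single term of Z2/0.2 <= 1.
W1, W2 = 0.7, 0.3   # illustrative priority weights (assumption)

# Orthogonality conditions, one row per primal variable u, v, R, L.
A = np.array([
    [1.0,  1.0,  1.0,  0.0],   # u: W1*1 - (d11 + d12 + d13) = 0
    [0.0,  0.0,  0.0,  1.0],   # v: W2*1 - d21 = 0
    [3.0, -1.0,  1.0, -1.0],   # R exponents: 3, -1, 1, -1
    [-2.0, 0.0, -3.0,  3.0],   # L exponents: -2, 0, -3, 3
])
b = np.array([W1, W2, 0.0, 0.0])

d11, d12, d13, d21 = np.linalg.solve(A, b)
print(d11, d12, d13, d21)   # all positive -> a valid dual solution
```

A positive solution of this system pins down the dual, from which the primal variables R and L are recovered via the primal-dual relations, as in the earlier example.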

### Application on optimal production and marketing planning

Consider a manufacturer who produces a single product whose demand is affected by the selling price. Let P be the selling price per unit, α the price elasticity of demand, M the marketing expenditure per unit, and γ the marketing expenditure elasticity of demand (Sadjadi et al. 2005). Assume that the demand is $D=K{P}^{-\alpha }{M}^{\gamma }$, where K is a predetermined constant, and that the unit production cost C is inversely related to the production lot size Q (units), i.e., $C=r{Q}^{-\beta }$, where r is a predefined constant for the unit production cost and β is the lot size elasticity of the unit production cost. Again, let μ and a be the production rate and the setup cost of production, respectively. We assume that the production rate μ varies proportionally with the demand D; hence, μ=uD where u>1. There are some restrictions on the parameters α, γ, and β. The condition α>1 indicates that D increases at a diminishing rate as P decreases; the conditions 0<β<1 and 0<γ<1 play similar roles.

We want to minimize (Marketing cost + Production cost + Setup cost + Holding cost), subject to the constraint that the total revenue should be at least a given target. The goals are as follows:

Priority 1: Total revenue should be greater than $0.1386\times 1{0}^{5}$.

Priority 2: (Marketing cost + Production cost + Setup cost + Holding cost) should be minimized and near 0.692791.

Thus, the model is as follows:

$\phantom{\rule{-15.0pt}{0ex}}\begin{array}{l}\text{Minimize}\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\mathit{\text{MD}}+\mathit{\text{CD}}+\frac{\mathit{\text{aD}}}{Q}+\mathit{\text{iC}}\left(1-\frac{D}{\mu }\right)\frac{Q}{2}\phantom{\rule{0.3em}{0ex}}\text{with target}\\ \phantom{\rule{1em}{0ex}}\text{value}\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}0.692791\end{array}$

subject to

$\mathit{\text{PD}}\ge 0.1386×1{0}^{5}$
$P,M,Q>0.$

Let $\stackrel{̂}{u}=1-\frac{1}{u}$. Then, from the above assumptions, the model becomes the following:

$\phantom{\rule{-14.0pt}{0ex}}\text{Minimize}\phantom{\rule{1em}{0ex}}K{P}^{-\alpha }{M}^{\gamma +1}\phantom{\rule{1em}{0ex}}+\phantom{\rule{1em}{0ex}}\mathit{\text{rK}}{P}^{-\alpha }{M}^{\gamma }{Q}^{-\beta }\phantom{\rule{1em}{0ex}}+\phantom{\rule{1em}{0ex}}\mathit{\text{aK}}{P}^{-\alpha }{M}^{\gamma }{Q}^{-1}\phantom{\rule{1em}{0ex}}+\phantom{\rule{1em}{0ex}}\mathit{\text{ir}}\stackrel{̂}{u}\frac{{Q}^{1-\beta }}{2}\phantom{\rule{0.3em}{0ex}}\text{with target value 0.692791}$
(12.1)

subject to

$K{P}^{1-\alpha }{M}^{\gamma }\ge \phantom{\rule{1em}{0ex}}0.1386×1{0}^{5}$
$P,\phantom{\rule{.5em}{0ex}}M,\phantom{\rule{.5em}{0ex}}Q\phantom{\rule{.5em}{0ex}}>\phantom{\rule{.5em}{0ex}}0.$
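The substitution $\stackrel{̂}{u}=1-\frac{1}{u}$ in the holding-cost term can be verified numerically. A small sketch using the data values given below; Q=2 is just an arbitrary probe point:

```python
# Verify that i*C*(1 - D/mu)*Q/2 reduces to i*r*u_hat*Q^(1-beta)/2
# when C = r*Q^(-beta) and mu = u*D (so that D/mu = 1/u).
i, r, beta = 0.1, 5.0, 0.01
u_hat = 0.7
u = 1.0 / (1.0 - u_hat)       # u such that 1 - 1/u = u_hat
Q = 2.0                       # arbitrary probe value (assumption)

C = r * Q**(-beta)
lhs = i * C * (1.0 - 1.0 / u) * Q / 2.0        # original holding cost
rhs = i * r * u_hat * Q**(1.0 - beta) / 2.0    # transformed term in (12.1)
print(lhs, rhs)               # the two expressions agree
```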

Consider the following data: $\alpha =2.5,\phantom{\rule{.5em}{0ex}}\beta =0.01,\phantom{\rule{.5em}{0ex}}\gamma =0.03,\phantom{\rule{.5em}{0ex}}r=5,\phantom{\rule{.5em}{0ex}}K=1{0}^{6},\phantom{\rule{.5em}{0ex}}a=50,\phantom{\rule{.5em}{0ex}}i=0.1,\phantom{\rule{.5em}{0ex}}\stackrel{̂}{u}=0.7,$ and converting the model (12.1) according to the goal geometric programming model, we have the following:

$\begin{array}{ll}\text{Minimize (Z)}\phantom{\rule{0.3em}{0ex}}& 1{0}^{6}{P}^{-2.5}{M}^{1.03}+5×1{0}^{6}{P}^{-2.5}{M}^{0.03}{Q}^{-0.01}\\ +50×1{0}^{6}{P}^{-2.5}{M}^{0.03}{Q}^{-1}+\frac{0.1×0.7×5}{2}\\ ×{Q}^{0.99}\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}\mathrm{with}\phantom{\rule{1em}{0ex}}\mathit{\text{target}}\phantom{\rule{1em}{0ex}}\mathit{\text{value}}\phantom{\rule{1em}{0ex}}\phantom{\rule{1em}{0ex}}0.692791\end{array}$
(12.2)

subject to

$0.1386×1{0}^{5}×1{0}^{-6}{P}^{1.5}{M}^{-0.03}\le \phantom{\rule{1em}{0ex}}1$
$P,\phantom{\rule{1em}{0ex}}M,\phantom{\rule{1em}{0ex}}Q\phantom{\rule{1em}{0ex}}>\phantom{\rule{1em}{0ex}}0.$
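The numeric coefficients in (12.2) follow directly from the data; a brief arithmetic check (variable names mirror the symbols of the model):

```python
# Plug the data into the general model (12.1) to recover the
# coefficients and exponents appearing in (12.2).
alpha, beta, gamma = 2.5, 0.01, 0.03
r, K, a, i, u_hat = 5.0, 1e6, 50.0, 0.1, 0.7

coeff_marketing = K                   # 10^6, term K * P^-2.5 * M^1.03
coeff_production = r * K              # 5 x 10^6
coeff_setup = a * K                   # 50 x 10^6
coeff_holding = i * u_hat * r / 2.0   # 0.175
coeff_revenue = 0.1386e5 / K          # 0.01386, revenue constraint

print(coeff_production, coeff_setup, coeff_holding, coeff_revenue)
# Exponents: M^(gamma+1) = M^1.03, Q^(-beta) = Q^-0.01,
# Q^(1-beta) = Q^0.99, P^(alpha-1) = P^1.5.
```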

Transforming the model (12.2) into G2P2 with logarithmic deviational variables, we get the following:

$\text{Minimize}\phantom{\rule{1em}{0ex}}{u}^{{W}_{1}}{v}^{{W}_{2}}$
(12.3)

subject to

$\begin{array}{ll}\frac{1{0}^{6}{P}^{-2.5}{M}^{1.03}{u}^{-1}}{0.692791}& +\frac{5×1{0}^{6}{P}^{-2.5}{M}^{0.03}{Q}^{-0.01}{u}^{-1}}{0.692791}\\ +\frac{50×1{0}^{6}{P}^{-2.5}{M}^{0.03}{Q}^{-1}{u}^{-1}}{0.692791}\\ +\frac{\frac{0.1×0.7×5}{2}{Q}^{0.99}{u}^{-1}}{0.692791}\le 1\end{array}$
$0.1386×1{0}^{5}×1{0}^{-6}{P}^{1.5}{M}^{-0.03}{v}^{-1}\le 1$
$P,\phantom{\rule{1em}{0ex}}M,\phantom{\rule{1em}{0ex}}Q\phantom{\rule{1em}{0ex}}>\phantom{\rule{1em}{0ex}}0,u,v\phantom{\rule{1em}{0ex}}>\phantom{\rule{1em}{0ex}}1.$

Solving by the geometric programming method, where the degree of difficulty is 6−(5+1)=0, we get the optimal values of the decision variables, i.e., the price per unit (P), the production lot size (Q), and the marketing expenditure per unit (M), which are given in Table 4.

Here, we also observe from the table that each deviational variable (u, v) has a value greater than 1. Thus, the solution is Pareto optimal.

### Conclusions

The aim of this paper was to introduce a new approach to solving nonlinear goal programming problems. Compared with the other approach already discussed in this paper (Kuhn–Tucker conditions), geometric programming is the better tool for solving such nonlinear programming problems. We have used logarithmic deviational variables in the goal programming model instead of the commonly used additive deviational variables. The applications to the lightly loaded bearing problem and to optimal production and marketing planning show the efficiency of this method. The two applications have different aims: in the first, the decision maker gives more priority to the first objective function, whereas in the second, priority is given to the second objective. Further, this method could be even more applicable in an imprecise environment than in a precise one.

### Authors’ information

PG is an assistant professor at the Department of Mathematics in Adamas Institute of Technology. She received her master of science degree from Bengal Engineering and Science University in 2009. Her research interests are optimization and fuzzy mathematics. TKR is a professor at the Department of Mathematics in Bengal Engineering and Science University. His research interests are in the areas of fuzzy mathematics, fuzzy optimization, and inventory control. He received his bachelor's degree in mathematics from Burdwan University in 1977. He completed his master's degree in mathematics from Jadavpur University in 1986 and his PhD from Vidyasagar University in 1999.