Introduction

Multiobjective optimization problems are ubiquitous in real life. The optimization (maximization/minimization) of more than one commensurable and/or conflicting objective under a set of well-defined constraints is termed a multiobjective optimization problem (MOOP). Many real-world applications, such as transportation, supplier selection, inventory control, and supply chain planning, take the form of MOOPs. There is no guarantee of obtaining a single solution that satisfies all the objectives efficiently; however, a compromise solution can be obtained that satisfies each objective marginally. A vast body of literature covers optimization techniques for solving MOOPs. Zimmermann [39] first proposed the fuzzy programming approach for MOOPs based on fuzzy set (FS) theory. After that, Chang [17] proposed a goal programming method to solve fractional MOOPs. Ebrahimnejad [21] presented a computational algorithm to solve multiobjective linear programming problems. Tarabia et al. [34] suggested a modified approach for MOOPs. Zheng et al. [38] discussed an efficient concept for solving MOOPs. Later on, the intuitionistic fuzzy set (IFS) was presented by Atanassov [13], and Angelov [12] proposed an intuitionistic fuzzy programming approach for solving MOOPs. Ahmadini and Ahmad [10] proposed a novel preference scheme for the multiobjective goal programming problem under an intuitionistic fuzzy environment. Singh et al. [30] also solved intuitionistic fuzzy MOOPs using various membership functions. Many researchers, such as Bharati and Singh [15], Ebrahimnejad and Verdegay [22], Jana and Roy [25], Mahajan and Gupta [27], Rani et al. [28], and Singh and Yadav [31, 32], explored IF optimization techniques in different real-life applications.

The extension and generalization of FS and IFS, named the neutrosophic set (NS), was presented by Smarandache [33]. Based on the NS decision set, Ahmad and Adhami [3, 4] and Ahmad et al. [5, 7] presented neutrosophic optimization techniques to solve MOOPs. Furthermore, Abdel-Basset et al. [1] and Ye [36] solved neutrosophic linear programming problems under neutrosophic numbers. Recently, Adhami and Ahmad [2] investigated a novel Pythagorean-hesitant fuzzy computational algorithm to solve MOOPs and applied it to a transportation problem with fuzzy parameters. Deli and Şubaş [19] suggested a novel ranking method for single-valued neutrosophic numbers and applied it to multi-criteria decision-making problems. Deli [18] investigated the linear optimization method on a single-valued neutrosophic set and performed its sensitivity analysis. Ahmad et al. [8] discussed energy–food–water nexus security management through neutrosophic modeling and optimization approaches. Ahmad et al. also presented a study on the supplier selection problem with Type-2 fuzzy parameters, solved using an interactive neutrosophic optimization algorithm.

Uncertainty measures are also pervasive. Owing to real-world complexity, accounting for uncertainty among the various parameter values is a more realistic approach while making decisions. Different sorts of uncertainty, such as vagueness and randomness, may exist in real life. Uncertainty due to vagueness is treated with fuzzy set theory. A fuzzy parameter deals only with the degree of belongingness (acceptance) of an element in a feasible solution set; it does not consider the degree of non-belongingness (rejection) of the element in the same feasible solution set, which is an integral part of decision-making processes. Furthermore, uncertainty due to randomness is dealt with by random parameters. The estimation of random parameters according to some specified probability distribution function depends heavily on the behavior and nature of historical data, and sometimes the historical data from which the random parameters could be estimated are not available. An intuitionistic fuzzy parameter deals with the degree of belongingness (acceptance) and the degree of non-belongingness (rejection) of an element in the same feasible solution set simultaneously. Moreover, no historical data are required when dealing with intuitionistic fuzzy parameters. Unlike fuzzy and random parameters, the uncertain parameters in this study are depicted as triangular intuitionistic fuzzy numbers. Thus, the main motive behind the selection of intuitionistic fuzzy parameters is to avoid the shortcomings of fuzzy and random parameters.

Optimization techniques have great importance and popularity in solving real-life optimization problems. Despite FS- and IFS-based optimization techniques, the handling of vague and imprecise uncertainty still lags behind more realistic decision-making scenarios in which indeterminate knowledge or neutral thoughts cannot be tackled. For instance, suppose we seek the opinion of a research scholar regarding a journal; one may say that the possibility that the journal is good is 0.7, that it is not acceptable is 0.5, and that one is not sure about it is 0.3. Such issues are beyond the scope of FS and IFS theories and, consequently, beyond the periphery of FPA and IFPA. Therefore, managing the indeterminate circumstances of uncertain knowledge and experience becomes a challenging task. Indeterminacy is the region of ignorance of a proposition's value between the truth and falsity degrees. The neutrosophic set captures indeterminate knowledge or neutral thoughts efficiently. Only a very concise part of the literature is dedicated to solution approaches for MOOPs under the neutrosophic environment. Therefore, a novel interactive neutrosophic programming approach (INPA) is developed to solve the proposed intuitionistic fuzzy MOOPs (IFMOOPs). A continuous effort is being made by many researchers and practitioners in the modeling and optimization of multiobjective optimization problems. Since our proposed MOOP takes the form of an IFMOOP, the development of a novel optimization technique to solve IFMOOPs also signifies this study's aim and objective. A large part of the literature comprises fuzzy-based optimization techniques for IFMOOPs; in the past few years, the generalized concept of a fuzzy set has been utilized to solve them.
Many researchers have also implemented intuitionistic fuzzy-based optimization methods, which have gained a wide range of applicability and acceptability for multiobjective optimization problems. However, the existing approaches have some limitations or drawbacks, which can be overcome by applying the proposed interactive neutrosophic programming approach. Additionally, the following points can be regarded as the research contributions of this study.

  • Uncertainty among parameters due to vagueness is dealt with by the intuitionistic fuzzy set theory, which is more generalized and advanced than the fuzzy set. Therefore, this study has considered the uncertain parameters as a triangular intuitionistic fuzzy number that takes care of the degree of belongingness and non-belongingness of an element into the feasible solution set and deals with the hesitation aspects.

  • Indeterminacy/neutral thoughts are the ignorance region of propositions’ values between the truth and falsity degrees. This aspect can only be tackled with the neutrosophic optimization method.

  • The existing methods of solving MOOPs by Gupta and Kumar [23], Singh et al. [30], and Zangiabadi and Maleki [37] considered only the membership function, whereas Mahajan and Gupta [27] and Singh and Yadav [31, 32] included both the membership and non-membership degrees of each objective function. These methods do not cover indeterminacy/neutral thoughts while making decisions. We have successfully incorporated the concept of neutrality by suggesting indeterminacy degrees alongside the membership and non-membership degrees simultaneously.

  • The studies presented by Mahajan and Gupta [27], Singh and Yadav [31], and Zangiabadi and Maleki [37] do not allow flexibility in the vagueness degree (shape parameters) of neutral thoughts; this flexibility becomes available when applying an exponential-type membership function under the neutrosophic environment.

  • The proposed INPA can be considered as an extension of Ahmad and Adhami [3], Ahmad et al. [5,6,7], Li and Hu [26], and Torabi and Hassini [35].

The remaining portion of the manuscript is structured as follows: in “Basic concepts”, some underlying concept regarding the intuitionistic fuzzy set is discussed, while “Intuitionistic fuzzy multiobjective programming problem” represents the modeling of intuitionistic fuzzy MOOP. The proposed INPA is presented in “Proposed interactive neutrosophic programming approach”. In “Numerical examples”, various numerical examples and a case study are discussed to verify and validate the suggested approach. The conclusions and future research scope are addressed in “Conclusions”.

Basic concepts

Some basic concepts regarding intuitionistic fuzzy set (IFS) are discussed.

Definition 1

[13] (Intuitionistic fuzzy set) Let X be a universal set. Then, an intuitionistic fuzzy set (IFS) \({\widetilde{Y}}\) in X is defined by ordered triplets as follows:

$$\begin{aligned} {\widetilde{Y}}=\{x,~\mu _{{\widetilde{Y}}}(x),~~\nu _{{\widetilde{Y}}}(x)~|~x\in ~X\}, \end{aligned}$$

where \(\mu _{{\widetilde{Y}}}(x) : X \,\rightarrow \, [0,~1]\) denotes the membership function and \(\nu _{{\widetilde{Y}}}(x):X \,\rightarrow \, [0,~1]\) denotes the non-membership function of the element \(x \in X\) in the set \({\widetilde{Y}}\), respectively, with the condition \(0~\le ~\mu _{{\widetilde{Y}}}(x) + \nu _{{\widetilde{Y}}}(x)~\le 1\). The value \(\phi _{{\widetilde{Y}}}(x)= 1- \mu _{{\widetilde{Y}}}(x) - \nu _{{\widetilde{Y}}}(x)\) is called the degree of uncertainty (hesitation) of the element \(x \in X\) in the IFS \({\widetilde{Y}}\). If \(\phi _{{\widetilde{Y}}}(x)=0\), the IFS reduces to a fuzzy set and becomes \({\widetilde{Y}}=\{x,~\mu _{{\widetilde{Y}}}(x),~1-\mu _{{\widetilde{Y}}}(x)~|~x\in ~X\}\).
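To make the hesitation degree concrete, the small sketch below (the helper name and sample degrees are ours, not from the paper) computes \(\phi = 1 - \mu - \nu\) for one element:

```python
# Hesitation (uncertainty) degree from Definition 1: phi = 1 - mu - nu.
# Helper name and sample degrees are illustrative only.
def hesitation(mu, nu):
    assert 0.0 <= mu + nu <= 1.0, "IFS condition: 0 <= mu + nu <= 1"
    return 1.0 - mu - nu

# mu = 0.6, nu = 0.3 gives phi = 0.1 (up to floating-point rounding)
print(hesitation(0.6, 0.3))
```

When \(\phi = 0\) (e.g., \(\mu = 0.6\), \(\nu = 0.4\)), the element behaves as in an ordinary fuzzy set.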

Definition 2

[6] (Intuitionistic fuzzy number) An IFS \({\widetilde{Y}}=\{x,~\mu _{{\widetilde{Y}}}(x),~~\nu _{{\widetilde{Y}}}(x)~|~x\in ~X\} \) is said to be an intuitionistic fuzzy number if and only if:

  1.

    There exists a real number \(x_{0} \in \mathrm I\!R\) for which \(\mu _{{\widetilde{Y}}}(x)=1\) and \(\nu _{{\widetilde{Y}}}(x)=0\).

  2.

    The membership function \(\mu _{{\widetilde{Y}}}(x)\) of \({\widetilde{Y}}\) is fuzzy convex and the non-membership function \(\nu _{{\widetilde{Y}}}(x)\) of \({\widetilde{Y}}\) is fuzzy concave.

  3.

    Also, \(\mu _{{\widetilde{Y}}}(x)\) is upper semi-continuous and \(\nu _{{\widetilde{Y}}}(x)\) is lower semi-continuous.

  4.

    The support of \({\widetilde{Y}}\) is depicted as \(\left\{ x \in \mathrm I\!R : \nu _{{\widetilde{Y}}}(x) < 1\right\} \).

Definition 3

[6] (Triangular intuitionistic fuzzy number) A triangular intuitionistic fuzzy number (TrIFN) is represented by \({\widetilde{Y}}= \left( (y_{1},y_{2},y_{3});(z_{1},y_{2},z_{3})\right) \), where \(z_{1} , y_{1} , y_{2}, y_{3} , z_{3} \in \mathrm I\!R\), such that \(z_{1} \le y_{1} \le y_{2} \le y_{3} \le z_{3}\); its membership function \(\mu _{{\widetilde{Y}}}(x)\) and non-membership function \(\nu _{{\widetilde{Y}}}(x)\) are of the form:

$$\begin{aligned} \mu _{{\widetilde{Y}}}(x)= \left\{ \begin{array}{ll} \dfrac{x-y_{1}}{y_{2}-y_{1}}, &{} \text {~~if}~~ y_{1}< x< y_{2},\\ 1, &{} \text {~~if}~~ x=y_{2},\\ \dfrac{y_{3}-x}{y_{3}-y_{2}}, &{} \text {~~if}~~ y_{2}< x< y_{3},\\ 0, &{} \text {~~otherwise}, \end{array} \right. \\~\text {and}~ \nu _{{\widetilde{Y}}}(x)= \left\{ \begin{array}{ll} \dfrac{y_{2}-x}{y_{2}-z_{1}}, &{} \text {~~if}~~ z_{1}< x< y_{2},\\ 0, &{} \text {~~if}~~ x=y_{2},\\ \dfrac{x-y_{2}}{z_{3}-y_{2}}, &{} \text {~~if}~~ y_{2}< x < z_{3},\\ 1, &{} \text {~~otherwise}. \end{array} \right. \end{aligned}$$
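A minimal sketch of these piecewise functions (helper names and the sample TrIFN are ours, not from the paper): the membership degree rises on \([y_1, y_2]\) and falls on \([y_2, y_3]\), while the non-membership degree does the opposite on the wider support \([z_1, z_3]\).

```python
# Membership and non-membership of a TrIFN ((y1, y2, y3); (z1, y2, z3)).
def mu(x, y1, y2, y3):
    # Truth degree: rises on (y1, y2), peaks at y2, falls on (y2, y3).
    if y1 < x < y2:
        return (x - y1) / (y2 - y1)
    if x == y2:
        return 1.0
    if y2 < x < y3:
        return (y3 - x) / (y3 - y2)
    return 0.0

def nu(x, z1, y2, z3):
    # Falsity degree: falls on (z1, y2), vanishes at y2, rises on (y2, z3).
    if z1 < x < y2:
        return (y2 - x) / (y2 - z1)
    if x == y2:
        return 0.0
    if y2 < x < z3:
        return (x - y2) / (z3 - y2)
    return 1.0

# Example TrIFN ((2, 4, 6); (1, 4, 7)): at x = 3, mu = 0.5 and nu = 1/3,
# so mu + nu <= 1 as required.
print(mu(3, 2, 4, 6), nu(3, 1, 4, 7))
```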

Definition 4

[6] Consider a TrIFN \({\widetilde{Y}}= \left( (y_{1},y_{2},y_{3});(z_{1},y_{2},z_{3})\right) \), where \(z_{1} , y_{1} , y_{2}, y_{3} , z_{3} \in \mathrm I\!R\), such that \(z_{1} \le y_{1} \le y_{2} \le y_{3} \le z_{3}\). Then, the parametric forms of \({\widetilde{Y}}\) are \(u(\tau )= \left( \underline{u(\tau )}, \overline{u(\tau )}\right) \) and \(v(\tau )= \left( \underline{v(\tau )}, \overline{v(\tau )}\right) \). Furthermore, \(u(\tau )\) and \(v(\tau )\) are the parametric forms of the TrIFN corresponding to the membership and non-membership functions, such that \(\underline{u(\tau )} = y_{1}+\tau (y_{2}-y_{1})\), \(\overline{u(\tau )} = y_{3}-\tau (y_{3}-y_{2})\) and \(\underline{v(\tau )} = y_{2}-(1-\tau ) (y_{2}-z_{1})\), \(\overline{v(\tau )} = y_{2}+(1-\tau ) (z_{3}-y_{2})\), respectively. A TrIFN \({\widetilde{Y}}= \left( (y_{1},y_{2},y_{3});(z_{1},y_{2},z_{3})\right) \) is said to be a positive TrIFN if \(z_{1}>0\); hence, \(y_{1},y_{2},y_{3},z_{3}\) are all positive numbers.
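The parametric (\(\tau\)-cut) forms can be sketched as interval-valued functions; the convention below, in which the underlined/overlined parts are the lower/upper endpoints, is an assumption consistent with the expected values in Eqs. (1) and (2). At \(\tau = 1\) both cuts collapse to the modal value \(y_2\), and at \(\tau = 0\) they recover the supports \([y_1, y_3]\) and \([z_1, z_3]\):

```python
# tau-cut intervals of a TrIFN ((y1, y2, y3); (z1, y2, z3)); names illustrative.
def u_cut(tau, y1, y2, y3):
    # Membership cut: [y1 + tau*(y2 - y1), y3 - tau*(y3 - y2)]
    return (y1 + tau * (y2 - y1), y3 - tau * (y3 - y2))

def v_cut(tau, z1, y2, z3):
    # Non-membership cut: [y2 - (1-tau)*(y2 - z1), y2 + (1-tau)*(z3 - y2)]
    return (y2 - (1 - tau) * (y2 - z1), y2 + (1 - tau) * (z3 - y2))

# Example TrIFN ((2, 4, 6); (1, 4, 7)):
print(u_cut(1, 2, 4, 6), v_cut(1, 1, 4, 7))  # both collapse to (4, 4)
print(u_cut(0, 2, 4, 6), v_cut(0, 1, 4, 7))  # (2, 6) and (1, 7)
```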

Definition 5

Assume that \({\widetilde{Y}}= \left( (y_{1},y_{2},y_{3});(z_{1},y_{2},z_{3})\right) \) and \({\widetilde{W}}= \left( (w_{1},w_{2},w_{3});(v_{1},w_{2},v_{3})\right) \) are two TrIFNs. Then, addition of \({\widetilde{Y}}\) and \({\widetilde{W}}\) is again a TrIFN:

$$\begin{aligned} {\widetilde{Y}} + {\widetilde{W}}= & {} \bigg [ \left( y_{1}+w_{1}, y_{2}+w_{2}, y_{3}+w_{3} \right) ;\\ \quad&\left( z_{1}+v_{1}, y_{2}+w_{2}, z_{3}+v_{3} \right) \bigg ]. \end{aligned}$$

Definition 6

Consider that \({\widetilde{Y}}= \left( (y_{1},y_{2},y_{3});(z_{1},y_{2},z_{3})\right) \) be a TrIFN and \(k \in \mathrm I\!R\). Then, scaler multiplication of \({\widetilde{Y}}\) is again a TrIFN:

$$\begin{aligned} k ({\widetilde{Y}})= \left\{ \begin{array}{ll} (k y_{1}, k y_{2}, k y_{3}; k z_{1}, k y_{2}, k z_{3})&{}k>0 \\ (k y_{3}, k y_{2}, k y_{1}; k z_{3}, k y_{2}, k z_{1})&{}k<0 \\ (0,0,0;0,0,0), &{} k=0. \end{array} \right. \end{aligned}$$
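Definitions 5 and 6 amount to componentwise tuple arithmetic; a minimal sketch with illustrative helper names and sample TrIFNs of our own choosing:

```python
# A TrIFN is stored as ((y1, y2, y3), (z1, y2, z3)).
def add(Y, W):
    # Definition 5: componentwise addition of two TrIFNs.
    (y1, y2, y3), (z1, _, z3) = Y
    (w1, w2, w3), (v1, _, v3) = W
    return ((y1 + w1, y2 + w2, y3 + w3), (z1 + v1, y2 + w2, z3 + v3))

def scale(k, Y):
    # Definition 6: scalar multiplication; the component order reverses for k < 0.
    (y1, y2, y3), (z1, _, z3) = Y
    if k > 0:
        return ((k * y1, k * y2, k * y3), (k * z1, k * y2, k * z3))
    if k < 0:
        return ((k * y3, k * y2, k * y1), (k * z3, k * y2, k * z1))
    return ((0, 0, 0), (0, 0, 0))

Y = ((2, 4, 6), (1, 4, 7))
W = ((1, 2, 3), (0, 2, 4))
print(add(Y, W))     # ((3, 6, 9), (1, 6, 11))
print(scale(-1, Y))  # ((-6, -4, -2), (-7, -4, -1))
```

Note how negating a TrIFN reverses the component order so that the result remains a valid TrIFN.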

Property 1 Two TrIFNs \({\widetilde{Y}}= \left( (y_{1},y_{2},y_{3});(z_{1},y_{2},z_{3})\right) \) and \({\widetilde{W}}= \left( (w_{1},w_{2},w_{3});(v_{1},w_{2},v_{3})\right) \) are said to be equal iff \(y_{1}=w_{1}, y_{2}=w_{2}, y_{3}=w_{3}; z_{1}=v_{1}, y_{2}=w_{2}, z_{3}=v_{3}\).

Definition 7

(Expected interval and expected value of TrIFNs) The concepts of expected interval and expected value were defined by Heilpern [24]; here, we redefine them for TrIFNs. Suppose that \({\widetilde{Y}}= \left( (y_{1},y_{2},y_{3});(z_{1},y_{2},z_{3})\right) \) is a TrIFN, and let \(EI^{\mu }\) and \(EI^{\nu }\) depict the expected intervals for the membership and non-membership functions, respectively. These can be defined as follows:

$$\begin{aligned} EI^{\mu } ({\widetilde{Y}})&= \left[ \int _{0}^{1} \underline{u(\tau )}\, \mathrm{d}\tau ,\int _{0}^{1} \overline{u(\tau )}\, \mathrm{d}\tau \right] \\&=\left[ \int _{0}^{1} \left( y_{1}+\tau (y_{2}-y_{1})\right) \mathrm{d}\tau , \int _{0}^{1} \left( y_{3}-\tau (y_{3}-y_{2})\right) \mathrm{d}\tau \right] \\ EI^{\nu } ({\widetilde{Y}})&= \left[ \int _{0}^{1} \underline{v(\tau )}\, \mathrm{d}\tau ,\int _{0}^{1} \overline{v(\tau )}\, \mathrm{d}\tau \right] \\&=\left[ \int _{0}^{1} \left( y_{2}-(1-\tau ) (y_{2}-z_{1})\right) \mathrm{d}\tau , \int _{0}^{1} \left( y_{2} +(1-\tau ) (z_{3}-y_{2})\right) \mathrm{d}\tau \right] . \end{aligned}$$

Moreover, consider that \(EV^{\mu } ({\widetilde{Y}})\) and \(EV^{\nu } ({\widetilde{Y}})\) represent the expected values corresponding to membership and non-membership functions, respectively. These can be depicted as follows:

$$\begin{aligned} EV^{\mu } ({\widetilde{Y}})= & {} \dfrac{\int _{0}^{1} \underline{u(\tau )}\, \mathrm{d}\tau + \int _{0}^{1} \overline{u(\tau )}\, \mathrm{d}\tau }{2} = \dfrac{y_{1}+2y_{2}+y_{3}}{4}\end{aligned}$$
(1)
$$\begin{aligned} EV^{\nu } ({\widetilde{Y}})= & {} \dfrac{\int _{0}^{1} \underline{v(\tau )}\, \mathrm{d}\tau + \int _{0}^{1} \overline{v(\tau )}\, \mathrm{d}\tau }{2} = \dfrac{z_{1}+2y_{2}+z_{3}}{4}.\nonumber \\ \end{aligned}$$
(2)

The expected value EV of a TrIFN \({\widetilde{Y}}= \left( (y_{1},y_{2},y_{3});(z_{1},y_{2},z_{3})\right) \) is given as follows:

$$\begin{aligned} EV ({\widetilde{Y}})= \psi EV^{\mu } ({\widetilde{Y}}) + (1-~\psi ) EV^{\nu } ({\widetilde{Y}}),~\text {where}~ \psi \in [0,~1]. \end{aligned}$$

Definition 8

(Accuracy function) The expected value (EV) of a TrIFN \({\widetilde{Y}}= \left( (y_{1},y_{2},y_{3});(z_{1},y_{2},z_{3})\right) \), obtained from Eqs. (1) and (2) with \(\psi = 0.5\), can be represented as follows:

$$\begin{aligned} EV ({\widetilde{Y}})= \dfrac{y_{1} + y_{3} + 4y_{2}+ z_{1} + z_{3}}{8}; \end{aligned}$$

thus, \(EV ({\widetilde{Y}})\) is also known as accuracy function of \({\widetilde{Y}}\).
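The expected value is straightforward to compute; a small sketch (helper name and sample TrIFN are illustrative, not from the paper) that also confirms the \(\psi = 0.5\) special case:

```python
# Expected value of a TrIFN ((y1, y2, y3); (z1, y2, z3)) per Eqs. (1)-(2):
# EV = psi * EV_mu + (1 - psi) * EV_nu, psi in [0, 1].
def ev(Y, psi=0.5):
    (y1, y2, y3), (z1, _, z3) = Y
    ev_mu = (y1 + 2 * y2 + y3) / 4   # Eq. (1)
    ev_nu = (z1 + 2 * y2 + z3) / 4   # Eq. (2)
    return psi * ev_mu + (1 - psi) * ev_nu

Y = ((2, 4, 6), (0, 4, 10))
# For psi = 0.5 this matches the accuracy function
# (y1 + y3 + 4*y2 + z1 + z3) / 8 = 34 / 8 = 4.25
print(ev(Y))
```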

Theorem 1

Suppose that \({\widetilde{Y}}\) is a TrIFN. Then, for the expected value \(EV:IF(\mathrm I\!R) \,\rightarrow \, \mathrm I\!R\), we have \(EV (k {\widetilde{Y}}) = k EV ({\widetilde{Y}})\) for all \(k \in \mathrm I\!R\).

Proof

Let \({\widetilde{Y}}= \left( (y_{1},y_{2},y_{3});(z_{1},y_{2},z_{3})\right) \) be a TrIFN. Then, based on the sign of k, three different cases arise:

Case I when \(k = 0\), the result holds trivially.

Case II when \(k > 0\), then with the help of Definition 6, we have \(k{\widetilde{Y}}= k \left( (y_{1},y_{2},y_{3});(z_{1},y_{2},z_{3})\right) = (k y_{1}, k y_{2}, k y_{3}; k z_{1}, k y_{2}, k z_{3})\). On applying the expected value to \(k{\widetilde{Y}}\), we get:

$$\begin{aligned} EV (k {\widetilde{Y}})&= EV (k y_{1}, k y_{2}, k y_{3}; k z_{1}, k y_{2}, k z_{3})\\&=\dfrac{\left( k y_{1}+ 2k y_{2}+ k y_{3}+ k z_{1}+ 2k y_{2}+ k z_{3} \right) }{8} \\&= k \dfrac{\left( y_{1}+ 4 y_{2}+ y_{3}+ z_{1}+ z_{3} \right) }{8}\\&= k EV ({\widetilde{Y}}). \end{aligned}$$

Case III when \(k < 0\), then with the help of Definition 6, we have \(k{\widetilde{Y}}= k \left( (y_{1},y_{2},y_{3});(z_{1},y_{2},z_{3})\right) = (k y_{3}, k y_{2}, k y_{1}; k z_{3}, k y_{2}, k z_{1})\). On applying the expected value to \(k{\widetilde{Y}}\), we get:

$$\begin{aligned} EV (k {\widetilde{Y}})&= EV (k y_{3}, k y_{2}, k y_{1}; k z_{3}, k y_{2}, k z_{1})\\&=\dfrac{\left( k y_{1}+ 2k y_{2}+ k y_{3}+ k z_{1}+ 2k y_{2}+ k z_{3} \right) }{8} \\&= k \dfrac{\left( y_{1}+ 4 y_{2}+ y_{3}+ z_{1}+ z_{3} \right) }{8}\\&= k EV ({\widetilde{Y}}). \end{aligned}$$

In each case, we have proven that \(EV (k {\widetilde{Y}}) = k EV ({\widetilde{Y}})\).
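The homogeneity just proven can be spot-checked numerically; the sketch below (illustrative values of our own choosing) verifies \(EV(k\widetilde{Y}) = k\,EV(\widetilde{Y})\) for both signs of k:

```python
# Accuracy-function value of a TrIFN ((y1, y2, y3); (z1, y2, z3)), psi = 0.5.
def ev(Y):
    (y1, y2, y3), (z1, _, z3) = Y
    return (y1 + y3 + 4 * y2 + z1 + z3) / 8

def scale(k, Y):
    # Definition 6: component order reverses for negative k.
    (y1, y2, y3), (z1, _, z3) = Y
    if k >= 0:
        return ((k * y1, k * y2, k * y3), (k * z1, k * y2, k * z3))
    return ((k * y3, k * y2, k * y1), (k * z3, k * y2, k * z1))

Y = ((2, 4, 6), (1, 4, 7))
print(ev(scale(3, Y)) == 3 * ev(Y))    # True
print(ev(scale(-2, Y)) == -2 * ev(Y))  # True
```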

Theorem 2

Suppose that \({\widetilde{Y}}\) and \({\widetilde{W}}\) are two TrIFNs. Then, the accuracy function \(EV:IF(\mathrm I\!R) \,\rightarrow \, \mathrm I\!R\) is a linear function, i.e., \(EV ({\widetilde{Y}} + k {\widetilde{W}}) = EV ({\widetilde{Y}}) + k EV ({\widetilde{W}})\) for all \(k \in \mathrm I\!R\).

Proof

Let \({\widetilde{Y}}= \left( (y_{1},y_{2},y_{3});(z_{1},y_{2},z_{3})\right) \) and \({\widetilde{W}}= \left( (w_{1},w_{2},w_{3});(v_{1},w_{2},v_{3})\right) \) be two TrIFNs. Then, based on the sign of k, three different cases arise:

Case I when \(k = 0\), the result holds trivially.

Case II when \(k > 0\), then with the help of Definitions 5 and 6, we have \({\widetilde{Y}} + k{\widetilde{W}}= \left[ \left( y_{1}+kw_{1}, y_{2}+kw_{2}, y_{3}+kw_{3} \right) ; \left( z_{1}+kv_{1}, y_{2}+kw_{2}, z_{3}+kv_{3} \right) \right] \). On applying the expected value to \({\widetilde{Y}} + k {\widetilde{W}}\), we have:

$$\begin{aligned}&EV ({\widetilde{Y}} + k {\widetilde{W}}) \\&\quad = \dfrac{ (y_{1}+kw_{1})+ 4(y_{2}+kw_{2})+ (y_{3}+kw_{3})+ ( z_{1}+kv_{1})+ (z_{3}+kv_{3}) }{8}\\&\quad = \dfrac{ (y_{1}+z_{1}+ 4y_{2}+ y_{3}+z_{3})+ ( kw_{1}+kv_{1}+ 4kw_{2}+kw_{3} +kv_{3}) }{8}\\&\quad =\dfrac{\left( y_{1}+z_{1}+ 4y_{2}+ y_{3}+z_{3} \right) }{8} + \dfrac{\left( kw_{1}+kv_{1}+ 4kw_{2}+kw_{3} +kv_{3} \right) }{8}\\&\quad = EV ({\widetilde{Y}}) + k EV ({\widetilde{W}}). \end{aligned}$$

Case III when \(k < 0\), we have:

$$\begin{aligned}&EV ({\widetilde{Y}} + k {\widetilde{W}})\\&\quad = \dfrac{(z_{3}+kv_{3})+( z_{1}+kv_{1})+ (y_{3}+kw_{3})+ 4(y_{2}+kw_{2})+ (y_{1}+kw_{1})}{8}\\&\quad = \dfrac{( kw_{1}+kv_{1}+ 4kw_{2}+kw_{3} +kv_{3}) + (y_{1}+z_{1}+ 4y_{2}+ y_{3}+z_{3}) }{8}\\&\quad = \dfrac{\left( kw_{1}+kv_{1}+ 4kw_{2}+kw_{3} +kv_{3} \right) }{8} + \dfrac{\left( y_{1}+z_{1}+ 4y_{2}+ y_{3}+z_{3} \right) }{8}\\&\quad = k EV ({\widetilde{W}}) + EV ({\widetilde{Y}}). \end{aligned}$$

In each case, we have proven that \(EV ({\widetilde{Y}} + k {\widetilde{W}}) = EV ({\widetilde{Y}}) + k EV ({\widetilde{W}})\) for all \(k \in \mathrm I\!R\). Thus, the expected value (accuracy function) EV is linear.
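A quick numerical check of this linearity, with sample TrIFNs of our own choosing (the positive-k case of Definitions 5 and 6):

```python
# Accuracy-function value of a TrIFN ((y1, y2, y3); (z1, y2, z3)), psi = 0.5.
def ev(Y):
    (y1, y2, y3), (z1, _, z3) = Y
    return (y1 + y3 + 4 * y2 + z1 + z3) / 8

def combine(Y, k, W):
    # Forms Y + k*W componentwise (k > 0 case of Definitions 5 and 6).
    (y1, y2, y3), (z1, _, z3) = Y
    (w1, w2, w3), (v1, _, v3) = W
    return ((y1 + k * w1, y2 + k * w2, y3 + k * w3),
            (z1 + k * v1, y2 + k * w2, z3 + k * v3))

Y = ((2, 4, 6), (1, 4, 7))
W = ((1, 2, 3), (0, 2, 4))
k = 3
# EV(Y + k*W) = EV(Y) + k*EV(W): 10.0 on both sides here.
print(ev(combine(Y, k, W)) == ev(Y) + k * ev(W))  # True
```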

Theorem 3

Suppose that \({\widetilde{Y}}= \left( (y_{1},y_{2},y_{3});(z_{1},y_{2},z_{3})\right) \) is a TrIFN. If \(z_{1}=y_{1}\) and \(z_{3}=y_{3}\), then \(EV ({\widetilde{Y}}) = \dfrac{y_{1}+2y_{2}+y_{3}}{4}\), which represents the defuzzified value of a triangular fuzzy number.

Proof

Let \({\widetilde{Y}}= \left( (y_{1},y_{2},y_{3});(z_{1},y_{2},z_{3})\right) \) be a TrIFN. Since we have:

$$\begin{aligned} z_{1}=y_{1}, z_{3}=y_{3}. \end{aligned}$$
(3)

Therefore, expected value of \({\widetilde{Y}}\) is given by:

$$\begin{aligned} EV ({\widetilde{Y}}) = \dfrac{\left( y_{1}+z_{1}+ 4y_{2}+ y_{3}+z_{3}\right) }{8}. \end{aligned}$$
(4)

Using Eqs. (3) and (4), we get:

$$\begin{aligned} EV ({\widetilde{Y}}) = \dfrac{\left( y_{1}+2y_{2}+ y_{3}\right) }{4}. \end{aligned}$$

Theorem 4

The expected value \(EV (k) = k\), where \(k \in \mathrm I\!R\).

Proof

Let us suppose that k is a real number, represented as the TrIFN \(k= \left( (k,k,k);(k,k,k)\right) \). Then:

$$\begin{aligned} EV (k) = \dfrac{k+k+4k+k+k}{8}=\dfrac{8k}{8}=k. \end{aligned}$$

Note that, in particular, if \(k=0\), then \(EV (0) = 0\).

Definition 9

[3] (Neutrosophic set) Let X be a universal discourse set with generic element \(x \in X\). A neutrosophic set (NS) A in X is characterized by the truth \(\mu _{A}(x)\), indeterminacy \(\lambda _{A}(x)\), and falsity \(\nu _{A}(x)\) membership functions, and is expressed as follows:

$$\begin{aligned} A = \{ < x, \mu _{A}(x) , \lambda _{A}(x) , \nu _{A}(x) > | x \in X \}, \end{aligned}$$

where \(\mu _{A}(x) , \lambda _{A}(x)\), and \(\nu _{A}(x)\) are real standard or non-standard subsets of \(]0^{-}, 1^{+}[\), i.e., \(\mu _{A}(x): X \,\rightarrow \, ]0^{-}, 1^{+}[\), \(\lambda _{A}(x): X \,\rightarrow \, ]0^{-}, 1^{+}[\), and \(\nu _{A}(x): X \,\rightarrow \, ]0^{-}, 1^{+}[\). The sum of \( \mu _{A}(x) , \lambda _{A}(x)\), and \(\nu _{A}(x)\) is unrestricted; thus, we have:

$$\begin{aligned} 0^{-} \le ~\sup ~\mu _{A}(x) + ~\sup ~\lambda _{A}(x) +~\sup ~ \nu _{A}(x) \le 3^{+}. \end{aligned}$$

Definition 10

[3] An NS A is said to be a single-valued neutrosophic set if the following condition holds:

$$\begin{aligned} A = \{ < x, \mu _{A}(x) , \lambda _{A}(x) , \nu _{A}(x) > | x \in X \}, \end{aligned}$$

where \(\mu _{A}(x) , \lambda _{A}(x)\) and \(\nu _{A}(x) \in [0,1]\) and \(0 \le \mu _{A}(x) + \lambda _{A}(x) + \nu _{A}(x) \le 3\) for each \(x \in X\).
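A minimal validity check for Definition 10 (the helper name and sample triples are ours, not from the paper): each degree must lie in \([0, 1]\) and their sum must not exceed 3.

```python
# Validity check for a single-valued neutrosophic triple (mu, lambda, nu):
# each degree in [0, 1] and mu + lambda + nu <= 3 (Definition 10).
def is_svns(mu, lam, nu):
    return all(0.0 <= d <= 1.0 for d in (mu, lam, nu)) and mu + lam + nu <= 3.0

print(is_svns(0.7, 0.3, 0.5))  # True: the journal-opinion triple from above
print(is_svns(1.2, 0.3, 0.5))  # False: truth degree exceeds 1
```

Unlike an IFS, the sum may exceed 1 (up to 3), which is what lets the three degrees vary independently.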

Definition 11

[6] The union of two single-valued neutrosophic sets A and B is also a single-valued neutrosophic set C, i.e., \(C=(A \cup B)\), with the truth \(\mu _{C}(x)\), indeterminacy \(\lambda _{C}(x)\), and falsity \(\nu _{C}(x)\) membership functions as follows:

\(\mu _{C}(x)= \text {max} ~(\mu _{A}(x),~ \mu _{B}(x))\)

\(\lambda _{C}(x)= \text {max} ~(\lambda _{A}(x),~ \lambda _{B}(x))\)

\(\nu _{C}(x)= \text {min} ~(\nu _{A}(x),~ \nu _{B}(x))\) for each \(x \in X\).

Definition 12

[6] The intersection of two single-valued neutrosophic sets A and B is also a single-valued neutrosophic set C, i.e., \(C=(A \cap B)\) with the truth \(\mu _{C}(x)\), indeterminacy \(\lambda _{C}(x)\), and falsity \(\nu _{C}(x)\) membership functions as follows:

\(\mu _{C}(x)= \text {min} ~(\mu _{A}(x),~ \mu _{B}(x))\)

\(\lambda _{C}(x)= \text {min} ~(\lambda _{A}(x),~ \lambda _{B}(x))\)

\(\nu _{C}(x)= \text {max} ~(\nu _{A}(x),~ \nu _{B}(x))\) for each \(x \in X\).

Intuitionistic fuzzy multiobjective programming problem

Most often, real-life problems exhibit the optimization of more than one objective at a time. The most promising solution set that satisfies each objective efficiently is termed the best compromise solution. Hence, the conventional form of a MOOP with K objectives is given as follows (5):

$$\begin{aligned} \begin{array}{ll} &{}\text {Optimize (Max/Min)}O(x)= \left[ O_{1} (x),O_{2}(x), \ldots ,O_{k}(x) \right] \\ \text {s.t.}&{}\\ &{}\quad \sum _{j=1}^{J} a_{ij} x_{j} \ge b_{i},~~i=1,2, \ldots , I_{1},\\ &{}\quad \sum _{j=1}^{J} a_{ij} x_{j} \le b_{i},~~i=I_{1}+1,I_{1}+2, \ldots , I_{2},\\ &{}\quad \sum _{j=1}^{J} a_{ij} x_{j} = b_{i},~~i=I_{2}+1,I_{2}+2, \ldots , I.\\ &{}\quad x_{j} \ge 0,~~j=1,2, \ldots , J, \end{array} \end{aligned}$$
(5)

where \(O_{k}(x)= \sum _{j=1}^{J} c_{kj} x_{j}, ~~\forall ~~k=1,2, \ldots , K\) is the kth objective function, linear in nature; \(b_{i},~~\forall ~~i=1,2, \ldots , I\) are the right-hand sides; and \(x_{j}, ~~\forall ~~j=1,2, \ldots , J\) are the decision variables.

Thus, the formulation of intuitionistic fuzzy MOOP (IFMOOP) (6) can be summarized as follows:

$$\begin{aligned} \begin{array}{ll} &{}\text {Optimize (Max/Min)}\,\,{\widetilde{O}}^{IF}(x)= \left[ {\widetilde{O}}^{IF}_{1} (x),{\widetilde{O}}^{IF}_{2}(x), \ldots ,{\widetilde{O}}^{IF}_{k}(x) \right] \\ &{}\quad \text {s.t.}\\ &{}\quad \sum _{j=1}^{J} {\widetilde{a}}^{IF}_{ij} x_{j} \ge {\widetilde{b}}^{IF}_{i},~~i=1,2, \ldots , I_{1},\\ &{}\quad \sum _{j=1}^{J} {\widetilde{a}}^{IF}_{ij} x_{j} \le {\widetilde{b}}^{IF}_{i},~~i=I_{1}+1,I_{1}+2, \ldots , I_{2},\\ &{}\quad \sum _{j=1}^{J} {\widetilde{a}}^{IF}_{ij} x_{j} = {\widetilde{b}}^{IF}_{i},~~i=I_{2}+1,I_{2}+2, \ldots , I.\\ &{}\quad x_{j} \ge 0,~~j=1,2, \ldots , J, \end{array} \end{aligned}$$
(6)

where \({\widetilde{O}}^{IF}_{k}(x)= \sum _{j=1}^{J} \left( {\widetilde{c}}_{kj}\right) ^{IF} x_{j}, ~~\forall ~~k=1,2, \ldots , K\) is the kth objective function with triangular intuitionistic fuzzy parameters.

With the aid of the accuracy function, which is linear (Theorem 2), the IFMOOP (6) can be converted into the following deterministic MOOP (7):

$$\begin{aligned} \begin{array}{ll} &{}\text {Optimize (max/min)}O^{'}(x)= \left[ O^{'}_{1}(x), O^{'}_{2}(x), \ldots , O^{'}_{k}(x) \right] \\ &{}\quad \text {s.t.}\\ &{}\quad \sum _{j=1}^{J} a^{'}_{ij} x_{j} \ge b^{'}_{i},~~i=1,2, \ldots , I_{1},\\ &{}\quad \sum _{j=1}^{J} a^{'}_{ij} x_{j} \le b^{'}_{i},~~i=I_{1}+1,I_{1}+2, \ldots , I_{2},\\ &{}\quad \sum _{j=1}^{J} a^{'}_{ij} x_{j} = b^{'}_{i},~~i=I_{2}+1,I_{2}+2, \ldots , I.\\ &{}\quad x_{j} \ge 0,~~j=1,2, \ldots , J, \end{array} \end{aligned}$$
(7)

where \(O^{'}_{k}(x)= EV \left( {\widetilde{O}}^{IF}_{k}(x) \right) = \sum _{j=1}^{J} EV \left( \left( {\widetilde{c}}_{kj}\right) ^{IF} \right) x_{j}, \forall ~k=1,2, \ldots , K\); \(b^{'}_{i}= EV \left( {\widetilde{b}}^{IF}_{i} \right) \) and \(a^{'}_{ij}= EV \left( {\widetilde{a}}^{IF}_{ij} \right) \), for all \(i=1,2, \ldots , I,~~j=1,2, \ldots , J\), are the crisp versions of all the objective functions and parameters.
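As an illustration of this conversion (with made-up TrIFN coefficients, not data from the paper), each intuitionistic fuzzy coefficient is simply replaced by its accuracy-function value:

```python
# Crisp conversion of intuitionistic fuzzy objective coefficients:
# each TrIFN c_kj is replaced by its accuracy-function value EV(c_kj).
def ev(t):
    (y1, y2, y3), (z1, _, z3) = t
    return (y1 + y3 + 4 * y2 + z1 + z3) / 8

c = [((2, 4, 6), (1, 4, 7)),   # hypothetical coefficient c_11
     ((1, 2, 3), (0, 2, 4))]   # hypothetical coefficient c_12
crisp_c = [ev(t) for t in c]
print(crisp_c)  # [4.0, 2.0]
```

The same EV mapping is applied to the constraint parameters \({\widetilde{a}}^{IF}_{ij}\) and \({\widetilde{b}}^{IF}_{i}\), yielding an ordinary crisp MOOP.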

Of particular interest, we have proven the existence of an efficient solution of the IFMOOP (6) and the convexity property of crisp MOOP (7) in Theorems 5 and 6, respectively. Hence, the obtained crisp MOOP (7) can be solved using the proposed interactive neutrosophic programming approach (see “Proposed interactive neutrosophic programming approach”) to obtain the optimal global solutions.

Definition 13

Assume that X is the set of feasible solutions of the crisp MOOP (7). Then, a point \(x^{*}\) is said to be an efficient or Pareto optimal solution of the crisp MOOP (7) if and only if there does not exist any \(x \in X\) such that \(O_{k}(x^{*}) \ge O_{k}(x), ~~\forall ~~k=1,2, \ldots , K\) and \(O_{k}(x^{*}) > O_{k}(x)\) for at least one \(k \in \{1,2, \ldots , K\}\). Here, K is the number of objective functions present in the crisp MOOP (7).

Definition 14

A point \(x^{*} \in X\) is said to be a weak Pareto optimal solution of the crisp MOOP (7) if and only if there does not exist any \(x \in X\) such that \(O_{k}(x^{*}) \ge O_{k}(x), ~~\forall ~~k=1,2, \ldots , K\).

Theorem 5

An efficient solution of the crisp MOOP (7) is also an efficient solution for the IFMOOP (6).

Proof

Let X be an efficient solution of the crisp MOOP (7). Then, X is feasible for the crisp MOOP (7); that is, the following conditions hold:

$$\begin{aligned} \begin{array}{ll} &{}\sum _{j=1}^{J} a^{'}_{ij} x_{j} \ge b^{'}_{i},~~i=1,2, \ldots , I_{1},\\ &{}\sum _{j=1}^{J} a^{'}_{ij} x_{j} \le b^{'}_{i},~~i=I_{1}+1,I_{1}+2, \ldots , I_{2},\\ &{}\sum _{j=1}^{J} a^{'}_{ij} x_{j} = b^{'}_{i},~~i=I_{2}+1,I_{2}+2, \ldots , I.\\ &{}x_{j} \ge 0,~~j=1,2, \ldots , J. \end{array} \end{aligned}$$

Since it is proven that EV is a linear function (Theorem 2), we have:

$$\begin{aligned} \begin{array}{ll} &{}\sum _{j=1}^{J} EV \left( {\widetilde{a}}^{IF}_{ij}\right) x_{j} \ge EV \left( {\widetilde{b}}^{IF}_{i}\right) ,~~i=1,2, \ldots , I_{1},\\ &{}\sum _{j=1}^{J} EV \left( {\widetilde{a}}^{IF}_{ij}\right) x_{j} \le EV \left( {\widetilde{b}}^{IF}_{i}\right) ,~~i=I_{1}+1,I_{1}+2, \ldots , I_{2},\\ &{}\sum _{j=1}^{J} EV \left( {\widetilde{a}}^{IF}_{ij}\right) x_{j} = EV \left( {\widetilde{b}}^{IF}_{i}\right) ,~~i=I_{2}+1,I_{2}+2, \ldots , I.\\ &{}x_{j} \ge 0,~~j=1,2, \ldots , J. \end{array} \end{aligned}$$

Consequently, we have:

$$\begin{aligned} \begin{array}{ll} &{}\sum _{j=1}^{J} {\widetilde{a}}^{IF}_{ij} x_{j} \ge {\widetilde{b}}^{IF}_{i},~~i=1,2, \ldots , I_{1},\\ &{}\sum _{j=1}^{J} {\widetilde{a}}^{IF}_{ij} x_{j} \le {\widetilde{b}}^{IF}_{i},~~i=I_{1}+1,I_{1}+2, \ldots , I_{2},\\ &{}\sum _{j=1}^{J} {\widetilde{a}}^{IF}_{ij} x_{j} = {\widetilde{b}}^{IF}_{i},~~i=I_{2}+1,I_{2}+2, \ldots , I.\\ &{}x_{j} \ge 0,~~j=1,2, \ldots , J. \end{array} \end{aligned}$$

Hence, X is a feasible solution for the IFMOOP (6).

Moreover, since X is an efficient solution of the crisp MOOP (7), there does not exist any \(X^{*}= \left( x^{*}_{1}, x^{*}_{2}, \ldots , x^{*}_{n} \right) \) such that \(O_{k}(X^{*}) \le O_{k}(X), ~\forall ~~k=1,2, \ldots , K\) and \(O_{k}(X^{*}) < O_{k}(X)\) for at least one \(k=1,2, \ldots , K\).

Since EV is a linear function (Theorem 2), there is no \(X^{*}\) such that \(\text {Min} \sum _{k=1}^{K} EV \left( {\widetilde{O}}_{k}(X) \right) \le \text {Min} \sum _{k=1}^{K} EV \left( {\widetilde{O}}_{k}(X^{*}) \right) , ~\forall ~~k=1,2, \ldots , K\) and \(\text {Min} \sum _{k=1}^{K} EV \left( {\widetilde{O}}_{k}(X) \right) < \text {Min} \sum _{k=1}^{K} EV \left( {\widetilde{O}}_{k}(X^{*}) \right) \) for at least one \(k=1,2, \ldots , K\). Thus, X is an efficient solution for the IFMOOP (6).

Definition 15

Let \(O_{1}\) and \(O_{2}\) be comonotonic functions. Then, for any intuitionistic fuzzy parameter \({\widetilde{Y}}\), we have:

$$\begin{aligned} EV \left[ O_{1}({\widetilde{Y}}) + O_{2}({\widetilde{Y}}) \right] = EV \left[ O_{1}({\widetilde{Y}})\right] + EV \left[ O_{2}({\widetilde{Y}})\right] . \end{aligned}$$

For the sake of simplicity, let us consider an auxiliary model (8), which is equivalent to the crisp MOOP (7) and can be given as follows:

$$\begin{aligned} \begin{array}{ll} &{}\text {Optimize (Max/Min)}EV \left[ O(X, {\widetilde{Y}})\right] ~~= \left( EV\left[ O_{1}(X, {\widetilde{Y}})\right] ,\ldots , EV\right. \\ &{}\qquad \left. \left[ O_{k}(X, {\widetilde{Y}})\right] \right) ~\forall ~k=1,2,3.\\ &{}\quad \text {subject to}\\ &{}\quad \sum _{j=1}^{J} a^{'}_{ij} x_{j} \ge b^{'}_{i},~~i=1,2, \ldots , I_{1},\\ &{}\quad \sum _{j=1}^{J} a^{'}_{ij} x_{j} \le b^{'}_{i},~~i=I_{1}+1,I_{1}+2, \ldots , I_{2},\\ &{}\quad \sum _{j=1}^{J} a^{'}_{ij} x_{j} = b^{'}_{i},~~i=I_{2}+1,I_{2}+2, \ldots , I.\\ &{}\quad x_{j} \ge 0,~~j=1,2, \ldots , J, \end{array} \end{aligned}$$
(8)

where \(EV[\cdot ]\) in auxiliary model (8) represents the expected values (accuracy function) of the intuitionistic fuzzy parameters.

In Theorem 5, we have already proven the expected value (EV) efficient solution for the IFMOOP (6). This concept is obtained by presenting the crisp MOOP (7), which comprises the expected values of the intuitionistic fuzzy uncertain objectives of the IFMOOP (6).

Intuitively, if the intuitionistic fuzzy uncertain vectors in the auxiliary model (8) degenerate into intuitionistic fuzzy parameters, then the following convexity result (Theorem 6) for the auxiliary model (8) can be proved.

Theorem 6

Suppose that the function \(O(X, {\widetilde{Y}})\) is differentiable and a convex vector function with respect to X and \({\widetilde{Y}}\). Thus, for any given \(X_{1},~X_{2} \in X\), if \(O_{k}(X_{1}, {\widetilde{Y}})\) and \(O_{k}(X_{2}, {\widetilde{Y}})\) are comonotonic on intuitionistic fuzzy parameters \({\widetilde{Y}}\), then the auxiliary model (8) is a convex programming problem.

Proof

Since the feasible solution set X is a convex set, it is sufficient to show that the objective of the auxiliary model (8) is a convex vector function.

Note that, since \(O(X, {\widetilde{Y}})\) is a convex vector function on X for any given \({\widetilde{Y}}\), the inequality:

$$\begin{aligned} O \left( \delta X_{1} + (1- \delta ) X_{2}, {\widetilde{Y}} \right) \leqq \delta O(X_{1}, {\widetilde{Y}}) + (1- \delta ) O(X_{2}, {\widetilde{Y}}) \end{aligned}$$

holds for any \(\delta \in [0,1]\) and \(X_{1},~X_{2} \in X\), that is:

$$\begin{aligned} O_{k} \left( \delta X_{1} + (1- \delta ) X_{2}, {\widetilde{Y}} \right) \leqq \delta O_{k}(X_{1}, {\widetilde{Y}}) + (1- \delta ) O_{k}(X_{2}, {\widetilde{Y}}) \end{aligned}$$

holds for each \(k,~1\le k \le 3\).

Using the assumed condition that \(O_{k}(X_{1}, {\widetilde{Y}})\) and \(O_{k}(X_{2}, {\widetilde{Y}})\) are comonotonic on \({\widetilde{Y}}\), it follows from Definition 15 that:

$$\begin{aligned}&EV \left[ O_{k} \left( \delta X_{1} + (1- \delta ) X_{2}, {\widetilde{Y}} \right) \right] \leqq \delta EV \left[ O_{k}(X_{1}, {\widetilde{Y}})\right] \\&\quad + (1- \delta ) EV \left[ O_{k}(X_{2}, {\widetilde{Y}}) \right] , ~\forall ~ k, \end{aligned}$$

which implies that:

$$\begin{aligned}&EV \left[ O \left( \delta X_{1} + (1- \delta ) X_{2}, {\widetilde{Y}} \right) \right] \leqq \delta EV \left[ O(X_{1}, {\widetilde{Y}})\right] \\&\quad + (1- \delta ) EV \left[ O(X_{2}, {\widetilde{Y}}) \right] . \end{aligned}$$

The above inequality shows that \(EV \left[ O(X, {\widetilde{Y}})\right] \) is a convex vector function. Hence, the auxiliary model (8) is a convex programming problem. Consequently, the crisp MOOP (7) is also a convex programming problem. Thus, Theorem 6 is proved.

Proposed interactive neutrosophic programming approach

In many decision-making processes, neutral thoughts or a degree of indeterminacy may arise about elements of the feasible decision set. Since FS and IFS can only tackle the degrees of belongingness and non-belongingness of an element, the indeterminacy degree cannot be managed with these sets. To capture neutral thoughts or the indeterminacy degree, Smarandache [33] introduced the neutrosophic set (NS). The NS deals with the degrees of belongingness and non-belongingness along with the indeterminacy degree of an element. Thus, the NS inevitably involves neutral thoughts and can be regarded as a generalization of both FS and IFS. In MOOP, the marginal evaluations of each objective function are addressed by three different membership grades: the truth (degree of belongingness), indeterminacy (degree of belongingness up to some extent), and falsity (degree of non-belongingness) membership functions, respectively. The literature reflects that a significant amount of research work has been carried out in the neutrosophic domain (see Ahmad and Adhami [3], Ahmad et al. [5,6,7], and Smarandache [33]). We have also adopted a neutrosophic-based optimization technique for solving the MOOP. The proposed INPA also contemplates the quantification of marginal evaluations under the truth, indeterminacy, and falsity membership functions. Thus, in dealing with neutral thoughts in the MOOP, neutrosophic optimization techniques have the most favorable characteristics and a significant role in decision-making.

Various membership functions

In multiobjective programming problems, the marginal evaluation of each objective function is depicted by its respective membership functions. The linear, exponential, and hyperbolic membership functions are constructed under the neutrosophic environment. Each of them is defined for the truth, indeterminacy, and falsity membership functions, which seems more realistic.

To depict the different membership functions for the crisp MOOP (7), the minimum and maximum values of each objective function are represented by \(L_{k}\) and \(U_{k}\) and can be obtained as follows:

$$\begin{aligned} U_{k}= \text {max}~ [O_{k}(x)]~~ \text {and} ~~L_{k} = \text {min}~ [O_{k}(x)]~~~\forall ~k=1,2,3,\ldots ,K.\nonumber \\ \end{aligned}$$
(9)

The bounds for kth objective function under the neutrosophic environment can be obtained as follows:

$$\begin{aligned} U_{k}^{\mu }= & {} U_{k},~~~~L_{k}^{\mu }= L_{k}~~~~\text {for}~\text {truth}~\text {membership} \end{aligned}$$
(10)
$$\begin{aligned} U_{k}^{\lambda }= & {} L_{k}^{\mu } + s_{k},~~~~L_{k}^{\lambda }= L_{k}^{\mu }~~~~{\text {for}}~{\text {indeterminacy}}~{\text {membership}} \end{aligned}$$
(11)
$$\begin{aligned} U_{k}^{\nu }= & {} U_{k}^{\mu },~~~~L_{k}^{\nu }=L_{k}^{\mu }+ t_{k} ~~~~ \text {for}~\text {falsity}~\text {membership}, \end{aligned}$$
(12)

where \(s_{k}\) and \(t_{k} \in (0,1)\) are predetermined real numbers prescribed by the decision-makers.
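As an illustration, the bound construction (9)–(12) can be transcribed directly. The sketch below is a minimal Python helper with invented payoff values; it assumes the objective has already been evaluated at the individual optima of all objectives (one payoff-table row):

```python
def neutrosophic_bounds(values, s_k, t_k):
    """Bounds (9)-(12) for one objective, given its values at the
    individual optima of all objectives (hypothetical payoff-table row)."""
    U, L = max(values), min(values)   # eq. (9)
    truth = (L, U)                    # (10): L^mu = L_k,        U^mu = U_k
    indet = (L, L + s_k)              # (11): L^lam = L^mu,      U^lam = L^mu + s_k
    falsity = (L + t_k, U)            # (12): L^nu = L^mu + t_k, U^nu = U^mu
    return truth, indet, falsity

# invented payoff values of one objective at two individual optima
truth, indet, falsity = neutrosophic_bounds([12.0, 18.0], s_k=0.4, t_k=0.2)
print(truth, indet, falsity)
```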

  • Linear type membership functions In general, the most widely used membership function is the linear one, owing to its simple structure and more straightforward implications. The linear membership function contemplates a constant rate of satisfaction toward an objective. The linear-type truth \(\mu ^{L}_{k}(O_{k}(x))\), indeterminacy \(\lambda ^{L}_{k}(O_{k}(x))\), and falsity \(\nu ^{L}_{k}(O_{k}(x))\) membership functions under the neutrosophic environment can be furnished as follows:

    $$\begin{aligned} \mu ^{L}_{k}(O_{k}(x))= & {} \left\{ \begin{array}{ll} 1 &{} {\text {if}}~~ O_{k}(x) \le L_{k}^{\mu }\\ \frac{U_{k}^{\mu } -O_{k}(x)}{U_{k}^{\mu } - L_{k}^{\mu }} &{} {\text {if}}~~ L_{k}^{\mu } \le O_{k}(x) \le U_{k}^{\mu }\\ 0 &{} {\text {if}}~~ O_{k}(x) \ge U_{k}^{\mu } \end{array} \right. \end{aligned}$$
    (13)
    $$\begin{aligned} \lambda ^{L}_{k}(O_{k}(x))= & {} \left\{ \begin{array}{ll} 1 &{} {\text {if}}~~ O_{k}(x) \le L_{k}^{\lambda }\\ \frac{U_{k}^{\lambda } -O_{k}(x)}{U_{k}^{\lambda } - L_{k}^{\lambda }} &{} {\text {if}}~~ L_{k}^{\lambda } \le O_{k}(x) \le U_{k}^{\lambda }\\ 0 &{} {\text {if}}~~ O_{k}(x) \ge U_{k}^{\lambda } \end{array} \right. \end{aligned}$$
    (14)
    $$\begin{aligned} \nu ^{L}_{k}(O_{k}(x))= & {} \left\{ \begin{array}{ll} 0 &{} {\text {if}}~~ O_{k}(x) \le L_{k}^{\nu }\\ \frac{O_{k}(x)- L_{k}^{\nu }}{U_{k}^{\nu } - L_{k}^{\nu }} &{} {\text {if}}~~ L_{k}^{\nu } \le O_{k}(x) \le U_{k}^{\nu }\\ 1 &{} {\text {if}}~~ O_{k}(x) \ge U_{k}^{\nu }, \end{array} \right. \end{aligned}$$
    (15)

    where \(L_{k}^{(.)} \ne U_{k}^{(.)}\) for all objective functions k.

  • Exponential type membership functions The exponential-type truth \(\mu ^{E}_{k}(O_{k}(x))\), indeterminacy \(\lambda ^{E}_{k}(O_{k}(x))\), and falsity \(\nu ^{E}_{k}(O_{k}(x))\) membership functions under the neutrosophic environment can be stated as follows:

    $$\begin{aligned} \mu ^{E}_{k}(O_{k}(x))= \left\{ \begin{array}{ll} 1 &{} {\text {if}}~~ O_{k}(x) \le L_{k}^{\mu }\\ \dfrac{e^{-d_{k} \left( \frac{O_{k}(x)- L_{k}^{\mu }}{U_{k}^{\mu } - L_{k}^{\mu }}\right) } - e^{-d_{k}}}{1- e^{-d_{k}}} &{} {\text {if}}~~ L_{k}^{\mu } \le O_{k}(x) \le U_{k}^{\mu }\\ 0 &{} {\text {if}}~~ O_{k}(x) \ge U_{k}^{\mu } \end{array} \right. \end{aligned}$$
    (16)
    $$\begin{aligned} \begin{array}{ll} \lambda ^{E}_{k}(O_{k}(x))= \left\{ \begin{array}{ll} 1 &{} {\text {if}}~~ O_{k}(x) \le L_{k}^{\lambda }\\ \dfrac{e^{-d_{k} \left( \frac{O_{k}(x)- L_{k}^{\lambda }}{U_{k}^{\lambda } - L_{k}^{\lambda }}\right) } - e^{-d_{k}}}{1- e^{-d_{k}}} &{} {\text {if}}~~ L_{k}^{\lambda } \le O_{k}(x) \le U_{k}^{\lambda }\\ 0 &{} {\text {if}}~~ O_{k}(x) \ge U_{k}^{\lambda } \end{array} \right. \end{array}\end{aligned}$$
    (17)
    $$\begin{aligned} \begin{array}{ll} \nu ^{E}_{k}(O_{k}(x))= \left\{ \begin{array}{ll} 0 &{} {\text {if}}~~ O_{k}(x) \le L_{k}^{\nu }\\ \dfrac{e^{-d_{k} \left( \frac{U_{k}^{\nu } -O_{k}(x)}{U_{k}^{\nu } - L_{k}^{\nu }}\right) } - e^{-d_{k}}}{1- e^{-d_{k}}} &{} {\text {if}}~~ L_{k}^{\nu } \le O_{k}(x) \le U_{k}^{\nu }\\ 1 &{} {\text {if}}~~ O_{k}(x) \ge U_{k}^{\nu }, \end{array} \right. \end{array}\nonumber \\ \end{aligned}$$
    (18)

    where \(d_{k}\) measures the degree of vagueness (shape parameter) and is assigned by the decision-makers.

  • Hyperbolic-type membership functions A hyperbolic membership function shows flexible characteristic behavior with respect to the objective function. The hyperbolic-type truth \(\mu ^{H}_{k}(O_{k}(x))\), indeterminacy \(\lambda ^{H}_{k}(O_{k}(x))\), and falsity \(\nu ^{H}_{k}(O_{k}(x))\) membership functions under the neutrosophic environment can be depicted as follows:

    $$\begin{aligned}&\mu ^{H}_{k}(O_{k}(x))\nonumber \\&\quad = \left\{ \begin{array}{ll} 1 &{} {\text {if}}~~ O_{k}(x) \le L_{k}^{\mu }\\ \dfrac{1}{2} \left[ 1+ \text {tanh} \left( \theta _{k} \left( \dfrac{U^{\mu }_{k}+L^{\mu }_{k}}{2} - O_{k}(x)\right) \right) \right] &{} {\text {if}}~~ L_{k}^{\mu } \le O_{k}(x) \le U_{k}^{\mu }\\ 0 &{} {\text {if}}~~ O_{k}(x) \ge U_{k}^{\mu } \end{array} \right. \end{aligned}$$
    (19)
    $$\begin{aligned}&\lambda ^{H}_{k}(O_{k}(x))\nonumber \\&\quad = \left\{ \begin{array}{ll} 1 &{} {\text {if}}~~ O_{k}(x) \le L_{k}^{\lambda }\\ \dfrac{1}{2} \left[ 1+ \text {tanh} \left( \theta _{k} \left( \dfrac{U^{\lambda }_{k}+L^{\lambda }_{k}}{2} - O_{k}(x)\right) \right) \right] &{} {\text {if}}~~ L_{k}^{\lambda } \le O_{k}(x) \le U_{k}^{\lambda }\\ 0 &{} {\text {if}}~~ O_{k}(x) \ge U_{k}^{\lambda } \end{array} \right. \end{aligned}$$
    (20)
    $$\begin{aligned}&\nu ^{H}_{k}(O_{k}(x))\nonumber \\&\quad = \left\{ \begin{array}{ll} 0 &{} {\text {if}}~~ O_{k}(x) \le L_{k}^{\nu }\\ \dfrac{1}{2} \left[ 1+ \text {tanh} \left( \theta _{k} \left( O_{k}(x) - \dfrac{U^{\nu }_{k}+L^{\nu }_{k}}{2} \right) \right) \right] &{} {\text {if}}~~ L_{k}^{\nu } \le O_{k}(x) \le U_{k}^{\nu }\\ 1 &{} {\text {if}}~~ O_{k}(x) \ge U_{k}^{\nu }, \end{array} \right. \nonumber \\ \end{aligned}$$
    (21)

    where \(\theta _{k}=\frac{6}{U_{k}-L_{k}},~ \forall ~k=1,2, \ldots , K\).
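To make the piecewise definitions concrete, a minimal Python transcription of the linear ((13)–(15)) and hyperbolic ((19), (21)) cases follows. Here `L` and `U` stand for the type-specific bounds \(L^{(\cdot )}_{k}\), \(U^{(\cdot )}_{k}\); the indeterminacy functions reuse the truth-shaped code with the indeterminacy bounds. This is only an illustrative sketch, not part of the original formulation:

```python
import math

def mu_linear(O, L, U):
    """Linear truth (13): 1 below L, 0 above U, linearly decreasing between."""
    return 1.0 if O <= L else 0.0 if O >= U else (U - O) / (U - L)

def nu_linear(O, L, U):
    """Linear falsity (15): the mirror shape, increasing from 0 to 1."""
    return 0.0 if O <= L else 1.0 if O >= U else (O - L) / (U - L)

# The linear indeterminacy (14) has the same shape as (13); evaluate it
# with the indeterminacy bounds L^lam, U^lam:
lam_linear = mu_linear

def mu_hyperbolic(O, L, U):
    """Hyperbolic truth (19) with theta = 6 / (U - L)."""
    if O <= L:
        return 1.0
    if O >= U:
        return 0.0
    theta = 6.0 / (U - L)
    return 0.5 * (1.0 + math.tanh(theta * ((U + L) / 2.0 - O)))

def nu_hyperbolic(O, L, U):
    """Hyperbolic falsity (21): mirror image of (19)."""
    if O <= L:
        return 0.0
    if O >= U:
        return 1.0
    theta = 6.0 / (U - L)
    return 0.5 * (1.0 + math.tanh(theta * (O - (U + L) / 2.0)))
```

At the midpoint of the interval, the hyperbolic memberships evaluate to exactly 0.5 (since tanh(0) = 0), while near the bounds they flatten out, which is the flexible behavior mentioned above.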

Bellman and Zadeh [14] first introduced the idea of the fuzzy decision set. Later on, it has been immensely adopted and widely used by many researchers in various real-life applications. The fuzzy decision set can be expressed as follows:

$$\begin{aligned} D=O \cap C. \end{aligned}$$

Consequently, with the aid of above set, the neutrosophic decision set \(D_{N}\) can be expressed as follows:

$$\begin{aligned} D_{N}= \left( \cap _{k=1}^{K} O_{k}\right) \cap \left( \cap _{i=1}^{I} C_{i}\right) = (x,~\mu _{D}(x),~\lambda _{D}(x),~\nu _{D}(x)), \end{aligned}$$

where:

$$\begin{aligned} \mu _{D}(x)= & {} {\text {min}} \left\{ \begin{array}{ll} \mu _{O_{1}}(x), \mu _{O_{2}}(x), \ldots , \mu _{O_{k}}(x) \\ \mu _{C_{1}}(x), \mu _{C_{2}}(x), \ldots , \mu _{C_{i}}(x) \\ \end{array} \right\} \forall ~~ x \in X \\ \lambda _{D}(x)= & {} {\text {max}} \left\{ \begin{array}{ll} \lambda _{O_{1}}(x), \lambda _{O_{2}}(x), \ldots , \lambda _{O_{k}}(x) \\ \lambda _{C_{1}}(x), \lambda _{C_{2}}(x), \ldots , \lambda _{C_{i}}(x) \\ \end{array} \right\} \forall ~~ x \in X \\ \nu _{D}(x)= & {} {\text {max}} \left\{ \begin{array}{ll} \nu _{O_{1}}(x), \nu _{O_{2}}(x), \ldots , \nu _{O_{k}}(x) \\ \nu _{C_{1}}(x), \nu _{C_{2}}(x), \ldots , \nu _{C_{i}}(x) \\ \end{array} \right\} \forall ~~ x \in X, \end{aligned}$$

where \( \mu _{D}(x), \lambda _{D}(x)\), and \(\nu _{D}(x)\) are the truth, indeterminacy, and falsity membership functions of the neutrosophic decision set \(D_{N}\), respectively.
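These aggregation operators are a direct min/max over the objective and constraint memberships; a one-to-one Python transcription (with made-up membership values, purely for illustration) reads:

```python
# truth, indeterminacy and falsity degrees of the objectives and
# constraints at some point x (invented values for illustration)
mu_vals  = [0.7, 0.6, 0.8]
lam_vals = [0.2, 0.3, 0.1]
nu_vals  = [0.1, 0.25, 0.2]

# aggregation defining the neutrosophic decision set D_N:
mu_D  = min(mu_vals)    # truth: worst-case degree of belongingness
lam_D = max(lam_vals)   # indeterminacy: worst-case neutral degree
nu_D  = max(nu_vals)    # falsity: worst-case degree of non-belongingness
print(mu_D, lam_D, nu_D)
```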

By utilizing the concept of Bellman and Zadeh [14], we intend to maximize the minimum truth (degree of belongingness) and minimize the maximum indeterminacy (belongingness up to some extent) and falsity (degree of non-belongingness) degrees simultaneously. Therefore, an overall achievement function can be defined in terms of the differences of the truth, indeterminacy, and falsity degrees to reach each objective's optimal solution under a neutrosophic environment. The mathematical expression for the achievement function is defined as follows (22):

$$\begin{aligned} \begin{array}{ll} &{} {\text {Max min}}_{k=1,2,3,\ldots ,K} ~~ \mu ^{(\cdot )}_{k}(O_{k}(x)) \\ &{} {\text {Min max}}_{k=1,2,3, \ldots ,K} ~~ \lambda ^{(\cdot )}_{k}(O_{k}(x))\\ &{} {\text {Min max}}_{k=1,2,3, \ldots ,K} ~~ \nu ^{(\cdot )}_{k}(O_{k}(x))\\ &{} \text {subject~to}\\ &{}\sum _{j=1}^{J} a^{'}_{ij} x_{j} \ge b^{'}_{i},~~i=1,2, \ldots , I_{1},\\ &{}\sum _{j=1}^{J} a^{'}_{ij} x_{j} \le b^{'}_{i},~~i=I_{1}+1,I_{1}+2, \ldots , I_{2},\\ &{}\sum _{j=1}^{J} a^{'}_{ij} x_{j} = b^{'}_{i},~~i=I_{2}+1,I_{2}+2, \ldots , I.\\ &{}x_{j} \ge 0,~~j=1,2, \ldots , J, \end{array} \end{aligned}$$
(22)

where the superscript \((\cdot )\) in the truth \(\mu ^{(\cdot )}_{k}(O_{k}(x))\), indeterminacy \(\lambda ^{(\cdot )}_{k}(O_{k}(x))\), and falsity \(\nu ^{(\cdot )}_{k}(O_{k}(x))\) membership functions represents the type of membership function, namely linear (L), exponential (E), or hyperbolic (H).

Using the auxiliary variables \(\alpha ,~\beta \) and \(\gamma \), the problem (22) is converted into the following problem (23):

$$\begin{aligned} \begin{array}{ll} &{}{\text {Max}}~~ \left( \alpha -\beta -\gamma \right) \\ &{} \text {subject~to}\\ &{} \mu ^{(\cdot )}_{k}(O_{k}(x)) \ge \alpha ,\\ &{}\lambda ^{(\cdot )}_{k}(O_{k}(x)) \le \beta ,\\ &{}\nu ^{(\cdot )}_{k}(O_{k}(x)) \le \gamma ,\\ &{} \alpha \ge \beta ,~~ 0 \le \alpha +\beta +\gamma \le 3,\\ &{} \alpha ,\beta , \gamma \in \left[ 0,1\right] \\ &{}\sum _{j=1}^{J} a^{'}_{ij} x_{j} \ge b^{'}_{i},~~i=1,2, \ldots , I_{1},\\ &{}\sum _{j=1}^{J} a^{'}_{ij} x_{j} \le b^{'}_{i},~~i=I_{1}+1,I_{1}+2, \ldots , I_{2},\\ &{}\sum _{j=1}^{J} a^{'}_{ij} x_{j} = b^{'}_{i},~~i=I_{2}+1,I_{2}+2, \ldots , I.\\ &{}x_{j} \ge 0,~~j=1,2, \ldots , J. \end{array} \end{aligned}$$
(23)

To solve multiobjective programming problems, Torabi and Hassini [35] presented a single-phase solution method called the TH method. In the Torabi and Hassini [35] approach, the achievement function is represented by a convex combination of the lower bound for the satisfaction degree of the objectives \((\alpha )\) and the weighted sum of the achievement degrees \(\left( \mu ^{(\cdot )}_{k}(O_{k}(x))\right) \) to ensure an adjustably balanced compromise solution. Of particular interest, the TH method [35] deals only with each objective function's satisfaction degree and does not consider the indeterminacy and falsity degrees, which are also an important part of decision-making processes. Thus, the TH method does not consider the degrees of neutrality and dissatisfaction in real-life decision-making scenarios. To integrate the neutrality and dissatisfaction degrees into the TH method [35], we have re-defined the achievement function and consequently propose a novel interactive neutrosophic programming approach (INPA) to obtain the optimal compromise solution. The proposed INPA can be considered an extended version of the TH method [35]. Therefore, the proposed INPA (24) is an equivalent modeling and optimizing approach for solving the crisp MOOP (7). Thus, the problem (23) can be transformed into the equivalent proposed INPA (24), summarized as follows:

$$\begin{aligned} \begin{array}{ll} (INPA)&{}{\text {Max}}~\psi (x)= \eta \left( \alpha -\beta -\gamma \right) + \left( 1-\eta \right) \sum _{k=1}^{K} \\ &{}\left( \mu ^{(\cdot )}_{k}(O_{k}(x)) -\lambda ^{(\cdot )}_{k}(O_{k}(x))- \nu ^{(\cdot )}_{k}(O_{k}(x)) \right) \\ &{}\text {subject~to}\\ &{} \mu ^{(\cdot )}_{k}(O_{k}(x)) \ge \alpha ,\\ &{}\lambda ^{(\cdot )}_{k}(O_{k}(x)) \le \beta ,\\ &{}\nu ^{(\cdot )}_{k}(O_{k}(x)) \le \gamma ,\\ &{} \alpha \ge \beta ,~~ 0 \le \alpha +\beta +\gamma \le 3,\\ &{} \alpha ,\beta , \gamma \in \left[ 0,1\right] \\ &{}\sum _{j=1}^{J} a^{'}_{ij} x_{j} \ge b^{'}_{i},~~i=1,2, \ldots , I_{1},\\ &{}\sum _{j=1}^{J} a^{'}_{ij} x_{j} \le b^{'}_{i},~~i=I_{1}+1,I_{1}+2, \ldots , I_{2},\\ &{}\sum _{j=1}^{J} a^{'}_{ij} x_{j} = b^{'}_{i},~~i=I_{2}+1,I_{2}+2, \ldots , I.\\ &{}x_{j} \ge 0,~~j=1,2, \ldots , J. \end{array} \end{aligned}$$
(24)

If we assign individual weights \((w_{k})\) to each objectives, then the problem (23) can be transformed into an equivalent Weighted INPA (WINPA) (25) and can be summarized as follows:

$$\begin{aligned} (WINPA)&{\text {Max}}~\psi (x)= \eta \left( \alpha -\beta - \gamma \right) + \left( 1-\eta \right) \nonumber \\&\sum _{k=1}^{K} w_{k} \left( \mu ^{(\cdot )}_{k}(O_{k}(x)) -\lambda ^{(\cdot )}_{k}(O_{k}(x))- \nu ^{(\cdot )}_{k}(O_{k}(x)) \right) \nonumber \\&\text {subject~to}\nonumber \\&\mu ^{(\cdot )}_{k}(O_{k}(x)) \ge \alpha ,\nonumber \\&\lambda ^{(\cdot )}_{k}(O_{k}(x)) \le \beta ,\nonumber \\&\nu ^{(\cdot )}_{k}(O_{k}(x)) \le \gamma ,\nonumber \\&\alpha \ge \beta ,~~ 0 \le \alpha +\beta +\gamma \le 3,\nonumber \\&\alpha ,\beta , \gamma , w_{k} \in \left[ 0,1\right] \nonumber \\&\sum _{j=1}^{J} a^{'}_{ij} x_{j} \ge b^{'}_{i},~~i=1,2, \ldots , I_{1},\nonumber \\&\sum _{j=1}^{J} a^{'}_{ij} x_{j} \le b^{'}_{i},~~i=I_{1}+1,I_{1}+2, \ldots , I_{2},\nonumber \\&\sum _{j=1}^{J} a^{'}_{ij} x_{j} = b^{'}_{i},~~i=I_{2}+1,I_{2}+2, \ldots , I.\nonumber \\&x_{j} \ge 0,~~j=1,2, \ldots , J, \end{aligned}$$
(25)

where \(\mu ^{(\cdot )}_{k}(O_{k}(x)),~\lambda ^{(\cdot )}_{k}(O_{k}(x))\), and \(\nu ^{(\cdot )}_{k}(O_{k}(x))\) represent the truth, indeterminacy, and falsity degrees of the kth objective function under the neutrosophic environment. Also, \(\alpha = \text {min} \left[ \mu ^{(\cdot )}_{k}(O_{k}(x))\right] ,~\beta = \text {max} \left[ \lambda ^{(\cdot )}_{k}(O_{k}(x))\right] \), and \(\gamma = \text {max} \left[ \nu ^{(\cdot )}_{k}(O_{k}(x))\right] \) denote the minimum satisfaction and the maximum neutrality and dissatisfaction degrees of the objectives, respectively. Thus, the proposed INPA (24) and WINPA (25) have a new achievement function, elicited as a convex combination of the difference among the bounds for the satisfaction and dissatisfaction degrees of the objectives \((\alpha - \beta -\gamma )\) and the sum of the differences between these achievement degrees \(\left( \mu ^{(\cdot )}_{k}(O_{k}(x)) -\lambda ^{(\cdot )}_{k}(O_{k}(x))- \nu ^{(\cdot )}_{k}(O_{k}(x)) \right) \), to ensure a balanced compromise solution. Here, \(\eta \) denotes the coefficient of compensation; it implicitly controls the overall satisfaction level of the objectives and the compromise achievement degrees among the objective functions. This means that the proposed INPA (24) and WINPA (25) can generate both unbalanced and balanced compromise solutions for a given problem, according to the decision-maker's preference, by tuning the value of the parameter \(\eta \).

Remark 1

In the current context, a greater value of \(\eta \) means that more weight is given to obtaining higher overall bounds for the truth, indeterminacy, and falsity degrees of the objectives \((\alpha - \beta - \gamma )\) and, consequently, more balanced compromise solutions. On the other hand, a smaller value of \(\eta \) means that more weight is given to obtaining a solution with a high total achievement degree, without attention paid to the satisfaction degrees of the individual objective functions.

Definition 16

A vector \(x^{*} \in X\) is said to be an optimal solution of the proposed INPA (24), or an efficient solution of the crisp MOOP (7), if and only if there does not exist any \(x \in X\), such that \(\mu _{k}(x) \ge \mu _{k}(x^{*})\), \(\lambda _{k}(x) \le \lambda _{k}(x^{*})\), and \(\nu _{k}(x) \le \nu _{k}(x^{*}), ~~\forall ~~k=1,2,3\).

Theorem 7

A unique optimal solution of proposed INPA (24) is also an efficient solution to the crisp MOOP (7).

Proof

Suppose that \(x^{*}\) is a unique optimal solution of the proposed INPA (24) which is not an efficient solution of the crisp MOOP (7). This means that there must be an efficient solution, say \(x^{**}\), of the crisp MOOP (7), such that \(\mu _{k}(x^{**}) \ge \mu _{k}(x^{*})\), \(\lambda _{k}(x^{**}) \le \lambda _{k}(x^{*})\), and \(\nu _{k}(x^{**}) \le \nu _{k}(x^{*}); ~~\forall ~~k=1,2, \ldots , K\), with at least one of these inequalities strict for some k. Thus, for the overall satisfaction level of the objective functions at the solutions \(x^{*}\) and \(x^{**}\), we would have \(\left( \alpha -\beta -\gamma \right) (x^{**}) \ge \left( \alpha -\beta -\gamma \right) (x^{*})\), and concerning the related objective values, we would have the following inequalities:

$$\begin{aligned} \psi (x^{*})&= \eta \left( \alpha -\beta -\gamma \right) (x^{*}) + \left( 1-\eta \right) \nonumber \\&\times \left[ \sum _{k=1}^{K} \left( \mu ^{(\cdot )}_{k}(O_{k}(x^{*}))-\lambda ^{(\cdot )}_{k}(O_{k}(x^{*})) - \nu ^{(\cdot )}_{k}(O_{k}(x^{*})) \right) \right] \\&< \eta \left( \alpha -\beta -\gamma \right) (x^{**}) + \left( 1-\eta \right) \nonumber \\&\times \left[ \sum _{k=1}^{K} \left( \mu ^{(\cdot )}_{k}(O_{k}(x^{**})) -\lambda ^{(\cdot )}_{k}(O_{k}(x^{**}))- \nu ^{(\cdot )}_{k}(O_{k}(x^{**})) \right) \right] \\&= \psi (x^{**}). \end{aligned}$$

Hence, we have arrived at a contradiction with the fact that \(x^{*}\) is the unique optimal solution of the proposed INPA (24). This completes the proof of Theorem 7.

Theorem 8

A unique optimal solution of proposed WINPA (25) is also an efficient solution to the crisp MOOP (7).

Proof

Suppose that \(x^{*}\) is a unique optimal solution of the proposed WINPA (25) which is not an efficient solution of the crisp MOOP (7). This means that there must be an efficient solution, say \(x^{**}\), of the crisp MOOP (7), such that \(\mu _{k}(x^{**}) \ge \mu _{k}(x^{*})\), \(\lambda _{k}(x^{**}) \le \lambda _{k}(x^{*})\), and \(\nu _{k}(x^{**}) \le \nu _{k}(x^{*}); ~~\forall ~~k=1,2, \ldots , K\). Also, there exists \( t~|~ \mu _{t}(x^{**}) > \mu _{t}(x^{*})\), \(\lambda _{t}(x^{**}) < \lambda _{t}(x^{*})\), and \(\nu _{t}(x^{**}) < \nu _{t}(x^{*})\) for at least one t. Thus, for the overall satisfaction level of the objective functions at the solutions \(x^{*}\) and \(x^{**}\), we would have \(\left( \alpha -\beta -\gamma \right) (x^{**}) \ge \left( \alpha -\beta -\gamma \right) (x^{*})\), and concerning the related objective values, we would have the following inequalities:

$$\begin{aligned} \psi (x^{*})&= \eta \left( \alpha -\beta -\gamma \right) (x^{*}) \\&\quad + \left( 1-\eta \right) \sum _{k} w_{k} \left( \mu ^{(\cdot )}_{k}(O_{k}(x^{*}))-\lambda ^{(\cdot )}_{k}(O_{k}(x^{*})) - \nu ^{(\cdot )}_{k}(O_{k}(x^{*})) \right) \\&= \eta \left( \alpha -\beta -\gamma \right) (x^{*}) + \left( 1-\eta \right) \\&\left[ \sum _{k \ne t} w_{k} \left( \mu ^{(\cdot )}_{k}(O_{k}(x^{*}))-\lambda ^{(\cdot )}_{k}(O_{k}(x^{*})) - \nu ^{(\cdot )}_{k}(O_{k}(x^{*})) \right) \right. \\&\quad \left. + w_{t} \left( \mu ^{(\cdot )}_{t}(O_{t}(x^{*}))-\lambda ^{(\cdot )}_{t}(O_{t}(x^{*})) - \nu ^{(\cdot )}_{t}(O_{t}(x^{*})) \right) \right] \\&< \eta \left( \alpha -\beta -\gamma \right) (x^{**}) + \left( 1-\eta \right) \\&\left[ \sum _{k \ne t} w_{k} \left( \mu ^{(\cdot )}_{k}(O_{k}(x^{**}))-\lambda ^{(\cdot )}_{k}(O_{k}(x^{**})) - \nu ^{(\cdot )}_{k}(O_{k}(x^{**})) \right) \right. \\&\quad \left. + w_{t} \left( \mu ^{(\cdot )}_{t}(O_{t}(x^{**}))-\lambda ^{(\cdot )}_{t}(O_{t}(x^{**})) - \nu ^{(\cdot )}_{t}(O_{t}(x^{**})) \right) \right] \\&= \psi (x^{**}). \end{aligned}$$

Hence, we have arrived at a contradiction with the fact that \(x^{*}\) is the unique optimal solution of the proposed WINPA (25). This completes the proof of Theorem 8.

Definition 17

[8] (Distance function) To measure the performance of various approaches, the neutrosophic distance function \(D^{N}(x) = \Big [ \sum _{k=1}^{K} \Big \{1-\big (\mu ^{(\cdot )}_{k}(O_{k}(x)) -\lambda ^{(\cdot )}_{k}(O_{k}(x))- \nu ^{(\cdot )}_{k}(O_{k}(x)) \big )\Big \}^{2}\Big ]^{\frac{1}{2}}\) is presented to select a priority solution. The smaller the value of the distance function, the better the solution.
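The distance function of Definition 17 is straightforward to compute. A minimal Python helper follows; the membership values in the usage example are invented purely for illustration:

```python
import math

def neutrosophic_distance(mu, lam, nu):
    """D^N(x) of Definition 17, computed from the per-objective truth (mu),
    indeterminacy (lam) and falsity (nu) degrees; smaller is better."""
    return math.sqrt(sum((1.0 - (m - l - n)) ** 2
                         for m, l, n in zip(mu, lam, nu)))

# an ideal solution (full truth, zero indeterminacy and falsity
# for every objective) attains distance 0
print(neutrosophic_distance([1.0, 1.0], [0.0, 0.0], [0.0, 0.0]))
```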

Linear-type membership functions approach (LTMFA)

Assume that \(\mu ^{L}_{k}(O_{k}(x)) \ge \alpha \), \(\lambda ^{L}_{k}(O_{k}(x)) \le \beta \) and \(\nu ^{L}_{k}(O_{k}(x)) \le \gamma \), for all k.

Using auxiliary parameters \(\alpha ,~\beta \) and \(\gamma \), the problem (24) can be transformed into the following problem (26):

$$\begin{aligned} \begin{array}{ll} (LTMFA)&{} {\text {Max}}~\psi (x)= \eta \left( \alpha -\beta -\gamma \right) + \left( 1-\eta \right) \sum _{k=1}^{K} \\ &{}\quad \times \left( \mu ^{(L)}_{k}(O_{k}(x)) -\lambda ^{(L)}_{k}(O_{k}(x))- \nu ^{(L)}_{k}(O_{k}(x)) \right) \\ &{} \text {subject~to}\\ &{}O_{k}(x) +( U_{k}^{\mu } - L_{k}^{\mu }) \alpha \le U_{k}^{\mu }, \\ &{}O_{k}(x) -( U_{k}^{\lambda } - L_{k}^{\lambda }) \beta \le L_{k}^{\lambda }, \\ &{}O_{k}(x) -( U_{k}^{\nu } - L_{k}^{\nu }) \gamma \le L_{k}^{\nu }, \\ &{} \alpha \ge \beta ,~~\alpha \ge \gamma , ~~ \alpha + \beta + \gamma \le 3 \\ &{} \alpha ,~\beta ,~\gamma \in (0,1)\\ &{} \text {all the constraints of (7)}. \end{array} \end{aligned}$$
(26)

Remark 2

In the LTMFA (26), we try to determine a solution that maximizes the minimum truth degree and minimizes the maximum indeterminacy and falsity degrees over all objectives simultaneously, in order to attain the optimal compromise solution.
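To illustrate the mechanics of (26), the following self-contained sketch solves a small invented two-objective instance (Min \(O_1 = x_1\), Min \(O_2 = x_2\) subject to \(x_1 + 2x_2 \ge 1\), \(2x_1 + x_2 \ge 1\), \(x \ge 0\), so that \(L_k = 0\), \(U_k = 1\)) by brute-force grid search. All data, and the choices \(s_k = 0.9\), \(t_k = 0.1\), \(\eta = 0.5\), are hypothetical; a real application would hand (26) to an LP solver, since every constraint is linear:

```python
# Toy LTMFA instance: Min O1 = x1, Min O2 = x2
# s.t. x1 + 2*x2 >= 1, 2*x1 + x2 >= 1, x1, x2 >= 0.
# Individual optima (0, 1) and (1, 0) give L = 0, U = 1 for both objectives.
L, U = 0.0, 1.0
s, t = 0.9, 0.1        # decision-maker parameters s_k, t_k of (11)-(12)
eta = 0.5              # compensation coefficient

def feasible(x1, x2, eps=1e-9):
    return x1 + 2 * x2 >= 1 - eps and 2 * x1 + x2 >= 1 - eps

def evaluate(x1, x2):
    """Objective of (26) with alpha, beta, gamma at their binding values."""
    objs = (x1, x2)
    alpha = min((U - O) / (U - L) for O in objs)                    # truth
    beta = max(max((O - L) / s, 0.0) for O in objs)                 # indeterminacy
    gamma = max(max((O - L - t) / (U - L - t), 0.0) for O in objs)  # falsity
    if not (alpha >= beta and alpha >= gamma):
        return None        # violates the alpha >= beta, alpha >= gamma rows
    # compensatory sum, with memberships transcribed from the constraint rows
    terms = sum((U - O) / (U - L) - (O - L) / s
                - (O - L - t) / (U - L - t) for O in objs)
    return eta * (alpha - beta - gamma) + (1 - eta) * terms

grid = [i / 300 for i in range(301)]
best = max((psi, (a, b)) for a in grid for b in grid
           if feasible(a, b) and (psi := evaluate(a, b)) is not None)
print(best)    # balanced compromise near x1 = x2 = 1/3
```

The grid search lands on the balanced compromise \(x_1 = x_2 = 1/3\), where both objectives attain the same truth, indeterminacy, and falsity degrees.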

Theorem 9

A unique optimal solution of problem (26) (LTMFA) is also an efficient solution for the problem (7).

Proof

Suppose that \(\left( {\bar{x}}, {\bar{\alpha }}, {\bar{\beta }}, {\bar{\gamma }} \right) \) is a unique optimal solution of the problem (26) (LTMFA). Then, \(\left( {\bar{\alpha }}- {\bar{\beta }}- {\bar{\gamma }} \right) > \left( \alpha - \beta - \gamma \right) \) for any other \(\left( x, \alpha , \beta , \gamma \right) \) feasible to the problem (26) (LTMFA). On the contrary, assume that \(\left( {\bar{x}}, {\bar{\alpha }}, {\bar{\beta }}, {\bar{\gamma }} \right) \) is not an efficient solution of the problem (7). Then, there exists \(x^{*} \left( x^{*} \ne {\bar{x}} \right) \) feasible to the problem (7), such that \(O_{k}(x^{*}) \le O_{k}({\bar{x}})\) for all \(k=1,2, \ldots , K\) and \(O_{k}(x^{*}) < O_{k}({\bar{x}})\) for at least one k.

Therefore, we have \( \frac{O_{k}(x^{*})-L_{k}}{U_{k}-L_{k}} \le \frac{O_{k}({\bar{x}})-L_{k}}{U_{k}-L_{k}}\) for all \(k=1,2, \ldots , K\) and \( \frac{O_{k}(x^{*})-L_{k}}{U_{k}-L_{k}} < \frac{O_{k}({\bar{x}})-L_{k}}{U_{k}-L_{k}}\) for at least one k.

Hence, \(\underset{k}{\text {max}} \left( \frac{O_{k}(x^{*})-L_{k}}{U_{k}-L_{k}} \right) \le (<) ~ \underset{k}{\text {max}} \left( \frac{O_{k}({\bar{x}})-L_{k}}{U_{k}-L_{k}} \right) \).

Suppose that \(\gamma ^{*}=\underset{k}{\text {max}} \left( \frac{O_{k}(x^{*})-L_{k}}{U_{k}-L_{k}} \right) \), and then, \(\gamma ^{*} \le (<) ~ {\bar{\gamma }}\).

Also, consider that \(\beta ^{*}=\underset{k}{\text {max}} \left( \frac{O_{k}(x^{*})-L_{k}}{U_{k}-L_{k}} \right) \), and then, \(\beta ^{*} \le (<) ~ {\bar{\beta }}\).

In the same manner, we have \( \frac{U_{k}-O_{k}(x^{*})}{U_{k}-L_{k}} \ge \frac{U_{k}-O_{k}({\bar{x}})}{U_{k}-L_{k}}\) for all \(k=1,2, \ldots , K\) and \( \frac{U_{k}-O_{k}(x^{*})}{U_{k}-L_{k}} > \frac{U_{k}-O_{k}({\bar{x}})}{U_{k}-L_{k}}\) for at least one k.

Thus, \(\underset{k}{\text {min}} \left( \frac{U_{k}-O_{k}(x^{*})}{U_{k}-L_{k}} \right) \ge (>) ~ \underset{k}{\text {min}} \left( \frac{U_{k}-O_{k}({\bar{x}})}{U_{k}-L_{k}} \right) \).

Assume that \( \alpha ^{*}=\underset{k}{\text {min}} \left( \frac{U_{k}-O_{k}(x^{*})}{U_{k}-L_{k}} \right) \); this gives \(\left( {\bar{\alpha }}- {\bar{\beta }}- {\bar{\gamma }} \right) < \left( \alpha ^{*}- \beta ^{*}- \gamma ^{*} \right) \), which means that \(\left( {\bar{x}}, {\bar{\alpha }}, {\bar{\beta }}, {\bar{\gamma }} \right) \) is not a unique optimal solution. Thus, we have arrived at a contradiction with the fact that \(\left( {\bar{x}}, {\bar{\alpha }}, {\bar{\beta }}, {\bar{\gamma }} \right) \) is the unique optimal solution of the problem (26) (LTMFA). Therefore, it is also an efficient solution of the problem (7). This completes the proof of Theorem 9.

Exponential-type membership functions approach (ETMFA)

We assume that \(\mu ^{E}_{k}(O_{k}(x)) \ge \alpha \), \(\lambda ^{E}_{k}(O_{k}(x)) \le \beta \), and \(\nu ^{E}_{k}(O_{k}(x)) \le \gamma \), for all k. Using auxiliary parameters \(\alpha ,~\beta \) and \(\gamma \), the problem (24) can be converted into the following problem (27):

$$\begin{aligned} \begin{array}{ll} (ETMFA)&{} {\text {Max}}~\psi (x)= \eta \left( \alpha -\beta -\gamma \right) + \left( 1-\eta \right) \\ &{}\qquad \sum _{k=1}^{K} \left( \mu ^{(E)}_{k}(O_{k}(x))\right. \\ &{}\qquad \left. -\lambda ^{(E)}_{k}(O_{k}(x))- \nu ^{(E)}_{k}(O_{k}(x)) \right) \\ &{} \text {subject~to}\\ &{} \dfrac{e^{-d_{k} \left( \frac{O_{k}(x)- L_{k}^{\mu }}{U_{k}^{\mu } - L_{k}^{\mu }}\right) } - e^{-d_{k}}}{1- e^{-d_{k}}} \ge \alpha , \\ &{} \dfrac{e^{-d_{k} \left( \frac{U_{k}^{\lambda } -O_{k}(x)}{U_{k}^{\lambda } - L_{k}^{\lambda }}\right) } - e^{-d_{k}}}{1- e^{-d_{k}}} \le \beta , \\ &{} \dfrac{e^{-d_{k} \left( \frac{U_{k}^{\nu } -O_{k}(x)}{U_{k}^{\nu } - L_{k}^{\nu }}\right) } - e^{-d_{k}}}{1- e^{-d_{k}}} \le \gamma , \\ &{} \alpha \ge \beta ,~~\alpha \ge \gamma , ~~ \alpha + \beta + \gamma \le 3 \\ &{} \alpha ,~\beta ,~\gamma \in (0,1)\\ &{} \text {all the constraints of (7)}. \end{array} \end{aligned}$$
(27)

Remark 3

If \(d_{k} \, \rightarrow \, 0\), then the exponential-type membership function reduces to the linear-type membership function.
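Remark 3 can be checked numerically. The sketch below evaluates the exponential truth membership (16) on the normalized objective \(z=(O_{k}(x)-L^{\mu }_{k})/(U^{\mu }_{k}-L^{\mu }_{k})\) and confirms that, for a very small shape parameter, it approaches the linear value \(1-z\):

```python
import math

def mu_exponential(z, d):
    """Exponential truth membership (16) on the normalized objective z."""
    return (math.exp(-d * z) - math.exp(-d)) / (1.0 - math.exp(-d))

# as d -> 0, the value converges to the linear membership 1 - z
for z in (0.2, 0.5, 0.8):
    print(z, mu_exponential(z, d=1e-6), 1 - z)
```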

Theorem 10

A unique optimal solution of problem (27) (ETMFA) is also an efficient solution for the problem (7).

Proof

The proof proceeds by contradiction.

Suppose that \(\left( {\bar{x}}, {\bar{\alpha }}, {\bar{\beta }}, {\bar{\gamma }} \right) \) is a unique optimal solution of the problem (27) (ETMFA) which is not an efficient solution of the problem (7). Then, there exists \(x^{*} \left( x^{*} \ne {\bar{x}} \right) \) feasible to the problem (7), such that \(O_{k}(x^{*}) \le O_{k}({\bar{x}})\) for all \(k=1,2, \ldots , K\) and \(O_{k}(x^{*}) < O_{k}({\bar{x}})\) for at least one k.

Consequently, we have \( \frac{O_{k}(x^{*})-L_{k}}{U_{k}-L_{k}} \le \frac{O_{k}({\bar{x}})-L_{k}}{U_{k}-L_{k}}\) for all \(k=1,2, \ldots , K\) and \( \frac{O_{k}(x^{*})-L_{k}}{U_{k}-L_{k}} < \frac{O_{k}({\bar{x}})-L_{k}}{U_{k}-L_{k}}\) for at least one k.

Hence, we have:

\( \frac{e^{-d_{k} \left( \frac{O_{k}(x^{*})-L_{k}}{U_{k}-L_{k}}\right) } - e^{-d_{k}}}{1- e^{-d_{k}}} \ge \frac{e^{-d_{k} \left( \frac{O_{k}({\bar{x}})-L_{k}}{U_{k}-L_{k}} \right) } - e^{-d_{k}}}{1- e^{-d_{k}}} \) for all \(k=1,2, \ldots , K\) and

\( \frac{e^{-d_{k} \left( \frac{O_{k}(x^{*})-L_{k}}{U_{k}-L_{k}}\right) } - e^{-d_{k}}}{1- e^{-d_{k}}} > \frac{e^{-d_{k} \left( \frac{O_{k}({\bar{x}})-L_{k}}{U_{k}-L_{k}} \right) } - e^{-d_{k}}}{1- e^{-d_{k}}} \) for at least one k.

Thus, \(\underset{k}{\text {min}} \left( \frac{e^{-d_{k} \left( \frac{O_{k}(x^{*})-L_{k}}{U_{k}-L_{k}}\right) } - e^{-d_{k}}}{1- e^{-d_{k}}} \right) \ge (>) ~ \underset{k}{\text {min}} \left( \frac{e^{-d_{k} \left( \frac{O_{k}({\bar{x}})-L_{k}}{U_{k}-L_{k}} \right) } - e^{-d_{k}}}{1- e^{-d_{k}}} \right) \).

If \(\alpha ^{*}=\underset{k}{\text {min}} \left( \frac{e^{-d_{k} \left( \frac{O_{k}(x^{*})-L_{k}}{U_{k}-L_{k}}\right) } - e^{-d_{k}}}{1- e^{-d_{k}}} \right) \), then \(\alpha ^{*} \ge (>)~ {\bar{\alpha }}\).

Similarly, we have \( \frac{U_{k}-O_{k}(x^{*})}{U_{k}-L_{k}} \ge \frac{U_{k}-O_{k}({\bar{x}})}{U_{k}-L_{k}}\) for all \(k=1,2, \ldots , K\) and \( \frac{U_{k}-O_{k}(x^{*})}{U_{k}-L_{k}} > \frac{U_{k}-O_{k}({\bar{x}})}{U_{k}-L_{k}}\) for at least one k.

Consequently, it gives:

\( \frac{e^{-d_{k} \left( \frac{U_{k}-O_{k}(x^{*})}{U_{k}-L_{k}}\right) } - e^{-d_{k}}}{1- e^{-d_{k}}} \le \frac{e^{-d_{k} \left( \frac{U_{k}-O_{k}({\bar{x}})}{U_{k}-L_{k}} \right) } - e^{-d_{k}}}{1- e^{-d_{k}}} \) for all \(k=1,2, \ldots , K\) and

\( \frac{e^{-d_{k} \left( \frac{U_{k}-O_{k}(x^{*})}{U_{k}-L_{k}}\right) } - e^{-d_{k}}}{1- e^{-d_{k}}} < \frac{e^{-d_{k} \left( \frac{U_{k}-O_{k}({\bar{x}})}{U_{k}-L_{k}} \right) } - e^{-d_{k}}}{1- e^{-d_{k}}} \) for at least one k.

Hence, \(\underset{k}{\text {max}} \left( \frac{e^{-d_{k} \left( \frac{U_{k}-O_{k}(x^{*})}{U_{k}-L_{k}}\right) } - e^{-d_{k}}}{1- e^{-d_{k}}} \right) \le (<) ~ \underset{k}{\text {max}} \left( \frac{e^{-d_{k} \left( \frac{U_{k}-O_{k}({\bar{x}})}{U_{k}-L_{k}} \right) } - e^{-d_{k}}}{1- e^{-d_{k}}} \right) \).

Assuming \(\beta ^{*}=\underset{k}{\text {max}} \left( \frac{e^{-d_{k} \left( \frac{U_{k}-O_{k}(x^{*})}{U_{k}-L_{k}}\right) } - e^{-d_{k}}}{1- e^{-d_{k}}} \right) \), we have \(\beta ^{*} \le (<)~ {\bar{\beta }}\).

Again, by considering \(\gamma ^{*}=\underset{k}{\text {max}} \left( \frac{e^{-d_{k} \left( \frac{U_{k}-O_{k}(x^{*})}{U_{k}-L_{k}}\right) } - e^{-d_{k}}}{1- e^{-d_{k}}} \right) \), we get \(\gamma ^{*} \le (<)~ {\bar{\gamma }}\).

This gives \(\left( {\bar{\alpha }}- {\bar{\beta }}- {\bar{\gamma }} \right) < \left( \alpha ^{*}- \beta ^{*}- \gamma ^{*} \right) \), which contradicts the assumption that \(\left( {\bar{x}}, {\bar{\alpha }}, {\bar{\beta }}, {\bar{\gamma }} \right) \) is the unique optimal solution of problem (27) (ETMFA). Hence, Theorem 10 is proved.

Hyperbolic-type membership functions approach (HTMFA)

Suppose that \(\mu ^{H}_{k}(O_{k}(x)) \ge \alpha \), \(\lambda ^{H}_{k}(O_{k}(x)) \le \beta \), and \(\nu ^{H}_{k}(O_{k}(x)) \le \gamma \), for all k. Using the auxiliary parameters \(\alpha \), \(\beta \), and \(\gamma \), problem (24) can be transformed into the following problem (28):

$$\begin{aligned} \begin{array}{ll} &{}{\text {Max}}~\psi (x)= \eta \left( \alpha -\beta -\gamma \right) + \left( 1-\eta \right) \sum _{k=1}^{K} \\ &{}\qquad \times \left( \mu ^{(H)}_{k}(O_{k}(x)) -\lambda ^{(H)}_{k}(O_{k}(x))- \nu ^{(H)}_{k}(O_{k}(x)) \right) \\ &{} \text {subject~to}\\ &{} \dfrac{1}{2} \left[ 1+ \text {tanh} \left( \theta _{k} \left( \dfrac{U^{\mu }_{k}+L^{\mu }_{k}}{2} - O_{k}(x)\right) \right) \right] \ge \alpha , \\ &{} \dfrac{1}{2} \left[ 1+ \text {tanh} \left( \theta _{k} \left( \dfrac{U^{\lambda }_{k}+L^{\lambda }_{k}}{2} - O_{k}(x)\right) \right) \right] \le \beta , \\ &{} \dfrac{1}{2} \left[ 1+ \text {tanh} \left( \theta _{k} \left( O_{k}(x) - \dfrac{U^{\nu }_{k}+L^{\nu }_{k}}{2} \right) \right) \right] \le \gamma , \\ &{} \alpha \ge \beta ,~~\alpha \ge \gamma , ~~ \alpha + \beta + \gamma \le 3 \\ &{} \alpha ,~\beta ,~\gamma \in (0,1),~~\theta _{k}=\frac{6}{U_{k}-L_{k}},~ \forall ~k=1,2, \dots , K\\ &{} \text {all the constraints of (7)}. \end{array} \end{aligned}$$
(28)

Equivalently, we have problem (29) as follows:

$$\begin{aligned} \begin{array}{ll} (HTMFA)&{} {\text {Max}}~\psi (x)= \eta \left( \alpha -\beta -\gamma \right) + \left( 1-\eta \right) \sum _{k=1}^{K} \\ &{}\qquad \left( \mu ^{(H)}_{k}(O_{k}(x)) -\lambda ^{(H)}_{k}(O_{k}(x))- \nu ^{(H)}_{k}(O_{k}(x)) \right) \\ &{} \text {subject~to}\\ &{}\theta _{k} O_{k}(x) + \text {tanh}^{-1} \left( 2 \alpha -1 \right) \le \frac{\theta _{k}}{2} \left( U^{\mu }_{k}+L^{\mu }_{k}\right) , \\ &{} \theta _{k} O_{k}(x) + \text {tanh}^{-1} \left( 2 \beta -1 \right) \ge \frac{\theta _{k}}{2} \left( U^{\lambda }_{k}+L^{\lambda }_{k}\right) , \\ &{} \theta _{k} O_{k}(x) - \text {tanh}^{-1} \left( 2 \gamma -1 \right) \le \frac{\theta _{k}}{2} \left( U^{\nu }_{k}+L^{\nu }_{k}\right) , \\ &{} \alpha \ge \beta ,~~\alpha \ge \gamma , ~~ \alpha + \beta + \gamma \le 3, \\ &{} \alpha ,~\beta ,~\gamma \in (0,1),~~\theta _{k}=\frac{6}{U_{k}-L_{k}},~ \forall ~k=1,2, \dots , K\\ &{} \text {all the constraints of (7)}. \end{array} \end{aligned}$$
(29)
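The algebraic step from (28) to (29), inverting the tanh through \(\text {tanh}^{-1}\), can be sanity-checked numerically. A minimal sketch with illustrative random values (here M denotes the midpoint \((U_{k}+L_{k})/2\)):

```python
import math
import random

def truth_hyperbolic(O, M, theta):
    """Hyperbolic truth membership 0.5 * (1 + tanh(theta * (M - O))),
    where M is the midpoint of the bounds (U_k + L_k) / 2."""
    return 0.5 * (1.0 + math.tanh(theta * (M - O)))

random.seed(7)
for _ in range(500):
    O = random.uniform(-10.0, 10.0)
    M = random.uniform(-10.0, 10.0)
    theta = random.uniform(0.1, 3.0)
    alpha = random.uniform(0.05, 0.95)
    # Constraint in (28): mu^H(O) >= alpha; linearized counterpart in (29):
    original = truth_hyperbolic(O, M, theta) >= alpha
    linearized = theta * O + math.atanh(2.0 * alpha - 1.0) <= theta * M
    assert original == linearized
```

Because tanh is strictly increasing, the two inequalities agree for every \(\alpha \in (0,1)\), which is exactly what makes the reformulation (29) a linear constraint in \(O_{k}(x)\) for fixed \(\alpha \).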

Theorem 11

A unique optimal solution of problem (29) (HTMFA) is also an efficient solution for the problem (7).

Proof

Let us consider that \(\left( {\bar{x}}, {\bar{\alpha }}, {\bar{\beta }}, {\bar{\gamma }} \right) \) is a unique optimal solution of problem (29) (HTMFA), but not an efficient solution for problem (7). This implies that there exists \(x^{*} \left( x^{*} \ne {\bar{x}} \right) \) feasible to problem (7), such that \(O_{k}(x^{*}) \le O_{k}({\bar{x}})\) for all \(k=1,2, \ldots , K\) and \(O_{k}(x^{*}) < O_{k}({\bar{x}})\) for at least one k.

Consequently, \(\text {tanh} \left( \theta _{k} \left( \dfrac{U_{k}+L_{k}}{2} - O_{k}(x^{*})\right) \right) \ge \text {tanh} \left( \theta _{k} \left( \dfrac{U_{k}+L_{k}}{2} - O_{k}({\bar{x}})\right) \right) \) for all \(k=1,2, \ldots , K\) and \( \text {tanh} \left( \theta _{k} \left( \dfrac{U_{k}+L_{k}}{2} - O_{k}(x^{*})\right) \right) > \text {tanh} \left( \theta _{k} \left( \dfrac{U_{k}+L_{k}}{2} - O_{k}({\bar{x}})\right) \right) \) for at least one k.

Furthermore, it gives:

\(\underset{k}{\text {min}} \left( \text {tanh} \left( \theta _{k} \left( \dfrac{U_{k}+L_{k}}{2} - O_{k}(x^{*})\right) \right) \right) \ge (>) ~ \underset{k}{\text {min}} \left( \text {tanh} \left( \theta _{k} \left( \dfrac{U_{k}+L_{k}}{2} - O_{k}({\bar{x}})\right) \right) \right) \).

If \(\alpha ^{*}=\underset{k}{\text {min}} \left( \dfrac{1}{2} \text {tanh} \left( \theta _{k} \left( \dfrac{U_{k}+L_{k}}{2} - O_{k}(x^{*})\right) \right) + \dfrac{1}{2}\right) \), then \(\alpha ^{*} \ge (>)~ {\bar{\alpha }}\).

Similarly, with \(\beta ^{*}=\underset{k}{\text {max}} \left( \dfrac{1}{2} \text {tanh} \left( \theta _{k} \left( O_{k}(x^{*}) - \dfrac{U_{k}+L_{k}}{2}\right) \right) + \dfrac{1}{2}\right) \), we have \(\beta ^{*} \le (<)~ {\bar{\beta }}\), and with \(\gamma ^{*}=\underset{k}{\text {max}} \left( \dfrac{1}{2} \text {tanh} \left( \theta _{k} \left( O_{k}(x^{*})- \dfrac{U_{k}+L_{k}}{2}\right) \right) + \dfrac{1}{2}\right) \), we have \(\gamma ^{*} \le (<)~ {\bar{\gamma }}\).

Thus, we get \(\left( {\bar{\alpha }}- {\bar{\beta }}- {\bar{\gamma }} \right) < \left( \alpha ^{*}- \beta ^{*}- \gamma ^{*} \right) \), which contradicts the fact that \(\left( {\bar{x}}, {\bar{\alpha }}, {\bar{\beta }}, {\bar{\gamma }} \right) \) is the unique optimal solution of problem (29) (HTMFA). Hence, Theorem 11 is proved.

The selection of membership function solely depends on the decision-makers' satisfaction level. The linear, exponential, and hyperbolic membership functions each have their own importance while assessing the marginal evaluations of each objective function. The additional parameters in the exponential and hyperbolic membership functions make them more flexible than the linear one, whose marginal evaluation of each objective function is constant. Therefore, the selection of an appropriate membership function can be based on this flexibility, which provides more opportunity to generate a variety of compromise solutions by tuning the additional parameters.
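The difference in marginal evaluations can be illustrated with a short sketch (helper names and the sample bounds are ours). For a minimization-type objective, the three truth memberships are evaluated at the same point; tuning the exponential shape parameter d changes the evaluation, which is how a variety of compromise solutions can be generated:

```python
import math

def linear_mu(O, L, U):
    """Linear type: the marginal evaluation is constant (slope -1/(U-L))."""
    return (U - O) / (U - L)

def exponential_mu(O, L, U, d=1.0):
    """Exponential type: the shape parameter d > 0 tunes the marginal
    evaluation of the objective."""
    t = (O - L) / (U - L)
    return (math.exp(-d * t) - math.exp(-d)) / (1.0 - math.exp(-d))

def hyperbolic_mu(O, L, U):
    """Hyperbolic type with theta_k = 6/(U_k - L_k), as in problems (28)-(29)."""
    theta = 6.0 / (U - L)
    return 0.5 * (1.0 + math.tanh(theta * ((U + L) / 2.0 - O)))

# Sample bounds and objective value (illustrative only).
L_k, U_k, O_val = 10.0, 50.0, 30.0
evaluations = {d: exponential_mu(O_val, L_k, U_k, d) for d in (0.5, 1.0, 3.0)}
```

At the midpoint of the bounds, the linear and hyperbolic types both return 0.5, while the exponential type returns a value that depends on d, reflecting its nonlinear marginal evaluation.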

Proposed solution algorithm

The flowchart depicted in Fig. 1 summarizes the following step-wise solution algorithm.

Step-1 Formulate the IFMOOP (6).

Step-2 Using the accuracy function (EV), obtain the crisp MOOP (7).

Step-3 Solve each objective function individually and determine the upper and lower bounds using Eq. (9).

Step-4 With the aid of \(U_{k}\) and \(L_{k}\), construct the upper and lower bounds for the truth, indeterminacy, and falsity memberships using Eqs. (10)–(12) under the neutrosophic environment.

Step-5 Elicit the different types of membership functions under the neutrosophic environment using Eqs. (13)–(15), (16)–(18), and (19)–(21), respectively, according to the decision-makers' preference.

Step-6 Develop the proposed INPA or WINPA with linear (LTMFA) or exponential (ETMFA) or hyperbolic (HTMFA) membership functions under the given set of well-defined constraints.

Step-7 Solve the proposed INPA or WINPA (26), (27), and (29) to obtain the best compromise result by applying appropriate solution methods or suitable optimization software.
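Steps 3–7 can be illustrated with a small, self-contained sketch. It uses the crisp mid-values of Example 3 below (integrality relaxed), keeps only the truth memberships — so it maximizes \(\alpha = \min _{k} \mu _{k}\) rather than the full \(\alpha - \beta - \gamma \) objective of (26) — and replaces the AMPL/Knitro solve with a brute-force grid search; all function names are illustrative:

```python
# Toy crisp bi-objective problem (mid-values of Example 3, integrality relaxed):
#   Min O1 = x1 + x2,  Max O2 = 5*x1 + 4*x2
#   s.t. 10*x1 + 6*x2 <= 45,  x1 + x2 <= 5,  x1, x2 >= 0.

def feasible(x1, x2):
    return 10 * x1 + 6 * x2 <= 45 and x1 + x2 <= 5 and x1 >= 0 and x2 >= 0

def O1(x1, x2):  # to be minimized
    return x1 + x2

def O2(x1, x2):  # to be maximized
    return 5 * x1 + 4 * x2

grid = [(i / 20.0, j / 20.0) for i in range(0, 101) for j in range(0, 101)]
pts = [p for p in grid if feasible(*p)]

# Step 3: individual optima give the bounds L_k, U_k.
L1, U1 = min(O1(*p) for p in pts), max(O1(*p) for p in pts)
L2, U2 = min(O2(*p) for p in pts), max(O2(*p) for p in pts)

def mu1(p):  # truth membership of the min-type objective
    return (U1 - O1(*p)) / (U1 - L1)

def mu2(p):  # truth membership of the max-type objective
    return (O2(*p) - L2) / (U2 - L2)

# Steps 6-7: maximize alpha = min_k mu_k over the feasible grid.
best = max(pts, key=lambda p: min(mu1(p), mu2(p)))
alpha = min(mu1(best), mu2(best))
```

A real run would pass the full model (26), with the indeterminacy and falsity constraints, to an NLP/LP solver as the paper does with AMPL and Knitro; the grid search here only illustrates the structure of the computation.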

Fig. 1
figure 1

Flowchart for the proposed solution algorithm

Numerical examples

Let us consider the following four numerical illustrations. In addition, we also present a cloud computing pricing problem. All the numerical examples and cloud computing optimization models are coded in the AMPL language, and the outcomes are obtained using the solver Knitro 10.3.0 through the NEOS server version 5.0, freely provided by the University of Wisconsin–Madison (Dolan [20], Server [29]). Table 1 depicts the triangular intuitionistic fuzzy parameters for all the numerical examples.

Table 1 Intuitionistic fuzzy parameters for all the numerical examples
Table 2 Optimal solution results of all the numerical examples

Example 1

$$ \begin{aligned} \begin{array}{ll} \text {Min}&{}{\widetilde{O}}^{IF}_{1}(x)= {\widetilde{1}}^{IF}x_{1} +{\widetilde{1}}^{IF}x_{2}+ {\widetilde{1}}^{IF}x_{3}\\ \text {Max}&{}{\widetilde{O}}^{IF}_{2}(x)= {\widetilde{18}}^{IF}x_{1}+{\widetilde{14}}^{IF}x_{2}+ {\widetilde{8}}^{IF}x_{3}\\ \text {Max}&{}{\widetilde{O}}^{IF}_{3}(x)={\widetilde{1}}^{IF}x_{3}\\ \text {s.t.}&{}\\ &{}{\widetilde{15}}^{IF}x_{1} +{\widetilde{12}}^{IF}x_{2}+ {\widetilde{7}}^{IF}x_{3} \le {\widetilde{43}}^{IF}\\ &{}x_{j} \ge 0~\forall ~j=1,2,3~ \& ~x_{3}~integer. \end{array} \end{aligned}$$

Example 2

$$ \begin{aligned} \begin{array}{ll} \text {Min}&{}{\widetilde{O}}^{IF}_{1}(x)= {\widetilde{1}}^{IF}x_{1} +{\widetilde{3}}^{IF}x_{2}\\ \text {Max}&{}{\widetilde{O}}^{IF}_{2}(x)= {\widetilde{1}}^{IF}x_{1}+{\widetilde{1}}^{IF}x_{2}\\ \text {s.t.}&{}\\ &{}{\widetilde{6}}^{IF}x_{1} +{\widetilde{5}}^{IF}x_{2} \le {\widetilde{27}}^{IF}\\ &{}{\widetilde{2}}^{IF}x_{1} +{\widetilde{5}}^{IF}x_{2} \le {\widetilde{16}}^{IF}\\ &{}x_{j} \ge 0~\forall ~j=1,2~ \& ~x_{2}~integer. \end{array} \end{aligned}$$

Example 3

$$ \begin{aligned} \begin{array}{ll} \text {Min}&{}{\widetilde{O}}^{IF}_{1}(x)= {\widetilde{1}}^{IF}x_{1} +{\widetilde{1}}^{IF}x_{2}\\ \text {Max}&{}{\widetilde{O}}^{IF}_{2}(x)= {\widetilde{5}}^{IF}x_{1}+{\widetilde{4}}^{IF}x_{2}\\ \text {s.t.}&{}\\ &{}{\widetilde{10}}^{IF}x_{1} +{\widetilde{6}}^{IF}x_{2} \le {\widetilde{45}}^{IF}\\ &{}{\widetilde{1}}^{IF}x_{1} +{\widetilde{1}}^{IF}x_{2} \le {\widetilde{5}}^{IF}\\ &{}x_{j} \ge 0~\forall ~j=1,2~ \& ~integer. \end{array} \end{aligned}$$

Example 4

$$ \begin{aligned} \begin{array}{ll} \text {Min}&{}{\widetilde{O}}^{IF}_{1}(x)= {\widetilde{30}}^{IF}x_{1} +{\widetilde{50}}^{IF}x_{2} - {\widetilde{70}}^{IF}x_{3}\\ \text {Max}&{}{\widetilde{O}}^{IF}_{2}(x)= {\widetilde{20}}^{IF}x_{1}+{\widetilde{40}}^{IF}x_{2}+{\widetilde{20}}^{IF}x_{3} +{\widetilde{15}}^{IF}x_{4} +{\widetilde{30}}^{IF}x_{5}\\ \text {s.t.}&{}\\ &{}{\widetilde{8}}^{IF}x_{1} +{\widetilde{10}}^{IF}x_{2} +{\widetilde{2}}^{IF}x_{3} +{\widetilde{1}}^{IF}x_{4}+{\widetilde{10}}^{IF}x_{5} \le {\widetilde{25}}^{IF}\\ &{}{\widetilde{5}}^{IF}x_{1} +{\widetilde{4}}^{IF}x_{2} +{\widetilde{3}}^{IF}x_{3} +{\widetilde{7}}^{IF}x_{4}+{\widetilde{8}}^{IF}x_{5} \le {\widetilde{25}}^{IF}\\ &{}{\widetilde{1}}^{IF}x_{1} +{\widetilde{7}}^{IF}x_{2} +{\widetilde{9}}^{IF}x_{3} +{\widetilde{4}}^{IF}x_{4}+{\widetilde{6}}^{IF}x_{5} \le {\widetilde{25}}^{IF}\\ &{}x_{j} \ge 0~\forall ~j=1,2,\ldots ,5~ \& ~integer. \end{array} \end{aligned}$$

All the solution results are shown in Table 2. For each example, our proposed INPA outperforms the TH method [35]. The smaller the value of the distance function, the higher the satisfaction level of the optimal global solution; a smaller value also ensures less deviation of the compromise optimal solution sets from the ideal solutions. The graphical representation of the distance functions for the numerical examples is depicted in Fig. 2.
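The role of the distance function in this comparison can be sketched as follows. The paper's exact definition is not reproduced in this section, so the code uses an illustrative Euclidean variant (an assumption on our part) that measures the deviation of the achieved (truth, indeterminacy, falsity) degrees of the K objectives from the ideal point (1, 0, 0):

```python
import math

def distance_from_ideal(mu, lam, nu):
    """Illustrative neutrosophic distance function: root-mean-square
    deviation of the achieved (truth, indeterminacy, falsity) degrees
    from the ideal point (1, 0, 0). The paper's definition may differ."""
    K = len(mu)
    s = sum((1.0 - m) ** 2 + l ** 2 + n ** 2
            for m, l, n in zip(mu, lam, nu))
    return math.sqrt(s / (3 * K))

# Smaller distance means higher overall satisfaction (closer to ideal).
d_good = distance_from_ideal([0.9, 0.8], [0.1, 0.2], [0.1, 0.1])
d_poor = distance_from_ideal([0.5, 0.4], [0.4, 0.5], [0.5, 0.6])
assert d_good < d_poor
```

Under any such monotone measure, the compromise solution with the smaller distance dominates in terms of overall satisfaction, which is the sense in which the INPA results in Table 2 outperform the TH method.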

Fig. 2
figure 2

Comparison between proposed INPA and TH method based on distance function

The existing approaches have some limitations or drawbacks that can be overcome by applying the proposed interactive neutrosophic programming approach. Indeterminacy/neutral thoughts represent the region of ignorance of propositions' values between the truth and falsity degrees; this aspect can only be tackled with the neutrosophic optimization method. The methods for solving MOOPs given by Gupta and Kumar [23], Singh et al. [30], and Zangiabadi and Maleki [37] consider only the membership function, whereas Mahajan and Gupta [27], and Singh and Yadav [31, 32] included the membership and non-membership degrees of each objective function; none of them covers indeterminacy/neutral thoughts while making decisions. We have successfully incorporated the concept of neutrality by suggesting indeterminacy degrees along with membership and non-membership degrees simultaneously. The studies presented by Mahajan and Gupta [27], Singh and Yadav [31], and Zangiabadi and Maleki [37] do not allow flexibility in the vagueness degree (shape parameters) of neutral thoughts, whereas this can be availed by applying the exponential-type membership function under the neutrosophic environment. The proposed INPA can be considered an extension of the approaches of Ahmad and Adhami [3], Ahmad et al. [5,6,7], Li and Hu [26], and Torabi and Hassini [35].

Cloud computing pricing problem

A case study based on the cloud computing pricing problem is discussed in this section. The validity and applicability of the proposed INPA are tested by implementing a multiobjective optimization problem with an intuitionistic fuzzy dataset. The cloud computing market offers software as a service (SaaS) in the form of cloud software resources for customers (persons, organizations, and governments) and utilizes infrastructure as a service (IaaS) provided by the IaaS agency. The customers choose between cloud resources and their intrinsic resources depending on their prices. For instance, Dropbox was founded by Drew Houston and Arash Ferdowsi in 2007; it provides storage software resources to its end-users and organizations and avails IaaS from Amazon S3, which allows it to offer lower prices without maintaining and managing the necessary infrastructure. In this case study, we consider cloud storage software resource (SaaS) pricing problems, such as those of box.com, GoogleDrive, and Dropbox, together with essential storage resources (IaaS) such as the Amazon simple storage service (Amazon S3). The main motive of the IaaS and SaaS facility providers is to simultaneously maximize their gross profits, while the customers intend to buy coherent, effective, and economical cloud storage software resources. The decision-makers of each service provider design policies by interacting with one another. The customers' responses to the price of the SaaS, which is fixed according to the price of the IaaS, critically determine their purchases, and the end-users' purchases in turn constrain the pricing of the IaaS facility provider. To capture this inter-relationship between the SaaS and the IaaS and formulate the cloud computing pricing problem, we develop the cloud computing IFMOOP model with intuitionistic fuzzy parameters. In the proposed cloud computing IFMOOP, the first objective is to maximize the profits of the IaaS service provider that offers the necessary storage capacity.
The second objective maximizes the total profit of the SaaS provider that offers software resources to end-users. Finally, the third objective is associated with the customers, who always try to minimize the purchasing costs of these service facilities.

Step 1: The IFMOOP model for cloud computing pricing model is depicted as follows:

$$\begin{aligned} \begin{array}{ll} \text {Max}&{}{\widetilde{O}}^{IF}_{1}(P_{I},x)= \sum _{i \in N}P_{I}x_{i1} - \sum _{i \in N}\widetilde{0.01}^{IF}x_{i1}\\ \text {Max}&{}{\widetilde{O}}^{IF}_{2}(P_{S},x)= \sum _{i \in N}P_{S}x_{i1} - \sum _{i \in N}\widetilde{0.05}^{IF}x_{i1} - \sum _{i \in N}P_{I}x_{i1}\\ \text {Min}&{}{\widetilde{O}}^{IF}_{3}(C,x)=\sum _{i \in N}P_{S}x_{i1} +\sum _{i \in N}\widetilde{0.635}^{IF}x_{i2}\\ \text {s.t.}&{}\\ &{}P_{I} - \widetilde{0.01}^{IF} \ge 0,\\ &{}P_{I} \le \widetilde{0.88}^{IF} P_{S},\\ &{}P_{S} \le \widetilde{0.635}^{IF},\\ &{}10 \le x_{i} \le n_{i},~\forall ~i \in N, \end{array} \end{aligned}$$

where \(\widetilde{0.01}^{IF}=(0.00,0.01,0.04; 0.00,0.01,0.06),~\widetilde{0.05}^{IF}=(0.03,0.05,0.07; 0.01,0.05,0.09),~\widetilde{0.635}^{IF}= (0.630,0.635,0.640; 0.625,0.635,0.645),~\widetilde{0.88}^{IF}=(0.86,0.88,0.90; 0.84,0.88,0.92)\) are the triangular intuitionistic fuzzy parameters. Also, \(P_{I}\) and \(P_{S}\) are the prices of the IaaS and the SaaS, respectively, and C is the purchasing cost for the customer. If a customer does not consider the price of the SaaS justified, he/she can utilize conventional tools for resource storage instead of the SaaS resources. Let \(x_{i1}\) be the demand of the ith customer for the SaaS, \(x_{i2}\) be the service level capacity for utilizing intrinsic resources, and \(x= \left( x_{i1},x_{i2}, \ldots , x_{iN}\right) \). Furthermore, \(P_{I}\), \(P_{S}\), and x are the decision variables. The values \(n_{i}=10^{5} T\) and \(N=50\) are chosen according to box.com [16] and Amazon [11].

Step 2: Using the accuracy function (Definition 8), the crisp version of the IFMOOP can be presented as follows:

$$\begin{aligned} \begin{array}{ll} \text {Max}&{}{\widetilde{O}}^{'}_{1}(P_{I},x)= \sum _{i \in N}P_{I}x_{i1} - \sum _{i \in N}0.01x_{i1}\\ \text {Max}&{}{\widetilde{O}}^{'}_{2}(P_{S},x)= \sum _{i \in N}P_{S}x_{i1} - \sum _{i \in N}0.05x_{i1} - \sum _{i \in N}P_{I}x_{i1}\\ \text {Min}&{}{\widetilde{O}}^{'}_{3}(C,x)= \sum _{i \in N}P_{S}x_{i1} +\sum _{i \in N}0.635x_{i2}\\ \text {s.t.}&{}\\ &{}P_{I} - 0.01 \ge 0,\\ &{}P_{I} \le 0.88 P_{S},\\ &{}P_{S} \le 0.635,\\ &{}10 \le x_{i} \le n_{i},~\forall ~i \in N. \end{array} \end{aligned}$$
(30)
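The crisp objectives of model (30) can be evaluated directly for any candidate price pair and demand profile. The sketch below uses the LTMFA price pair reported later in the text; the full-demand profile \(x_{i1}=10^{5},~x_{i2}=0\) is an illustrative assumption, not the optimizer's output:

```python
# Evaluating the three crisp objectives of model (30):
#   O1: IaaS provider profit, O2: SaaS provider profit, O3: customer cost.
N = 50

def objectives(P_I, P_S, x1, x2):
    """x1[i]: demand of customer i for the SaaS;
    x2[i]: service level for utilizing intrinsic resources."""
    O1 = sum(P_I * x1[i] - 0.01 * x1[i] for i in range(N))
    O2 = sum(P_S * x1[i] - 0.05 * x1[i] - P_I * x1[i] for i in range(N))
    O3 = sum(P_S * x1[i] for i in range(N)) + sum(0.635 * x2[i] for i in range(N))
    return O1, O2, O3

# Price pair satisfies the crisp constraints:
# P_I >= 0.01, P_I <= 0.88 * P_S, P_S <= 0.635.
O1, O2, O3 = objectives(0.1942, 0.4250, [1e5] * N, [0.0] * N)
```

With this profile, the per-customer margins (\(P_{I}-0.01\) for the IaaS and \(P_{S}-0.05-P_{I}\) for the SaaS) simply scale with total demand, which is why the providers' profits in the reported solutions grow with \(\frac{\sum _{i \in N}x_{i1}}{\sum _{i \in N}x_{i}}\).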

Step 3: The lower and upper bounds can be obtained as follows: \(U_{1}=11.2528 \times 10^{5},~L_{1}= 7.5482 \times 10^{5},~U_{2}=12.2528 \times 10^{5},~L_{2}= 6.4758 \times 10^{5},~U_{3}=4.2546 \times 10^{6}, \text {and}~L_{3}=0.5482 \times 10^{6}\), respectively.

Step 4: The bounds for the truth, indeterminacy, and falsity membership functions can be given as follows: \( U_{1}^{\mu }= 11.2528 \times 10^{5},~L_{1}^{\mu }= 7.5482 \times 10^{5},~U_{2}^{\mu }= 12.2528 \times 10^{5},~L_{2}^{\mu }= 6.4758 \times 10^{5},~U_{3}^{\mu }= 4.2546 \times 10^{6},~L_{3}^{\mu }= 0.5482 \times 10^{6}\); \(U_{1}^{\lambda }= 7.5482 \times 10^{5} + s_{1},~L_{1}^{\lambda }= L_{1}^{\mu },~U_{2}^{\lambda }= 6.4758 \times 10^{5} + s_{2},~L_{2}^{\lambda }= 6.4758 \times 10^{5},~U_{3}^{\lambda }= 0.5482 \times 10^{6} + s_{3},~L_{3}^{\lambda }= 0.5482 \times 10^{6}\); and \(U_{1}^{\nu }= 11.2528 \times 10^{5},~L_{1}^{\nu }=7.5482 \times 10^{5}+ t_{1},~U_{2}^{\nu }= 12.2528 \times 10^{5},~L_{2}^{\nu }=6.4758 \times 10^{5}+ t_{2},~U_{3}^{\nu }= 4.2546 \times 10^{6},~L_{3}^{\nu }=0.5482 \times 10^{6}+ t_{3} \).

Steps 5 and 6: The different models, namely LTMFA (26), ETMFA (27), and HTMFA (29), can be formulated as follows:

  • Using the LTMFA (26), Problem (30) can be represented as follows (31):

    $$\begin{aligned}&{\text {Max}}~~\alpha - \beta - \gamma \nonumber \\&\text {s.t.}\nonumber \\&\sum _{i \in N}P_{I}x_{i1} - \sum _{i \in N}0.01x_{i1} +( U_{1}^{\mu } - L_{1}^{\mu }) \alpha \le U_{1}^{\mu }, \nonumber \\&\sum _{i \in N}P_{I}x_{i1} - \sum _{i \in N}0.01x_{i1} -( U_{1}^{\lambda } - L_{1}^{\lambda }) \beta \le L_{1}^{\lambda }, \nonumber \\&\sum _{i \in N}P_{I}x_{i1} - \sum _{i \in N}0.01x_{i1} -( U_{1}^{\nu } - L_{1}^{\nu }) \gamma \le L_{1}^{\nu }, \nonumber \\&\sum _{i \in N}P_{S}x_{i1} - \sum _{i \in N}0.05x_{i1} - \sum _{i \in N}P_{I}x_{i1} +( U_{2}^{\mu } - L_{2}^{\mu }) \alpha \le U_{2}^{\mu }, \nonumber \\&\sum _{i \in N}P_{S}x_{i1} - \sum _{i \in N}0.05x_{i1} - \sum _{i \in N}P_{I}x_{i1} -( U_{2}^{\lambda } - L_{2}^{\lambda }) \beta \le L_{2}^{\lambda }, \nonumber \\&\sum _{i \in N}P_{S}x_{i1} - \sum _{i \in N}0.05x_{i1} - \sum _{i \in N}P_{I}x_{i1} -( U_{2}^{\nu } - L_{2}^{\nu }) \gamma \le L_{2}^{\nu }, \nonumber \\&\sum _{i \in N}P_{S}x_{i1} +\sum _{i \in N}0.635x_{i2} +( U_{3}^{\mu } - L_{3}^{\mu }) \alpha \le U_{3}^{\mu }, \nonumber \\&\sum _{i \in N}P_{S}x_{i1} +\sum _{i \in N}0.635x_{i2} -( U_{3}^{\lambda } - L_{3}^{\lambda }) \beta \le L_{3}^{\lambda }, \nonumber \\&\sum _{i \in N}P_{S}x_{i1} +\sum _{i \in N}0.635x_{i2} -( U_{3}^{\nu } - L_{3}^{\nu }) \gamma \le L_{3}^{\nu }, \nonumber \\&P_{I} - 0.01 \ge 0,\nonumber \\&P_{I} \le 0.88 P_{S},\nonumber \\&P_{S} \le 0.635,\nonumber \\&\alpha \ge \beta ,~~\alpha \ge \gamma , ~~ \alpha + \beta + \gamma \le 3 \nonumber \\&\alpha ,~\beta ,~\gamma \in (0,1)\nonumber \\&10 \le x_{i} \le n_{i},~\forall ~i \in N. \end{aligned}$$
    (31)
  • Using the ETMFA (27), Problem (30) can be represented as follows (32):

    $$\begin{aligned}&{\text {Max}}~~\alpha - \beta - \gamma \nonumber \\&\text {s.t.}\nonumber \\&\dfrac{e^{-d \left( \frac{\sum _{i \in N}P_{I}x_{i1} - \sum _{i \in N}0.01x_{i1}- L_{1}^{\mu }}{U_{1}^{\mu } - L_{1}^{\mu }}\right) } - e^{-d}}{1- e^{-d}} \ge \alpha , \nonumber \\&\dfrac{e^{-d \left( \frac{U_{1}^{\lambda } -\left( \sum _{i \in N}P_{I}x_{i1} - \sum _{i \in N}0.01x_{i1}\right) }{U_{1}^{\lambda } - L_{1}^{\lambda }}\right) } - e^{-d}}{1- e^{-d}} \le \beta , \nonumber \\&\dfrac{e^{-d \left( \frac{U_{1}^{\nu } -\left( \sum _{i \in N}P_{I}x_{i1} - \sum _{i \in N}0.01x_{i1}\right) }{U_{1}^{\nu } - L_{1}^{\nu }}\right) } - e^{-d}}{1- e^{-d}} \le \gamma , \nonumber \\&\dfrac{e^{-d \left( \frac{ \sum _{i \in N}P_{S}x_{i1} - \sum _{i \in N}0.05x_{i1} - \sum _{i \in N}P_{I}x_{i1}- L_{2}^{\mu }}{U_{2}^{\mu } - L_{2}^{\mu }}\right) } - e^{-d}}{1- e^{-d}} \ge \alpha , \nonumber \\&\dfrac{e^{-d \left( \frac{U_{2}^{\lambda } - \left( \sum _{i \in N}P_{S}x_{i1} - \sum _{i \in N}0.05x_{i1} - \sum _{i \in N}P_{I}x_{i1}\right) }{U_{2}^{\lambda } - L_{2}^{\lambda }}\right) } - e^{-d}}{1- e^{-d}} \le \beta , \nonumber \\&\dfrac{e^{-d \left( \frac{U_{2}^{\nu } - \left( \sum _{i \in N}P_{S}x_{i1} - \sum _{i \in N}0.05x_{i1} - \sum _{i \in N}P_{I}x_{i1}\right) }{U_{2}^{\nu } - L_{2}^{\nu }}\right) } - e^{-d}}{1- e^{-d}} \le \gamma , \nonumber \\&\dfrac{e^{-d \left( \frac{\sum _{i \in N}P_{S}x_{i1} +\sum _{i \in N}0.635x_{i2}- L_{3}^{\mu }}{U_{3}^{\mu } - L_{3}^{\mu }}\right) } - e^{-d}}{1- e^{-d}} \ge \alpha , \nonumber \\&\dfrac{e^{-d \left( \frac{U_{3}^{\lambda } -\left( \sum _{i \in N}P_{S}x_{i1} +\sum _{i \in N}0.635x_{i2}\right) }{U_{3}^{\lambda } - L_{3}^{\lambda }}\right) } - e^{-d}}{1- e^{-d}} \le \beta , \nonumber \\&\dfrac{e^{-d \left( \frac{U_{3}^{\nu } -\left( \sum _{i \in N}P_{S}x_{i1} +\sum _{i \in N}0.635x_{i2}\right) }{U_{3}^{\nu } - L_{3}^{\nu }}\right) } - e^{-d}}{1- e^{-d}} \le \gamma , \nonumber \\&P_{I} - 0.01 \ge 0,\nonumber \\&P_{I} \le 0.88 P_{S},\nonumber \\&P_{S} \le 0.635,\nonumber \\&\alpha \ge \beta ,~~\alpha \ge \gamma , ~~ \alpha + \beta + \gamma \le 3 \nonumber \\&\alpha ,~\beta ,~\gamma \in (0,1)\nonumber \\&10 \le x_{i} \le n_{i},~\forall ~i \in N. \end{aligned}$$
    (32)
  • Using the HTMFA (29), Problem (30) can be represented as follows (33):

    $$\begin{aligned}&{\text {Max}}~~\alpha - \beta - \gamma \nonumber \\&\text {s.t.}\nonumber \\&\theta _{1} (\sum _{i \in N}P_{I}x_{i1} - \sum _{i \in N}0.01x_{i1}) \nonumber \\&+ \text {tanh}^{-1} \left( 2 \alpha -1 \right) \le \frac{\theta _{1}}{2} \left( U^{\mu }_{1}+L^{\mu }_{1}\right) , \nonumber \\&\theta _{1} (\sum _{i \in N}P_{I}x_{i1} - \sum _{i \in N}0.01x_{i1})\nonumber \\&+ \text {tanh}^{-1} \left( 2 \beta -1 \right) \ge \frac{\theta _{1}}{2} \left( U^{\lambda }_{1}+L^{\lambda }_{1}\right) , \nonumber \\&\theta _{1} (\sum _{i \in N}P_{I}x_{i1} - \sum _{i \in N}0.01x_{i1}) \nonumber \\&- \text {tanh}^{-1} \left( 2 \gamma -1 \right) \le \frac{\theta _{1}}{2} \left( U^{\nu }_{1}+L^{\nu }_{1}\right) , \nonumber \\&\theta _{2} ( \sum _{i \in N}P_{S}x_{i1} - \sum _{i \in N}0.05x_{i1} \nonumber \\&- \sum _{i \in N}P_{I}x_{i1}) + \text {tanh}^{-1} \left( 2 \alpha -1 \right) \le \frac{\theta _{2}}{2} \left( U^{\mu }_{2}+L^{\mu }_{2}\right) , \nonumber \\&\theta _{2} ( \sum _{i \in N}P_{S}x_{i1} - \sum _{i \in N}0.05x_{i1} - \sum _{i \in N}P_{I}x_{i1}) \nonumber \\&+ \text {tanh}^{-1} \left( 2 \beta -1 \right) \ge \frac{\theta _{2}}{2} \left( U^{\lambda }_{2}+L^{\lambda }_{2}\right) , \nonumber \\&\theta _{2} ( \sum _{i \in N}P_{S}x_{i1} - \sum _{i \in N}0.05x_{i1} - \sum _{i \in N}P_{I}x_{i1})\nonumber \\&- \text {tanh}^{-1} \left( 2 \gamma -1 \right) \le \frac{\theta _{2}}{2} \left( U^{\nu }_{2}+L^{\nu }_{2}\right) , \nonumber \\&\theta _{3} (\sum _{i \in N}P_{S}x_{i1} +\sum _{i \in N}0.635x_{i2}) \nonumber \\&+ \text {tanh}^{-1} \left( 2 \alpha -1 \right) \le \frac{\theta _{3}}{2} \left( U^{\mu }_{3}+L^{\mu }_{3}\right) , \nonumber \\&\theta _{3} (\sum _{i \in N}P_{S}x_{i1} +\sum _{i \in N}0.635x_{i2})\nonumber \\&+ \text {tanh}^{-1} \left( 2 \beta -1 \right) \ge \frac{\theta _{3}}{2} \left( U^{\lambda }_{3}+L^{\lambda }_{3}\right) , \nonumber \\&\theta _{3} (\sum _{i \in N}P_{S}x_{i1} +\sum _{i \in N}0.635x_{i2}) \nonumber \\&- \text {tanh}^{-1} \left( 2 \gamma -1 \right) \le \frac{\theta _{3}}{2} \left( U^{\nu }_{3}+L^{\nu }_{3}\right) , \nonumber \\&P_{I} - 0.01 \ge 0,\nonumber \\&P_{I} \le 0.88 P_{S},\nonumber \\&P_{S} \le 0.635,\nonumber \\&\alpha \ge \beta ,~~\alpha \ge \gamma , ~~ \alpha + \beta + \gamma \le 3, \nonumber \\&\alpha ,~\beta ,~\gamma \in (0,1),\nonumber \\&10 \le x_{i} \le n_{i},~\forall ~i \in N. \end{aligned}$$
    (33)

Step 7: Solve the Problems (31), (32), and (33) using the INPA or WINPA to get the best compromise solution.

Evaluation of solution results

On implementing the proposed INPA, we obtain the following compromise optimal solution outcomes for each objective function. Using the proposed INPA with LTMFA, the prices of the IaaS and the SaaS facility providers are \(P_{I}= \$0.1942\) and \(P_{S}= \$0.4250\), respectively. Consequently, the respective objective function values of the cloud computing providers are \(O_{1}= \$9.2024 \times 10^{5}\), \(O_{2}= \$9.0326 \times 10^{5}\), and \(O_{3}= \$2.1234 \times 10^{6}\) with \(\frac{\sum _{i \in N}x_{i1}}{\sum _{i \in N}x_{i}}=1.00\). The minimum value of the distance function is obtained as 0.3569.

For the proposed INPA with ETMFA, the prices of the IaaS and the SaaS facility providers are \(P_{I}= \$0.2022\) and \(P_{S}= \$0.4330\), respectively. Consequently, the corresponding objective function values of the cloud computing providers are \(O_{1}= \$9.4116 \times 10^{5}\), \(O_{2}= \$8.8527 \times 10^{5}\), and \(O_{3}= \$2.1241 \times 10^{6}\) with \(\frac{\sum _{i \in N}x_{i1}}{\sum _{i \in N}x_{i}}=0.9987\). Furthermore, the minimum value of the distance function is determined as 0.5126.

On applying the proposed INPA with HTMFA, the prices of the IaaS and the SaaS facility providers are \(P_{I}= \$0.2029\) and \(P_{S}= \$0.4337\), respectively. Consequently, the corresponding objective function values of the cloud computing providers are \(O_{1}= \$9.2029 \times 10^{5}\), \(O_{2}= \$9.0329 \times 10^{5}\), and \(O_{3}= \$2.1234 \times 10^{6}\) with \(\frac{\sum _{i \in N}x_{i1}}{\sum _{i \in N}x_{i}}=1.00\). Moreover, the minimum value of the distance function is calculated as 0.5485.

The outcomes reveal that the cloud computing pricing problem is solved using the proposed INPA with LTMFA, ETMFA, and HTMFA. Comparing the values of \(O_{1},O_{2},O_{3}\) and \(\frac{\sum _{i \in N}x_{i1}}{\sum _{i \in N}x_{i}}\), it is found that the proposed INPA with LTMFA gives more gross profit for the SaaS facility provider at a very low cost. Therefore, the LTMFA is more fruitful and appropriate than the other approaches for this cloud computing pricing problem. Moreover, to reflect reality, suppose that \(n_{i}\) is randomly generated in the specified interval \([0, 10^{5}]\) T. Applying the proposed INPA with LTMFA, ETMFA, and HTMFA again, we determine the following outcomes.

Using the proposed INPA with LTMFA, the prices of the IaaS and the SaaS facility providers are \(P_{I}= \$0.1942\) and \(P_{S}= \$0.4250\), respectively. Consequently, the respective objective function values of the cloud computing providers are \(O_{1}= \$5.1707 \times 10^{5}\), \(O_{2}= \$5.0740 \times 10^{5}\), and \(O_{3}= \$1.1937 \times 10^{6}\). Furthermore, the corresponding minimum value of the distance function is obtained as 0.3724.

For the proposed INPA with ETMFA, the prices of the IaaS and the SaaS facility providers are \(P_{I}= \$0.2058\) and \(P_{S}= \$0.4366\), respectively. Consequently, the corresponding objective function values of the cloud computing providers are \(O_{1}= \$4.8101 \times 10^{5}\), \(O_{2}= \$4.4410 \times 10^{5}\), and \(O_{3}= \$1.0729 \times 10^{6}\). Moreover, the minimum value of the distance function is calculated as 0.5482.

On applying the proposed INPA with HTMFA, the prices of the IaaS and the SaaS facility providers are \(P_{I}= \$0.2029\) and \(P_{S}= \$0.4337\), respectively. Consequently, the corresponding objective function values of the cloud computing providers are \(O_{1}= \$4.3409 \times 10^{5}\), \(O_{2}= \$4.0680 \times 10^{5}\), and \(O_{3}= \$9.7686 \times 10^{5}\). Furthermore, the minimum value of the distance function is obtained as 0.5728.

Comparing the above three solution outcomes, we observe that the proposed INPA with LTMFA yields a lower cost for the cloud service facility and fetches more profit for the IaaS and the SaaS service providers. This means that the pricing policies of the proposed INPA with LTMFA are more feasible and can attract more customers to utilize cloud computing services; the cloud service facility providers, in turn, will invite more customers to avail the cloud service facilities and generate more revenue. Based on the obtained solution outcomes, the proposed INPA with LTMFA is more reliable and outperforms the other approaches while solving the cloud computing pricing problem.

Managerial implications

The current study inherently focuses on practical-level implications by exploring advances in modeling and optimization techniques for solving MOOPs. First, the representation of vague or ambiguous parameters using intuitionistic fuzzy set theory is more realistic than using a fuzzy set, since it deals with the degree of non-belongingness along with the degree of belongingness simultaneously. The second practical contribution is the neutrosophic optimization technique, which inherently supports neutral thoughts in decision-making problems and adheres to the indeterminacy degree together with the degrees of belongingness and non-belongingness simultaneously. An extension of Torabi and Hassini [35] is presented, and a comparative study is also performed. The proposed INPA is tested on four different numerical examples, and the outcomes are evaluated based on the neutrosophic distance function. The performance of the proposed INPA is better than that of the TH method for all the numerical examples. A case study on the cloud computing pricing problem is examined and solved using the proposed INPA with LTMFA, ETMFA, and HTMFA, respectively. For this problem, the proposed INPA with LTMFA outperforms the other approaches. The end-users' dimension can be enlarged to match real-life situations, and the optimal results can still be obtained efficiently using the proposed INPA. By tuning the compensation coefficient \((\eta )\), one can obtain the desired number of solution sets. Thus, the proposed modeling and optimization framework can solve many real-life problems, such as transportation, supplier selection, inventory control, and supply chain planning problems. The robust neutrosophic optimization techniques can be extended to solve multiobjective linear, nonlinear, fractional, bi-level, and multi-level programming problems.

Conclusions

This study presented a modeling and optimization approach for MOOPs under the neutrosophic environment. The various parameters are taken as intuitionistic fuzzy numbers, which capture real-life complexity, and the corresponding crisp versions are obtained using a robust ranking function (EV). The development of the various membership functions ensures rationality while determining the best possible optimal solution set. The use of linear, exponential, and hyperbolic membership functions is more flexible and realistic in decision-making processes, owing to the indeterminacy degree involved in obtaining the optimal compromise solution. The proposed INPA with LTMFA, ETMFA, and HTMFA is designed under the neutrosophic environment, accommodating independent indeterminacy/neutral thoughts in decision-making processes. The decision-makers' overall satisfaction degree is always maximized at different compensation coefficients to achieve the best possible compromise solution for each objective function. A comparative study is also presented by illustrating four different numerical examples. The proposed MOOP and INPA are implemented on a cloud computing pricing case study, and the outcomes are evaluated efficiently. The practical implications explored here strongly support practitioners in adopting the proposed INPA when solving MOOPs.

The propounded study has some limitations that can be addressed in future research. The discussed MOOP could be extended by considering hierarchical-level decisions, which are not included in this study. The proposed approach is not directly applicable to multiobjective fractional programming problems; however, it can be applied after a linearization process. Bi-level and multi-level decision-making models cannot be solved directly, but the approach can be adopted after obtaining the individual-level satisfactory solutions. Uncertainty among parameters due to randomness can also be incorporated and handled using historical data.

In the future, the proposed INPA can be extended to different mathematical programming problems such as multiobjective fractional programming, nonlinear programming, and geometric programming. It can also be applied to transportation, supplier selection, inventory control, and supply chain planning problems. Besides the proposed INPA, various metaheuristic approaches may be applied to solve the cloud computing pricing problem as a future research scope.