Introduction

One practical application of network theory to real-world problems is the shortest path (SP) problem. The aim of the SP problem is to determine a path with minimum time or cost between two predefined nodes [3]. Many real-life optimization problems, such as communication and telecommunication systems, transportation systems, location problems, computer networks and investment planning, can be modeled as SP problems. Therefore, many scholars have researched this area.

Conventional SP problems are formulated with crisp weights, which may not always be available in real-world applications. In practice, cost and time values are often imprecise or vague, as they depend on weather and traffic conditions. This imprecision in the cost and time data of the SP problem can be handled by fuzzy sets. The SP problem is thus extended to the fuzzy SP (FSP) problem, in which arc weights (time and cost) are represented by fuzzy numbers. Up to now, several approaches have been developed for solving FSP problems; the direct approach and the heuristic approach are the two main techniques for finding the FSP.

The main idea behind the direct approach is to generalize the classical algorithms to FSP problems. Okada and Soper [35] proposed an algorithm for solving the FSP problem by determining a possibility degree for each arc being on the optimal fuzzy path. Nayeem and Pal [34] proposed a new approach that gives the decision maker a guideline for choosing the best fuzzy SP when a unique fuzzy SP cannot be found. Moazeni [33] extended the classical Dijkstra algorithm to obtain all non-dominated paths in a fuzzy network. Chuang and Kung [10] proposed an algorithm that gives the shortest path and the fuzzy shortest weight simultaneously in a discrete FSP problem based on a similarity measure. Yu and Wei [47] obtained the FSP from a linear multi-objective programming problem without using integer variables. Ji et al. [29] formulated three kinds of FSP models based on the credibility measure of fuzzy set theory and proposed a hybrid intelligent algorithm for solving the FSP problem. Mahdavi et al. [32] extended the traditional Floyd–Warshall algorithm to find the FSP from one node to all other nodes. Dou et al. [13] first defined the concepts of the best and the worst ideal paths in a fuzzy network and then determined the FSP by evaluating the similarity degrees between all candidate paths and the two ideal paths. The fuzzy reliable SP problem in a mixed fuzzy network has been investigated by Yang et al. [42]. Eshaghnezhad et al. [24] proposed an artificial neural network model for solving the FSP problem. Lakdashti et al. [31] presented an algorithm to find the SP in a directed network with triangular vague weights. Dey et al. [12] proposed an algorithm to find the fuzzy minimum spanning tree of an undirected weighted fuzzy graph, in which mixed fuzzy numbers, either triangular or trapezoidal, represent the lengths/costs of the arcs.
Ebrahimnejad [21] proposed an approach for solving the interval SP problem based on an acceptability index. Abbaszadeh Sori et al. [1] used a fuzzy inference system to solve the fuzzy-constrained SP problem with three objectives: cost, time and risk. An overview of algorithms for solving the FSP problem can be found in Broumi et al. [5].

In recent years, numerous researchers have used heuristic algorithms to solve complicated optimization problems in reasonable time [2, 38, 40]. In particular, the heuristic approach is an important tool for solving the FSP problem. The genetic algorithm (GA) proposed by Hassanzadeh et al. [28] obtains the SP in a directed graph with various fuzzy parameters. The Particle Swarm Optimization (PSO) and Artificial Bee Colony (ABC) algorithms proposed by Ebrahimnejad et al. [18, 19] had better performance in terms of run time compared to GA.

To the best of our knowledge, very few studies in the SP literature model the uncertainty of arc weights by interval-valued fuzzy numbers (IVFNs). Dey et al. [11] extended GA for solving the SP problem with interval-valued trapezoidal fuzzy arc weights. Enayattabar et al. [22] extended the Dijkstra algorithm to solve the SP problem in an interval-valued Pythagorean fuzzy network. Enayattabar et al. [23] determined the shortest weights between every pair of nodes in a given interval-valued fuzzy network based on a dynamic approach. Broumi et al. [6] considered the SP problem through Bellman’s algorithm for a network with interval-valued neutrosophic numbers. These approaches are not effective when arc weights are represented by various kinds of IVFNs. That is why, in this research, finding the SP with different kinds of interval-valued fuzzy weights is pursued. In this contribution, imprecise arc weights have interval-valued trapezoidal and normal fuzzy membership functions. As the SP problem in a fuzzy environment is a complex combinatorial optimization problem, it is time consuming, or even infeasible, to find an exact solution for large instances. Due to the efficiency of collective intelligence algorithms in solving optimization problems, we use a modified version of the ABC algorithm for solving the mixed interval-valued FSP (MIVFSP) problem in a mixed fuzzy directed graph. It solves the problem under consideration in less time than other intelligence algorithms such as GA and PSO, and it is particularly efficient when a quasi-optimal solution is required in a short time.

As mentioned, an application of SP problems in WSNs is investigated in this study. WSNs are comprised of a large number of low-power devices, generally termed sensor nodes. Sensor devices are resource-constrained nodes with respect to storage capacity, computational power, communication range, energy and bandwidth. Depending on the application, sensor nodes are either placed precisely within the area of interest or deployed randomly in the field. WSNs are well suited for monitoring and control in remote or hostile environments. As soon as an event occurs in the environment, each sensor node senses it and sends a message to the central station. The longer the message travels on the network, the greater the energy consumption of the sensor nodes. Generally, a WSN is responsible for gathering data from its environment and sending it to the sink node. The energy sources in WSNs limit the lifetime of the networks [9, 26]. The network lifetime can be extended by saving as much energy as possible. Hence, obtaining the shortest path for passing data in WSNs is very important to decrease energy consumption. Finding the SP in such a network ensures lower energy consumption by the nodes and increases the lifetime of the WSN. However, the amount of energy consumption in a WSN is imprecise. Thus, in this study, we represent the imprecision in WSN data by means of mixed interval-valued fuzzy data.

On this basis, the main contributions of this study are summarized as follows: (1) to the best of our knowledge, this study is the first attempt to formulate the SP problem in a mixed interval-valued fuzzy environment; (2) a new approach is proposed to compare mixed interval-valued fuzzy numbers; (3) a modified ABC (MABC) algorithm is developed for finding the mixed interval-valued fuzzy shortest path; (4) in contrast to the GA and PSO algorithms, the MABC algorithm needs fewer convergence iterations and less convergence and implementation time; (5) two simulations on WSNs confirm that the MABC algorithm is quite robust in recognizing the shortest path among all paths of the networks in terms of energy consumption.

In Table 1, a comparison of the new contributions with previous studies is illustrated.

Table 1 Classification of different approaches for solving FSP problem

The remainder of this paper is organized as follows. The necessary concepts of mixed interval-valued fuzzy numbers (MIVFNs), an approximate technique for summing two MIVFNs and the comparison of two MIVFNs are described in the “Mixed interval-valued fuzzy numbers” section. In the “Modified ABC (MABC) algorithm for solving MIVFSP problem” section, we present the modified ABC algorithm for solving the MIVFSP problem in a mixed fuzzy directed graph. In the “Experiments and results” section, two applications of SP problems in WSNs are solved by the proposed algorithm and the obtained results are compared with those derived from GA and PSO. Finally, the “Conclusions” section concludes the paper.

Mixed interval-valued fuzzy numbers

The aim of this section is to review some essential definitions and preliminaries of mixed interval-valued fuzzy numbers [8, 16, 20, 24, 41].

Definition 1

A fuzzy set \(\tilde{A}_{{{\text{TrFN}}}}\), denoted by \(\tilde{A}_{{{\text{TrFN}}}} = (a_{1} ,a_{2} ,a_{3} ,a_{4} ;h)\), \(0 < h \le 1\), is called a level \(h\)-trapezoidal fuzzy number (TrFN) if its membership function is given as follows:

$$ \mu_{{\tilde{A}_{{{\text{TrFN}}}} }} (x) = \left\{ {\begin{array}{ll} {h\frac{{x - a_{1} }}{{a_{2} - a_{1} }},} \hfill & {a_{1} \le x \le a_{2} ,} \hfill \\ {h,} \hfill & {a_{2} \le x \le a_{3} ,} \hfill \\ {h\frac{{a_{4} - x}}{{a_{4} - a_{3} }},} \hfill & {a_{3} \le x \le a_{4} ,} \hfill \\ {0,} \hfill & {{\text{otherwise}}.} \hfill \\ \end{array} } \right. $$
(1)

We denote the set of all level \(h\)-TrFNs by \(F_{{{\text{TrFN}}}} (h)\).
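For concreteness, the membership function (1) can be evaluated pointwise. The following is a minimal Python sketch (the function name and argument layout are our own illustration, not part of the paper):

```python
def trfn_membership(x, a1, a2, a3, a4, h):
    """Membership value of a level h-trapezoidal fuzzy number (Eq. 1)."""
    if a1 <= x < a2:
        return h * (x - a1) / (a2 - a1)   # rising flank
    if a2 <= x <= a3:
        return h                          # flat core at height h
    if a3 < x <= a4:
        return h * (a4 - x) / (a4 - a3)   # falling flank
    return 0.0                            # outside the support
```

For example, for the lower TrFN of Example 1 below, \((16,20,26,30;0.7)\), the membership at \(x = 18\) is \(0.7 \cdot (18-16)/4 = 0.35\).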

Remark 1

The \(\alpha\)-cut for the level \(h\)-TrFN \(\tilde{A}_{{{\text{TrFN}}}} = (a_{1} ,a_{2} ,a_{3} ,a_{4} ;h)\), i.e., \(\left\{ {x \in R;\;\mu_{{\tilde{A}_{{{\text{TrFN}}}} }} (x) \ge \alpha } \right\}\) is given by

$$ \begin{aligned}\left[ {\tilde{A}_{{{\text{TrFN}}}} } \right]_{\alpha } = \left[ {\left[ {\tilde{A}_{{{\text{TrFN}}}} } \right]_{\alpha }^{l} ,\left[ {\tilde{A}_{{{\text{TrFN}}}} } \right]_{\alpha }^{r} } \right] \nonumber\\= \left[ {a_{1} + (a_{2} - a_{1} )\frac{\alpha }{h},\;a_{4} - (a_{4} - a_{3} )\frac{\alpha }{h}} \right]. \end{aligned}$$
(2)
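The endpoints in (2) are linear in \(\alpha\), so the \(\alpha\)-cut can be computed directly, as in this short sketch (our own illustration; valid for \(0 < \alpha \le h\)):

```python
def trfn_alpha_cut(a1, a2, a3, a4, h, alpha):
    """alpha-cut interval of a level h-TrFN (Eq. 2); requires 0 < alpha <= h."""
    left = a1 + (a2 - a1) * alpha / h     # left endpoint rises with alpha
    right = a4 - (a4 - a3) * alpha / h    # right endpoint falls with alpha
    return (left, right)
```

At \(\alpha = h\), the cut collapses to the core \([a_{2}, a_{3}]\).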

Definition 2

Let \(\tilde{A}_{{{\text{TrFN}}}}^{L} \in F_{{{\text{TrFN}}}} \left( {h^{L} } \right)\) and \(\tilde{A}_{{{\text{TrFN}}}}^{U} \in F_{{{\text{TrFN}}}} \left( {h^{U} } \right)\). A level \((h^{L} ,h^{U} )\)-interval-valued trapezoidal fuzzy number (IVTrFN) \(\tilde{\tilde{A}}_{{{\text{TrFN}}}}\), denoted by \(\tilde{\tilde{A}}_{{{\text{TrFN}}}} = \left[ {\tilde{A}_{{{\text{TrFN}}}}^{L} ,\tilde{A}_{{{\text{TrFN}}}}^{U} } \right] = \left\langle {\left( {a_{1}^{L} ,a_{2}^{L} ,a_{3}^{L} ,a_{4}^{L} ;h^{L} } \right),\left( {a_{1}^{U} ,a_{2}^{U} ,a_{3}^{U} ,a_{4}^{U} ;h^{U} } \right)} \right\rangle\), is an interval-valued fuzzy set (IVFS) on \({\mathbb{R}}\). The membership function of the lower TrFN \(\tilde{A}_{{{\text{TrFN}}}}^{L}\) is expressed as:

$$ \mu_{{\tilde{A}_{{{\text{TrFN}}}}^{L} }} (x) = \left\{ {\begin{array}{ll} {h^{L} \frac{{x - a_{1}^{L} }}{{a_{2}^{L} - a_{1}^{L} }},} \hfill & {a_{1}^{L} \le x \le a_{2}^{L} ,} \hfill \\ {h^{L} ,} \hfill & {a_{2}^{L} \le x \le a_{3}^{L} ,} \hfill \\ {h^{L} \frac{{a_{4}^{L} - x}}{{a_{4}^{L} - a_{3}^{L} }},} \hfill & {a_{3}^{L} \le x \le a_{4}^{L} ,} \hfill \\ {0,} \hfill & {{\text{otherwise}}.} \hfill \\ \end{array} } \right. $$
(3)

The membership function of the upper TrFN \(\tilde{A}_{{{\text{TrFN}}}}^{U}\) is expressed as:

$$ \mu_{{\tilde{A}_{{{\text{TrFN}}}}^{U} }} (x) = \left\{ {\begin{array}{ll} {h^{U} \frac{{x - a_{1}^{U} }}{{a_{2}^{U} - a_{1}^{U} }},} \hfill & {a_{1}^{U} \le x \le a_{2}^{U} ,} \hfill \\ {h^{U} ,} \hfill & {a_{2}^{U} \le x \le a_{3}^{U} ,} \hfill \\ {h^{U} \frac{{a_{4}^{U} - x}}{{a_{4}^{U} - a_{3}^{U} }},} \hfill & {a_{3}^{U} \le x \le a_{4}^{U} ,} \hfill \\ {0,} \hfill & {{\text{otherwise}}.} \hfill \\ \end{array} } \right. $$
(4)

Here, \(0 < h^{L} \le h^{U} \le 1\), \(a_{1}^{U} \le a_{1}^{L}\) and \(a_{4}^{L} \le a_{4}^{U}\) (Fig. 1).

Fig. 1
figure 1

The interval-valued trapezoidal fuzzy number \(\tilde{\tilde{A}}\)

We denote the set of \((h^{L} ,h^{U} )\)-IVTrFNs by \(F_{{{\text{IVTrFN}}}} \left( {h^{L} ,h^{U} } \right)\).

Example 1

\(\tilde{\tilde{A}}_{{{\text{TrFN}}}} = \left[ {\tilde{A}_{{{\text{TrFN}}}}^{L} ,\tilde{A}_{{{\text{TrFN}}}}^{U} } \right] = \left\langle {\left( {16,20,26,30;0.7} \right),\left( {14,18,28,32;0.9} \right)} \right\rangle\) is a level \((0.7,0.9)\)-interval-valued trapezoidal fuzzy number where the lower and upper membership functions are given as follows:

$$ \mu_{{\tilde{A}_{{{\text{TrFN}}}}^{L} }} (x) = \left\{ {\begin{array}{ll} {\frac{7}{10}\left( {\frac{x - 16}{4}} \right),} \hfill & {16 \le x \le 20,} \hfill \\ {\frac{7}{10},} \hfill & {20 \le x \le 26,} \hfill \\ {\frac{7}{10}\left( {\frac{30 - x}{4}} \right),} \hfill & {26 \le x \le 30,} \hfill \\ {0,} \hfill & {x < 16,\;x > 30.} \hfill \\ \end{array} } \right.,\quad \mu_{{\tilde{A}_{{{\text{TrFN}}}}^{U} }} (x) = \left\{ {\begin{array}{ll} {\frac{9}{10}\left( {\frac{x - 14}{4}} \right),} \hfill & {14 \le x \le 18,} \hfill \\ {\frac{9}{10},} \hfill & {18 \le x \le 28,} \hfill \\ {\frac{9}{10}\left( {\frac{32 - x}{4}} \right),} \hfill & {28 \le x \le 32,} \hfill \\ {0,} \hfill & {x < 14,\;x > 32.} \hfill \\ \end{array} } \right. $$

Definition 3

The summation of two \((h^{L} ,h^{U} )\)-IVTrFNs \(\tilde{\tilde{A}}_{{{\text{TrFN}}}} = \left[ {\tilde{A}_{{{\text{TrFN}}}}^{L} ,\tilde{A}_{{{\text{TrFN}}}}^{U} } \right] = \left\langle {\left( {a_{1}^{L} ,a_{2}^{L} ,a_{3}^{L} ,a_{4}^{L} ;h^{L} } \right),\left( {a_{1}^{U} ,a_{2}^{U} ,a_{3}^{U} ,a_{4}^{U} ;h^{U} } \right)} \right\rangle\) and \(\tilde{\tilde{B}}_{{{\text{TrFN}}}} = \left[ {\tilde{B}_{{{\text{TrFN}}}}^{L} ,\tilde{B}_{{{\text{TrFN}}}}^{U} } \right] = \left\langle {\left( {b_{1}^{L} ,b_{2}^{L} ,b_{3}^{L} ,b_{4}^{L} ;h^{L} } \right),\left( {b_{1}^{U} ,b_{2}^{U} ,b_{3}^{U} ,b_{4}^{U} ;h^{U} } \right)} \right\rangle\) is defined as follows:

$$ \left( {\tilde{\tilde{A}} \oplus \tilde{\tilde{B}}} \right)_{{{\text{TrFN}}}} = \left\langle {\left( {a_{1}^{L} + b_{1}^{L} ,a_{2}^{L} + b_{2}^{L} ,a_{3}^{L} + b_{3}^{L} ,a_{4}^{L} + b_{4}^{L} ;h^{L} } \right),\left( {a_{1}^{U} + b_{1}^{U} ,a_{2}^{U} + b_{2}^{U} ,a_{3}^{U} + b_{3}^{U} ,a_{4}^{U} + b_{4}^{U} ;h^{U} } \right)} \right\rangle . $$
(5)
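Since (5) is a componentwise sum of the lower and upper TrFNs, it translates directly into code. A minimal sketch (our own illustration; each operand is represented as a pair of 5-tuples \((a_{1},a_{2},a_{3},a_{4},h)\)):

```python
def ivtrfn_add(A, B):
    """Componentwise sum of two level (hL, hU)-IVTrFNs (Eq. 5).

    Each operand is ((a1, a2, a3, a4, hL), (a1, a2, a3, a4, hU));
    the two operands must share the same levels hL and hU.
    """
    (AL, AU), (BL, BU) = A, B
    assert AL[4] == BL[4] and AU[4] == BU[4], "levels must match"
    lower = tuple(AL[i] + BL[i] for i in range(4)) + (AL[4],)
    upper = tuple(AU[i] + BU[i] for i in range(4)) + (AU[4],)
    return (lower, upper)
```

Adding the IVTrFN of Example 1 to itself, for instance, doubles each endpoint while keeping the levels \((0.7, 0.9)\).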

Definition 4

A level \(k\)-normal fuzzy number (NFN) \(\tilde{A}_{{{\text{NFN}}}}\), denoted by \(\tilde{A}_{{{\text{NFN}}}} = (m,\sigma ;k)\), \(0 < k \le 1\), is a fuzzy set on \({\mathbb{R}}\) with the membership function as follows:

$$ \mu_{{\tilde{A}_{{{\text{NFN}}}} }} (x) = ke^{{ - \left( {\frac{x - m}{\sigma }} \right)^{2} }} ,\quad x \in {\mathbb{R}}. $$
(6)

We denote the set of all level \(k\)-NFNs by \(F_{{{\text{NFN}}}} (k)\).

Remark 2

The \(\alpha\)-cut for the level \(k\)-normal fuzzy number \(\tilde{A}_{{{\text{NFN}}}} = (m,\sigma ;k)\) is given by

$$ \left[ {\tilde{A}_{{{\text{NFN}}}} } \right]_{\alpha } = \left[ {\left[ {\tilde{A}_{{{\text{NFN}}}} } \right]_{\alpha }^{l} ,\left[ {\tilde{A}_{{{\text{NFN}}}} } \right]_{\alpha }^{r} } \right] = \left[ {m - \sigma \sqrt { - \ln \frac{\alpha }{k}} ,m + \sigma \sqrt { - \ln \frac{\alpha }{k}} } \right]. $$
(7)
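Formula (7) can likewise be evaluated directly; note that it is defined only for \(0 < \alpha \le k\). A minimal sketch (our own illustration):

```python
import math

def nfn_alpha_cut(m, sigma, k, alpha):
    """alpha-cut of a level k-normal fuzzy number (Eq. 7); requires 0 < alpha <= k."""
    r = sigma * math.sqrt(-math.log(alpha / k))  # half-width of the cut
    return (m - r, m + r)
```

At \(\alpha = k\), the cut degenerates to the single point \(m\); as \(\alpha \to 0^{+}\), the cut widens without bound.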

Definition 5

Let \(\tilde{A}_{{{\text{NFN}}}}^{L} \in F_{{{\text{NFN}}}} (k^{L} )\) and \(\tilde{A}_{{{\text{NFN}}}}^{U} \in F_{{{\text{NFN}}}} (k^{U} )\). A level \((k^{L} ,k^{U} )\)-interval-valued normal fuzzy number (IVNFN) \(\tilde{\tilde{A}}_{{{\text{NFN}}}}\), denoted by \(\tilde{\tilde{A}}_{{{\text{NFN}}}} = \left[ {\tilde{A}_{{{\text{NFN}}}}^{L} ,\tilde{A}_{{{\text{NFN}}}}^{U} } \right] = \left\langle {(m,\sigma^{L} ;k^{L} ),(m,\sigma^{U} ;k^{U} )} \right\rangle\), is an IVFS on \({\mathbb{R}}\). In this case, the membership function of the lower NFN \(\tilde{A}_{{{\text{NFN}}}}^{L}\) is given as

$$ \mu_{{\tilde{A}_{{{\text{NFN}}}}^{L} }} (x) = k^{L} e^{{ - \left( {\frac{x - m}{{\sigma^{L} }}} \right)^{2} }} ,\quad x \in {\mathbb{R}}. $$
(8)

The membership function of the upper NFN \(\tilde{A}_{{{\text{NFN}}}}^{U}\) is given as

$$ \mu_{{\tilde{A}_{{{\text{NFN}}}}^{U} }} (x) = k^{U} e^{{ - \left( {\frac{x - m}{{\sigma^{U} }}} \right)^{2} }} ,\quad x \in {\mathbb{R}}. $$
(9)

Here, \(0 < k^{L} \le k^{U} \le 1\) and \(\sigma^{L} \le \sigma^{U}\) (Fig. 2).

Fig. 2
figure 2

The interval-valued normal fuzzy number

In a similar way, \(F_{{{\text{IVNFN}}}} (k^{L} ,k^{U} )\) denotes the set of all level \((k^{L} ,k^{U} )\)-IVNFNs.

Example 2

\(\tilde{\tilde{A}}_{{{\text{NFN}}}} = \left[ {\tilde{A}_{{{\text{NFN}}}}^{L} ,\tilde{A}_{{{\text{NFN}}}}^{U} } \right] = \left\langle {(5,7;0.6),(5,10;0.8)} \right\rangle\) is a level \((0.6,0.8)\)-interval-valued normal fuzzy number where the lower and upper membership functions are given as follows:

$$ \mu_{{\tilde{A}_{{{\text{NFN}}}}^{L} }} (x) = \frac{6}{10}e^{{ - \left( {\frac{x - 5}{7}} \right)^{2} }} ,\;\mu_{{\tilde{A}_{{{\text{NFN}}}}^{U} }} (x) = \frac{8}{10}e^{{ - \left( {\frac{x - 5}{{10}}} \right)^{2} }} ,\quad x \in {\mathbb{R}} $$

Definition 6

The extended addition on \(\tilde{\tilde{A}}_{{{\text{NFN}}}} = \left[ {\tilde{A}_{{{\text{NFN}}}}^{L} ,\tilde{A}_{{{\text{NFN}}}}^{U} } \right] = \left\langle {(m_{1} ,\sigma_{1}^{L} ;k^{L} ),(m_{1} ,\sigma_{1}^{U} ;k^{U} )} \right\rangle\) and \(\tilde{\tilde{B}}_{{{\text{NFN}}}} = \left[ {\tilde{B}_{{{\text{NFN}}}}^{L} ,\tilde{B}_{{{\text{NFN}}}}^{U} } \right] = \left\langle {(m_{2} ,\sigma_{2}^{L} ;k^{L} ),(m_{2} ,\sigma_{2}^{U} ;k^{U} )} \right\rangle\) is defined as follows:

$$ \left( {\tilde{\tilde{A}} \oplus \tilde{\tilde{B}}} \right)_{{{\text{NFN}}}} = \left\langle {(m_{1} + m_{2} ,\sigma_{1}^{L} + \sigma_{2}^{L} ;k^{L} ),(m_{1} + m_{2} ,\sigma_{1}^{U} + \sigma_{2}^{U} ;k^{U} )} \right\rangle . $$
(10)

Remark 3

The \(\alpha\)-cut sum of the level \(h\)-TrFN \(\tilde{A}_{{{\text{TrFN}}}} = (a_{1} ,a_{2} ,a_{3} ,a_{4} ;h)\) and the level \(k\)-NFN \(\tilde{B}_{{{\text{NFN}}}} = (m,\sigma ;k)\), denoted by \(\left[ {\tilde{C}} \right]_{{\alpha_{i} }} = \left[ {\left[ {\tilde{C}} \right]_{{\alpha_{i} }}^{l} ,\left[ {\tilde{C}} \right]_{{\alpha_{i} }}^{r} } \right]\), is equal to the sum of their \(\alpha\)-cuts:

$$ \left[ {\tilde{C}} \right]_{{\alpha_{i} }} = \left[ {\left[ {\tilde{C}} \right]_{{\alpha_{i} }}^{l} ,\left[ {\tilde{C}} \right]_{{\alpha_{i} }}^{r} } \right] = \left[ {\tilde{A}_{{{\text{TrFN}}}} } \right]_{{\alpha_{i} }} + \left[ {\tilde{B}_{{{\text{NFN}}}} } \right]_{{\alpha_{i} }} . $$
(11)

Assuming \(\left[ {\tilde{A}_{{{\text{TrFN}}}} } \right]_{{\alpha_{i} }} = \left[ {\left[ {\tilde{A}_{{{\text{TrFN}}}} } \right]_{{\alpha_{i} }}^{l} ,\left[ {\tilde{A}_{{{\text{TrFN}}}} } \right]_{{\alpha_{i} }}^{r} } \right]\) and \(\left[ {\tilde{B}_{{{\text{NFN}}}} } \right]_{{\alpha_{i} }} = \left[ {\left[ {\tilde{B}_{{{\text{NFN}}}} } \right]_{{\alpha_{i} }}^{l} ,\left[ {\tilde{B}_{{{\text{NFN}}}} } \right]_{{\alpha_{i} }}^{r} } \right]\) are the \(\alpha\)-cuts of \(\tilde{A}_{{{\text{TrFN}}}}\) and \(\tilde{B}_{{{\text{NFN}}}}\), respectively, formulation (11) is given as follows:

$$ \left[ {\tilde{C}} \right]_{{\alpha_{i} }} = \left[ {\left[ {\tilde{A}_{{{\text{TrFN}}}} } \right]_{{\alpha_{i} }}^{l} ,\left[ {\tilde{A}_{{{\text{TrFN}}}} } \right]_{{\alpha_{i} }}^{r} } \right] + \left[ {\left[ {\tilde{B}_{{{\text{NFN}}}} } \right]_{{\alpha_{i} }}^{l} ,\left[ {\tilde{B}_{{{\text{NFN}}}} } \right]_{{\alpha_{i} }}^{r} } \right] = \left[ {\left[ {\tilde{A}_{{{\text{TrFN}}}} } \right]_{{\alpha_{i} }}^{l} + \left[ {\tilde{B}_{{{\text{NFN}}}} } \right]_{{\alpha_{i} }}^{l} ,\left[ {\tilde{A}_{{{\text{TrFN}}}} } \right]_{{\alpha_{i} }}^{r} + \left[ {\tilde{B}_{{{\text{NFN}}}} } \right]_{{\alpha_{i} }}^{r} } \right]. $$
(12)

Regarding Eqs. (2) and (7), formulation (12) is simplified as follows:

$$ \left[ {\tilde{C}} \right]_{{\alpha_{i} }} = \left[ {\left[ {\tilde{C}} \right]_{{\alpha_{i} }}^{l} ,\left[ {\tilde{C}} \right]_{{\alpha_{i} }}^{r} } \right] = \left[ {(a_{2} - a_{1} )\frac{{\alpha_{i} }}{h} + a_{1} + m - \sigma \sqrt { - \ln \frac{{\alpha_{i} }}{k}} ,a_{4} - (a_{4} - a_{3} )\frac{{\alpha_{i} }}{h} + m + \sigma \sqrt { - \ln \frac{{\alpha_{i} }}{k}} } \right]. $$
(13)
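Equation (13) combines the endpoint formulas (2) and (7), so the \(\alpha\)-cut of the mixed sum is obtained in closed form. A minimal sketch (our own illustration; valid for \(0 < \alpha \le \min\{h, k\}\)):

```python
import math

def mixed_sum_alpha_cut(trfn, nfn, alpha):
    """alpha-cut of the sum of a level h-TrFN and a level k-NFN (Eq. 13).

    trfn = (a1, a2, a3, a4, h), nfn = (m, sigma, k); requires 0 < alpha <= min(h, k).
    """
    a1, a2, a3, a4, h = trfn
    m, sigma, k = nfn
    r = sigma * math.sqrt(-math.log(alpha / k))   # NFN half-width (Eq. 7)
    left = a1 + (a2 - a1) * alpha / h + m - r     # sum of left endpoints
    right = a4 - (a4 - a3) * alpha / h + m + r    # sum of right endpoints
    return (left, right)
```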

Now we are in a position to describe an approximate approach for summing two IVFNs with different membership functions, i.e., a \((h^{L} ,h^{U} )\)-IVTrFN and a \((k^{L} ,k^{U} )\)-IVNFN. The proposed approach is a direct extension of the method of Tajdin et al. [39] for summing a trapezoidal fuzzy number and a normal one. In what follows, the summation of the \((h^{L} ,h^{U} )\)-IVTrFN \(\tilde{\tilde{A}}_{{{\text{TrFN}}}} = \left\langle {(a_{1}^{L} ,a_{2}^{L} ,a_{3}^{L} ,a_{4}^{L} ;h^{L} ),(a_{1}^{U} ,a_{2}^{U} ,a_{3}^{U} ,a_{4}^{U} ;h^{U} )} \right\rangle\) and the \((k^{L} ,k^{U} )\)-IVNFN \(\tilde{\tilde{A}}_{{{\text{NFN}}}} = \left\langle {(m,\sigma^{L} ;k^{L} ),(m,\sigma^{U} ;k^{U} )} \right\rangle\) is approximated by an interval-valued exponential fuzzy number (IVEFN) using \(\alpha\)-cut points.

The upper membership function of the IVEFN \(\tilde{\tilde{C}} = \left[ {\tilde{C}^{L} ,\tilde{C}^{U} } \right]\), i.e., \(\mu_{{\tilde{C}^{U} }} (x)\), is given as

$$ \mu_{{\tilde{C}^{U} }} (x) = \left\{ {\begin{array}{*{20}l} {\mu_{{\tilde{C}^{U} }}^{l} (x) = t^{U} e^{{ - \left( {\frac{{m_{1}^{U} - x}}{{\sigma_{1}^{U} }}} \right)^{2} }} ,} \hfill & {x < m_{1}^{U} ,} \hfill \\ {t^{U} = \min \left\{ {h^{U} ,k^{U} } \right\},} \hfill & {m_{1}^{U} \le x \le m_{2}^{U} ,} \hfill \\ {\mu_{{\tilde{C}^{U} }}^{r} (x) = t^{U} e^{{ - \left( {\frac{{x - m_{2}^{U} }}{{\sigma_{2}^{U} }}} \right)^{2} }} ,} \hfill & {x > m_{2}^{U} .} \hfill \\ \end{array} } \right. $$
(14)
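The shape in (14) is an exponential curve on each flank with a flat core at height \(t^{U}\) between \(m_{1}^{U}\) and \(m_{2}^{U}\). A minimal Python sketch of this piecewise membership function (our own illustration; the parameter names mirror Eq. 14):

```python
import math

def ivefn_upper_membership(x, m1, m2, s1, s2, t):
    """Upper membership of the approximating IVEFN (Eq. 14):
    exponential flanks around a flat core [m1, m2] of height t = min(hU, kU)."""
    if x < m1:
        return t * math.exp(-((m1 - x) / s1) ** 2)   # left flank
    if x <= m2:
        return t                                     # flat core
    return t * math.exp(-((x - m2) / s2) ** 2)       # right flank
```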

Let \(x_{i}^{U} = \left[ {\tilde{C}^{U} } \right]_{{\alpha_{i} }}^{l}\) and \(y_{i}^{U} = \mu \left( {\left[ {\tilde{C}^{U} } \right]_{{\alpha_{i} }}^{l} } \right)\). To find the parameters \((m_{1}^{U} ,\sigma_{1}^{U} )\) of the left membership function of (14), we first choose \(n\) points \(\left( {x_{i}^{U} ,y_{i}^{U} } \right)\), where \(\alpha_{i} = \alpha_{i - 1} + \frac{1}{n}\), \(i = 1,2, \ldots ,n\) (with \(\alpha_{0} = 0\)), and then consider the fitting model \(y = e^{{ - \left( {\frac{{m_{1}^{U} - x}}{{\sigma_{1}^{U} }}} \right)^{2} }}\). To do this, we have

$$ \ln y_{i}^{U} = - \left( {\frac{{m_{1}^{U} - x_{i}^{U} }}{{\sigma_{1}^{U} }}} \right)^{2} \Rightarrow - \ln y_{i}^{U} = \left( {\frac{{m_{1}^{U} - x_{i}^{U} }}{{\sigma_{1}^{U} }}} \right)^{2} \Rightarrow \sqrt { - \ln y_{i}^{U} } = \left( {\frac{{m_{1}^{U} - x_{i}^{U} }}{{\sigma_{1}^{U} }}} \right) \Rightarrow - \sigma_{1}^{U} \sqrt { - \ln y_{i}^{U} } + m_{1}^{U} = x_{i}^{U} . $$
(15)

In this case, the following linear least squares model is defined for the minimization of error:

$$ \min E^{U} = \sum\limits_{i = 1}^{n} {\left( { - \sigma_{1}^{U} \sqrt { - \ln y_{i}^{U} } + m_{1}^{U} - x_{i}^{U} } \right)}^{2} . $$
(16)

To solve (16), it is needed to have

$$ \begin{aligned} \frac{{\partial E^{U} }}{{\partial \sigma_{1}^{U} }} & = \sum\limits_{i = 1}^{n} {\left[ {\left( { - 2\sqrt { - \ln y_{i}^{U} } } \right)\left( { - \sigma_{1}^{U} \sqrt { - \ln y_{i}^{U} } + m_{1}^{U} - x_{i}^{U} } \right)} \right]} = 0, \\ \frac{{\partial E^{U} }}{{\partial m_{1}^{U} }} & = \sum\limits_{i = 1}^{n} {\left[ {2\left( { - \sigma_{1}^{U} \sqrt { - \ln y_{i}^{U} } + m_{1}^{U} - x_{i}^{U} } \right)} \right]} = 0. \\ \end{aligned} $$
(17)

Hence, the following normal equations are solved for the unknown parameters \(m_{1}^{U}\) and \(\sigma_{1}^{U}\):

$$ \begin{aligned} & \sigma_{1}^{U} \sum\limits_{i = 1}^{n} {\ln y_{i}^{U} } + m_{1}^{U} \sum\limits_{i = 1}^{n} {\sqrt { - \ln y_{i}^{U} } } = \sum\limits_{i = 1}^{n} {x_{i}^{U} \sqrt { - \ln y_{i}^{U} } } , \\ & - \sigma_{1}^{U} \sum\limits_{i = 1}^{n} {\sqrt { - \ln y_{i}^{U} } } + nm_{1}^{U} = \sum\limits_{i = 1}^{n} {x_{i}^{U} } . \\ \end{aligned} $$
(18)

Therefore, the parameters \(m_{1}^{U}\) and \(\sigma_{1}^{U}\) are given as follows:

$$ m_{1}^{U} = \frac{{\sum\nolimits_{i} {\ln y_{i}^{U} } \times \sum\nolimits_{i} {x_{i}^{U} } + \sum\nolimits_{i} {\left( {x_{i}^{U} \times \sqrt { - \ln y_{i}^{U} } } \right)} \times \sum\nolimits_{i} {\sqrt { - \ln y_{i}^{U} } } }}{{n\sum\nolimits_{i} {\ln y_{i}^{U} } + \sum\nolimits_{i} {\sqrt { - \ln y_{i}^{U} } } \times \sum\nolimits_{i} {\sqrt { - \ln y_{i}^{U} } } }}. $$
(19)
$$ \sigma_{1}^{U} = \frac{{ - n\sum\nolimits_{i} {\left( {x_{i}^{U} \times \,\sqrt { - \ln y_{i}^{U} } } \right)} + \sum\nolimits_{i} {\sqrt { - \ln y_{i}^{U} } } \times \sum\nolimits_{i} {x_{i}^{U} } }}{{ - n\sum\nolimits_{i} {\ln y_{i}^{U} } - \sum\nolimits_{i} {\sqrt { - \ln y_{i}^{U} } } \times \sum\nolimits_{i} {\sqrt { - \ln y_{i}^{U} } } }}. $$
(20)
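The least-squares fit behind (16)–(20) can be checked numerically. The sketch below (our own illustration; `fit_left_branch` is a hypothetical helper) solves the normal equations for the left-branch model \(x \approx m - \sigma \sqrt{-\ln y}\) by Cramer's rule, which is algebraically equivalent to formulas (19) and (20):

```python
import math

def fit_left_branch(points):
    """Least-squares fit of x_i ~ m - sigma*sqrt(-ln y_i) (Eqs. 16-20).

    points: list of (x_i, y_i) pairs with 0 < y_i < 1; returns (m, sigma).
    """
    n = len(points)
    s = [math.sqrt(-math.log(y)) for _, y in points]       # s_i = sqrt(-ln y_i)
    Ss = sum(s)
    Sx = sum(x for x, _ in points)
    Sss = sum(t * t for t in s)                            # = -sum(ln y_i)
    Sxs = sum(x * t for (x, _), t in zip(points, s))
    det = n * Sss - Ss * Ss                                # Cramer determinant
    m = (Sss * Sx - Ss * Sxs) / det
    sigma = (Ss * Sx - n * Sxs) / det
    return m, sigma
```

On data generated exactly from the model, the fit recovers the true parameters.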

Now assume \(\overline{x}_{i}^{U} = \left[ {\tilde{C}^{U} } \right]_{{\alpha_{i} }}^{r}\) and \(\overline{y}_{i}^{U} = \mu \left( {\left[ {\tilde{C}^{U} } \right]_{{\alpha_{i} }}^{r} } \right)\). In a similar way, to obtain the parameters \((m_{2}^{U} ,\sigma_{2}^{U} )\) of the right membership function of (14), we first choose \(n\) points \(\left( {\overline{x}_{i}^{U} ,\overline{y}_{i}^{U} } \right)\) where \(\alpha_{i} = \alpha_{i - 1} + \frac{1}{n},\quad i = 1,2, \ldots ,n\) and then consider the fitting model as \(y = e^{{ - \left( {\frac{{m_{2}^{U} - x}}{{\sigma_{2}^{U} }}} \right)^{2} }}\). The same approach gives the following parameters:

$$ m_{2}^{U} = \frac{{\sum\nolimits_{i} {\ln \overline{y}_{i}^{U} } \times \sum\nolimits_{i} {\overline{x}_{i}^{U} } + \sum\nolimits_{i} {\left( {\overline{x}_{i}^{U} \times \sqrt { - \ln \overline{y}_{i}^{U} } } \right) \times } \sum\nolimits_{i} {\sqrt { - \ln \overline{y}_{i}^{U} } } }}{{n\sum\nolimits_{i} {\ln \overline{y}_{i}^{U} } + \sum\nolimits_{i} {\sqrt { - \ln \overline{y}_{i}^{U} } } \times \sum\nolimits_{i} {\sqrt { - \ln \overline{y}_{i}^{U} } } }}, $$
(21)
$$ \sigma_{2}^{U} = \frac{{n\sum\nolimits_{i} {\left( {\overline{x}_{i}^{U} \times \sqrt { - \ln \overline{y}_{i}^{U} } } \right)} - \sum\nolimits_{i} {\sqrt { - \ln \overline{y}_{i}^{U} } } \times \sum\nolimits_{i} {\overline{x}_{i}^{U} } }}{{ - n\sum\nolimits_{i} {\ln \overline{y}_{i}^{U} } - \sum\nolimits_{i} {\sqrt { - \ln \overline{y}_{i}^{U} } } \times \sum\nolimits_{i} {\sqrt { - \ln \overline{y}_{i}^{U} } } }}. $$
(22)

Moreover, the lower membership function of the IVEFN \(\tilde{\tilde{C}} = \left[ {\tilde{C}^{L} ,\tilde{C}^{U} } \right]\), i.e., \(\mu_{{\tilde{C}^{L} }} (x)\), is given as

$$ \mu_{{\tilde{C}^{L} }} (x) = \left\{ {\begin{array}{ll} {\mu_{{\tilde{C}^{L} }}^{l} (x) = t^{L} e^{{ - \left( {\frac{{m_{1}^{L} - x}}{{\sigma_{1}^{L} }}} \right)^{2} }} ,} \hfill & {x < m_{1}^{L} ,} \hfill \\ {t^{L} = \min \left\{ {h^{L} ,k^{L} } \right\},} \hfill & {m_{1}^{L} \le x \le m_{2}^{L} ,} \hfill \\ {\mu_{{\tilde{C}^{L} }}^{r} (x) = t^{L} e^{{ - \left( {\frac{{x - m_{2}^{L} }}{{\sigma_{2}^{L} }}} \right)^{2} }} ,} \hfill & {x > m_{2}^{L} .} \hfill \\ \end{array} } \right. $$
(23)

Assume \(x_{i}^{L} = \left[ {\tilde{C}^{L} } \right]_{{\alpha_{i} }}^{l}\) and \(y_{i}^{L} = \mu \left( {\left[ {\tilde{C}^{L} } \right]_{{\alpha_{i} }}^{l} } \right)\). The same approach gives the parameters \((m_{1}^{L} ,\sigma_{1}^{L} )\) as follows:

$$ m_{1}^{L} = \frac{{\sum\nolimits_{i} {\ln y_{i}^{L} } \times \sum\nolimits_{i} {x_{i}^{L} } + \sum\nolimits_{i} {\left( {x_{i}^{L} \times \sqrt { - \ln y_{i}^{L} } } \right)} \times \sum\nolimits_{i} {\sqrt { - \ln y_{i}^{L} } } }}{{n\sum\nolimits_{i} {\ln y_{i}^{L} } + \sum\nolimits_{i} {\sqrt { - \ln y_{i}^{L} } } \times \sum\nolimits_{i} {\sqrt { - \ln y_{i}^{L} } } }}, $$
(24)
$$ \sigma_{1}^{L} = \frac{{ - n\sum\nolimits_{i} {\left( {x_{i}^{L} \times \sqrt { - \ln y_{i}^{L} } } \right)} + \sum\nolimits_{i} {\sqrt { - \ln y_{i}^{L} } } \times \sum\nolimits_{i} {x_{i}^{L} } }}{{ - n\sum\nolimits_{i} {\ln y_{i}^{L} } - \sum\nolimits_{i} {\sqrt { - \ln y_{i}^{L} } } \times \sum\nolimits_{i} {\sqrt { - \ln y_{i}^{L} } } }}. $$
(25)

Now, let \(\overline{x}_{i}^{L} = \left[ {\tilde{C}^{L} } \right]_{{\alpha_{i} }}^{r}\) and \(\overline{y}_{i}^{L} = \mu \left( {\left[ {\tilde{C}^{L} } \right]_{{\alpha_{i} }}^{r} } \right)\). The same approach gives the parameters \((m_{2}^{L} ,\sigma_{2}^{L} )\) as follows:

$$ m_{2}^{L} = \frac{{\sum\nolimits_{i} {\ln \overline{y}_{i}^{L} } \times \sum\nolimits_{i} {\overline{x}_{i}^{L} } + \sum\nolimits_{i} {\left( {\overline{x}_{i}^{L} \times \sqrt { - \ln \overline{y}_{i}^{L} } } \right)} \times \sum\nolimits_{i} {\sqrt { - \ln \overline{y}_{i}^{L} } } }}{{n\sum\nolimits_{i} {\ln \overline{y}_{i}^{L} } + \sum\nolimits_{i} {\sqrt { - \ln \overline{y}_{i}^{L} } } \times \sum\nolimits_{i} {\sqrt { - \ln \overline{y}_{i}^{L} } } }}, $$
(26)
$$ \sigma_{2}^{L} = \frac{{n\sum\nolimits_{i} {\left( {\overline{x}_{i}^{L} \times \sqrt { - \ln \overline{y}_{i}^{L} } } \right)} - \sum\nolimits_{i} {\sqrt { - \ln \overline{y}_{i}^{L} } } \times \sum\nolimits_{i} {\overline{x}_{i}^{L} } }}{{ - n\sum\nolimits_{i} {\ln \overline{y}_{i}^{L} } - \sum\nolimits_{i} {\sqrt { - \ln \overline{y}_{i}^{L} } } \times \sum\nolimits_{i} {\sqrt { - \ln \overline{y}_{i}^{L} } } }}. $$
(27)

Now, we present an approach for comparing two IVFNs. The \(D_{p,q}\)-distance between two fuzzy numbers \(\tilde{M}\) and \(\tilde{N}\) with corresponding \(\alpha_{i}\)-cuts is approximately given as [39]:

$$ D_{p,q} (\tilde{M},\tilde{N}) = \left[ {(1 - q)\sum\limits_{i = 1}^{n} {\left| {\left[ {\tilde{M}} \right]_{{\alpha_{i} }}^{l} - \left[ {\tilde{N}} \right]_{{\alpha_{i} }}^{l} } \right|^{p} } + q\sum\limits_{i = 1}^{n} {\left| {\left[ {\tilde{M}} \right]_{{\alpha_{i} }}^{r} - \left[ {\tilde{N}} \right]_{{\alpha_{i} }}^{r} } \right|^{p} } } \right]^{\frac{1}{p}} . $$
(28)

In the case that \(p = 2\) and \(q = \frac{1}{2}\), the following formula is obtained:

$$ D_{{2,\frac{1}{2}}} (\tilde{M},\,\tilde{N}) = \sqrt {\,\left[ {\frac{1}{2}\,\sum\limits_{i = 1}^{n} {\left| {\left[ {\tilde{M}} \right]_{{\alpha_{i} }}^{l} - \left[ {\tilde{N}} \right]_{{\alpha_{i} }}^{l} } \right|^{2} } + \frac{1}{2}\sum\limits_{i = 1}^{n} {\left| {\left[ {\tilde{M}} \right]_{{\alpha_{i} }}^{r} - \left[ {\tilde{N}} \right]_{{\alpha_{i} }}^{r} } \right|^{2} } } \right]} . $$
(29)
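Given lists of sampled \(\alpha_{i}\)-cut endpoints for the two fuzzy numbers, (29) is a weighted root-mean-square over left and right endpoints. A minimal sketch (our own illustration):

```python
import math

def d_half(M_cuts, N_cuts):
    """Approximate D_{2,1/2} distance (Eq. 29) from sampled alpha-cuts.

    M_cuts, N_cuts: lists of (left, right) endpoint pairs at matching alpha_i.
    """
    sl = sum((ml - nl) ** 2 for (ml, _), (nl, _) in zip(M_cuts, N_cuts))
    sr = sum((mr - nr) ** 2 for (_, mr), (_, nr) in zip(M_cuts, N_cuts))
    return math.sqrt(0.5 * sl + 0.5 * sr)
```

Identical cut lists give distance zero; shifting every endpoint by a constant \(c\) over a single cut gives distance \(|c|\).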

Hence, for two IVFNs \(\tilde{\tilde{M}}\) and \(\tilde{\tilde{N}}\), the \(D_{p,q}\)-distance with the same parameters is given as follows:

$$ D_{{2,\frac{1}{2}}} (\tilde{\tilde{M}},\tilde{\tilde{N}}) = D_{{2,\frac{1}{2}}} (\tilde{M}^{L} ,\tilde{N}^{L} ) + D_{{2,\frac{1}{2}}} (\tilde{M}^{U} ,\tilde{N}^{U} ), $$
(30)

where \(D_{{2,\frac{1}{2}}} (\tilde{M}^{L} ,\,\tilde{N}^{L} )\) and \(D_{{2,\frac{1}{2}}} (\tilde{M}^{U} ,\,\tilde{N}^{U} )\) are given by (29).

Remark 4

The \(D_{p,q}\)-distance (30) is used to order MIVFNs; that is, \(\tilde{\tilde{M}}\underline { \prec } \tilde{\tilde{N}}\) if and only if \(D_{{2,\frac{1}{2}}} (\tilde{\tilde{M}},\tilde{\tilde{0}}) \le D_{{2,\frac{1}{2}}} (\tilde{\tilde{N}},\tilde{\tilde{0}})\).

Remark 5

The fuzzy min operation over \(p\) MIVFNs \(\tilde{\tilde{M}}_{i}\) returns the MIVFN closest to fuzzy zero, i.e., \(\mathop {\min }\nolimits_{1 \le i \le p} \tilde{\tilde{M}}_{i} = \tilde{\tilde{M}}_{{i^{*} }}\) where \(i^{*} = \arg \mathop {\min }\nolimits_{1 \le i \le p} D_{{2,\frac{1}{2}}} (\tilde{\tilde{M}}_{i} ,\tilde{\tilde{0}})\).
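Operationally, this ranking reduces to an argmin over scalar distances to fuzzy zero. A minimal sketch (our own illustration; `dist_to_zero` stands for any routine computing \(D_{2,\frac{1}{2}}(\tilde{\tilde{M}}, \tilde{\tilde{0}})\)):

```python
def fuzzy_min(numbers, dist_to_zero):
    """Fuzzy min over a list of MIVFNs (Remark 5): return the number whose
    D_{2,1/2}-distance to fuzzy zero (computed by dist_to_zero) is smallest."""
    return min(numbers, key=dist_to_zero)
```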

Modified ABC (MABC) algorithm for solving MIVFSP problem

In the case that arc weights are expressed by MIVFNs, the weight of each path between any two nodes is a MIVFN too. The SP problem in such a network is called the mixed interval-valued FSP (MIVFSP) problem. In this section, we modify the ABC algorithm to solve this problem.

ABC algorithm

Karaboga [30] proposed the ABC algorithm, a population-based heuristic inspired by the way a honeybee colony searches for food sources, for solving numerical optimization problems. In this algorithm, each solution is considered a food source, and the fitness of the solution is determined by the amount of nectar of that source. The algorithm aims to discover the best available food source through the bees’ search over different iterations. Three different categories of bees are used.

  • The employed bees: each bee in this category searches around its assigned food source.

  • The onlooker bees: the bees in this category wait in the dance area for information from the employed bees and then decide which food source to visit for further search.

  • The scout bees: the bees in this category search the entire problem space randomly for new food sources.

There are three control parameters (SN, MCN, and limit) in this algorithm that need to be set in accordance with the problem specification. The parameter SN is the number of food sources (population size), and the parameter D is the dimension of the problem. The parameter MCN specifies the maximum number of iterations. The limit parameter determines when an employed bee becomes a scout bee: if an employed bee cannot locate a better source in the local search around a food source over a number of successive iterations given by limit, it abandons that food source and seeks a new one in the problem space. The flowchart of the ABC algorithm is given in Fig. 3.

Fig. 3
figure 3

Flowchart of the ABC algorithm

Initially, the algorithm sets the values of the control parameters and randomly generates the food sources (initial population) as an SN × D matrix. Then the search process of each category of bees is repeated in a loop: in each iteration, the employed bees search first; then, based on their results, the onlooker bees operate; finally, the scout bees act.

In the initial population, the colony is equally divided between the employed bees and the onlooker bees, so the number of employed bees and the number of onlooker bees each equal the number of solutions in the colony. In each iteration, each employed bee performs a local search around its food source. If the bee finds a better food source, it abandons the previous source and works on the new one. Otherwise, it discards the new source and increments the limit counter of the current source by one.

After finishing their search, the employed bees return to the hive and share the resulting information, namely the fitness value and the position of each food source, in the dance area with the onlooker bees. Each onlooker bee selects a food source for a local search using a probability function: a food source with a greater fitness value has a higher chance of being selected. The onlooker bee then performs a local search around the chosen food source, just as an employed bee does.

After the activity of the onlooker bees, the limit counter of each food source is compared with the threshold value. If the counter exceeds the threshold, the employed bee corresponding to that food source becomes a scout bee, which selects a new food source randomly by moving through the search space. The iterations of the algorithm continue until the termination condition is reached [19].
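The control flow described above can be summarized in a short sketch. This is a schematic outline of the generic ABC loop, not the authors' implementation; `random_solution`, `local_search`, and `fitness` are placeholder hooks, and fitness is assumed positive (higher is better).

```python
# Schematic ABC loop: employed, onlooker and scout phases.
# `random_solution`, `local_search` and `fitness` are placeholder hooks.
import random

def abc_search(random_solution, local_search, fitness, SN=20, MCN=30, limit=5):
    sources = [random_solution() for _ in range(SN)]
    trials = [0] * SN                      # per-source failure counters
    best = max(sources, key=fitness)
    for _ in range(MCN):
        # Employed-bee phase: local search around every food source.
        for i in range(SN):
            cand = local_search(sources[i])
            if fitness(cand) > fitness(sources[i]):
                sources[i], trials[i] = cand, 0
            else:
                trials[i] += 1
        # Onlooker-bee phase: fitness-proportional source selection.
        total = sum(fitness(s) for s in sources)
        for _ in range(SN):
            i = random.choices(range(SN),
                               weights=[fitness(s) / total for s in sources])[0]
            cand = local_search(sources[i])
            if fitness(cand) > fitness(sources[i]):
                sources[i], trials[i] = cand, 0
            else:
                trials[i] += 1
        # Scout-bee phase: abandon sources that exceeded the limit.
        for i in range(SN):
            if trials[i] > limit:
                sources[i], trials[i] = random_solution(), 0
        best = max(best, *sources, key=fitness)
    return best
```

The three phases map directly onto the three bee categories, and the `trials` counters implement the limit mechanism described above.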

Several authors have used the ABC algorithm for solving optimization problems [37, 48]. In what follows, we solve the MIVFSP problem by means of the ABC algorithm and explain each of its steps. To apply the algorithm to the discrete MIVFSP problem, we need to modify its initial version. For the search of the employed and onlooker bees, we use the mutation operator introduced by Ebrahimnejad et al. [19].

MABC algorithm for finding MIVFSP

This section explains (1) how the initial population is produced, (2) how the bees operate and (3) how the fitness is calculated for this problem.

MABC colony initialization

To start the MABC algorithm for solving the MIVFSP problem, an initial colony consisting of several solutions must be generated. Each path between the source and destination nodes of the network under consideration is regarded as a solution. Paths are generated by use of the network's vicinity matrix: after finding this matrix, the initial colony is generated by Algorithm 1 [28]. In this algorithm, \(a\) and \(n\) denote the vicinity matrix and the destination node, respectively. Figure 4 illustrates a simple graph with seven nodes and its vicinity matrix, and Table 2 gives three solutions (paths).

Fig. 4
figure 4

A network with its vicinity matrix

Table 2 Three random paths
figure a

In this algorithm, q denotes the number of generated paths. Since each path is considered as a solution, the number of paths should equal the number of employed and onlooker bees (SN). In addition, m denotes the position of a node within a path; for example, in the path 1–4–5–6–7, node 5 has \(m = 3.\)
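The initialization step can be sketched as follows. This is only an illustration in the spirit of Algorithm 1 (which is not reproduced here): a random walk over the 0/1 vicinity matrix `a` from the source node until the destination is reached, avoiding revisited nodes and restarting on dead ends. Nodes are 0-indexed for convenience.

```python
# Hedged sketch of colony initialization: random paths generated by
# walking the vicinity (adjacency) matrix, in the spirit of Algorithm 1.
import random

def random_path(a, n, source=0):
    """One random path from `source` to destination `n - 1` over the
    n x n 0/1 vicinity matrix `a`, avoiding revisited nodes."""
    while True:                           # retry on dead ends
        path, current = [source], source
        while current != n - 1:
            successors = [j for j in range(n)
                          if a[current][j] and j not in path]
            if not successors:            # dead end: restart the walk
                break
            current = random.choice(successors)
            path.append(current)
        else:
            return path

def initial_colony(a, n, SN):
    """Generate SN random paths (the initial population)."""
    return [random_path(a, n) for _ in range(SN)]
```

Each generated path starts at the source, ends at the destination, and only traverses arcs present in the vicinity matrix, matching the role of the paths in Table 2.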

The search process of employed bees

In the MABC algorithm, the number of employed bees equals the number of colony solutions, and employed bee i corresponds to solution i in the colony. In each iteration, each employed bee performs a local search around its solution; here, the mutation operator of the MABC serves as the local search. If the fitness value of the new path is greater than that of the current path, the bee adopts the new path, discards the previous one, and resets the limit counter of the corresponding path to zero. Otherwise, the bee discards the new path and the limit counter of the corresponding path is increased by one.

The mutation operator on a path is defined as a “random change of the path from an intermediate node to the destination node” [19]. In other words, one of the intermediate nodes of the path is selected at random and the path is redirected from that node to the destination. The new path is thus identical to the previous one from the first node up to the selected node and differs from it between the selected node and the destination node. For example, suppose node k is the selected random node. Then the subpath between node 1 and node k remains unchanged, and the intermediate nodes between node k and the destination node are replaced with new nodes determined by Algorithm 1. Figures 5 and 6 illustrate this situation: Fig. 5 shows the mutation operation on the path 1–2–5–7 and Fig. 6 shows the generation of the new path 1–2–4–6–7, with node 2 as the selected random node.

Fig. 5
figure 5

Mutation operator on the path

Fig. 6
figure 6

Generation of the new path
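The mutation step described above can be sketched in code. This is a hypothetical illustration reusing the random-walk idea of Algorithm 1: keep the prefix of the path up to a randomly chosen intermediate node, then regrow the suffix to the destination. Nodes are 0-indexed and the graph is assumed to contain a path to the destination from every node, so the regrowth retry terminates.

```python
# Hedged sketch of the mutation operator: keep a random prefix of the
# path and regrow the tail to the destination by a random walk over the
# vicinity matrix `a` (assumption: every node can reach the destination).
import random

def mutate(path, a, n):
    """Redirect `path` from a random intermediate node to node n - 1."""
    if len(path) < 3:
        return path[:]                    # no intermediate node to cut at
    cut = random.randrange(1, len(path) - 1)   # random intermediate node
    while True:                           # retry until the walk arrives
        new, current = path[:cut + 1], path[cut]
        while current != n - 1:
            successors = [j for j in range(n)
                          if a[current][j] and j not in new]
            if not successors:            # dead end: regrow again
                break
            current = random.choice(successors)
            new.append(current)
        else:
            return new
```

As in Figs. 5 and 6, the mutated path shares its prefix with the original up to the cut node and may differ arbitrarily afterwards.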

The search process of onlooker bees

In the MABC algorithm, at the end of the employed bees’ search, the resulting information, including the fitness value and the position of each food source, is shared in the dance area with the onlooker bees. Each onlooker bee then selects one of the paths with a probability proportional to its fitness value, as in a roulette wheel. The probability of choosing path \(i\), denoted by \(p_{i}\), is obtained by the following formula:

$$ p_{i} = \frac{{{\text{Fit}}_{i} }}{{\sum\nolimits_{z = 1}^{{{\text{SN}}}} {{\text{Fit}}_{z} } }}, $$
(31)

where \({\text{Fit}}_{i}\) shows the fitness value of the path \(i\) and SN is the number of population solutions. Note that the weight of each path is defined as the fitness value of that path and is computed by summation of weights of arcs on that path.

Paths with more favorable fitness are more likely to be selected by the onlooker bees. After selecting a path, the onlooker bee generates a random number in the interval [0, 1]. If this number is less than \(p_{i}\), the onlooker bee performs a local search on the path. If the new path resulting from the mutation has a higher fitness value than the previous path, the algorithm discards the previous path and resets the limit counter of the new path to zero. Otherwise, the algorithm discards the new path and the limit counter of the current path is increased by one.
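The selection rule of Eq. (31) amounts to roulette-wheel sampling, which can be sketched as follows (an illustrative helper, not part of the paper; `fitnesses` is assumed to hold the positive \({\text{Fit}}_{i}\) values of the SN paths):

```python
# Hedged sketch of roulette-wheel selection implementing Eq. (31):
# index i is picked with probability Fit_i / sum(Fit_z).
import random

def roulette_select(fitnesses):
    total = sum(fitnesses)
    r = random.uniform(0.0, total)        # a point on the wheel
    acc = 0.0
    for i, f in enumerate(fitnesses):
        acc += f                          # cumulative fitness
        if r <= acc:
            return i
    return len(fitnesses) - 1             # guard against rounding error
```

Over many draws, the selection frequency of each path approaches its probability \(p_{i}\), so fitter paths attract more onlooker bees.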

The search process of scout bees

In the MABC algorithm, if a food source is not improved within a predetermined number of successive iterations, that food source is abandoned. The number of unsuccessful successive iterations is defined by the limit parameter. For each path whose number of failed searches exceeds the limit value, the corresponding employed bee becomes a scout bee. The scout bee forgets the previous path and produces a new random path by moving through the search space according to Algorithm 1. The new path then replaces the previous path in the population.

In the MABC algorithm, the fitness value is determined by summing the interval-valued fuzzy weights of the arcs along the interval-valued fuzzy path, computed by use of Eqs. (14) and (23).
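Since Eqs. (14) and (23) appear earlier in the paper and are not reproduced in this section, the following is only a hedged illustration of the fitness computation. It assumes each arc weight is an interval-valued generalized trapezoidal fuzzy number, a pair of tuples (a, b, c, d; w), and adds components element-wise while taking the minimum of the heights, a common addition rule for generalized trapezoidal fuzzy numbers; the paper's actual summation may differ.

```python
# Hedged sketch of path-fitness computation: component-wise addition of
# interval-valued generalized trapezoidal fuzzy arc weights
# ((a, b, c, d, w_lower), (a, b, c, d, w_upper)).  The min-height rule
# is an assumption, standing in for the paper's Eqs. (14) and (23).

def add_gtfn(x, y):
    """Add two generalized trapezoidal fuzzy numbers (a, b, c, d, w)."""
    return tuple(x[i] + y[i] for i in range(4)) + (min(x[4], y[4]),)

def path_weight(arcs):
    """Sum the interval-valued weights of all arcs along a path."""
    lower, upper = arcs[0]
    for lo, up in arcs[1:]:
        lower, upper = add_gtfn(lower, lo), add_gtfn(upper, up)
    return lower, upper
```

The result has the same shape as the path weights reported in the experiments, e.g., a pair of (a, b, c, d; w) tuples for the lower and upper fuzzy numbers.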

Experiments and results

In this section, the efficiency of the proposed MABC algorithm in solving the MIVFSP problem is demonstrated on WSNs. The results of the proposed algorithm are compared with those obtained by the GA [28] and the PSO algorithm [18], which were run on networks with various type-1 fuzzy numbers.

In modern decision-making, different persons may opt for different evaluation formats, such as crisp, interval, linguistic or fuzzy values, to assess an object. However, ambiguity is one of the foremost concerns in any such process, and it introduces hesitancy in expressing information in a crisp format; interval numbers are appropriate for addressing this. Since trapezoidal fuzzy numbers handle more uncertainty than triangular ones, interval-valued trapezoidal fuzzy numbers are utilized in this work. Furthermore, almost all approaches presented in the literature [20, 25] work over a discrete universe and hence do not give a clear picture for fuzzy concepts such as “good”, “bad” and so on; there is thus also a need to study the behavior of the set for these variables. Moreover, in real-life situations, many social and natural phenomena follow the normal distribution, whose membership function has continuous higher-order derivatives. For instance, consider the lifetime of an electric bulb as a stochastic phenomenon: if the mean and variance of the lifetime are 100 and 2, respectively, this is expressed as the normal fuzzy number (NFN) (100, 2). If we are not fully sure of this value but have a certainty degree of [80%, 85%] and a negation degree of [5%, 10%], we can use an interval-valued NFN (IVNFN) to describe this information, i.e., <(100, 2), [0.80, 0.85], [0.05, 0.10]>. Therefore, IVNFNs can express stochastic phenomena better than NFNs by adding interval-valued membership and non-membership degrees. Hence, in the following examples, the arc weights are represented by interval-valued trapezoidal fuzzy numbers and interval-valued normal fuzzy numbers.

Example 3

Consider the WSN shown in Fig. 7 with 22 sensor nodes connected to each other through 59 arcs. The energy data in terms of MIVFNs are given in Table 3. Here, we are going to find the mixed interval-valued fuzzy shortest path between node 1 and node 22. The simulation parameters are given in Table 4.

Fig. 7
figure 7

A wireless sensor network

Table 3 Mixed interval-valued fuzzy energy of WSN
Table 4 Simulation parameters

Since the heuristic algorithms are stochastic, each algorithm was executed ten times independently. Table 5 gives the obtained results. The MABC algorithm detected the mixed interval-valued fuzzy shortest path 1–2–5–10–19–22 from node 1 to node 22 in all ten runs, whereas the GA converged to the best solution in only eight runs and the PSO in seven runs.

Table 5 The results of GA, PSO and MABC

Regarding Table 4, the iteration number of all algorithms is 30 and the population size is 20. The convergence time, the total runtime, and the convergence iteration of the algorithms are shown in Table 5. The MABC algorithm found the best solution in the first iteration in three runs, while the GA, in its best run, determined the mixed interval-valued fuzzy shortest path in the third iteration, and the PSO found the best solution in only one run. Comparing the number of iterations required for convergence shows that the MABC algorithm is more capable of solving the problem. Moreover, the MABC algorithm had a shorter convergence time than the GA and the PSO. It is worth mentioning that the GA and the PSO did not converge at all in two and three runs, respectively. Finally, the runtime of the MABC algorithm was shorter than those of the GA and the PSO. Owing to the randomness of the mutation operators and the effect of control parameters such as limit in the MABC, Pc and Pm in the GA, and Wmin and Wmax in the PSO, the runtimes of these algorithms may differ slightly between runs.

The weight of the optimal path 1–2–5–10–19–22 is [(140.16, 20.55, 157.60, 22.86; 0.60), (134.92, 33.05, 167.08, 33.05; 0.70)]. Figure 8 shows the membership function of this interval-valued fuzzy number.

Fig. 8
figure 8

The membership function of MIVFSP weight

Figure 9 compares, in a bar chart, the convergence times of the three algorithms reported in Table 5: the minimum, mean, and maximum convergence times of each algorithm. The red bars show the times of the genetic algorithm, the blue bars those of the PSO algorithm, and the green bars those of the MABC algorithm. In all the charts, the MABC algorithm had a shorter convergence time than the GA and the PSO algorithm.

Fig. 9
figure 9

The best, average and worst time of convergence of all algorithms

Figure 10 shows the best, average and worst runtimes of all algorithms. In all of these diagrams, the runtime of the MABC algorithm is shorter than those of the genetic and PSO algorithms.

Fig. 10
figure 10

The best, average and worst runtime of all algorithms for 30 iterations

Figure 11 shows the solution convergence diagram of the three algorithms over the iterations. The fitness of each path was computed after finding the approximate sum of its arc weights. In this figure, the shortest path weights found by the MABC algorithm, the GA and the PSO are shown with green circles, red stars and blue squares, respectively. In this diagram, which results from three independent runs, the MABC algorithm, the GA and the PSO converged in the 6th, 25th and 28th iterations, respectively. This confirms the high ability of the MABC algorithm to solve the MIVFSP problem in a directed network.

Fig. 11
figure 11

Convergence curve of genetic algorithm, PSO algorithm and MABC algorithm

Example 4

Figure 12 shows another example of a WSN, with 36 sensor nodes and 86 arcs; the data are given in Table 6. The aim is to determine the mixed interval-valued fuzzy shortest path between node 1 and node 36. The simulation parameters are given in Table 7, and the results of running the algorithms on the network of Fig. 12 are shown in Table 8. Of the ten independent runs of each algorithm, the MABC detected the shortest path from the source to the destination node in eight runs, while the genetic algorithm converged to the best solution in six runs and the PSO algorithm found the best path in only five runs. The convergence time, the total runtime, and the convergence iteration of each run are shown in Table 8. Comparing the number of iterations required for convergence shows that the MABC algorithm is more capable of finding the optimal solution.

Fig. 12
figure 12

WSN with 36 sensor nodes and 86 arcs

Table 6 Data of WSN
Table 7 The simulation parameters of Example 4
Table 8 The simulation results

The comparison of the convergence times of all three algorithms is illustrated in Fig. 13 with a bar chart. In all the charts, the MABC algorithm had a shorter convergence time than the GA and the PSO algorithm. Note that the maximum convergence time bar belongs to the MABC algorithm because it once converged only in the 27th iteration; clearly, converging in a late iteration is better than not converging at all. It is worth mentioning that the MABC algorithm, the genetic algorithm and the PSO algorithm did not converge at all in two, four and five runs, respectively. Here, the mean convergence time bar is the mean over the runs in which the algorithm converged.

Fig. 13
figure 13

The convergence times of algorithms

Figure 14 shows the best, average and worst runtimes of all algorithms. In all of these diagrams, the runtime of the MABC algorithm is shorter than those of the genetic and PSO algorithms.

Fig. 14
figure 14

The run times of algorithms

The interval-valued exponential fuzzy weight of shortest path 1–3–9–18–28–31–36 is equal to [(131.27, 29.98, 174.73, 29.98; 0.50), (126.92, 48.05, 179.44, 50.07; 0.70)]. Figure 15 shows the lower and upper membership functions of this interval-valued fuzzy number.

Fig. 15
figure 15

The membership function of the shortest path weight

Finally, Fig. 16 shows the solution convergence diagram of the three algorithms in each iteration. In this diagram, which is the result of three independent implementations, the MABC algorithm and GA have converged in the 7th and 13th iterations, respectively, while the PSO algorithm has not converged at all.

Fig. 16
figure 16

Convergence curve of all algorithms

Now we perform a sensitivity analysis on the results of Example 4 by changing the population size of all algorithms from 25 to 15 and to 40. The obtained results are given in Table 9. As can be seen, when the population size is reduced from 25 to 15, there is no difference in the order of the algorithms' superiority, and the MABC algorithm produces better-quality results than the PSO algorithm and the genetic algorithm. With the increase of the population size to 40, the best results again belong to the MABC algorithm, followed by the PSO algorithm and the genetic algorithm. Thus, Table 9 shows that changing the population size does not change the ranking of the algorithms.

Table 9 Sensitivity analysis

To sum up, the simulation results show that the MABC algorithm is robust with respect to three criteria, namely convergence iteration, convergence time and runtime. In fact, the results of both simulations confirm that the MABC algorithm requires fewer convergence iterations and less convergence time and runtime than the GA and the PSO algorithm, and hence is more capable of finding the shortest path. Moreover, both simulations on WSNs show that the MABC algorithm reliably recognizes the shortest path among all paths of the networks in terms of energy consumption.

Conclusions

In this study, we investigated a kind of SP problem in networks whose arc weights are represented by MIVFNs and proposed a modified version of the ABC algorithm for solving it. To this end, we developed a novel approach to approximate the summation of two MIVFNs using alpha cuts. Then, we presented an extended distance function for comparing the interval-valued fuzzy weights of different interval-valued fuzzy paths. The proposed algorithm was illustrated through two applications in WSNs involving 22 and 36 sensor nodes. The results show that the proposed algorithm outperforms the GA and the PSO for solving the MIVFSP problem in terms of convergence iteration, convergence time and runtime. However, considering other improved ABC algorithms [7, 27, 36] for solving the MIVFSP problem is a worthwhile direction for future research. Finally, as future work, we can focus on extending the proposed algorithm and method to other systems such as transportation problems [4, 14, 15, 17], new product development projects [43], seru production systems [44], manufacturing and supply chains [45], workforce scheduling [46] and so on.