Complex & Intelligent Systems, Volume 4, Issue 1, pp 31–53

New binary bat algorithm for solving 0–1 knapsack problem

  • Rizk M. Rizk-Allah
  • Aboul Ella Hassanien
Open Access
Original Article

Abstract

This paper presents a novel binary bat algorithm (NBBA) to solve 0–1 knapsack problems. The proposed algorithm combines two important phases: a binary bat algorithm (BBA) and a local search scheme (LSS). The bat algorithm enhances the exploration capability of the bats, while the LSS boosts their exploitation tendencies; together they prevent the BBA–LSS from becoming trapped in local optima. Moreover, the LSS starts its search from the best solution found by the BBA so far. Through this methodology, the BBA–LSS enhances the diversity of the bats and improves the convergence performance. The proposed algorithm is tested on instances of different sizes from the literature. Computational experiments show that the BBA–LSS can be a promising alternative for solving large-scale 0–1 knapsack problems.

Keywords

Bat algorithm · Local search scheme · Knapsack problem

Introduction

The knapsack problem (KP) is one of the most important problems in combinatorial optimization. It appears in a broad variety of applications, including scheduling, portfolio optimization, investment decision-making, project selection and resource distribution. Unfortunately, the KP is NP-hard [1]. Thus, solving this problem with gradient methods is inappropriate: such methods may become trapped in local optima for large-scale problems, they are time-consuming, and they merely reach the local optimum closest to the initial random solution. Metaheuristic algorithms, in contrast, have the ability to overcome these drawbacks and have proved to be a robust alternative for solving complex optimization problems.

Recently, metaheuristic algorithms, which imitate natural phenomena, have become one of the most significant stochastic research topics in optimization. Their main features are the avoidance of local optima, the generation of multiple solutions per run (which helps produce good-quality solutions quickly) and independence from derivative information [2].

In recent decades, there have been extensive works based on metaheuristic algorithms to solve the 0–1 KP. Liu and Liu [3] introduced an evolutionary algorithm based on schema-guiding to solve the 0–1 KP. Martello et al. [4] presented a survey of different approaches to solving the 0–1 KP. Shi [5] proposed a modified version of ant colony optimization (ACO) to solve the 0–1 KP. Lin [6] solved the KP in a fuzzy environment with imprecise weights using a genetic algorithm (GA). Li and Li [7] presented a binary particle swarm optimization with a multi-mutation mechanism to solve the KP. Zhang et al. [8] introduced an amoeboid organism algorithm to solve the 0–1 KP. Bhattacharjee and Sarmah [9] proposed a shuffled frog-leaping algorithm to solve the 0–1 KP. Kulkarni and Shabir [10] proposed a cohort intelligence algorithm for solving the 0–1 KP. In addition, many algorithms have flourished for solving the 0–1 KP, such as the genetic algorithm (GA), particle swarm optimization (PSO), the artificial fish-swarm algorithm (AFSA), the harmony search algorithm (HS), chemical reaction optimization based on a greedy strategy (CROG), the genetic mutation bat algorithm (GMBA), monarch butterfly optimization and a hybrid cuckoo search based on harmony search [11, 12, 13, 14, 15, 16, 17, 18, 19]. Owing to the importance of the knapsack problem in academia and in practical applications, developing new algorithms with more promising performance for solving large-scale knapsack problem applications undoubtedly becomes a true challenge.

Bat algorithm (BA) is one of the recent metaheuristic algorithms; it is inspired by the echolocation behavior of micro-bats [20]. During flight, bats emit short ultrasonic pulses into the environment and record their echoes. The recorded information from the echoes helps the bats build an accurate image of their surroundings and precisely locate the distance, shape and position of prey. The echolocation ability of micro-bats is fascinating, as these bats can find their prey and distinguish different types of insects even in complete darkness [20]. Earlier applications showed that BA could solve different optimization problems and proved its efficiency and robustness compared with algorithms such as GA and PSO [20, 21, 22]. A new trend in bat algorithms focuses on hybridizing BA with different strategies [23, 24, 25, 26, 27, 28, 29, 30, 31]. Fister et al. [23] developed a hybrid BA based on various evolution strategies for solving optimization tasks, while Baziar et al. [24] proposed a modified BA based on an adaptive self-strategy. A hybrid BA based on harmony search for solving optimization problems was proposed by Wang and Guo [25]. Yilmaz and Kucuksille [26] developed an improved BA using several modifications, while Wang et al. [27] presented a modified BA that adjusts the flight speed and flight direction adaptively. Fister et al. [28] introduced a new version of BA based on self-adaptation of the control parameters. Further, binary versions of BA were developed in [29, 30, 31]. Mirjalili et al. [29] introduced a binary version of BA employing a V-shaped transfer function to overcome the drawback of the sigmoid transfer function, which keeps positions unchanged over the iterations of the algorithm. In [30], the authors developed an integrated version of the binary BA based on a Naïve Bayes classifier for the feature selection problem. In [31], a binary version of BA based on the sigmoid transfer function was established for solving different optimization problems. Due to the continuous nature of BA, its application to combinatorial optimization problems is still in its infancy, which is a further motivation behind this study.

This paper is motivated by several observations. First, incorporating the rough set with the bat algorithm to solve the large-scale 0–1 KP has not yet been studied. Second, many optimization algorithms suffer from entrapment in local optima when solving large-scale problems. Last, solving large-scale knapsack problems has not yet received adequate attention. Hence, solving large-scale knapsack problems to optimality undoubtedly becomes a true challenge.

In this paper, we propose a novel binary bat algorithm (NBBA) to solve 0–1 knapsack problems. In contrast to the binary version of BA in [29], a multi-V-shaped transfer function for generating the solutions, the inclusion of the rough set scheme (RSS) as a local search strategy (LSS) and a one-to-one solution-updating strategy are introduced. The proposed algorithm combines two important phases: the binary bat algorithm (BBA) and the local search scheme (LSS). The bat algorithm enhances the exploration capability of the bats, while the LSS boosts the exploitation tendency; together they prevent the BBA–LSS from becoming trapped in local optima. Moreover, the LSS starts its search from the best solution found by the BBA so far. Through this methodology, the BBA–LSS enhances the diversity of the bats and improves the convergence performance. The proposed algorithm is tested on instances of different sizes from the literature. Computational experiments show that the BBA–LSS can be a promising alternative for solving large-scale 0–1 knapsack problems.

The main contributions of this approach are to (1) introduce a novel binary bat algorithm (NBBA) for solving large-scale 0–1 knapsack problems, (2) integrate intelligently the merits of two phases, namely the binary bat algorithm (BBA) and the rough set scheme (RSS) as a local search scheme, so that the algorithm avoids getting stuck in local optima, (3) improve the exploration capabilities of the BBA phase to seek the overall search space while incorporating the RSS phase as a counterpart to enhance the exploitation tendencies, (4) implement the injective (one-to-one) strategy as the updating mechanism between the two phases, such that the fit ones among the two phases replace the worst ones based on the feasibility rule, and (5) integrate BBA and RSS to improve the quality of the solutions and speed up convergence to the global solution.

On the other hand, the proposed algorithm is effectively applied to small-, medium- and large-size problems. The experimental results demonstrate the superiority of the proposed algorithm in achieving high-quality solutions. The simulation results affirm that the application of RSS may be an effective scheme to improve the performance of optimization algorithms.

The novelty of the proposed approach lies in the multi-V-shaped transfer function for generating solutions in the BBA phase, which provides more exploration of the search space. Further, adopting the RSS as a local search scheme and introducing the injective (one-to-one) strategy can pick fit solutions quickly and avoid running the algorithm without any improvement in the solutions.

The rest of this paper is organized as follows: In Sect. 2, we describe the preliminaries of the 0–1 knapsack problem. In Sect. 3, the basics of both BA and rough set theory (RST) are reviewed. The proposed algorithm is explained in detail in Sect. 4. Numerical experiments are given in Sect. 5 to show the superiority of the proposed algorithm. Section 6 gives the conclusions and future work.

Preliminaries

Problem description

There are N items and the knapsack capacity is \(C\); \(w_{j}\) is the weight of the jth item and \(p_{j}\) is the profit of the jth item. The task is to decide which items to put into the knapsack so that the total weight of the selected items does not exceed the knapsack capacity while the total profit is maximized.

Mathematical description

The mathematical description of the 0–1 knapsack problem can be formulated as follows:

0–1 KP:
$$\begin{aligned}&\text {Max}\ f(\mathbf{x})=\sum _{j=1}^N p_{j} x_{j}, \\&\text {s.t.}:\left\{ \begin{array}{l} \sum _{j=1}^N w_{j} x_{j} \le C, \\ x_{j} \in \{0,1\},\quad j=1,2,\ldots ,N, \\ p_{j}>0,\ w_{j} \ge 0,\ C>0 \end{array}\right. \end{aligned}$$
(1)
The binary decision variables \(x_{j}\) are used to determine whether the item j is put in the knapsack or not.

In large-scale instances, the total weight of the items packed in the knapsack may violate the constraint; this violation is unacceptable and must be handled. A prominent way to handle the constraint is the penalty function method. It imposes a penalty on infeasible solutions and can therefore evolve infeasible solutions until they move into candidate feasible regions. Using a penalty function, the 0–1 KP can be reformulated as follows:

0–1 KP:
$$\begin{aligned}&\text {Max}\ f(\mathbf{x})=\sum _{j=1}^N p_j x_j -\lambda \left( \max \left\{ 0,\ \sum _{j=1}^N w_j x_j -C\right\} \right) ^{2} \\&\text {s.t.}:\left\{ \begin{array}{l} x_j \in \{0,1\},\quad j=1,2,\ldots ,N, \\ p_j>0,\ w_j \ge 0,\ C>0 \end{array}\right. \end{aligned}$$
(2)
where \(\lambda \) is the penalty coefficient; it is set to \(10^{10}\) for all test instances.
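As a concrete illustration, the following is a minimal sketch of this penalized fitness evaluation; the function name is ours, the penalty coefficient follows the paper, and the worked instance is KP\(_{3}\) from Table 1.

```python
import numpy as np

LAMBDA = 1e10  # penalty coefficient lambda from Eq. (2)

def penalized_profit(x, p, w, C):
    """Penalized objective of Eq. (2) for a 0-1 item-selection vector x."""
    profit = np.dot(p, x)
    overweight = max(0.0, np.dot(w, x) - C)   # amount by which capacity is exceeded
    return profit - LAMBDA * overweight ** 2  # feasible solutions are not penalized

# KP3 from Table 1: w = (6, 5, 9, 7), C = 20, p = (9, 11, 13, 15)
w = np.array([6, 5, 9, 7]); p = np.array([9, 11, 13, 15])
print(penalized_profit(np.array([1, 1, 0, 1]), p, w, 20))  # 35.0, the known optimum
print(penalized_profit(np.array([1, 1, 1, 1]), p, w, 20))  # heavily penalized: weight 27 > 20
```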

Overview of bat algorithm (BA) and rough set theory (RST)

This section is devoted to describing the basics of bat algorithm (BA) and rough set theory (RST).

Real behavior of bats

Bat algorithm was established based on the echolocation process of bats. In this process, bats emit pulses that last 8–10 ms at a constant frequency and corresponding wavelength, as shown in Fig. 1. The features of bats exploited in the development of the bat algorithm are as follows: (i) even without visibility, bats can sense and estimate the distance between food and the surrounding obstacles; (ii) when bats start flying to find their food, they are associated with a velocity, a position, a fixed frequency, a varying loudness and a wavelength; and (iii) various strategies model the variation of the loudness, typically from a large positive value down to a small constant value.
Fig. 1

Real behavior of bats

Bat algorithm (BA)

Velocity and position

BA starts with a random initial population of bats in an n-dimensional search space, where the position of bat i at time t is denoted by \(x_{i}^{t}\) and its velocity by \(v_{i}^{t}\). The new position \(x_{i}^{t+1}\) and new velocity \(v_i^{t+1}\) at time step \(t+1\) are determined by
$$\begin{aligned}&\alpha _i =\alpha _{\min } +(\alpha _{\max } -\alpha _{\min } )\beta \end{aligned}$$
(3)
$$\begin{aligned}&v_{i}^{t+1}=v_{i}^{t}+(x_{i}^{t} -x^{\mathrm{{best}}})\alpha _i \end{aligned}$$
(4)
$$\begin{aligned}&x_{i}^{t+1}=x_{i}^{t}+v_{i}^{t+1}, \end{aligned}$$
(5)
where \(\beta \) is a random number in [0, 1] and \(x^{\mathrm{{best}}}\) represents the current global optimal solution. \(\alpha _{i}\) represents the pulse frequency emitted by bat i at the current moment, and \(\alpha _{\min }\) and \(\alpha _{\max }\) represent the minimum and maximum values of the pulse frequency, respectively. Initially, the pulse frequency of each bat is drawn uniformly from \([\alpha _{\min }, \alpha _{\max }]\).
In this scenario, a bat is chosen randomly from the bat population, and the corresponding position of this bat is updated according to Eq. (6). This random walk can be understood as a local search process that generates a new solution from the selected solution.
$$\begin{aligned} x_{\mathrm{{new}}}=x_{\mathrm{{old}}}+\varepsilon A^{t} \end{aligned}$$
(6)
where \({x}_{\mathrm{{old}}}\) represents a random solution chosen from the current best solutions, \(A^{t}\) is the loudness and \(\varepsilon \) is a random vector that is drawn from [−1, 1].

Loudness and pulse emission

It is worth noting that the loudness (A(i)) and pulse rate (r(i)) are responsible for balancing local and global moves: the loudness is strong and the pulse emission small at the beginning of the search process. Once a bat has found its prey, the loudness decreases while the pulse emission gradually increases. A(i) and r(i) are updated according to Eqs. (7) and (8):
$$\begin{aligned}&r^{t+1}(i)=r^{0}(i)\times [1-\mathrm{e}^{-\gamma t}]\end{aligned}$$
(7)
$$\begin{aligned}&A^{t+1}(i)=\delta A^{t} (i), \end{aligned}$$
(8)
where both \(\delta \) and \(\gamma \) are constants. \(A(i)=0\) means that the bat has just found its prey and temporarily stopped emitting any sound. For any \(0<\delta <1\) and \(\gamma >0\), we have
$$\begin{aligned} A^{t}(i)\rightarrow 0,\quad r^{t}(i)\rightarrow r^{0}(i),\qquad \mathrm{as}\ t\rightarrow \infty \end{aligned}$$
(9)

The implementation steps of bat algorithm

Step 1: Set the basic parameters: population size (PS), attenuation coefficient of loudness \(\delta \), increasing coefficient of pulse emission \(\gamma \), the maximum loudness \(A^{0}\) and maximum pulse emission \(r^{0}\) and the maximum number of iterations T.

Step 2: Define objective function \(f(x_{i}),i=1,2,\ldots ,\text {PS}\).

Step 3: Initialize pulse frequency \(\alpha _i \in [\alpha _{\min }, \alpha _{\max }]\);

Step 4: Initialize the bat population x and v.

Step 5: Start the main loop. If \(\hbox {rand}<r_i\), generate new solutions by updating both the velocity and the current position using Eqs. (4) and (5). Otherwise, generate a new position for the bat by making a random disturbance, and continue to step 6.

Step 6: If \(\hbox {rand}<A_i\) and \(f(x_{i})<f(x^{\mathrm{{best}}})\), accept the new solutions and fly to the new position.

Step 7: If \(f(x_{i})<f_{\min }\), replace the best bat and adjust \(A(i)\) and \(r(i)\) according to Eqs. (7) and (8).

Step 8: Evaluate the bat population, and return the best bat and its position.

Step 9: If the termination condition is met (i.e., satisfy the search accuracy condition or reach a maximum number of iterations), go to step 10; else, go to step 5, and perform the next search.

Step 10: Get the output (i.e., global solution and the best fitness).

where rand is a uniform random number in [0, 1].
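For readers who prefer code, the following is a condensed sketch of steps 1–10 for the continuous BA under a maximization convention; the parameter defaults and the random-walk scaling are illustrative choices of ours, not the paper's tuned settings.

```python
import numpy as np

def bat_algorithm(f, dim, lb, ub, PS=30, T=400, a_min=0.0, a_max=2.0,
                  A0=1.0, r0=0.5, delta=0.9, gamma=0.9, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (PS, dim))            # step 4: initial positions
    v = np.zeros((PS, dim))                       # step 4: initial velocities
    A = np.full(PS, A0)                           # loudness per bat
    r = np.full(PS, r0)                           # pulse emission rate per bat
    fit = np.array([f(xi) for xi in x])           # step 2: objective values
    best = x[fit.argmax()].copy()                 # current global best

    for t in range(1, T + 1):                     # steps 5-9: main loop
        for i in range(PS):
            alpha = a_min + (a_max - a_min) * rng.random()   # Eq. (3)
            v[i] = v[i] + (x[i] - best) * alpha              # Eq. (4)
            cand = x[i] + v[i]                               # Eq. (5)
            if rng.random() >= r[i]:              # random disturbance, Eq. (6)
                cand = best + 0.01 * rng.uniform(-1, 1, dim) * A.mean()
            cand = np.clip(cand, lb, ub)
            fc = f(cand)
            if rng.random() < A[i] and fc > fit[i]:          # step 6 (maximizing)
                x[i], fit[i] = cand, fc
                A[i] *= delta                                # Eq. (8)
                r[i] = r0 * (1 - np.exp(-gamma * t))         # Eq. (7)
        best = x[fit.argmax()].copy()             # step 8: track the best bat
    return best, fit.max()                        # step 10: global solution
```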

Rough set theory (RST)

The fundamental concept of RST is the indiscernibility relation, which is produced by the information about the objects of interest [32]. Because discerning knowledge is lacking, one cannot identify some objects using the available information. The indiscernibility relation relies on granules of indiscernible objects as its fundamental basis. Some relevant concepts of RST are as follows [32, 33]:

Definition 1

(Information system) An information system (IS) is denoted as a triplet \(T=(U,A,f)\), where U is a non-empty finite set of objects and A is a non-empty finite set of attributes. An information function f maps an object to its attribute value, i.e., \(f_a:U\rightarrow V_a \) for every \(a\in A\), where \(V_a \) is the value set of attribute a. A posteriori knowledge (denoted by d) is represented by one distinguished attribute. A decision system is an IS of the form \(\text {DT}=(U,A\cup \{d\},f)\), where \(d\notin A\) is the decision attribute used for supervised learning. The elements of A are called conditional attributes.

Definition 2

(Indiscernibility) For an attribute set \(B\subseteq A\), the equivalence relation induced by B is called the B-indiscernibility relation, i.e., \(\mathrm{{IND}}_\mathrm{T} (B)=\{(x,y)\in U^{2} \mid \forall a\in B,\ f_a (x)=f_a (y)\}\). The equivalence classes of the B-indiscernibility relation are denoted by \(I_B(x)\).

Definition 3

(Set approximation) Let \(X\subseteq U\) and \(B\subseteq A\) in an IS. The B-lower approximation of X is the set of objects that belong to X with certainty, i.e., \({\underline{B}}X=\{{x\in U \mid I_B (x)\subseteq X}\}\). The B-upper approximation is the set of objects that possibly belong to X, where \(\bar{{B}}X=\{{x\in U \mid I_B (x)\cap X\ne \phi }\}\).

Definition 4

(Reducts) If \(X_{\mathrm{{DT}}}^1,X_{\mathrm{{DT}}}^2,\ldots ,X_{\mathrm{{DT}}}^r \) are the decision classes of \(\text {DT}\), the set \(\text {POS}_B (d)=\underline{B}X^{1}\cup \underline{B}X^{2}\cup \cdots \cup \underline{B}X^{r}\) is the B-positive region of \(\text {DT}\). A subset \(B\subseteq A\) is a set of relative reducts of \(\text {DT}\) if and only if \(\text {POS}_B (d)=\text {POS}_C (d)\) and \(\text {POS}_{B-\{b\}} (d)\ne \text {POS}_C (d), \forall b\in B\). In the same way \(\text {POS}_B (X)\), \(BN_B (X)\) and \(\text {NEG}_B (X)\) are defined below (refer to Fig. 2).

  • \(\text {POS}_B (X)={\underline{B}}X\Rightarrow \) certainly member of X

  • \(\text {NEG}_B (X)=U-\bar{{B}}X\Rightarrow \) certainly nonmember of X

  • \(BN_B (X)={\bar{B}}X-{\underline{B}}X\Rightarrow \) possibly the member of X.
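A tiny sketch of Definitions 2–4 may help: it builds the B-indiscernibility classes and derives the three regions of a target set X. The toy universe and attribute values below are invented for illustration.

```python
def regions(universe, attr_of, X):
    """attr_of maps each object to its tuple of B-attribute values; X is a set."""
    classes = {}
    for obj in universe:                       # equivalence classes I_B(x), Def. 2
        classes.setdefault(attr_of[obj], set()).add(obj)
    lower, upper = set(), set()
    for c in classes.values():
        if c <= X:                             # class entirely inside X -> lower approx.
            lower |= c
        if c & X:                              # class touching X -> upper approx.
            upper |= c
    return {"POS": lower,                      # certainly members of X
            "NEG": set(universe) - upper,      # certainly non-members of X
            "BND": upper - lower}              # possibly members of X

U = {1, 2, 3, 4, 5}
attrs = {1: (0, 1), 2: (0, 1), 3: (1, 0), 4: (1, 1), 5: (1, 1)}
print(regions(U, attrs, X={1, 2, 4}))
# {'POS': {1, 2}, 'NEG': {3}, 'BND': {4, 5}}
```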

Fig. 2

Definitions regarding rough set approximations

The proposed algorithm (IBBA-RSS)

In this section, we present the injective binary bat algorithm based on a rough set scheme (IBBA-RSS) to solve the KP. It differs from the conventional BA in four respects: first, a discrete binary string is adopted to represent a solution; second, the continuous updating process of Eqs. (4) and (5) cannot handle the binary space directly, so a new transfer function is introduced that maps velocity values to probability values for the position update; third, the RSS is adopted to exploit the neighborhood during the search; fourth, after the binary BA procedures, the updating mechanism is implemented based on the injective (one-to-one) strategy, where the fit one replaces the worst one based on the feasibility rule. Through this methodology, the IBBA-RSS enhances the diversity of the bats and improves the convergence performance. The details of the proposed algorithm are given below.

Binary position scheme

In this step, each bat of the population is a solution of the KP, represented by an n-bit binary string, where n is the number of decision variables (items) in the KP. For example, if \(x_{i}=(x_{i1},x_{i2},\ldots ,x_{in})\) represents bat i, then its jth bit \(x_{ij}\) is a binary variable taking the value 0 or 1.

Binary velocity scheme

In the bat algorithm, the velocity of a bat is responsible for updating its position. To update the position in a binary space, a transfer function is introduced that forces the bat to fly in that space; the transfer function is responsible for switching between the values "0" and "1". The traditional transfer function used in binary particle swarm optimization (PSO) is defined in Eq. (10) and shown in Fig. 3 [34].
$$\begin{aligned} \hbox {Sig}(v_i^k (t))=\frac{1}{1+\mathrm{e}^{-v_i^k (t)}}, \end{aligned}$$
(10)
where \(\hbox {Sig}\) denotes the sigmoid transfer function and \(v_i^k (t)\) denotes the velocity of bat i in the kth dimension at iteration t. After calculating the transfer function value, a position-updating equation is needed, as follows [34]:
$$\begin{aligned} \mathbf{x}_i^k (t+1)=\left\{ \begin{array}{ll} 0 &{} \quad \hbox {if rand}<S(v_i^k (t+1)) \\ 1 &{} \quad \hbox {if rand}\ge S(v_i^k (t+1)), \end{array}\right. \end{aligned}$$
(11)
where \(x_i^k (t)\) indicates the position and \(v_i^k (t)\) the velocity of the ith bat at iteration t in the kth dimension. The drawback of the sigmoid transfer function is that a particle's position remains unchanged even when its velocity value increases. To overcome this drawback, a multi-V-shaped transfer function (see Fig. 4) is introduced to oblige bats with high velocities to change their positions. The multi-V-shaped transfer function and the new position update are stated in Eqs. (12) and (13), respectively.
$$\begin{aligned}&V(v_i^k (t))=\left| {\frac{2}{\pi }\arctan \left( \frac{\pi }{2}v_i^k (t)\right) } \right| ^{Q}\end{aligned}$$
(12)
$$\begin{aligned}&\mathbf{x}_i^k (t+1)=\left\{ \begin{array}{ll} {(\mathbf{x}_i^k (t){)}'} &{}\quad \hbox {if rand}<V(v_i^k (t+1)) \\ {\mathbf{x}_i^k (t)} &{}\quad \hbox {if rand}\ge V(v_i^k (t+1)), \end{array}\right. \end{aligned}$$
(13)
where \(Q\in [0.1, 3]\) is a shape exponent, \(\mathbf{x}_i^k (t)\) is the position and \(v_i^k (t)\) the velocity of the ith bat at iteration t in the kth dimension, and \((\mathbf{x}_i^k (t){)}'\) is the complement of \(\mathbf{x}_i^k (t)\).
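The following is a minimal sketch of Eqs. (12) and (13): the multi-V-shaped transfer function converts each velocity component into a flip probability, and the corresponding bit is complemented with that probability. The vectorized helper names are ours.

```python
import numpy as np

def v_transfer(v, Q=1.0):
    """Multi-V-shaped transfer function of Eq. (12)."""
    return np.abs((2.0 / np.pi) * np.arctan((np.pi / 2.0) * v)) ** Q

def update_position(x_bits, v, Q=1.0, rng=np.random.default_rng(0)):
    """Eq. (13): flip bit k with probability V(v_k); otherwise keep it."""
    flip = rng.random(x_bits.shape) < v_transfer(v, Q)
    return np.where(flip, 1 - x_bits, x_bits)

x = np.array([1, 0, 1, 1, 0])
v = np.array([0.1, -2.5, 0.0, 4.0, -0.3])  # larger |v| gives a higher flip probability
print(update_position(x, v))
```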
Fig. 3

Sigmoid transfer function [34]

Fig. 4

Proposed multi V-shaped transfer function

Evaluation

The bat's estimation of the distance to its prey corresponds to the objective (fitness) function. In this step, the fitness is evaluated as the penalized profit of Eq. (2). Therefore, the best solution with the highest fitness, \(\mathbf{x}_{\mathrm{best}}\), is determined as follows:
$$\begin{aligned} \mathbf{x}_{\mathrm{best}}=\arg (\text {Max}\{f(x_i)\}_{i=1}^{\mathrm{{PS}}}) \end{aligned}$$
(14)

Rough set scheme (RSS)

In this step, the RSS is introduced to reduce the redundant bits. The obtained population is regarded as an information system consisting of the bats' solutions, where each bat is represented by a set of condition attributes and one decision attribute. For bat i, the condition attribute \(x_{ij}\) indicates whether item j is selected, and the decision attribute indicates the feasibility of this bat. Feasibility means that the candidate bat satisfies the knapsack capacity: when the candidate bat is feasible, the decision attribute takes the value 1; otherwise it takes the value 0. All solutions are then formulated as an augmented matrix consisting of the condition and decision attributes, \([{x_{i1},x_{i2},\ldots ,x_{in}\mid {\{D\}}}]_{i=1}^{\mathrm{PS}}\), where D denotes the decision attribute taking the value 1 or 0. Therefore, D splits the population into two classes: members whose value of D is one and members whose value of D is zero. Let U be the set of objects (solutions), \(X\subseteq U\) the subset whose D value is one, and \(B=\{x_1,x_2,\ldots ,x_n\}\) the set of condition attributes in the IS. Then, according to Definition 4, the redundant items are eliminated, where \({\underline{B}}X\), \({\bar{B}}X\), \(BN_B (X)\) and \(\text {NEG}_B (X)\) of X are obtained through the attribute-reduction process.

Afterward, the population is updated by assigning the value 1 to items that belong to \(\underline{B}X\), the value 0 to items included in \(\text {NEG}_B (X)\), and a random value (0 or 1) to items included in \(BN_B (X)\). If \(BN_B (X)\) is empty or D is zero everywhere, a mutation strategy is implemented by generating N neighbors around each solution as follows: for each string, N solutions are generated randomly by selecting K bits at random from \(\mathbf{x}_i \) of bat i and inverting them; a sketch of this mutation is given below. Figure 5 illustrates an example of this step with \(N=3\) and \(K=2\), where the colored bits denote the selected ones.
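A minimal sketch of this fallback mutation, assuming K distinct bit positions are flipped per neighbor (the function name is ours):

```python
import numpy as np

def mutate_neighbors(x_bits, N=3, K=2, rng=np.random.default_rng(0)):
    """Generate N neighbors of the binary string x_bits, each with K bits inverted."""
    neighbors = []
    for _ in range(N):
        nb = x_bits.copy()
        idx = rng.choice(len(nb), size=K, replace=False)  # K distinct positions
        nb[idx] = 1 - nb[idx]                             # invert the selected bits
        neighbors.append(nb)
    return neighbors

for nb in mutate_neighbors(np.array([1, 0, 1, 1, 0, 1])):
    print(nb)
```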
Fig. 5

An illustration of mutation process

Fig. 6

Flowchart of the proposed IBBA-RSS approach

Table 1

The parameters, dimension and optimum of ten test instances

KP\(_{1}\) (dimension 10, optimum 295): \(w=(95, 4, 60, 32, 23, 72, 80, 62, 65, 46)\), \(C=269\), \(p=(55, 10, 47, 5, 4, 50, 8, 61, 85, 87)\)

KP\(_{2}\) (dimension 20, optimum 1024): \(w=(92, 4, 43, 83, 84, 68, 92, 82, 6, 44, 32, 18, 56, 83, 25, 96, 70, 48, 14, 58)\), \(C=878\), \(p=(44, 46, 90, 72, 91, 40, 75, 35, 8, 54, 78, 40, 77, 15, 61, 17, 75, 29, 75, 63)\)

KP\(_{3}\) (dimension 4, optimum 35): \(w=(6, 5, 9, 7)\), \(C=20\), \(p=(9, 11, 13, 15)\)

KP\(_{4}\) (dimension 4, optimum 23): \(w=(2, 4, 6, 7)\), \(C=11\), \(p=(6, 10, 12, 13)\)

KP\(_{5}\) (dimension 15, optimum 481.07): \(w=(56.358531, 80.874050, 47.987304, 89.596240, 74.660482, 85.894345, 51.353496, 1.498459, 36.445204, 16.589862, 44.569231, 0.466933, 37.788018, 57.118442, 60.716575)\), \(C=375\), \(p=(0.125126, 19.330424, 58.500931, 35.029145, 82.284005, 17.410810, 71.050142, 30.399487, 9.140294, 14.731285, 98.852504, 11.908322, 0.891140, 53.166295, 60.176397)\)

KP\(_{6}\) (dimension 10, optimum 52): \(w=(30, 25, 20, 18, 17, 11, 5, 2, 1, 1)\), \(C=60\), \(p=(20, 18, 17, 15, 15, 10, 5, 3, 1, 1)\)

KP\(_{7}\) (dimension 7, optimum 107): \(w=(31, 10, 20, 19, 4, 3, 6)\), \(C=50\), \(p=(70, 20, 39, 37, 7, 5, 10)\)

KP\(_{8}\) (dimension 23, optimum 9767): \(w=(983, 982, 981, 980, 979, 978, 488, 976, 972, 486, 486, 972, 972, 485, 485, 969, 966, 483, 964, 963, 961, 958, 959)\), \(C=10{,}000\), \(p=(981, 980, 979, 978, 977, 976, 487, 974, 970, 485, 485, 970, 970, 484, 484, 976, 974, 482, 962, 961, 959, 958, 857)\)

KP\(_{9}\) (dimension 5, optimum 130): \(w=(15, 20, 17, 8, 31)\), \(C=80\), \(p=(33, 24, 36, 37, 12)\)

KP\(_{10}\) (dimension 20, optimum 1025): \(w=(84, 83, 43, 4, 44, 6, 82, 92, 25, 83, 56, 18, 58, 14, 48, 70, 96, 32, 68, 92)\), \(C=879\), \(p=(91, 72, 90, 46, 55, 8, 35, 75, 61, 15, 77, 40, 63, 75, 29, 75, 17, 78, 40, 44)\)

Table 2

Comparisons of the small sizes KP

            KP1      KP2       KP3    KP4    KP5     KP6    KP7     KP8      KP9     KP10     TSR
BHS
   SR       0.78     0.92      0.98   1      0.96    0.9    0.56    0.82     0.98    0.94     1
   Best     295      1024      35     23     481.07  52     107     9767     130     1025
   Median   295      1024      35     23     481.07  52     107     9767     130     1025
   Worst    293      1018      28     23     437.94  50     93      9762     118     1019
   Mean     294.58   1023.52   34.86  23     479.55  51.84  104.34  9766.34  129.76  1024.64
   Std      0.81     1.64      0.99   0      7.59    0.51   4.5     1.52     1.7     1.44
DBHS
   SR       1        1         1      1      1       1      1       1        1       1        10
   Best     295      1024      35     23     481.07  52     107     9767     130     1025
   Median   295      1024      35     23     481.07  52     107     9767     130     1025
   Worst    295      1024      35     23     481.07  52     107     9767     130     1025
   Mean     295      1024      35     23     481.07  52     107     9767     130     1025
   Std      0        0         0      0      0       0      0       0        0       0
NGHS1
   SR       1        1         1      1      1       0.96   1       0.94     1       1        8
   Best     295      1024      35     23     481.07  52     107     9767     130     1025
   Median   295      1024      35     23     481.07  52     107     9767     130     1025
   Worst    295      1024      35     23     481.07  51     107     9765     130     1025
   Mean     295      1024      35     23     481.07  51.96  107     9766.88  130     1025
   Std      0        0         0      0      0       0.2    0       0.48     0       0
ABHS
   SR       1        1         1      1      1       1      1       1        1       1        10
   Best     295      1024      35     23     481.07  52     107     9767     130     1025
   Median   295      1024      35     23     481.07  52     107     9767     130     1025
   Worst    295      1024      35     23     481.07  52     107     9767     130     1025
   Mean     295      1024      35     23     481.07  52     107     9767     130     1025
   Std      0        0         0      0      0       0      0       0        0       0
ABHS1
   SR       0.86     0.96      1      0.98   0.98    0.84   0.48    0.82     1       1        3
   Best     295      1024      35     23     481.07  52     107     9767     130     1025
   Median   295      1024      35     23     481.07  52     105     9767     130     1025
   Worst    293      1018      35     22     475.48  49     96      9762     130     1025
   Mean     294.72   1023.76   35     22.98  480.96  51.68  105.18  9766.44  130     1025
   Std      0.7      1.19      0      0.14   0.8     0.82   2.95    1.33     0       0
SBHS
   SR       1        1         1      1      1       1      1       1        1       1        10
   Best     295      1024      35     23     481.07  52     107     9767     130     1025
   Median   295      1024      35     23     481.07  52     107     9767     130     1025
   Worst    295      1024      35     23     481.07  52     107     9767     130     1025
   Mean     295      1024      35     23     481.07  52     107     9767     130     1025
   Std      0        0         0      0      0       0      0       0        0       0
IBBA-RSS
   SR       1        1         1      1      1       1      1       1        1       1        10
   Best     295      1024      35     23     481.07  52     107     9767     130     1025
   Median   295      1024      35     23     481.07  52     107     9767     130     1025
   Worst    295      1024      35     23     481.07  52     107     9767     130     1025
   Mean     295      1024      35     23     481.07  52     107     9767     130     1025
   Std      0        0         0      0      0       0      0       0        0       0
Table 3

The parameters, dimension and optimum of ten test problems

KP\(_{11}\) (dimension 30, optimum 1437): \(w=[46, 17, 35, 1, 26, 17, 17, 48, 38, 17, 32, 21, 29, 48, 31, 8, 42, 37, 6, 9, 15, 22, 27, 14, 42, 40, 14, 31, 6, 34]\), \(p=[57, 64, 50, 6, 52, 6, 85, 60, 70, 65, 63, 96, 18, 48, 85, 50, 77, 18, 70, 92, 17, 43, 5, 23, 67, 88, 35, 3, 91, 48]\), \(C=577\)

KP\(_{12}\) (dimension 35, optimum 1689): \(w=[7, 4, 36, 47, 6, 33, 8, 35, 32, 3, 40, 50, 22, 18, 3, 12, 30, 31, 13, 33, 4, 48, 5, 17, 33, 26, 27, 19, 39, 15, 33, 47, 17, 41, 40]\), \(p=[35, 67, 30, 69, 40, 40, 21, 73, 82, 93, 52, 20, 61, 20, 42, 86, 43, 93, 38, 70, 59, 11, 42, 93, 6, 39, 25, 23, 36, 93, 51, 81, 36, 46, 96]\), \(C=655\)

KP\(_{13}\) (dimension 40, optimum 1816): \(w=[28, 23, 35, 38, 20, 29, 11, 48, 26, 14, 12, 48, 35, 36, 33, 39, 30, 26, 44, 20, 13, 15, 46, 36, 43, 19, 32, 2, 47, 24, 26, 39, 17, 32, 17, 16, 33, 22, 6, 12]\), \(p=[13, 16, 42, 69, 66, 68, 1, 13, 77, 85, 75, 95, 92, 23, 51, 79, 53, 62, 56, 74, 7, 50, 23, 34, 56, 75, 42, 51, 13, 22, 30, 45, 25, 27, 90, 59, 94, 62, 26, 11]\), \(C=819\)

KP\(_{14}\) (dimension 45, optimum 2020): \(w=[18, 12, 38, 12, 23, 13, 18, 46, 1, 7, 20, 43, 11, 47, 49, 19, 50, 7, 39, 29, 32, 25, 12, 8, 32, 41, 34, 24, 48, 30, 12, 35, 17, 38, 50, 14, 47, 35, 5, 13, 47, 24, 45, 39, 1]\), \(p=[98, 70, 66, 33, 2, 58, 4, 27, 20, 45, 77, 63, 32, 30, 8, 18, 73, 9, 92, 43, 8, 58, 84, 35, 78, 71, 60, 38, 40, 43, 43, 22, 50, 4, 57, 5, 88, 87, 34, 98, 96, 99, 16, 1, 25]\), \(C=907\)

KP\(_{15}\) (dimension 50, optimum 2440): \(w=[15, 40, 22, 28, 50, 35, 49, 5, 45, 3, 7, 32, 19, 16, 40, 16, 31, 24, 15, 42, 29, 4, 14, 9, 29, 11, 25, 37, 48, 39, 5, 47, 49, 31, 48, 17, 46, 1, 25, 8, 16, 9, 30, 33, 18, 3, 3, 3, 4, 1]\), \(p=[78, 69, 87, 59, 63, 12, 22, 4, 45, 33, 29, 50, 19, 94, 95, 60, 1, 91, 69, 8, 100, 84, 100, 32, 81, 47, 59, 48, 56, 18, 59, 16, 45, 54, 47, 98, 75, 20, 4, 19, 58, 63, 37, 64, 90, 26, 29, 13, 53, 83]\), \(C=882\)

KP\(_{16}\) (dimension 55, optimum 2643): \(w=[27, 15, 46, 5, 40, 9, 36, 12, 11, 11, 49, 20, 32, 3, 12, 44, 24, 1, 24, 42, 44, 16, 12, 42, 22, 26, 10, 8, 46, 50, 20, 42, 48, 45, 43, 35, 9, 12, 22, 2, 14, 50, 16, 29, 31, 46, 20, 35, 11, 4, 32, 35, 15, 29, 16]\), \(p=[98, 74, 76, 4, 12, 27, 90, 98, 100, 35, 30, 19, 75, 72, 19, 44, 5, 66, 79, 87, 79, 44, 35, 6, 82, 11, 1, 28, 95, 68, 39, 86, 68, 61, 44, 97, 83, 2, 15, 49, 59, 30, 44, 40, 14, 96, 37, 84, 5, 43, 8, 32, 95, 86, 18]\), \(C=1050\)

KP\(_{17}\) (dimension 60, optimum 2917): \(w=[7, 13, 47, 33, 38, 41, 3, 21, 37, 7, 32, 13, 42, 42, 23, 20, 49, 1, 20, 25, 31, 4, 8, 33, 11, 6, 3, 9, 26, 44, 39, 7, 4, 34, 25, 25, 16, 17, 46, 23, 38, 10, 5, 11, 28, 34, 47, 3, 9, 22, 17, 5, 41, 20, 33, 29, 1, 33, 16, 14]\), \(p=[81, 37, 70, 64, 97, 21, 60, 9, 55, 85, 5, 33, 71, 87, 51, 100, 43, 27, 48, 17, 16, 27, 76, 61, 97, 78, 58, 46, 29, 76, 10, 11, 74, 36, 59, 30, 72, 37, 72, 100, 9, 47, 10, 73, 92, 9, 52, 56, 69, 30, 61, 20, 66, 70, 46, 16, 43, 60, 33, 84]\), \(C=1006\)

KP\(_{18}\) (dimension 65, optimum 2814): \(w=[47, 27, 24, 27, 17, 17, 50, 24, 38, 34, 40, 14, 15, 36, 10, 42, 9, 48, 37, 7, 43, 47, 29, 20, 23, 36, 14, 2, 48, 50, 39, 50, 25, 7, 24, 38, 34, 44, 38, 31, 14, 17, 42, 20, 5, 44, 22, 9, 1, 33, 19, 19, 23, 26, 16, 24, 1, 9, 16, 38, 30, 36, 41, 43, 6]\), \(p=[47, 63, 81, 57, 3, 80, 28, 83, 69, 61, 39, 7, 100, 67, 23, 10, 25, 91, 22, 48, 91, 20, 45, 62, 60, 67, 27, 43, 80, 94, 47, 31, 44, 31, 28, 14, 17, 50, 9, 93, 15, 17, 72, 68, 36, 10, 1, 38, 79, 45, 10, 81, 66, 46, 54, 53, 63, 65, 20, 81, 20, 42, 24, 28, 1]\), \(C=1319\)

KP\(_{19}\) (dimension 70, optimum 3221): \(w=[4, 16, 16, 2, 9, 44, 33, 43, 14, 45, 11, 49, 21, 12, 41, 19, 26, 38, 42, 20, 5, 14, 40, 47, 29, 47, 30, 50, 39, 10, 26, 33, 44, 31, 50, 7, 15, 24, 7, 12, 10, 34, 17, 40, 28, 12, 35, 3, 29, 50, 19, 28, 47, 13, 42, 9, 44, 14, 43, 41, 10, 49, 13, 39, 41, 25, 46, 6, 7, 43]\), \(p=[66, 76, 71, 61, 4, 20, 34, 65, 22, 8, 99, 21, 99, 62, 25, 52, 72, 26, 12, 55, 22, 32, 98, 31, 95, 42, 2, 32, 16, 100, 46, 55, 27, 89, 11, 83, 43, 93, 53, 88, 36, 41, 60, 92, 14, 5, 41, 60, 92, 30, 55, 79, 33, 10, 45, 3, 68, 12, 20, 54, 63, 38, 61, 85, 71, 40, 58, 25, 73, 35]\), \(C=1426\)

KP\(_{20}\) (dimension 75, optimum 3614): \(w=[24, 45, 15, 40, 9, 37, 13, 5, 43, 35, 48, 50, 27, 46, 24, 45, 2, 7, 38, 20, 20, 31, 2, 20, 3, 35, 27, 4, 21, 22, 33, 11, 5, 24, 37, 31, 46, 13, 12, 12, 41, 36, 44, 36, 34, 22, 29, 50, 48, 17, 8, 21, 28, 2, 44, 45, 25, 11, 37, 35, 24, 9, 40, 45, 8, 47, 1, 22, 1, 12, 36, 35, 14, 17, 5]\), \(p=[2, 73, 82, 12, 49, 35, 78, 29, 83, 18, 87, 93, 20, 6, 55, 1, 83, 91, 71, 25, 59, 94, 90, 61, 80, 84, 57, 1, 26, 44, 44, 88, 7, 34, 18, 25, 73, 29, 24, 14, 23, 82, 38, 67, 94, 43, 61, 97, 37, 67, 32, 89, 30, 30, 91, 50, 21, 3, 18, 31, 97, 79, 68, 85, 43, 71, 49, 83, 44, 86, 1, 100, 28, 4, 16]\), \(C=1433\)

Table 4

Comparison results for the medium sizes KP (KP\(_{11}\)–KP\(_{20}\))

KP11
   IBBA-RSS  Best 1437   Mean 1437         Worst 1437  Std 0
             solution 111110111111001110110101111011
   BBA       Best 1437   Mean 1437         Worst 1437  Std 0
             solution 111110111111001110110101111011
   CI        Best 1437   Mean 1418         Worst 1398  Std 11.79   (solution NA)
   B&B       Best 1437   (Mean, Worst, Std and solution NA)

KP12
   IBBA-RSS  Best 1689   Mean 1689         Worst 1689  Std 0
             solution 11011111111010111111101101110111111
   BBA       Best 1689   Mean 1689         Worst 1689  Std 0
             solution 11011111111010111111101101110111111
   CI        Best 1689   Mean 1686.5       Worst 1679  Std 3.8188  (solution NA)
   B&B       Best 1689   (Mean, Worst, Std and solution NA)

KP13
   IBBA-RSS  Best 1821   Mean 1821         Worst 1821  Std 0
             solution 0011110011111011111101011111001111111111
   BBA       Best 1821   Mean 1821         Worst 1821  Std 0
             solution 0011110011111011111101011111001111111111
   CI        Best 1816   Mean 1807.5       Worst 1791  Std 9.604   (solution NA)
   B&B       Best 1821   (Mean, Worst, Std and solution NA)

KP14
   IBBA-RSS  Best 2033   Mean 2033         Worst 2033  Std 0
             solution 111101001111110111110111111111111010111111001
   BBA       Best 2033   Mean 2030.3333    Worst 2016  Std 6.0988
             solution 111101001111110111110111111111111010111111001
   CI        Best 2020   Mean 2017         Worst 2007  Std 4.749   (solution NA)
   B&B       Best 2033   (Mean, Worst, Std and solution NA)

KP15
   IBBA-RSS  Best 2448   Mean 2448         Worst 2448  Std 0
             solution 11111001111111110110111111111010111111011101111111
   BBA       Best 2440   Mean 2439.633333  Worst 2435  Std 1.1591
             solution 11111001111111110110111111111010011111011111111111
   CI        Best 2440   Mean 2436.166     Worst 2421  Std 6.841   (solution NA)
   B&B       Best 2440   (Mean, Worst, Std and solution NA)

KP16
   IBBA-RSS  Best 2643   Mean 2642.6000    Worst 2632  Std 2.0103
             solution 1110011111011111011111101001111111111001101101110101111
   BBA       Best 2642   Mean 2640.4000    Worst 2614  Std 5.5930
             solution 1111011111011111011111101001111111111001101101111101110
   CI        Best 2643   Mean 2605         Worst 2581  Std 22.018  (solution NA)
   B&B       Best 2440   (Mean, Worst, Std and solution NA)

KP17
   IBBA-RSS  Best 2917   Mean 2917         Worst 2917  Std 0
             solution 111110101101111101100111111111011111111101111011111111101111
   BBA       Best 2917   Mean 2915         Worst 2893  Std 6.1923
             solution 111110101101111101100111111111011111111101111011111111101111
   CI        Best 2917   Mean 2915         Worst 2905  Std 4.472   (solution NA)
   B&B       Best 2917   (Mean, Worst, Std and solution NA)

KP18
   IBBA-RSS  Best 2818   Mean 2817.6333    Worst 2814  Std 1.0661
             solution 11110101111011101111101111111111111001011111100111011111111101010
   BBA       Best 2809   Mean 2808.3333    Worst 2802  Std 1.881549
             solution 11111101111011101101101111111111111001011111100111111111111101010
   CI        Best 2814   Mean 2773.66      Worst 2716  Std 18.273  (solution NA)
   B&B       Best 2818   (Mean, Worst, Std and solution NA)

KP19
   IBBA-RSS  Best 3223   Mean 3222.6000    Worst 3219  Std 1.1017
             solution 1111101110101101110111111101011101011111111100111111111011011111111111
   BBA       Best 3213   Mean 3212.9000    Worst 3209  Std 1.4936
             solution 1111101110101101100111111101011111011111111111111011111011011111111111
   CI        Best 3221   Mean 3216         Worst 3211  Std 4.3589  (solution NA)
   B&B       Best 3223   (Mean, Worst, Std and solution NA)

KP20
   IBBA-RSS  Best 3614   Mean 3613.2333    Worst 3605  Std 2.4166
             solution 011011111011001011111111111011111100111101111111011111111001111111111101101
   BBA       Best 3602   Mean 3600.3793    Worst 3588  Std 4.1611
             solution 011011111011001011111111111011111100111101111111111111110100111111111101101
   CI        Best 3614   Mean 3603.8       Worst 3591  Std 8.035   (solution NA)
   B&B       Best 3614   (Mean, Worst, Std and solution NA)

Table 5

Comparisons of the large-size KP

           IBBA-RSS    BBA         SBHS      IHS       GHS       SAHS      EHS       NGHS      NDHS

KP21
   Best    63.2149     62.3101     62.08     61.99     61.81     62.02     61.78     61.82     61.61
   Median  63.2149     62.3101     62.04     61.81     61.3      61.86     61.25     61.5      61.02
   Worst   62.0222     61.1074     61.97     61.23     60.94     61.65     60.63     61.11     59.59
   Mean    63.1545     62.2322     62.04     61.77     61.29     61.85     61.22     61.5      60.86
   Std     0.2418      0.2964      0.03      0.15      0.19      0.11      0.3       0.2       0.45

KP22
   Best    131.1273    129.7232    129.44    128.89    127.09    127.99    128.43    128.34    127.82
   Median  131.1273    129.7232    129.38    128.42    125.7     127.21    127.88    127.7     127
   Worst   129.2422    128.3646    129.27    127.61    124.47    126.39    127.08    126.87    125.72
   Mean    130.9917    129.6492    129.37    128.4     125.69    127.16    127.81    127.66    126.86
   Std     0.4352      0.2890      0.04      0.31      0.61      0.41      0.36      0.42      0.54

KP23
   Best    195.0331    192.5467    192.02    189.94    187.28    188.15    190.96    190.18    189.97
   Median  195.0331    192.5467    192.02    189.35    185.77    187.36    190.43    189.31    189.04
   Worst   193.2210    192.4450    191.85    188.27    184.16    186.05    189.27    187.9     187.85
   Mean    194.9348    192.5431    192.01    189.14    185.77    187.27    190.28    189.23    188.97
   Std     0.3587      0.0186      0.03      0.51      0.72      0.53      0.43      0.58      0.61

KP24
   Best    316.3039    312.5521    314.23    306.89    301.03    302.92    312.04    310.16    309.49
   Median  316.1211    312.5521    314.2     305.11    299.78    300.72    311.32    308.28    308.28
   Worst   315.8936    312.2119    314.1     303.55    297.25    299.14    310.29    305.67    305.94
   Mean    316.1044    312.5294    314.19    305.1     299.6     300.79    311.25    308.33    308.07
   Std     0.0789      0.0863      0.03      0.92      0.91      1.03      0.49      1.06      0.93

KP25
   Best    448.8721    446.9679    448.65    434.04    429.02    431.63    444.91    442.32    442.85
   Median  448.8721    446.9679    448.63    431.74    425.75    428.99    443.64    441.13    439.39
   Worst   447.2503    446.2406    448.46    429.63    423.35    427.08    442.13    436.45    436.01
   Mean    448.7179    446.9049    448.6     431.73    425.68    428.93    443.53    440.83    439.43
   Std     0.4232      0.1947      0.05      1.13      1.28      1.23      0.64      1.23      1.37

KP26
   Best    639.4001    635.0750    638.14    605.88    602.29    606.5     629.29    626.77    621.15
   Median  639.4001    635.0750    638.08    603.42    599.07    601.31    626.62    623.9     618.41
   Worst   639.0579    632.6213    638       599.53    594.34    597.84    624.99    619.15    614.86
   Mean    639.3884    634.8336    638.09    603.26    598.83    601.78    626.76    623.87    618.09
   Std     0.0624      0.7367      0.04      1.56      1.9       2.34      1.13      1.37      1.48

KP27
   Best    767.0228    764.3262    763.81    722.52    721.23    724.4     751.73    750.67    744.72
   Median  767.0228    764.3262    763.72    718.39    716.92    721.53    749.16    747.88    739.88
   Worst   766.9989    764.1197    763.39    714.39    713.17    716.46    746.38    745.05    735.02
   Mean    767.0219    764.3177    763.71    718.29    716.69    721.38    749.15    747.66    739.76
   Std     0.0043      0.0383      0.08      1.98      1.84      1.95      1.33      1.41      2.14

KP28
   Best    966.0450    962.6650    964.91    902.36    903.31    908.1     944.09    945.2     932.32
   Median  966.0450    962.6650    964.86    897.78    901.26    904.01    940.76    942.09    926.48
   Worst   965.5550    962.6020    964.7     891.26    895.58    899.04    937.07    938.31    923.45
   Mean    966.0164    962.661     964.85    897.62    900.63    903.83    940.72    941.97    926.62
   Std     0.1099      0.0152      0.06      2.68      1.77      2.54      1.68      1.7       2

KP29
   Best    1157.2337   1153.0032   1155.65   1073.93   1080.1    1086.57   1128.25   1133.44   1110.98
   Median  1157.2337   1153.0032   1155.58   1066.02   1076.49   1080.71   1122.29   1128.77   1106.13
   Worst   1155.6659   1152.8484   1155.35   1058.6    1072.65   1074.16   1119.25   1125.69   1099.5
   Mean    1157.1784   1152.9942   1155.57   1066.1    1076.58   1080.58   1122.61   1129.02   1105.73
   Std     0.2861      0.0344      0.08      3.29      2.02      2.83      2.33      1.94      2.86

KP30
   Best    1289.5521   1284.7260   1283.92   1182.55   1198.69   1202.7    1247.95   1257.45   1229.87
   Median  1289.5521   1284.7260   1283.81   1177.52   1192.03   1196.75   1243.8    1252.9    1223.25
   Worst   1285.6171   1283.6650   1283.26   1172.02   1188.27   1190.05   1238.26   1249.74   1218.14
   Mean    1289.4157   1284.6381   1283.79   1177.59   1192.71   1196.71   1243.07   1252.86   1223.5
   Std     0.7177      0.2760      0.12      2.34      2.66      3.34      2.55      1.83      2.94

KP31
   Best    1668.4021   1661.2185   1653.72   1500.31   1534.74   1536.25   1592.68   1615.64   1570.24
   Median  1668.4021   1661.2185   1653.66   1492.52   1526.73   1528.71   1587.53   1611.05   1561.41
   Worst   1661.4592   1651.3906   1653.43   1481.67   1521.56   1521.65   1582.16   1604.28   1553.61
   Mean    1668.0197   1660.5869   1653.64   1492.57   1527.06   1528.66   1587.06   1610.5    1561.24
   Std     1.3841      2.3466      0.06      4.25      3.2       3.33      2.93      2.71      3.68

KP32
   Best    1927.8000   1890.3517   1917.49   1731.78   1777.72   1785.64   1843.7    1877.6    1818.63
   Median  1921.7966   1890.3517   1917.44   1724.57   1771.48   1779.68   1838.22   1872.5    1809.24
   Worst   1917.3188   1884.0266   1917.23   1714.03   1767.32   1769.75   1830.47   1868.31   1800.95
   Mean    1921.3983   1890.07     1917.42   1724.16   1771.88   1779.06   1838.15   1872.43   1809.34
   Std     1.1391      1.323       0.06      3.81      2.78      4         3.09      2.26      4.06

Injective updating based on feasibility rule

Now the feasibility rule is carried out to obtain a new population. In this step, the population is updated based on the injective (one-to-one) scheme, where each solution obtained after the rough set scheme is compared with the corresponding one obtained by the bat procedures. The winning solution is selected for the update based on the feasibility rules introduced by Deb [35] (a sketch follows Eq. (15) below). These rules are defined as follows:

  1. When comparing two feasible solutions, the one with the better objective is chosen.

  2. When comparing a feasible and an infeasible solution, the feasible one is chosen.

  3. When comparing two infeasible solutions, the one with the lower sum of constraint violations is chosen.

The sum of constraint violations for a solution x is given by
$$\begin{aligned} \text {CV}(x)=\max \left( \sum _{j=1}^N w_j x_j -C,0\right) \end{aligned}$$
(15)
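A minimal sketch of this one-to-one comparison, combining Deb's rules with Eq. (15); the helper names are ours.

```python
import numpy as np

def violation(x, w, C):
    """Sum of constraint violation, Eq. (15)."""
    return max(0.0, float(np.dot(w, x) - C))

def winner(x1, x2, p, w, C):
    """Return the solution preferred by Deb's feasibility rules."""
    cv1, cv2 = violation(x1, w, C), violation(x2, w, C)
    if cv1 == 0 and cv2 == 0:                    # rule 1: both feasible
        return x1 if np.dot(p, x1) >= np.dot(p, x2) else x2
    if (cv1 == 0) != (cv2 == 0):                 # rule 2: exactly one feasible
        return x1 if cv1 == 0 else x2
    return x1 if cv1 <= cv2 else x2              # rule 3: both infeasible
```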
The flowchart of the proposed IBBA-RSS algorithm is shown in Fig. 6, where the changes are highlighted.

Experimental results and analysis

In this section, the performance of the IBBA-RSS algorithm is extensively investigated through a large number of experimental studies. Ten low-dimensional, ten medium-size and twelve large-scale instances are considered to validate the robustness of the proposed IBBA-RSS algorithm. The algorithm is coded in MATLAB 7 and run on a computer with an Intel Core i5 (1.8 GHz) processor, 4 GB of RAM and the Windows XP operating system.

Low-dimensional 0–1 knapsack problems

In this section, the performance of the proposed algorithm is investigated on ten low-dimensional 0–1 knapsack problems taken from [36, 37]. The required information about the test instances, such as dimension and parameters, is listed in Table 1. The maximum number of iterations is set to 400 for each instance with a population size of 30 bats, and each instance is tested over 30 independent algorithm runs. To evaluate the IBBA-RSS performance completely, statistical measures are calculated: the success rate (SR) over all runs in reaching the known optimum, together with the best, median, worst and mean values and the standard deviation (Std).

On the other hand, the performance of the proposed IBBA-RSS algorithm is compared with six different algorithms reported in [37]: NGHS1 [36], SBHS [37], BHS [38], DBHS [39], ABHS [40] and ABHS1 [41]. Table 2 shows the comparisons between the proposed algorithm and the six algorithms, where the best results are highlighted. The obtained results show that the proposed algorithm achieves the optima for the low-dimensional knapsack problems: the proposed IBBA-RSS algorithm is competitive with SBHS, ABHS and DBHS and outperforms BHS, NGHS1 and ABHS1.

Medium size 0–1 knapsack problems

This section is devoted to investigating the performance of the proposed algorithm on medium-size 0–1 knapsack problems. Ten instances are taken from [42], with sizes of 30, 35, 40, 45, 50, 55, 60, 65, 70 and 75 items. The information about these instances, such as dimension, parameters and optimum solution, is listed in Table 3. Extensive experimental tests were carried out to tune the maximum number of iterations; based upon these tests, it is set to 400 iterations for KP\(_{11}\)–KP\(_{15}\) and 500 iterations for KP\(_{16}\)–KP\(_{20}\). The proposed IBBA-RSS is run 30 times for each instance with a population size of 30 bats.

To demonstrate the effectiveness and robustness of the proposed IBBA-RSS, it is implemented and compared with the BBA phase alone. The statistical measures for each instance, obtained using BBA and IBBA-RSS, are reported in Table 4, where the best results are highlighted. The statistical measures comprise the best, mean and worst values and the standard deviations.

The proposed IBBA-RSS is compared with BBA, cohort intelligence (CI) and the branch and bound method (B&B) in Table 4. From this table, we can see that the proposed IBBA-RSS is statistically superior to the other algorithms for most KP instances and similar for some KP instances. It can be perceived from Table 4 that the proposed IBBA-RSS obtains very competitive solutions. For KP15, KP16, KP18, KP19 and KP20, the solutions of the proposed IBBA-RSS demonstrate that it is capable of outperforming the BBA phase. The solutions of the proposed IBBA-RSS are also superior to the results of the other evaluated techniques in most of the test cases.

Further, the convergence behavior for each instance is depicted in Fig. 7, where KP11 is shown in Fig. 7a, KP12 in Fig. 7b and so on. As shown in these graphs, the proposed IBBA-RSS gives better results than BBA, and consequently the profit for each instance is improved.

Large-scale 0–1 knapsack problems

To further prove the proficiency of the proposed IBBA-RSS algorithm, twelve large-scale 0–1 knapsack instances were utilized. The sizes of these instances are 100, 200, 300, 500, 700, 1000, 1200, 1500, 1800, 2000, 2600 and 3000 items. Each large-scale instance (KP\(_{21}\)–KP\(_{32})\) is generated as follows: the volume of each item is randomly chosen between 0.5 and 2 and its corresponding profit is randomly set between 0.5 and 1; the maximal volume capacity of the knapsack is limited to 0.75 times the sum of the volumes of the generated items. It is worth noting that these instances are created only once using a random generator and kept constant for all the experiments. Extensive experimental tests were carried out to tune the maximum number of iterations; based upon these tests, it is set to 300, 600, 600, 1000, 1800, 1800, 2500, 10,000, 10,000, 16,000, 18,000 and 18,000 iterations, respectively. The proposed IBBA-RSS is run 30 times for each instance with a population size of 30 bats.
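A minimal sketch of this instance generator, under the stated assumption of uniform sampling (the function name and seeding are ours):

```python
import numpy as np

def generate_instance(n_items, seed=0):
    """Generate one large-scale KP instance as described above."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(0.5, 2.0, n_items)   # item volumes in [0.5, 2]
    p = rng.uniform(0.5, 1.0, n_items)   # item profits in [0.5, 1]
    C = 0.75 * w.sum()                   # capacity: 75% of the total volume
    return w, p, C

w, p, C = generate_instance(100)         # e.g., the 100-item instance KP21
```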

The proposed IBBA-RSS and the BBA phase are compared with the V-shaped binary bat algorithm (V-BBA) developed in [29]. The statistical measures for each instance using the proposed IBBA-RSS and the other comparative algorithms are presented in Tables 5 and 6, where the best results are highlighted. The statistical measures comprise the best, median, worst and mean values and the standard deviations; the success rate (SR) is not reported because the optimal profits of KP\(_{21}\)–KP\(_{32}\) are unknown.

The proposed IBBA-RSS is compared with 16 different algorithms in Tables 5 and 6. From these tables, we can see that the proposed IBBA-RSS outperforms the other algorithms for all KP instances (KP\(_{21}\)–KP\(_{32}\)). Also, the proposed algorithm saves computational time, as it consumes a smaller number of iterations compared with the other algorithms [37].

Further, the convergence behavior for each instance is depicted in Fig. 8, where KP21 is shown in Fig. 8a, KP22 in Fig. 8b and so on. As shown in these graphs, the proposed IBBA-RSS achieves better simulation results than the BBA phase and V-BBA; consequently, the profit for each instance is improved significantly. The improvement ratio for each instance (defined below) is 1.4313, 1.0708, 1.2748, 1.1861, 0.4242, 0.6764, 0.3515, 0.3498, 0.3656, 0.3742, 0.4306 and 1.9425%, respectively, when comparing IBBA-RSS with the BBA phase, while the improvement ratio obtained by comparing IBBA-RSS with V-BBA is 5.4714, 6.6404, 2.4526, 2.6791, 1.7067, 1.7673, 2.2394, 0.9915, 0.5675, 1.6160, 1.4548 and 3.2288%, respectively. Further, comparing BBA with V-BBA gives improvement ratios of 4.0988, 5.6298, 1.1929, 1.5108, 1.2879, 1.0983, 1.8945, 0.6439, 0.2027, 1.2467, 1.0287 and 1.3117%, respectively. Although these ratios seem small for some instances, they are very significant from the practical point of view for large-scale problems. Based on the above improvement ratios, it can be concluded that the proposed IBBA-RSS algorithm has the better ratios. Therefore, the proposed IBBA-RSS is a robust approach with powerful searching quality.
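The text does not state a formula for the improvement ratio; the reported percentages are consistent with the relative gain in the best profit, normalized by the better algorithm's value:

$$\text{ratio}(A,B)=\frac{\text{Best}_{A}-\text{Best}_{B}}{\text{Best}_{A}}\times 100\%.$$

For example, for KP\(_{21}\), comparing IBBA-RSS with BBA gives \((63.2149-62.3101)/63.2149\times 100\approx 1.4313\%\), matching the first reported value.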

Regarding the overall results in Tables 2, 4, 5 and 6, among the evaluated optimizers, IBBA-RSS achieves the best performance. The main reason for the superior performance of the proposed IBBA-RSS lies in the multi-V-shaped transfer function, which helps the proposed algorithm preserve the diversity of the solutions and thus refines its convergence rate. Also, incorporating the RSS-based approximations can efficiently redistribute the search bats to enhance their diversity and to emphasize more explorative steps in case of convergence to a local optimum. The new transfer function and the RSS strategy have improved the searching capacity and solution quality of the proposed algorithm. Therefore, these strategies assist the proposed algorithm in switching between exploration and exploitation behaviors more effectively.
Fig. 7

The Convergence behavior for medium KP (KP\(_{11}\)–KP\(_{20})\)

Table 6

Comparisons for the large sizes KP (continued)

           PSFHS     BHS       DBHS      NGHS1     ABHS      ABHS1     ITHS      V-BBA

KP21
   Best    56.3      62.05     59.99     61.76     62.01     62.08     62.06     59.7561
   Median  53.26     61.87     58.58     61.46     61.92     61.98     61.95     59.7561
   Worst   48.48     61.68     58.04     61.12     61.71     61.76     61.76     56.0839
   Mean    53.13     61.87     58.63     61.44     61.9      61.95     61.93     59.51
   Std     1.82      0.1       0.43      0.17      0.09      0.1       0.07      0.84

KP22
   Best    106.52    129.27    118.24    128.41    129.31    129.29    129.24    122.4199
   Median  99.89     129.06    115.95    127.72    129       128.95    128.88    121.1979
   Worst   94.25     128.76    113.32    125.66    128.51    128.56    128.45    120.6412
   Mean    100.15    129.06    115.88    127.59    128.94    128.95    128.87    122.16
   Std     2.92      0.13      1.12      0.6       0.21      0.18      0.19      0.60

KP23
   Best    147.08    191.54    166.55    190.83    191.49    191.46    191.41    190.2497
   Median  141.64    190.97    164.1     189.17    191.05    190.71    190.78    188.5120
   Worst   136.66    190.5     162.24    187.7     190.32    189.78    190.06    179.5679
   Mean    141.2     190.94    164.4     189.14    191.04    190.67    190.74    187.37
   Std     2.71      0.25      1.27      0.64      0.24      0.34      0.36      2.54

KP24
   Best    234.23    311.85    257.61    310.1     312.51    310.28    310.94    307.8299
   Median  224.64    310.6     252.58    308.39    311.92    309.32    309.85    307.7595
   Worst   218.81    309.56    249.16    306.91    310.67    307.82    308.44    292.0437
   Mean    225.45    310.52    252.87    308.38    311.79    309.23    309.82    306.52
   Std     4.26      0.6       1.86      0.83      0.48      0.61      0.61      4.13

KP25
   Best    323.93    443.43    355.45    442.2     446.3     441.51    442.35    441.2110
   Median  311.68    441.82    348.81    440.68    445.45    439.71    441.18    440.2339
   Worst   301.51    439.93    344.56    437.99    444.42    437.15    438.8     425.8292
   Mean    311.5     441.66    349.09    440.52    445.43    439.45    440.93    438.91
   Std     4.29      0.75      2.8       1.02      0.51      0.93      0.83      3.84

KP26
   Best    453.2     626.04    482.59    626.27    632.38    620.31    624.04    628.0995
   Median  431.71    623.09    475.73    623.07    630.33    618.12    621.68    628.0995
   Worst   420.42    621.53    470.08    619.09    628.65    615.83    618.81    610.5837
   Mean    431.97    623.18    475.33    623.17    630.34    617.96    621.7     626.62
   Std     6.68      1.25      3.15      1.64      1.02      1.28      1.18      4.30

KP27
   Best    526.59    746.55    570.95    750.32    756.08    741.27    745.77    749.8460
   Median  512.59    744.38    560.84    746.73    754.26    738.82    743.32    745.7390
   Worst   497.65    741.5     556.7     744.14    752.1     734.96    738.73    725.7928
   Mean    511.65    744.4     561.83    746.95    754.26    738.47    743.03    744.51
   Std     7         1.17      3.18      1.51      1.11      1.36      1.59      4.72

KP28
   Best    659.05    938.36    700.81    944.36    950.7     927.6     937.62    956.4658
   Median  631.94    935.21    693.91    941.18    949.42    924.15    933.82    956.4548
   Worst   615.25    932.94    687.84    937.3     947.36    920.73    930.6     946.4512
   Mean    633.31    935.18    694.53    941.14    949.17    924.25    933.7     955.73
   Std     8.37      1.29      3.87      2         1         1.64      1.8       2.15

KP29
   Best    780.89    1118.83   833.43    1129.81   1140.69   1106.12   1121.58   1150.6657
   Median  755.93    1115.78   819.96    1127.63   1136.71   1102.06   1115.3    1150.5952
   Worst   742.55    1112.07   813.31    1123.39   1133.22   1098.7    1111.32   1150.2488
   Mean    755.48    1115.23   821.07    1127.15   1136.57   1102.07   1115.39   1150.57
   Std     9.97      1.68      4.59      1.8       1.6       1.93      2.57      0.07

KP30
   Best    867.81    1238.16   916.07    1254.53   1263.67   1223.38   1240.66   1268.7089
   Median  835.85    1234.81   906.09    1252.63   1260.42   1220.21   1234.72   1266.2494
   Worst   818.62    1231.58   896.47    1247.62   1257.85   1214.94   1231.26   1263.4345
   Mean    835.23    1234.53   905.17    1252.08   1260.46   1219.88   1234.95   1266.23
   Std     9.72      1.85      5.44      1.71      1.54      2.07      2.26      1.05

KP31
   Best    1092.87   1579.8    1148.13   1613.95   1623.3    1559.19   1585.94   1644.1290
   Median  1061.59   1577.19   1140.14   1609.36   1618.89   1553.33   1579.92   1644.1290
   Worst   1044.6    1573.51   1129.41   1605.75   1613.54   1545.65   1572.33   1562.5883
   Mean    1062.7    1577.17   1139.77   1609.53   1618.77   1553.04   1579.7    1640.25
   Std     10.58     1.74      4.17      2.17      2.09      2.71      3.43      17.00

KP32
   Best    1269.54   1830.65   1332.06   1875.71   1879.12   1803.16   1839.01   1865.5547
   Median  1222.61   1826.22   1314.44   1871.05   1874.11   1797.45   1830.55   1865.5547
   Worst   1205.88   1821.17   1304.5    1866.99   1868.61   1792.59   1824.55   1851.2552
   Mean    1224.09   1825.98   1314.9    1870.91   1874.04   1797.55   1831.37   1864.60
   Std     12.96     2.38      7         2.28      2.65      2.75      3.88      3.0132

Performance assessment

Regarding the assessment, the performance of the proposed algorithm is investigated using the Wilcoxon signed ranks (WSRs) test for a better comparison [43]. The WSRs test is a nonparametric test utilized in hypothesis-testing situations involving a design with two samples [42]. It is a pair-wise test that aims to find significant differences between the behaviors of two algorithms. The WSRs test works as follows: first, the differences between the scores of the two algorithms on each of the n problems are computed and ranked according to their absolute values. Second, \(R^{+}\) and \(R^{-}\) are determined, where \(R^{+}\) is the sum of the positive ranks and \(R^{-}\) is the sum of the negative ranks; the test statistic is the minimum of \(R^{+}\) and \(R^{-}\). A result of \(p<0.05\) (the p value measures the evidence against the null hypothesis) indicates rejection of the null hypothesis, while \(p>0.05\) indicates a failure to reject it.
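A minimal sketch of this pair-wise test with SciPy; the two score vectors below are illustrative stand-ins for the per-instance results of two algorithms.

```python
from scipy.stats import wilcoxon

# Example: best profits of two algorithms on five instances (illustrative values)
algo_a = [63.2149, 131.1273, 195.0331, 316.3039, 448.8721]
algo_b = [59.7561, 122.4199, 190.2497, 307.8299, 441.2110]

stat, p = wilcoxon(algo_a, algo_b)   # stat = min(R+, R-)
print(f"W = {stat}, p = {p:.4f}")    # reject the null hypothesis when p < 0.05
```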

Therefore, we apply the WSRs test to the proposed IBBA-RSS algorithm against the different algorithms that appear in Table 2, and the obtained results are reported in Table 7. The WSRs test is also applied to the results for the medium-size KP instances depicted in Table 4, with the outcomes reported in Table 8, and to the results for the large-scale KP instances depicted in Tables 5 and 6, with the outcomes reported in Table 9. From Tables 7, 8 and 9, it can be concluded that the proposed IBBA-RSS is superior both in solution quality and in robustness of the results, and that it maintains a significant balance between global exploration and local exploitation.

Most of the \(p\) values reported in Tables 7, 8 and 9 are less than 0.05 (5% significance level), which is strong evidence against the null hypothesis; the results obtained by the proposed approach are therefore statistically better and have not occurred by chance.

Convergence analysis

To analyze the convergence of the proposed algorithm, statistical measures, the Wilcoxon signed ranks (WSRs) test and the improvement ratio were employed. Tables 2, 4, 5 and 6 demonstrate the superiority of the proposed approach regarding optimality. Further, the nonparametric WSRs test is employed to identify the winning algorithm, where Tables 7, 8 and 9 show that the proposed algorithm outperforms the other comparative algorithms regarding the obtained \(p\) values. The improvement ratios for the large-scale test instances are 1.4313, 1.0708, 1.2748, 1.1861, 0.4242, 0.6764, 0.3515, 0.3498, 0.3656, 0.3742, 0.4306 and 1.9425%, respectively, when comparing IBBA-RSS with the BBA phase; the improvement ratios obtained by comparing IBBA-RSS with V-BBA are 5.4714, 6.6404, 2.4526, 2.6791, 1.7067, 1.7673, 2.2394, 0.9915, 0.5675, 1.6160, 1.4548 and 3.2288%, respectively; and comparing the BBA phase with V-BBA yields improvement ratios of 4.0988, 5.6298, 1.1929, 1.5108, 1.2879, 1.0983, 1.8945, 0.6439, 0.2027, 1.2467, 1.0287 and 1.3117%, respectively. Based on these improvement ratios, it can be concluded that the proposed IBBA-RSS algorithm achieves the better ratios; from a practical point of view, for large-scale problems such ratios are very significant. Despite the high dimensionality of the KP instances, it is noteworthy that the proposed algorithm delivers very significant results within a small number of iterations compared with the other algorithms. Based on the presented analyses, the inherent source of this improvement lies in incorporating the RSS as a local search strategy, which accelerates the convergence behavior and avoids running the algorithm without any improvement in the outcomes. It can be concluded that the proposed IBBA-RSS performs significantly well and that the premature convergence of the BBA phase is mitigated efficiently.
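As a hedged illustration of how such percentages can be derived, the sketch below assumes the improvement ratio relates the mean profits of two algorithms as \((\mathrm{mean}_A-\mathrm{mean}_B)/\mathrm{mean}_B \times 100\%\); the exact definition behind the reported figures is the one given with the experiments, and the sample values here are an illustrative pairing only.

```python
# Sketch of an improvement-ratio computation under the assumed definition
# IR = (mean_A - mean_B) / mean_B * 100%.  Values below are illustrative.
def improvement_ratio(mean_a: float, mean_b: float) -> float:
    """Percentage improvement of algorithm A over algorithm B."""
    return (mean_a - mean_b) / mean_b * 100.0

# Hypothetical pairing of two mean profits on one large-scale instance.
print(f"{improvement_ratio(1266.23, 1252.08):.4f}%")
```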

In this subsection, a comparative study has been carried out to evaluate the performance of the proposed IBBA-RSS algorithm with respect to the binarization strategy and the hybridization. In this respect, a new strategy is introduced based on the multi-V-shaped transfer function, which provides more effective exploration than the sigmoid transfer function; a minimal sketch of V-shaped binarization is given below. On the one hand, the hybridization mechanism, implemented by incorporating the RSS as a local search step, can avoid trapping at undesirable values; this hybridization also accelerates the convergence while preserving the search efficiency and effectiveness. On the other hand, the proposed algorithm is highly competitive with the other methods in terms of the statistical measures and the Wilcoxon signed ranks test, so the hybrid approach has great potential for solving large-scale knapsack problems. Moreover, it can be recognized from the results obtained on the 32 test instances that the proposed IBBA-RSS is capable of attaining satisfactory solutions with an appropriate exploitation potential, because the embedded RSS mechanism effectively stimulates the exploitation tendency of the BBA phase.
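The following sketch shows one standard member of the V-shaped transfer family from the binary bat literature (the arctan form); it is not the paper's multi-V-shaped function itself, whose exact definition is given with the methods, but it illustrates the bit-update rule that distinguishes V-shaped from sigmoid binarization.

```python
# Sketch of V-shaped binarization (one standard member of the V-shaped
# family).  Unlike the sigmoid rule, which maps velocity to the probability
# of the bit being 1, the V-shaped rule flips the current bit with a
# probability that grows with the velocity magnitude, favoring exploration.
import math
import random

def v_transfer(v: float) -> float:
    """V-shaped transfer: |(2/pi) * arctan((pi/2) * v)|, a value in [0, 1)."""
    return abs((2.0 / math.pi) * math.atan((math.pi / 2.0) * v))

def update_bit(x: int, v: float) -> int:
    """Flip the current bit with probability v_transfer(v); otherwise keep it."""
    return 1 - x if random.random() < v_transfer(v) else x

# Example: a large |velocity| makes a flip likely; a small one keeps the bit.
print(update_bit(1, 4.0), update_bit(0, 0.01))
```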
Fig. 8

Convergence behavior for the large-scale KP instances (KP\(_{21}\)–KP\(_{32}\))

Table 7

Wilcoxon test for comparison results in Table 2

| The proposed | Compared algorithm | \(R^{-}\) | \(R^{+}\) | p value | Winner |
|---|---|---|---|---|---|
| IBBA-RSS | BHS | 45 | 0 | 0.007686 | IBBA-RSS |
| IBBA-RSS | DBHS | 0 | 0 | – | – |
| IBBA-RSS | NGHS1 | 3 | 0 | 0.179712 | IBBA-RSS |
| IBBA-RSS | ABHS | 0 | 0 | – | – |
| IBBA-RSS | ABHS1 | 28 | 0 | 0.017960 | IBBA-RSS |
| IBBA-RSS | SBHS | 0 | 0 | – | – |

Table 8

Wilcoxon test for comparison results in Table 4

| The proposed | Compared algorithm | \(R^{-}\) | \(R^{+}\) | p value | Winner algorithm |
|---|---|---|---|---|---|
| IBBA-RSS | BBA | 15 | 0 | 0.043114 | IBBA-RSS |
| IBBA-RSS | CI | 12.5 | 8.5 | 0.674987 | IBBA-RSS |
| IBBA-RSS | B&B | 6 | 4 | 0.715001 | IBBA-RSS |

Table 9

Wilcoxon test for comparison results in Tables 5 and 6

| The proposed | Compared algorithm | \(R^{-}\) | \(R^{+}\) | p value | Winner algorithm |
|---|---|---|---|---|---|
| IBBA-RSS | BBA | 78 | 0 | 0.002218 | IBBA-RSS |
| IBBA-RSS | SBHS | 78 | 0 | 0.002218 | IBBA-RSS |
| IBBA-RSS | IHS | 78 | 0 | 0.002218 | IBBA-RSS |
| IBBA-RSS | GHS | 78 | 0 | 0.002218 | IBBA-RSS |
| IBBA-RSS | SAHS | 78 | 0 | 0.002218 | IBBA-RSS |
| IBBA-RSS | EHS | 78 | 0 | 0.002218 | IBBA-RSS |
| IBBA-RSS | NGHS | 78 | 0 | 0.002218 | IBBA-RSS |
| IBBA-RSS | NDHS | 78 | 0 | 0.002218 | IBBA-RSS |
| IBBA-RSS | PSFHS | 78 | 0 | 0.002218 | IBBA-RSS |
| IBBA-RSS | BHS | 78 | 0 | 0.002218 | IBBA-RSS |
| IBBA-RSS | DBHS | 78 | 0 | 0.002218 | IBBA-RSS |
| IBBA-RSS | NGHS1 | 78 | 0 | 0.002218 | IBBA-RSS |
| IBBA-RSS | ABHS | 78 | 0 | 0.002218 | IBBA-RSS |
| IBBA-RSS | ABHS1 | 78 | 0 | 0.002218 | IBBA-RSS |
| IBBA-RSS | ITHS | 78 | 0 | 0.002218 | IBBA-RSS |
| IBBA-RSS | V-BBA | 78 | 0 | 0.002218 | IBBA-RSS |

Careful observation reveals the following achievements of the proposed IBBA-RSS algorithm:

  (a) IBBA-RSS performs better on all large-scale KP problems, while the other algorithms often miss the better results.

  (b) IBBA-RSS obtains relatively stable and better results overall with respect to the statistical measures.

  (c) IBBA-RSS combines the merits of the BBA and the RSS to achieve elevated performance.

  (d) IBBA-RSS yields a promising improvement in the attained profit and can avoid trapping in local optima.

  (e) The proposed methodology opens up numerous research directions for solving different variants of the KP, such as the multi-dimensional KP and the quadratic KP.

Conclusions and future work

This paper presented a novel injective binary bat algorithm based on a rough set scheme (IBBA-RSS) for solving 0–1 knapsack problems. To overcome the BBA's tendency to converge to local optima and to improve its exploration and exploitation capabilities, it is hybridized with the RSS phase. Furthermore, the survival of bats is governed by an injective (one-to-one) strategy, where the fitter individual replaces the worst one based on the feasibility rule. The performance of the proposed algorithm has been extensively investigated on small-scale, medium-scale and large-scale instances of the 0–1 KP, and the algorithm has been compared with several algorithms from the literature. Based on the statistical measures, the proposed algorithm explores and exploits better-quality solutions and ranks best amongst the compared algorithms. The convergence trend of the average best results for IBBA-RSS is preferable to the equivalent curves for BBA, and nonparametric statistical tests affirm that the optimality of the solutions is significantly enriched. The results reveal that IBBA-RSS provides very competitive results compared with BBA and the other investigated algorithms. Based on the presented analyses, it can be concluded that IBBA-RSS has a desirable performance and that the premature convergence of the BBA phase is mitigated efficiently.
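As a concrete illustration of the feasibility rule mentioned above, the following minimal sketch compares two 0–1 solutions in the spirit of Deb's rules [35]; the helper names and the tie handling are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch of a feasibility-rule comparison for the 0-1 KP, assuming profits p,
# weights w, capacity C and 0/1 decision vectors (illustrative names).
def profit(x, p):
    """Objective value of a 0/1 solution."""
    return sum(pi for pi, xi in zip(p, x) if xi)

def overload(x, w, C):
    """Constraint violation: excess weight over capacity, 0 if feasible."""
    return max(0.0, sum(wi for wi, xi in zip(w, x) if xi) - C)

def better(x, y, p, w, C) -> bool:
    """True if x should survive over y: a feasible solution beats an
    infeasible one, two feasible solutions compare by profit, and two
    infeasible ones compare by the smaller violation."""
    ox, oy = overload(x, w, C), overload(y, w, C)
    if ox == 0 and oy == 0:
        return profit(x, p) > profit(y, p)
    return ox < oy

# Example: with C = 10, the feasible vector survives over the overloaded one.
p, w, C = [10, 7, 4], [6, 5, 3], 10
print(better([1, 0, 1], [1, 1, 1], p, w, C))  # True
```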

For future work, the proposed IBBA-RSS algorithm can be investigated for different forms of the KP, such as the multi-dimensional KP and the quadratic KP. Further research on other metaheuristic algorithms, such as krill herd, monarch butterfly optimization (MBO), the earthworm optimization algorithm (EWA), elephant herding optimization (EHO) and the moth search (MS) algorithm, needs to be carried out to solve different forms of the KP. Finally, we hope to address a new variant of the KP, namely the bi-level KP.

References

  1. Shen XJ, Wang WW, Zheng BJ, Li YX (2006) Based on improved optimizing particle swarm optimization to solve 0–1 knapsack problem. Comput Eng 32(18):23–24
  2. Hassanien AE, Alamry E (2015) Swarm intelligence: principles, advances, and applications. CRC Press/Taylor & Francis, Boca Raton
  3. Liu Y, Liu C (2009) Schema-guiding evolutionary algorithm for 0–1 knapsack problem. In: International Association of Computer Science and Information Technology—spring conference, pp 160–164
  4. Martello S, Toth P (1990) Knapsack problems: algorithms and computer implementations. Wiley, New York
  5. Shi HX (2006) Solution to 0/1 knapsack problem based on improved ant colony algorithm. In: International conference on information acquisition, pp 1062–1066
  6. Lin FT (2008) Solving the knapsack problem with imprecise weight coefficients using genetic algorithms. Eur J Oper Res 185(1):133–145
  7. Li Z, Li N (2009) A novel multi-mutation binary particle swarm optimization for 0/1 knapsack problem. In: Control and decision conference, pp 3042–3047
  8. Zhang X, Huang S, Hu Y, Zhang Y, Mahadevan S, Deng Y (2013) Solving 0–1 knapsack problems based on amoeboid organism algorithm. Appl Math Comput 219(19):9959–9970
  9. Bhattacharjee KK, Sarmah SP (2014) Shuffled frog leaping algorithm and its application to 0/1 knapsack problem. Appl Soft Comput 19:252–263
  10. Kulkarni AJ, Shabir H (2016) Solving 0–1 knapsack problem using cohort intelligence algorithm. Int J Mach Learn Cybern 7(3):427–441
  11. Shen W, Xu B, Huang J (2011) An improved genetic algorithm for 0–1 knapsack problems. In: Second international conference on networking and distributed computing (ICNDC), pp 32–35
  12. Li ZK, Li N (2009) A novel multi-mutation binary particle swarm optimization for 0/1 knapsack problem. In: Control and decision conference, pp 3042–3047
  13. Azad MAK, Rocha AMAC, Fernandes EMGP (2014) Improved binary artificial fish swarm algorithm for the 0–1 multidimensional knapsack problems. Swarm Evol Comput 14:66–75
  14. Zou D, Gao L, Li S, Wu J (2011) Solving 0–1 knapsack problem by a novel global harmony search algorithm. Appl Soft Comput 11:1556–1564
  15. Truong TK, Li K, Xu Y (2013) Chemical reaction optimization with the greedy strategy for the 0–1 knapsack problem. Appl Soft Comput 13:1774–1780
  16. Li Z, Ma L, Zhang H (2012) Genetic mutation bat algorithm for 0–1 knapsack problem. Comput Eng Appl 48(4):50–53
  17. Feng Y, Wang G-G, Li W, Li N (2017) Multi-strategy monarch butterfly optimization algorithm for discounted 0–1 knapsack problem. Neural Comput Appl. doi:10.1007/s00521-017-2903-1
  18. Feng Y, Wang G-G, Deb S, Lu M, Zhao X-J (2015) Solving 0–1 knapsack problem by a novel binary monarch butterfly optimization. Neural Comput Appl. doi:10.1007/s00521-015-2135-1
  19. Feng Y, Wang G-G, Gao X-Z (2016) A novel hybrid cuckoo search algorithm with global harmony search for 0–1 knapsack problems. Int J Comput Intell Syst 9(6):1174–1190. doi:10.1080/18756891.2016.1256577
  20. Yang X-S (2010) New metaheuristic bat-inspired algorithm. In: González J, Pelta D, Cruz C, Terrazas G, Krasnogor N (eds) Nature inspired cooperative strategies for optimization (NICSO 2010), vol 284. Springer, Berlin, pp 65–74
  21. Yang X-S, Gandomi AH (2012) Bat algorithm: a novel approach for global engineering optimization. Eng Comput 29(5):464–483
  22. Gandomi AH, Yang X-S, Alavi AH, Talatahari S (2013) Bat algorithm for constrained optimization tasks. Neural Comput Appl 22(6):1239–1255
  23. Fister I Jr, Fister D, Yang X-S (2013) A hybrid bat algorithm. Elektrotehniski Vestnik
  24. Baziar A, Kavoosi-Fard AA, Zare J (2013) A novel self-adaptive modification approach based on bat algorithm for optimal management of renewable MG. J Intell Learn Syst Appl 5:11–18
  25. Wang G, Guo L (2013) A novel hybrid bat algorithm with harmony search for global numerical optimization. J Appl Math 2013:21
  26. Yilmaz S, Kucuksille EU (2013) Improved bat algorithm (IBA) on continuous optimization problems. Lect Notes Softw Eng 1(3):279–283
  27. Wang X, Wang W, Wang Y (2013) An adaptive bat algorithm. Lect Notes Comput Sci 7996:216–223
  28. Fister I Jr, Fong S, Brest J, Fister I (2014) A novel hybrid self-adaptive bat algorithm. Sci World J 2014:1–12
  29. Mirjalili S, Mirjalili SM, Yang X-S (2013) Binary bat algorithm. Neural Comput Appl 25(3–4):663–681
  30. Varuna S, Ramya R (2017) An integration of binary bat algorithm and Naïve Bayes classifier for intrusion detection in distributed environment. Int J Adv Res Comput Commun Eng 6(2):164–168
  31. Crawford B, Soto R, Olivares-Suárez M, Paredes F (2014) A binary firefly algorithm for the set covering problem. In: Silhavy R, Senkerik R, Oplatkova Z, Silhavy P, Prokopova Z (eds) Modern trends and techniques in computer science. Advances in intelligent systems and computing, vol 285. Springer, Cham. doi:10.1007/978-3-319-06740-7_6
  32. Pawlak Z (1982) Rough sets. Int J Comput Inf Sci 11:341–356
  33. Rizk-Allah RM (2016) Fault diagnosis of the high-voltage circuit breaker based on granular reduction approach. Eur J Sci Res 138(1):29–37
  34. Kennedy J, Eberhart RC (1997) A discrete binary version of the particle swarm algorithm. In: IEEE international conference on computational cybernetics and simulation, pp 4104–4108
  35. Deb K (2000) An efficient constraint handling method for genetic algorithms. Comput Methods Appl Mech Eng 186(2–4):311–338
  36. Zou D, Gao L, Wu J, Li S (2010) Novel global harmony search algorithm for unconstrained problems. Neurocomputing 73(16):3308–3318
  37. Kong X, Gao L, Ouyang H, Li S (2015) A simplified binary harmony search algorithm for large-scale 0–1 knapsack problems. Expert Syst Appl 42(12):5337–5355
  38. Geem ZW (2005) Harmony search in water pump switching problem. In: Advances in natural computation. Lecture notes in computer science, vol 3612. Springer, Berlin, pp 751–760
  39. Wang L, Xu Y, Mao Y, Fei M (2010) A discrete harmony search algorithm. In: Life system modeling and intelligent computing. Communications in computer and information science, vol 98. Springer, Berlin, pp 37–43
  40. Wang L, Yang R, Xu Y, Niu Q, Pardalos PM, Fei M (2013) An improved adaptive binary harmony search algorithm. Inf Sci 232:58–87
  41. Wang L, Yang R, Pardalos PM, Qian L, Fei M (2013) An adaptive fuzzy controller based on harmony search and its application to power plant control. Int J Electr Power Energy Syst 53:272–278
  42. Kulkarni AJ, Krishnasamy G, Abraham A (2017) Cohort intelligence: a socio-inspired optimization method. Springer International Publishing, Switzerland
  43. Derrac J, García S, Molina D, Herrera F (2011) A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol Comput 1(1):3–18

Copyright information

© The Author(s) 2017

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Department of Basic Engineering Science, Faculty of Engineering, Menoufia University, Shebin El-Kom, Egypt
  2. Faculty of Computers and Information, Cairo University, Giza, Egypt
  3. Scientific Research Group in Egypt, Cairo, Egypt
