Abstract
In many scenarios, the nature of the decision-making is discrete, and we have to deal with a situation where decisions have to be made from a set of discrete choices, or mutually exclusive alternatives. Choices like passing the electric signal versus not passing the electric signal, going upward versus downward, or choosing a certain route over other available routes are discrete in nature. There are many physical systems for which continuous variable modeling is not sufficient to handle their complexity. For instance, communication models, transportation models, finite element analysis, and network routing models are discrete models. The discrete nature of the search space offers the leverage of definiteness, and possibilities for graphical representation of given particular choices. In fact, discrete optimization problems are of paramount importance in various branches of science, like decision-making, information systems, and combinatorics. Operations management decision problems, like product distribution, manufacturing facility design, machine sequencing, and production scheduling problems, fall under the purview of discrete optimization problems. Network design, circuit design, and automated production systems are also represented as discrete optimization problems. Moreover, the application spectrum of discrete optimization problems includes data mining, data processing, cryptography, graph theory, and many others.
The decision space of a discrete optimization problem is either finite or countably infinite. Mathematically, a general discrete optimization problem is given by Eq. (4.1),
$$\begin{aligned} \underset{X \in P}{\text {optimize}}\quad C(X) \end{aligned}$$(4.1)
where X is a permutation of decision variables, P is the set of feasible permutations or the search space, and C(X) is the objective function. We are interested in finding an optimal permutation or arrangement \(X \in P\) such that the objective value is optimal. The problem of finding the optimal arrangement might look simple and easy to handle, but the discreteness of the problem brings a massive burden of dimensionality and computational complexity. For instance, a traveling salesman problem (TSP) with 100 cities would require \((100-1)!\) permutations to be checked if the brute-force method were applied. 99! is approximately \(9.3326 \times 10^{155}\), while our observable universe is estimated to contain only about \(10^{82}\) atoms. Clearly, the computational cost is enormous, and the brute-force method is not a viable choice. Discrete optimization problems may look simple in formulation, but they can be computationally expensive. First, we explain a few discrete optimization models, then methods to solve them are included, and finally, discrete variants of SCA are discussed in subsequent sections.
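To make the combinatorial blow-up concrete, the brute-force approach can be sketched in a few lines: fix city 0 as the start and enumerate all \((n-1)!\) remaining tours. The cost matrix below is an illustrative example, not taken from the text.

```python
from itertools import permutations

def brute_force_tsp(dist):
    """Enumerate all (n-1)! tours that start (and end) at city 0."""
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        cost = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

# Illustrative symmetric cost matrix for 4 cities: only 3! = 6 tours here,
# but 100 cities would already mean 99! (about 9.3e155) tours.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
cost, tour = brute_force_tsp(dist)
```

Even at this toy size the factorial growth is visible: adding a fifth city would quadruple the number of tours.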
4.1 Discrete Optimization Models
We mention some discrete optimization problems to give the reader a brief idea. For more discrete optimization problems, the reader can refer to the book ‘Discrete Optimization’ by Parker and Rardin [1].
1. Traveling Salesman Problem (TSP): Suppose a salesman wants to visit n cities lying in a geographical area. The salesman wants to start his journey from the present city, travel the remaining \(n-1\) cities exactly once, and return to his current location while minimizing the cost of the journey. This cost may include money, time, distance, or all of these. The problem can be visualized on a graph where each city represents a node (vertex). Arc (edge) lengths denote the associated cost of traveling between the cities. We have already discussed the complexity of TSP with 100 cities. Mathematically, TSP can be formulated as:
For any directed (or, undirected) graph with certain fixed weights lying on the edges, we are interested in determining a closed cycle that includes each vertex of the graph exactly once, and this closed cycle would yield minimum total edge weight. Graphical illustration of TSP with 6 cities is given in Fig. 4.1.
The application spectrum of the TSP is very wide. Many well-known problems like vehicle routing, computer wiring, X-ray crystallography, crew scheduling, and aircraft scheduling can be studied as instances of the TSP [2].
In the literature, many exact and approximate methods are available for handling the traveling salesman problem. One of the earliest approaches to solve the TSP was proposed by Dantzig, Fulkerson, and Johnson (DFJ) [3]. The DFJ algorithm formulates the TSP as an integer linear programming (ILP) problem whose constraints prohibit the formation of subtours, i.e., tours containing fewer than n vertices [3]. Miller, Tucker, and Zemlin (MTZ) proposed an alternative ILP formulation that reduces the number of subtour elimination constraints at the cost of introducing a new variable [4]. Other than ILP approaches, branch-and-bound (BB) algorithms have also proved effective in providing optimal solutions to the TSP. In BB algorithms, some of the problem constraints are relaxed initially, and at later stages, feasibility is regained by including constraints in an enumerative manner [5]. Many researchers, including Eastman [6], Little et al. [7], Shapiro [8], Murty [9], Bellmore and Malone [10], Garfinkel [11], Smith et al. [12], Carpaneto and Toth [13], Balas and Christofides [14], and Miller and Pekny [15], proposed various branch-and-bound algorithms for handling TSP instances. However, the high computational complexity of the above-mentioned approaches motivated researchers to employ heuristic and meta-heuristic approaches. For instance, the ant colony optimizer (ACO) [16], particle swarm optimizer (PSO) [17], and discrete spider monkey optimizer (D-SMO) [18] are some popular approaches to produce near-optimal solutions for TSP instances.
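The simplest constructive heuristic for the TSP, the nearest-neighbor rule, illustrates how heuristics trade optimality for speed: always travel to the closest unvisited city. This is not one of the algorithms cited above, just a minimal sketch with an illustrative cost matrix.

```python
def nearest_neighbor_tour(dist, start=0):
    """Greedy construction: always move to the closest unvisited city."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour, current = [start], start
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist[current][c])
        tour.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    return tour

def tour_cost(dist, tour):
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

# Illustrative symmetric cost matrix for 4 cities.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
tour = nearest_neighbor_tour(dist)
```

The heuristic runs in \(O(n^2)\) time but carries no optimality guarantee; on larger instances it is typically used only to seed a local search.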
2. Knapsack Problem: In the Knapsack problem, we are interested in finding a finite set \(K \subseteq \mathcal {Z}\) of integer values \(k_i\), where \(i= 1,2, \ldots , n\), that minimizes \(F(k)= f(k_1, k_2, \ldots , k_n)\) while satisfying the restriction \(g(k_1, k_2, \ldots , k_n) \ge v\), where v is a parameter.
The Knapsack problem is of particular interest in various branches of science and in decision-making problems like resource allocation, portfolio allocation, capital budgeting, and project selection applications [19, 20]. The Knapsack problem has also been used in generating covering inequalities [21, 22] and in the area of cryptography [23]. Different versions of the knapsack problem are available in the literature, for instance, the multi-dimensional knapsack problem (MKP), the multiple-choice knapsack problem (MCKP), and the multi-dimensional multi-choice knapsack problem [24]. Traditional methods like dynamic programming, linear programming relaxation, Lagrangian relaxation, reduction methods, and branch-and-bound approaches are available in the literature to handle a variety of knapsack problems [25]. On the other hand, meta-heuristic approaches like simulated annealing (SA), the genetic algorithm (GA), and the particle swarm optimizer (PSO) have proved their capabilities in handling knapsack problems [26,27,28].
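The dynamic programming approach mentioned above can be sketched for the classical 0/1 knapsack, stated here in its usual maximization form (maximize value subject to a weight capacity); the item data are illustrative.

```python
def knapsack_01(values, weights, capacity):
    """Bottom-up DP; dp[w] = best value achievable within weight budget w."""
    dp = [0] * (capacity + 1)
    for v, wt in zip(values, weights):
        for w in range(capacity, wt - 1, -1):  # descending: each item used once
            dp[w] = max(dp[w], dp[w - wt] + v)
    return dp[capacity]

# Illustrative instance: three items, capacity 50.
best_value = knapsack_01([60, 100, 120], [10, 20, 30], 50)
```

The table has \(O(n \cdot W)\) entries, which is pseudo-polynomial: the running time grows with the numeric capacity, not just the number of items.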
3. Vertex Coloring: The vertex coloring problem is a particular case of the vertex labeling problem in which the vertices of a graph are labeled using colors. In this problem, the task is to label the vertices of a given graph with a minimum number of colors, such that no two adjacent vertices (vertices connected by an edge) are labeled with the same color.
Some major applications of the vertex coloring problem include scheduling tasks like job scheduling, aircraft scheduling, and time-table scheduling [29]. The assignment of radio frequencies, separating combustible chemical combinations, and handling multi-processor tasks are also instances of vertex coloring problems [30]. Traditional approaches like dynamic programming, branch-and-bound methods, and integer linear programs have been used as exact methods for handling the vertex coloring problem [31]. For instance, Lawler’s algorithm [32], Eppstein’s algorithm [33], Byskov’s algorithm [34], and the Bodlaender and Kratsch algorithm [35] utilize dynamic programming approaches, while Brelaz’s algorithm [36] and Zykov’s algorithm [37] are based on branch-and-bound methods. Meta-heuristic approaches like the genetic algorithm (GA), simulated annealing (SA), ant colony optimizer (ACO), and cuckoo search (CS) have been utilized in the literature for solving the vertex coloring problem [38, 39].
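A greedy first-fit coloring makes the problem concrete: each vertex receives the smallest color index not already used by its colored neighbors. This simple heuristic is not one of the exact algorithms cited above and does not guarantee a minimum coloring; the graph is illustrative.

```python
def greedy_coloring(adj):
    """First-fit: give each vertex the smallest color absent from its neighbors."""
    colors = {}
    for v in sorted(adj):
        used = {colors[u] for u in adj[v] if u in colors}
        c = 0
        while c in used:
            c += 1
        colors[v] = c
    return colors

# Illustrative graph: a triangle (0, 1, 2) with a pendant vertex 3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
coloring = greedy_coloring(adj)
```

The visiting order matters: a different vertex ordering can change how many colors first-fit uses, which is why exact methods are needed for the minimum.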
4. Shortest Path Planning Problem: The goal is to determine the shortest path connecting two fixed vertices in a given graph with certain fixed costs on the edges, such that the total length of a distinct sequence of edges connecting the two vertices is minimum.
The shortest path planning problem has applicability in road networks, electric circuit design, logistic communication, robotic path planning, etc. [40]. Dijkstra’s algorithm [41], the Floyd–Warshall algorithm [42], and the Bellman–Ford algorithm [43] are some traditional algorithms in the shortest path literature. Apart from the traditional approaches, genetic algorithms (GA), the particle swarm optimizer (PSO), the ant colony optimizer (ACO), and artificial bee colony (ABC) algorithms are popular meta-heuristic approaches to solve the shortest path problem [44, 45].
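Dijkstra's algorithm, mentioned above, can be sketched with a priority queue; the directed graph below is an illustrative adjacency list with non-negative edge weights.

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths for non-negative edge weights."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry: a shorter path to u was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Illustrative directed graph as an adjacency list of (neighbor, weight) pairs.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 6)], "C": [("D", 3)]}
shortest = dijkstra(graph, "A")
```

Rather than decreasing keys in place, this variant pushes duplicates and discards stale entries on pop, a common idiom with binary heaps.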
5. Set Covering: The task is to find a family of subsets \(\{P_i \subseteq P: i\in K\}\) (K is an index set) of a particular finite set P. Each subset \(P_i\) has a cost, say \(c_i\), associated with it. One has to choose a collection of subsets such that the union of these subsets contains all the elements of the universal set P and the total cost of the collection is minimum.
The set covering problem is of particular interest for various disciplines, like operations research, computer science, and management. Crew scheduling, optimal location, and optimal route selection problems are some instances of set covering problems [46]. Traditional approaches like linear programming relaxation, Lagrangian relaxation, and branch-and-bound methods are available in the literature to tackle set covering problems [47, 48]. Meta-heuristic approaches including the genetic algorithm (GA), ant colony optimizer (ACO), and XOR-based ABC have been utilized to solve the set covering problem [49, 50].
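The standard greedy heuristic for set covering repeatedly picks the subset with the lowest cost per newly covered element. It is a well-known approximation shown here only as an illustration; the instance data are made up.

```python
def greedy_set_cover(universe, subsets, costs):
    """Repeatedly pick the subset with minimum cost per newly covered element."""
    uncovered = set(universe)
    chosen, total_cost = [], 0
    while uncovered:
        best = min(
            (i for i in range(len(subsets)) if subsets[i] & uncovered),
            key=lambda i: costs[i] / len(subsets[i] & uncovered),
        )
        uncovered -= subsets[best]
        chosen.append(best)
        total_cost += costs[best]
    return chosen, total_cost

# Illustrative instance: cover {1..5} from four priced subsets.
subsets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
chosen, total_cost = greedy_set_cover({1, 2, 3, 4, 5}, subsets, [5, 10, 3, 1])
```

The greedy choice here picks the cheap subset {4, 5} first and then {1, 2, 3}; in general the greedy solution is within a logarithmic factor of the optimum but not necessarily optimal.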
Discrete problems are widely used across different branches of science and industry. For instance, an airline company will be interested in solving the TSP to optimize the route plan of its fleet. Similarly, the knapsack problem has a wide variety of applications in financial modeling, production and inventory management, and the optimal design of queueing network models in manufacturing [51]. A detailed discussion of discrete optimization problems and their applications is beyond the scope of this book. The focus of this chapter is to present an overview of the discrete versions of the sine cosine algorithm (SCA) and their applications. But, before proceeding further, we briefly discuss discrete optimization methods.
4.2 Discrete Optimization Methods
Similar to continuous optimization methods, discrete optimization methods can also be studied under the two major categories of exact methods and approximate methods. The exact methods offer the guarantee of finding the optimal solution in a bounded time, but these methods are incapable of handling problems with large instances. Branch-and-cut, branch-and-bound, and Gomory’s cutting plane method are some examples of exact methods. An interested reader can refer to the books ‘Combinatorial Optimization’ by Cook [52] and ‘Discrete Optimization’ by Parker and Rardin [1] for a detailed idea of exact methods. On the other hand, approximate methods do not offer any guarantee of locating the optimal solution(s), but can produce near-optimal solutions much faster than the exact methods [53]. These heuristic methods are easy to implement and do not require extensive computational power to generate solutions. Greedy algorithms, sequential algorithms, heuristic local search, and methods based on random cuts and randomized backtracking are some examples of approximate algorithms. Figure 4.2 represents the classification of discrete optimization methods.
In the last three decades, researchers have proposed many efficient methods to tackle discrete optimization problems. Among these methods, population-based meta-heuristic techniques have an important role to play. Meta-heuristics possess the potential to provide efficient near-optimal solution(s) to discrete optimization problems. The ant colony optimizer [16], tabu search (TS) [54], simulated annealing [55], and the genetic algorithm [56] are some meta-heuristic algorithms proposed for handling discrete optimization problems. Another class consists of meta-heuristic algorithms that were originally proposed for tackling continuous optimization problems but were later modified to solve discrete ones. Kennedy and Eberhart presented the discrete binary version of particle swarm optimization [57]. Discrete versions of other meta-heuristic algorithms include the discrete firefly-inspired algorithm [58], discrete teaching–learning-based optimization algorithm [59], binary coded firefly algorithm [60], binary magnetic optimization algorithm (BMOA) [61], binary cat swarm algorithm [62], binary dragonfly algorithm [63], and discrete spider monkey optimization [18].
Many discrete optimization problems can be reduced to binary optimization problems [1].
The sine cosine algorithm was originally proposed for continuous optimization problems [64]. The robust optimization capabilities of the SCA motivated researchers to design discrete versions of the algorithm. In the next section, binarization techniques adopted to modify the continuous sine cosine algorithm (SCA) will be discussed in detail.
4.3 Binary Versions of Sine Cosine Algorithm
In binary optimization problems, the decision variables can only take two values, typically 0 and 1. These two values can represent the logic values True/False, Yes/No, or On/Off. In general, the logical truth value is represented by ‘1’ and the false value is denoted by ‘0’. There are various techniques available to modify a continuous meta-heuristic into a binary one. Binarization methods, like the nearest integer (NI) [65], the normalization technique [66], transfer functions [67], angle modulation [68], the quantum approach [69], etc., are available in the literature to construct a binary version of a continuous evolutionary or swarm intelligence algorithm [70]. In the literature of binarization techniques [53], two major categories of binarization techniques have been identified. The first category corresponds to general techniques of binarization that allow the continuous meta-heuristic to proceed without altering the operators of the continuous algorithm. These techniques adopt mechanisms, like transfer functions [71] and angular modulation [68], to transform a continuous meta-heuristic algorithm into a binary version. Discrete PSO [17], the binary coded firefly algorithm [60], binary magnetic optimization algorithm (BMOA) [61], and binary cat swarm algorithm [62] are some of the algorithms under this first category. The second category consists of techniques in which the structure of the meta-heuristic is altered. These methods modify the structure of the search space and hence reformulate the operators of the algorithms. Some techniques under this category include quantum binary algorithms [69], set-based approaches [72], techniques based on the percentile concept [73], DBSCAN unsupervised learning [74], and K-means transition ranking [75]. Figure 4.3 depicts the various binarization techniques available in the literature.
4.3.1 Binary Sine Cosine Algorithm Using Round-Off Method
Hafez et al. [65] proposed a binary version of the sine cosine algorithm (SCA) that utilizes the standard binarization rule. This binary SCA was applied to feature selection problems. The goal is to choose combinations of features that maximize the classification performance and minimize the number of selected features. Therefore, the overall objective is to minimize the fitness value given by Eq. (4.2).
where \(f_X\) is the fitness function corresponding to a D-dimensional vector \(X=(x_1,x_2,\ldots , x_D)\), where \(x_i = 0 ~\text {or}~ 1\). \(x_i=1\) represents the selection of the ith feature, while \(x_i=0\) indicates the non-selection of the ith feature. D is the total number of features in the given dataset. \(\epsilon \) is the classifier error rate, and w is a constant controlling the importance of classification performance relative to the number of selected features.
In the proposed approach, the range of all the decision variables is constrained to \(\{0,1\}\) using the rounding method, in which the value of each decision variable is rounded to the integer value 0/1 by employing Eq. (4.3),
$$\begin{aligned} X_{ij}={\left\{ \begin{array}{ll} 1 &{}\text {if}\ X_{ij} > 0.5 \\ 0 &{}\text {otherwise} \end{array}\right. } \end{aligned}$$(4.3)
The features corresponding to the variable value 1 are selected, and the features with variable value 0 are rejected.
where \(X_{ij}\) is the value for ith search agent at the jth dimension.
The feature selection problem is of particular interest in machine learning. It is also an essential tool for attribute reduction or pre-processing of large data sets. The least significant features, which have very little relevance, are removed to reduce the computational burden of the classification algorithm. It is quite evident that from the set of features we have to select or reject each feature. The choice available to us is of the ‘Yes/No’ type, which falls under the purview of ‘0/1’ discrete optimization, or binary optimization. The proposed algorithm applied the sine cosine algorithm (SCA) to find the combinations of features that have maximum impact on the classification performance.
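The round-off binarization described above is straightforward to sketch. The threshold rule follows the rounding description in the text; the weighted fitness below is an assumed illustrative form (error rate traded off against the fraction of selected features), not the exact expression of Eq. (4.2).

```python
def binarize_round(position):
    """Round-off rule: a coordinate above 0.5 becomes 1, otherwise 0."""
    return [1 if x > 0.5 else 0 for x in position]

def fitness(bits, error_rate, w=0.9):
    """Assumed weighted fitness (illustrative, not the book's Eq. (4.2)):
    classifier error traded off against the fraction of selected features."""
    return w * error_rate + (1 - w) * (sum(bits) / len(bits))

# A continuous SCA position vector rounded to a feature-selection mask.
bits = binarize_round([0.2, 0.7, 0.5, 0.9])
```

In a full pipeline, `error_rate` would come from evaluating a classifier on the features flagged by `bits`; here it is supplied directly for illustration.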
4.3.2 Binary Sine Cosine Algorithm Using Transfer Functions
The procedure of converting a continuous meta-heuristic algorithm into its binary counterpart is called binarization. The technique of binarization using transfer functions has two major steps:
(1) The transfer function, which maps the values generated by a continuous meta-heuristic algorithm to the interval (0, 1).
(2) The binarization process, which consists of converting the real number lying in the interval (0, 1) to a binary value.
Kennedy et al. [17] introduced a transfer function (the sigmoid function) for converting continuous PSO into discrete PSO. A transfer function facilitates the movement of the ith search agent in the binary space by switching the value of the jth dimension from 0 to 1 and vice versa. The advantage of utilizing a transfer function is that it provides a probability of switching the solution coordinates at a low computational cost [53]. In the literature of binarization techniques, several transfer functions are available for converting a continuous meta-heuristic algorithm into its binary version [53]. The binary versions of continuous meta-heuristic algorithms are constructed using transfer functions, and these binary versions have a structure similar to their continuous counterparts. The search agents’ positions are updated in the continuous space, and these continuous values are mapped into the interval (0, 1) using transfer functions to generate a switching probability. These binary versions differ from their continuous counterparts in the sense that the search agents’ position vector is a vector of binary digits rather than a vector of continuous values, and the position update mechanism is concerned with switching search agents’ positions within the set \(\{0,1\}\) based on the transition probability obtained using the transfer function. The fundamental idea is to update the search agents’ positions in such a way that the bit value of a search agent is switched between 0 and 1 with a probability based on the updated position of the search agent in the continuous space. The idea of using transfer functions for converting a continuous meta-heuristic algorithm into a discrete one has also been incorporated in the sine cosine algorithm (SCA). The procedure for the two-step binarization technique using the transfer function method is graphically illustrated in Fig. 4.4.
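The two-step scheme can be sketched with the sigmoid transfer function from Kennedy and Eberhart's discrete PSO: map each continuous coordinate to a probability, then sample the bit against a uniform random number. The convention used below (bit = 1 when rand < probability) follows that common usage and is an assumption here.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def binarize(position, rng=random.random):
    """Step 1: transfer each coordinate to a probability in (0, 1).
    Step 2: sample the bit against a uniform random number in [0, 1)."""
    return [1 if rng() < sigmoid(x) else 0 for x in position]

# With the random draw pinned at 0.5 the outcome is deterministic:
# sigmoid(2.0) > 0.5, sigmoid(-2.0) < 0.5, and sigmoid(0.0) == 0.5 exactly.
bits = binarize([2.0, -2.0, 0.0], rng=lambda: 0.5)
```

Injecting `rng` keeps the sketch testable; in practice each dimension draws a fresh uniform random number.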
Reddy et al. [71] proposed and investigated four binary variants of SCA to solve the binary-natured profit-based unit commitment (PBUC) problem. The proposed variants use four different transfer functions for binary adaptation of the continuous search space and search agents. These transfer functions are as follows:
1. The tangent hyperbolic transfer function (\(\boldsymbol{T}\))
$$\begin{aligned} T(X^{t+1})= \tanh (X^{t+1})=\frac{\text {e}^{-(X^{t+1})}-1}{\text {e}^{-(X^{t+1})}+1} \end{aligned}$$(4.4)
The mapping is given by:
$$\begin{aligned} Y^{t+1}={\left\{ \begin{array}{ll} 0 &{}\text {if}\ \text {rand} < T(X^{t+1}) \\ 1 &{}\text {otherwise} \end{array}\right. } \end{aligned}$$(4.5)
where \(X^{t}\) is the real-valued position of the search agent in the tth iteration and \(Y^{t}\) is its corresponding binary position. Here, rand is a uniformly distributed random number in the range [0, 1].
2. Sigmoidal transfer function (\(\boldsymbol{S}\))
$$\begin{aligned} S(X^{t+1})=\frac{1}{1+\text {e}^{-X^{t+1}}} \end{aligned}$$(4.6)
The mapping is given by:
$$\begin{aligned} Y^{t+1}={\left\{ \begin{array}{ll} 0 &{}\text {if}\ \text {rand} < S(X^{t+1}) \\ 1 &{}\text {otherwise} \end{array}\right. } \end{aligned}$$(4.7)
where rand has the same meaning as mentioned above.
3. A modified sigmoidal transfer function (MS)
$$\begin{aligned} \text {MS}(X^{t+1})=\frac{1}{1+\text {e}^{-10(X^{t+1}-0.5)}} \end{aligned}$$(4.8)
The mapping is given by:
$$\begin{aligned} Y^{t+1}={\left\{ \begin{array}{ll} 0 &{}\text {if}\ \text {rand} < \text {MS}(X^{t+1}) \\ 1 &{}\text {otherwise} \end{array}\right. } \end{aligned}$$(4.9)
where rand has the same meaning as mentioned above.
4. Arctan transfer function (ArcT)
$$\begin{aligned} \text {ArcT}(X^{t+1}) = \left| \frac{2}{\pi }\arctan \left( \frac{\pi }{2}X^{t+1}\right) \right| \end{aligned}$$(4.10)
The mapping is given by:
$$\begin{aligned} Y^{t+1}={\left\{ \begin{array}{ll} 0 &{}\text {if}\ \text {rand} < \text {ArcT}(X^{t+1}) \\ 1 &{}\text {otherwise} \end{array}\right. } \end{aligned}$$(4.11)
where rand has the same meaning as mentioned above.
The performance of these different transfer functions in solving a binary profit-based unit commitment (PBUC) problem was investigated. The adequacy of the proposed approach, in terms of convergence and quality of solutions, was tested over a benchmark test set. In terms of solution quality, the arctan transfer function showed superior results among all the above-mentioned variants, while the simple sigmoid transfer function could not produce satisfactory results.
Following a similar trend, Taghian et al. [67] proposed two other binary versions of SCA using the two-step binarization technique. The first version is called the S-shaped binary sine cosine algorithm (SBSCA). In SBSCA, the S-shaped transfer function, defined in Eq. (4.12), is used to define a bounded probability of changing the positions of the search agents [67].
Then, the standard binarization rule, given in Eq. (4.13), is used to transform the solutions into a binary counterpart.
Here, rand is a uniformly distributed random number in the range [0, 1]. The second version is called the V-shaped binary sine cosine algorithm (VBSCA) [67]. In VBSCA, the V-shaped transfer function is used to calculate the position changing probabilities given as:
Then, the complement binarization rule, given by Eq. (4.15), is utilized to transform the solution into a binary domain.
where \(\bar{X}_{ij}^{t+1}\) represents the complement of \(X_{ij}^{t+1}\) at the iteration \(t+1\).
The performance of both proposed algorithms was assessed and compared with four popular binary optimization algorithms, including binary GSA [76] and the binary bat algorithm [77], over five UCI medical datasets: pima, lymphography, heart, breast cancer, and breast-WDBC. The experimental results demonstrated that both binary SCA variants effectively enhanced the classification accuracy and yielded competitive or even better results when compared with the other existing algorithms.
4.3.3 Binary Sine Cosine Algorithm Using Percentile Concept
Another binary variant of SCA, called the binary percentile sine cosine algorithm (BPSCA), was introduced by Fernandez et al. [78], in which the percentile concept was utilized to conduct the binary transformation of the sine cosine algorithm (SCA). In the binary percentile concept, the magnitude of the displacement in the jth component of a solution X is calculated. Based on the magnitude of the displacement, the solutions are grouped into the percentile values 20, 40, 60, 80, and 100. Solutions with the smallest displacement values are grouped into the 20-percentile value, while solutions with the largest displacement are grouped into the 100-percentile value [79].
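The grouping step can be sketched as follows: rank the solutions by displacement magnitude and assign each to the smallest bucket (20, 40, 60, 80, 100) that covers its rank fraction. This is an illustrative reading of the grouping rule, not Fernandez et al.'s exact implementation.

```python
def percentile_group(displacements):
    """Assign each solution the smallest percentile bucket (20/40/60/80/100)
    covering its rank among all displacement magnitudes."""
    n = len(displacements)
    order = sorted(range(n), key=lambda i: displacements[i])
    groups = [0] * n
    for rank, i in enumerate(order):
        for p in (20, 40, 60, 80, 100):
            # integer comparison of (rank + 1) / n <= p / 100, no float issues
            if (rank + 1) * 100 <= p * n:
                groups[i] = p
                break
    return groups

# Five illustrative displacement magnitudes, one per search agent.
groups = percentile_group([0.1, 0.9, 0.5, 0.3, 0.7])
```

The smallest mover (0.1) lands in the 20-percentile bucket and the largest (0.9) in the 100-percentile bucket, matching the grouping described in the text.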
The main issue with the binary percentile operator is that it may generate infeasible solutions in the search space. For handling infeasible solutions, BPSCA uses a heuristic operator, which chooses a new column when solutions need to be repaired. As an input argument, the operator considers the set \(S_{\text {in}}\), the set of columns to be repaired. The flowchart of the BPSCA algorithm is illustrated in Fig. 4.5.
To assess the performance of the percentile operator, BPSCA was applied to solve a classic combinatorial problem called the set covering problem (SCP). The experimental results demonstrated that the percentile operator plays an important role in maintaining good-quality solutions. In addition, BPSCA was compared with two of the best available meta-heuristic binary algorithms, namely jumping PSO (JPSO) [80] and the multi-dynamic binary black hole (MDBBH) algorithm [81]. The experiments showed that the solutions obtained using BPSCA were similar to those of jumping PSO, and BPSCA generated superior results when compared with MDBBH. The authors emphasized that, unlike JPSO, the percentile technique used in BPSCA allows the binarization of any continuous meta-heuristic algorithm [78].
Pinto et al. [73] proposed a percentile-based binary SCA (BPSCOA) using a repair operator instead of a heuristic operator. The repair operator handles the infeasible solutions produced during the optimization process. For repairing a particular solution, the coordinate with the maximum displacement measure is selected and eliminated from the solution [73]. The process is continued until a feasible solution is obtained. After this, the repaired solution is improved by incorporating new elements such that no constraints are violated [73]. The flowchart of BPSCOA is given in Fig. 4.6. The utility of the percentile concept in the binarization process was evaluated by applying it to the multi-dimensional knapsack problem (MKP). The results showed that the operator improved the precision and the quality of the solutions. The proposed method was compared with the binary artificial algae algorithm (BAAA) [82] and the K-means transition ranking (KMTR) algorithm [75].
All the discrete optimization problems discussed so far have a binary nature: the solutions to these problems were of the ‘Yes/No’, or ‘0/1’, type. Binary optimization problems hold an important position in the discrete world of choices. However, sometimes we are interested in finding integer solutions to real-world discrete optimization problems. Then, instead of making Boolean choices, we try to find solution(s) having integer values associated with the discrete optimization problem. In the next section, we discuss a general discrete version of the sine cosine algorithm, in which solutions can take any finite integer value, including the case of ‘0/1’-type binary optimization problems.
4.4 Discrete Versions of Sine Cosine Algorithm
Tawhid et al. [83] proposed a discrete version of the sine cosine algorithm (DSCA) for solving the traveling salesman problem (TSP). The objective function of the TSP is the total cost of a tour \(X = (X(1), X(2), \ldots , X(n))\),
$$\begin{aligned} C(X) = \sum _{i=1}^{n-1} C_{X(i),X(i+1)} + C_{X(n),X(1)} \end{aligned}$$
where the cost \(C_{i,j}\) represents the Euclidean distance between any two towns i and j. For solving the TSP, the authors adopted two local search techniques, the heuristic crossover [84] and the 2-opt [85] method, applied to the best solution based on two randomly generated numbers between 0 and 1 (say \(R_1\) and \(R_2\)), in the manner mentioned below.
The pseudo-code of DSCA is given by Algorithm 1.
The DSCA was tested on 41 different benchmark instances of the symmetric TSP. The results indicated that the technique provided optimal solutions for 27 benchmark problems and near-optimal solutions for the remaining ones. When the results were compared with other state-of-the-art techniques, the DSCA demonstrated promising and competitive performance.
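The 2-opt local search adopted by DSCA removes crossing edges by reversing a segment of the tour whenever the reversal shortens it. The sketch below is a generic 2-opt, not the authors' exact code, with an illustrative four-city instance whose starting tour crosses itself.

```python
import math

def tour_cost(tour, dist):
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def two_opt(tour, dist):
    """Reverse tour segments as long as a reversal shortens the tour."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                if a == d:  # skip: the two candidate edges share a city
                    continue
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

# Four illustrative cities at the corners of a unit square; the starting
# tour 0-1-2-3 crosses itself, and 2-opt untangles it.
pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
dist = [[math.hypot(px - qx, py - qy) for (qx, qy) in pts] for (px, py) in pts]
best = two_opt([0, 1, 2, 3], dist)
```

Each accepted move compares only the two removed edges against the two inserted ones, so an improvement check costs O(1) and a full sweep costs \(O(n^2)\).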
Gholizadeh et al. [86] proposed a discrete version of the sine cosine algorithm, called the discrete modified SCA, to tackle the discrete truss structure optimization problem. In this algorithm, the solutions obtained from the traditional SCA are rounded to their nearest integers to speed up the process of optimization,
$$\begin{aligned} X^{t+1} = \text {round}(X^{t+1}) \end{aligned}$$
where \(\text {round}(\cdot )\) rounds the values to their nearest integer. However, the solutions generated by these unintelligent round-offs might lie in an infeasible region, and their fitness values might differ drastically from those of the optimal solutions. Two main strategies, regeneration and a mutation operator, are incorporated to address the issue of infeasible solutions. These two strategies help the algorithm explore and exploit the design space in a more robust manner. In the regeneration strategy, the individual solutions of the population of size N are first sorted in ascending order of objective function value as follows:
where \(\text {sort}(X^t)\) is the current sorted population, and \(X^{t}_{k}\) to \(X^{t}_N\) are the worst solutions at iteration t, which are required to be regenerated. Then, the \(\lambda \times N\) worst search agents (\(X^{t}_{k}\) to \(X^{t}_N\)) are removed from the population, where \(\lambda \) is a user-defined parameter whose value lies in the interval (0, 1). The best solution found so far, \(X^*=[ X^{*}_{1}, X^{*}_{2}\ldots X^{*}_{j},\ldots , X^{*}_{D}]\), is then copied \(\lambda \times N\) times into the population, that is,
$$X^{t}_{i} = X^{*}, \qquad i = k, k+1, \ldots , N.$$
A randomly selected dimension \(j \in \{1, 2, \ldots , D\}\) of each solution \(X^{t}_{k}\) to \(X^{t}_{N-1}\) is regenerated in a random manner using Eq. (4.20),
$$X^{t}_{i,j} = X^{\text {L}}_{j} + r \left( X^{\text {U}}_{j} - X^{\text {L}}_{j} \right), \qquad i = k, \ldots , N-1, \qquad (4.20)$$
where \(X^{\text {L}}_{j}\) and \(X^{\text {U}}_{j}\) are the lower and upper bounds of the jth dimension, and r is a random number in [0, 1].
The regenerated variables of the individuals \(X^{t}_{k}\) to \(X^{t}_{N-1}\) are then substituted in the last particle (\(X^{t}_N\)) to increase the chances of finding promising regions of the search space. In the second strategy, a mutation operator is applied to the generated solutions to escape locally optimal regions. For each particle \((X_i,\ i=1,2,\ldots ,N)\), a random number in [0, 1] is generated, and if, for the ith particle, this random number is less than a pre-defined mutation rate \((\text {mr})\), \(X_i\) is regenerated using the following equation:
where \(\otimes \) denotes the vector product and \(R^t\) is a D-dimensional vector of random numbers in the range [0, 1] at the tth iteration. \(X^t_{\text {best}}\) is the best solution of the current population, and \(X^t_r\) is a randomly selected solution from the current population at iteration t. The values of \(\lambda \) and mr were taken to be 0.2 and 0.05, respectively. The pseudo-code of the algorithm is given in Algorithm 2.
![figure b](http://media.springernature.com/lw685/springer-static/image/chp%3A10.1007%2F978-981-19-9722-8_4/MediaObjects/540847_1_En_4_Figb_HTML.png)
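The two strategies above can be sketched in a few lines, assuming a simplified reading of [86]: every replaced agent gets one re-randomized dimension (rather than the exact substitution into the last particle \(X^{t}_N\)), and the mutation step uses one plausible form of the update built from \(X^t_{\text {best}}\) and a random solution \(X^t_r\), clamped to the variable bounds. All function and parameter names here are ours, not the authors'.

```python
import random

def regenerate_and_mutate(pop, fitness, lam, mr, lb, ub, rng=random):
    """One sweep of the regeneration and mutation strategies (simplified).

    pop     : list of D-dimensional integer designs; lower fitness is better
    lam, mr : fraction of worst agents to replace, mutation rate
    lb, ub  : per-dimension lower/upper bounds
    """
    N, D = len(pop), len(pop[0])
    # Sort ascending by objective value; the tail agents are the worst.
    pop = sorted(pop, key=fitness)
    k = N - max(1, int(lam * N))
    best = list(pop[0])
    for i in range(k, N):
        # Replace a worst agent with a copy of the best solution found so far,
        # then re-randomize one randomly chosen dimension (Eq. 4.20 style).
        pop[i] = list(best)
        j = rng.randrange(D)
        pop[i][j] = round(lb[j] + rng.random() * (ub[j] - lb[j]))
    for i in range(N):
        # Mutation: with probability mr, rebuild the agent around the best
        # solution (an assumed form of the update), clamped to the bounds.
        if rng.random() < mr:
            r = pop[rng.randrange(N)]
            pop[i] = [min(ub[d], max(lb[d],
                          round(best[d] + rng.random() * (best[d] - r[d]))))
                      for d in range(D)]
    return pop
```

With \(\lambda = 0.2\) and \(\text {mr} = 0.05\) as in [86], a population of 10 keeps its 8 best agents each sweep while 2 are rebuilt from the incumbent best.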
The proposed algorithm was applied to five well-known benchmark truss optimization problems, and the outcomes were compared with the original SCA and other optimization algorithms. The discrete modified SCA showed superiority over the other discrete algorithms [86].
Montoya et al. [87] proposed another discrete version of the sine cosine algorithm (SCA) to find the optimal locations of distributed generators (DGs) in alternating current (AC) distribution networks. The algorithm uses an integer codification of SCA [88]. This codification eases the implementation, since the whole population can be stored as a single matrix. Any infeasible solutions obtained during the search process are removed from the population. In the integer codification, each candidate DG location is assigned a number between 2 and n (where n is the total number of nodes), and the number 1 is reserved for the slack node. The locations of all the DGs can thus be read off directly from the nodes, and no node appears more than once in an individual \(x_i\), which maintains the codification. The initial population for the proposed SCA is represented as a matrix with dimensions NP \(\times \) N, defined as follows:
where NP denotes the population size and N is the number of DGs available for installation. In the initial population, the ith search agent in the jth dimension (\(x_{i,j}\)) is a randomly generated natural number between 2 and n, obtained using the following equation:
where \(\text {round}(\cdot )\) rounds its argument to the nearest integer, and rand is a normally distributed random number with mean 0 and standard deviation 1.
The following constraint must be satisfied to guarantee a feasible solution:
To preserve the feasible solution space when initializing the population, the authors ensure that all components of each solution differ from one another. The discrete version of the sine cosine algorithm is summarized in Algorithm 3.
![figure c](http://media.springernature.com/lw685/springer-static/image/chp%3A10.1007%2F978-981-19-9722-8_4/MediaObjects/540847_1_En_4_Figc_HTML.png)
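The integer codification can be illustrated with a short sketch. Rather than reproducing the round/rand formula of [87], this hypothetical `init_population` helper samples N distinct node indices from \(\{2, \ldots , n\}\) without replacement, which enforces the distinct-components constraint by construction, so no infeasible rows need to be removed afterwards.

```python
import random

def init_population(NP, N, n, rng=random):
    """NP candidate placements of N distributed generators on nodes 2..n.

    Node 1 is the slack node and is never selected.  Sampling without
    replacement keeps all components of a row distinct, so every row is
    feasible by construction.
    """
    return [sorted(rng.sample(range(2, n + 1), N)) for _ in range(NP)]
```

For a 10-node network with 3 DGs, each of the NP rows is a sorted triple of distinct node indices between 2 and 10.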
Practice Exercises
1. How is the binary version of SCA different from the original SCA?
2. What are the major limitations of using transfer functions for discrete optimization problems?
3. Binary PSO also uses a transfer function. Compare the transfer function used in binary PSO with the one used in binary SCA.
4. What is the difference between the sigmoid transfer function and the hyperbolic tangent transfer function?
5. What are the important issues to be considered while designing a discrete version of a meta-heuristic algorithm?
6. Running the standard SCA in the continuous space and simply converting its solutions to binary or discrete values appears easier than redesigning the algorithm to work directly in the discrete space. Explain, with the help of an example, why this shortcut cannot always be adopted.
References
R.L. Rardin, R.G. Parker, Discrete Optimization (Academic Press, Inc., 1988)
D. Devendra, Travelling Salesman Problem, Application and Theory, vol. 1 (InTech, 2010)
G. Dantzig, R. Fulkerson, S. Johnson, Solution of the large-scale travelling salesman problem. Oper. Res. (1954)
C.E. Miller, A.W. Tucker, R.A. Zemlin, Integer programming formulation and travelling salesman problem. J. Assoc. Comput. Mach. (1960)
G. Laporte, The traveling salesman problem: an overview of exact and approximate algorithms. Eur. J. Oper. Res. (1992)
W.L. Eastman, Linear programming with pattern constraints, PhD thesis, Harvard University, Cambridge, 1958
J.D.C. Little, K.G. Murty, D.W. Sweeney, C. Karel, An algorithm for travelling salesman problem. Oper. Res. 11 (1963)
D.M. Shapiro, Algorithms for the solution of the optimal cost and bottleneck traveling salesman problems, Sc.D. thesis, Washington University, St. Louis, MO, 1966
K.G. Murty, An algorithm for ranking all the assignments in order of increasing cost. Oper. Res. 16 (1968)
M. Bellmore, J.C. Malone, Pathology of travelling-salesman subtour-elimination algorithms. Oper. Res. 19, 278–307 (1971)
R.S. Garfinkel, On partitioning the feasible set in a branch-and-bound algorithm for the asymmetric traveling-salesman problem. Oper. Res. 21, 340–343 (1973)
T.H.C. Smith, G.L. Thompson, V. Srinivasan, Computational performance of three subtour elimination algorithms for solving asymmetric traveling salesman problems. Ann. Discrete Math. 1, 495–506 (1977)
G. Carpaneto, P. Toth, Some new branching and bounding criteria for the asymmetric travelling salesman problem. Manage. Sci. 26, 736–743 (1980)
E. Balas, N. Christofides, A restricted Lagrangean approach to the traveling salesman problem. Math. Program. 21, 19–46 (1981)
D.L. Miller, J.F. Pekny, Results from a parallel branch and bound algorithm for solving large asymmetric traveling salesman problems. Oper. Res. Lett. 8, 129–135 (1989)
M. Dorigo, M. Birattari, C. Blum, M. Clerc, T. Stützle, A.F.T. Winfield, Ant colony optimization and swarm intelligence, in 5th International Workshop (Springer, 2006)
J. Kennedy, R.C. Eberhart, A discrete binary version of the particle swarm algorithm, in 1997 IEEE International Conference on Systems, Man, and Cybernetics. Computational Cybernetics and Simulation, vol. 5 (IEEE, 1997), pp. 4104–4108
M.A.H. Akhand, S.I. Ayon, S.A. Shahriyar, N.H. Siddique, H. Adeli, Discrete spider monkey optimization for travelling salesman problem. Appl. Soft Comput. J. 86(4), 469–476 (2020)
J.H. Lorie, L.J. Savage, Three problems in capital rationing. J. Bus. 28, 229–239 (1955)
R. Nauss, The zero-one knapsack problem with multiple-choice constraints. Eur. J. Oper. Res. 2, 125–131 (1978)
E. Balas, E. Zemel, An algorithm for large zero-one knapsack problems. Oper. Res. 28, 1130–1154 (1980)
L.A. Wolsey, Faces for a linear inequality in 0–1 variables. Math. Program. 8, 165–178 (1975)
R.C. Merkle, M.E. Hellman, Hiding information and signatures in trapdoor knapsacks. IEEE Trans. Inf. Theory 24, 525–530 (1978)
C. Wilbaut, S. Hanafi, S. Salhi, A survey of effective heuristics and their application to a variety of knapsack problems. IMA J. Manag. Math. 19, 227–244 (2008)
K. Dudziński, S. Walukiewicz, Exact methods for the knapsack problem and its generalizations. Eur. J. Oper. Res. 28(1), 3–21 (1987)
A. Liu, J. Wang, G. Han, S. Wang, J. Wen, Improved simulated annealing algorithm solving for 0/1 knapsack problem, in Sixth International Conference on Intelligent Systems Design and Applications, 2006. ISDA’06, vol. 2 (IEEE, 2006)
F. Qian, R. Ding, Simulated annealing for the 0/1 multidimensional knapsack problem. Numer. Math. Engl. Ser. 16(4), 320 (2007)
L. Ouyang, D. Wang, New particle swarm optimization algorithm for knapsack problem, in 8th International Conference on Natural Computation (2012)
U. Ufuktepe, G.B. Turan, Applications of graph coloring, in Lecture Notes in Computer Science (2005)
P. Gupta, O. Sikhwal, A study of vertex—edge coloring techniques with application. Int. J. Core Eng. Manag. (IJCEM) 1(2) (2014)
A.M. de Lima, R. Carmo, Exact algorithms for the graph coloring problem. Rev. Inform. Teór. Apl. (RITA) 25 (2018). ISSN 2175-2745
E. Lawler, A note on the complexity of the chromatic number problem. Inf. Process. Lett. 5(3), 66–67 (1976)
D. Eppstein, Small maximal independent sets and faster exact graph coloring. J. Graph Algorithms Appl. 7(2), 131–140 (2003)
J.M. Byskov, Chromatic number in time O(2.4023n) using maximal independent sets. BRICS Rep. Ser. 9(45), 1–9 (2002)
H.L. Bodlaender, D. Kratsch, An exact algorithm for graph coloring with polynomial memory. UU-CS, vol. 2006, no. 15, pp. 1–5 (2006)
D. Brelaz, New methods to color the vertices of a graph. Commun. Appl. Comput. Mach. 22(4), 251–256 (1979)
A. Zykov, On some properties of linear complexes. Mat. Sb. (N.S.) 24(66)(2), 418–419 (1962)
A. Layeb, H. Djelloul, S. Chikhi, Quantum inspired cuckoo search algorithm for graph colouring problem. Int. J. Bio-Inspired Comput. 7, 183–194 (2015)
A. Kole, D. De, A.J. Pal, Solving graph coloring problem using ant colony optimization, simulated annealing and quantum annealing—a comparative study, in Studies in Computational Intelligence, vol. 1029 (Springer, 2022)
M. Kairanbay, H.M. Jani, A review and evaluations of shortest path algorithms. Int. J. Sci. Technol. Res. 2(6) (2013)
E.W. Dijkstra, A note on two problems in connexion with graphs. Numer. Math. 269–271 (1959)
R.W. Floyd, Algorithm 97 shortest path. Commun. ACM 5, 345 (1962)
R. Bellman, On a routing problem. Q. J. Appl. Math. 16, 87–90 (1958)
D.D. Caprio, A. Ebrahimnejad, H. Alrezaamiri, F. Santos-Arteaga, A novel ant colony algorithm for solving shortest path problems with fuzzy arc weights. Alex. Eng. J. 61(5) (2022)
M. Gen, R. Cheng, D. Wang, Genetic algorithms for solving shortest path problems, in Proceedings of 1997 IEEE International Conference on Evolutionary Computation (ICEC ’97) (1997)
A. Caprara, P. Toth, M.A. Fischetti, Algorithms for the set covering problem. Ann. Oper. Res. 98, 353–371 (2000)
E. Balas, A class of location, distribution and scheduling problems: modelling and solutions methods, in Proceedings of the Chinese-US Symposium on System Analysis (Wiley, 1983)
E. Balas, M.C. Carrera, A dynamic subgradient-based branch-and-bound procedure for set covering. Oper. Res. 44, 875–890 (1996)
R. Soto et al., A XOR-based ABC algorithm for solving set covering problems, in The 1st International Conference on Advanced Intelligent System and Informatics (AISI2015), Beni Suef, Egypt, 28–30 Nov 2015 (Springer, 2016), pp. 209–218
K.S. Al-Sultan, M.F. Hussain, J. Nizami, A genetic algorithm for the set covering problem. J. Oper. Res. Soc. 47, 702–709 (1996)
K.M. Bretthauer, B. Shetty, The nonlinear knapsack problem—algorithms and applications. Eur. J. Oper. Res. 1(1), 1–14 (2002)
W.J. Cook, W.H. Cunningham, Combinatorial Optimization (Wiley, 1998)
B. Crawford et al., Putting continuous metaheuristics to work in binary search spaces. Complexity 2017 (2017)
F. Glover, Future paths for integer programming and links to artificial intelligence. Comput. Oper. Res. 13(5), 533–549 (1986)
S. Kirkpatrick, C.D. Gelatt, Jr., M.P. Vecchi, Optimization by simulated annealing. Science 220(4598), 671–680 (1983)
M. Mitchell, An Introduction to Genetic Algorithms (MIT Press, 1998)
J. Kennedy, R.C. Eberhart, A discrete binary version of the particle swarm algorithm, in 1997 IEEE Conference on Systems, Man, and Cybernetics (1997)
M.K. Sayadi, A. Hafezalkotob, S.G.J. Naini, Firefly-inspired algorithm for discrete optimization problems: an application to manufacturing cell formation. J. Manuf. Syst. 32(1), 78–84 (2013)
A. Lotfipour, H. Afrakhte, A discrete teaching-learning-based optimization algorithm to solve distribution system reconfiguration in presence of distributed generation. Int. J. Electr. Power Energy Syst. 82, 264–273 (2016)
B. Crawford et al., A binary coded firefly algorithm that solves the set covering problem. Roman. J. Inf. Sci. Technol. 17(3), 252–264 (2014)
S.A. Mirjalili, S.Z.M. Hashim, BMOA: binary magnetic optimization algorithm. Int. J. Mach. Learn. Comput. 2(3), 204 (2012)
B. Crawford et al., Binary cat swarm optimization for the set covering problem, in 2015 10th Iberian Conference on Information Systems and Technologies (CISTI) (IEEE, 2015), pp. 1–4
M. Mafarja et al., Binary dragonfly optimization for feature selection using time-varying transfer functions. Knowl.-Based Syst. 161, 185–204 (2018)
S. Mirjalili, SCA: a sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 96, 120–133 (2016)
A.I. Hafez et al., Sine cosine optimization algorithm for feature selection, in 2016 International Symposium on Innovations in Intelligent Systems and Applications (INISTA) (IEEE, 2016), pp. 1–5
A.P. Engelbrecht, G. Pampara, Binary differential evolution strategies, in 2007 IEEE Congress on Evolutionary Computation (IEEE, 2007), pp. 1942–1947
S. Taghian, M.H. Nadimi-Shahraki, Binary sine cosine algorithms for feature selection from medical data. arXiv preprint arXiv:1911.07805 (2019)
B.J. Leonard, A.P. Engelbrecht, C.W. Cleghorn, Critical considerations on angle modulated particle swarm optimisers. Swarm Intell. 9(4), 291–314 (2015)
J. Sun, B. Feng, W. Xu, Particle swarm optimization with particles having quantum behavior, in Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No. 04TH8753), vol. 1 (IEEE, 2004), pp. 325–331
Z.A. El Moiz Dahi, C. Mezioud, A. Draa, Binary bat algorithm: on the efficiency of mapping functions when handling binary problems using continuous-variable-based metaheuristics, in IFIP International Conference on Computer Science and Its Applications (Springer, 2015), pp. 3–14
K.S. Reddy et al., A new binary variant of sine cosine algorithm: development and application to solve profit-based unit commitment problem. Arab. J. Sci. Eng. 43(8), pp. 4041–4056 (2018)
Y.-J. Gong et al., Optimizing the vehicle routing problem with time windows: a discrete particle swarm optimization approach. IEEE Trans. Syst. Man Cybern. Part C (Appl. Rev.) 42(2), 254–267 (2011)
H. Pinto et al., A binary sine cosine algorithm applied to the knapsack problem, in Computer Science On-line Conference (Springer, 2019), pp. 128–138
J. Garcıa et al., A Db-scan binarization algorithm applied to matrix covering problems. Comput. Intell. Neurosci. 2019 (2019)
J. Garcıa et al., A k-means binarization framework applied to multidimensional knapsack problem. Appl. Intell. 48(2), 357–380 (2018)
E. Rashedi, H. Nezamabadi-Pour, S. Saryazdi, GSA: a gravitational search algorithm. Inf. Sci. 179(13), 2232–2248 (2009)
S. Mirjalili, S.M. Mirjalili, X.-S. Yang, Binary bat algorithm. Neural Comput. Appl. 25(3), 663–681 (2014)
A. Fernéndez et al., A binary percentile sin cosine optimisation algorithm applied to the set covering problem, in Proceedings of the Computational Methods in Systems and Software (Springer, 2018), pp. 285–295
J. Garcıa et al., A percentile transition ranking algorithm applied to binarization of continuous swarm intelligence metaheuristics, in International Conference on Soft Computing and Data Mining (Springer, 2018), pp. 3–13
S. Balaji, N. Revathi, A new approach for solving set covering problem using jumping particle swarm optimization method. Nat. Comput. 15(3), 503–517 (2016)
J. Garcıa et al., A multi dynamic binary black hole algorithm applied to set covering problem, in International Conference on Harmony Search Algorithm (Springer, 2017), pp. 42–51
X. Zhang et al., Binary artificial algae algorithm for multidimensional knapsack problems. Appl. Soft Comput. 43, 583–595 (2016)
M.A. Tawhid, P. Savsani, Discrete sine cosine algorithm (DSCA) with local search for solving traveling salesman problem. Arab. J. Sci. Eng. 44(4), 3669–3679 (2019)
W.-P. Liu et al., Hybrid crossover operator based on pattern, in 2011 Seventh International Conference on Natural Computation, vol. 2 (IEEE, 2011), pp. 1097–1100
G.A. Croes, A method for solving traveling-salesman problems. Oper. Res. 6(6), 791–812 (1958)
S. Gholizadeh, R. Sojoudizadeh, Modified sine cosine algorithm for sizing optimization of truss structures with discrete design variables. Iran Univ. Sci. Technol. 9(2), 195–212 (2019)
O.D. Montoya et al., A hybrid approach based on SOCP and the discrete version of the SCA for optimal placement and sizing DGs in AC distribution networks. Electronics 10(1), 26 (2020)
O.D. Montoya, W. Gil-González, C. Orozco-Henao, Vortex search and Chu-Beasley genetic algorithms for optimal location and sizing of distributed generators in distribution networks: a novel hybrid approach. Eng. Sci. Technol. Int. J. 23(6), 1351–1363 (2020)
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2023 The Author(s)
Bansal, J.C., Bajpai, P., Rawat, A., Nagar, A.K. (2023). Sine Cosine Algorithm for Discrete Optimization Problems. In: Sine Cosine Algorithm for Optimization. SpringerBriefs in Applied Sciences and Technology(). Springer, Singapore. https://doi.org/10.1007/978-981-19-9722-8_4
Print ISBN: 978-981-19-9721-1
Online ISBN: 978-981-19-9722-8
eBook Packages: Intelligent Technologies and Robotics (R0)