Decision-making is a difficult task that requires careful analysis of the underlying problem. The presence of multiple alternative solutions makes decision-making even harder, as not all available solutions are optimal. Since resources, time, and money are limited, and sometimes scarce, the quest for optimal choices is of paramount importance. Optimization is a mathematical tool and an indispensable part of the decision-making process that assists in finding optimal (or near-optimal) solutions from the set of available solutions. As a subject, optimization spans almost every field of science and engineering and is mainly concerned with planning and design problems. For instance, in industrial design, corporate planning, budget planning, or holiday planning, optimization plays an important part in decision-making. Optimization techniques are indispensable across disciplines such as computer science, engineering, medicine, economics, and many others. Advancements in computational capabilities and the availability of high-speed processors in modern computers have made optimization techniques more practical for tackling real-world problems. In addition, easy access to advanced computer simulation has prompted researchers to look for more general optimization methods that, although computationally demanding, are capable of handling more complex real-world optimization problems.

A general optimization problem can be expressed in the following form:

$$\begin{aligned} \begin{aligned}&\text {Minimize} \ \ F_{i}(\bar{X}) \ \ \ i=1,2,\ldots ,M \\&\text {subject to} \ \ g_j(\bar{X}) \le 0 \ \ j=1,2,\ldots ,J\\&h_k(\bar{X}) = 0 \ \ \ k=1,2,\ldots ,K \\ \end{aligned} \end{aligned}$$
(1.1)

where \(F_i(\bar{X})\) in Eq. (1.1) is referred to as the objective function or cost function, and M denotes the number of objective functions in the given optimization problem. When \(M=1\), the problem is termed a single-objective optimization problem, and when \(M > 1\), it is referred to as a multi-objective optimization problem. The \(g_j(\bar{X})\), \(1 \le j \le J\), are called inequality constraints, where J denotes the number of inequality constraints, and the \(h_k(\bar{X})\), \(1 \le k \le K\), are called equality constraints, where K denotes the number of equality constraints.

\(F_i(\bar{X}), g_j(\bar{X})\), and \(h_k(\bar{X})\) are functions of the vector \(\bar{X}=(x_1, x_2,\ldots, x_n) \in S\). \(\bar{X}\) is called the decision (or design) vector, and its components \(x_i\) are called decision (or design) variables. S is referred to as the decision space (or design space); it can be discrete, continuous, or a combination of both. Depending on the underlying application, the terms design and decision (vector, variable, and space) are used interchangeably. In this book, the terms decision vector, decision variables, and decision space will be used in all further discussions.

The optimization problem given by Eq. (1.1) describes a decision problem in which we are required to find the "optimal" decision vector \(\bar{X}\) out of all possible vectors in the decision space S. The process of optimizing (maximizing or minimizing) the objective function by determining the optimal values of the decision variables involved in it is called optimization. Optimization problems can be categorized in several ways, e.g., based on the number of objective functions, the nature of the objective functions, and the nature of the constraints. For instance, if a problem involves exactly one objective function, it is called a single-objective optimization problem; if it involves two or more, it is referred to as a multi-objective optimization problem. An optimization problem can also be categorized as real, discrete, or mixed-integer depending on whether the underlying decision variables are of real, discrete, or mixed-integer type. When there are no constraints on the decision variables, the optimization problem is called unconstrained; otherwise, it is termed a constrained optimization problem. A detailed categorization of various optimization problems is presented in Table 1.1. For more specialized discussions of these categories, an interested reader can refer to the textbooks Optimization for Engineering Design: Algorithms and Examples [1] and Operations Research: An Introduction [2].
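To make the general form of Eq. (1.1) concrete, the following sketch encodes a toy single-objective problem (M = 1) with one inequality and one equality constraint in Python. The specific functions, the penalty approach, and all parameter values are illustrative choices, not prescribed by the text.

```python
# Toy instance of Eq. (1.1): minimize F_1(X) subject to g_1(X) <= 0
# and h_1(X) = 0. The functions below are made-up examples.

def objective(x):
    # F_1(X): the sphere function, a common test objective
    return sum(xi ** 2 for xi in x)

def g1(x):
    # inequality constraint g_1(X) <= 0, here encoding x_1 + x_2 >= 1
    return 1.0 - (x[0] + x[1])

def h1(x):
    # equality constraint h_1(X) = 0, here x_1 - x_2 = 0
    return x[0] - x[1]

def penalized(x, rho=1e3):
    # a simple static-penalty formulation that folds the constraints
    # into a single unconstrained objective
    return (objective(x)
            + rho * max(0.0, g1(x)) ** 2
            + rho * h1(x) ** 2)

# The point (0.5, 0.5) satisfies both constraints, so no penalty is added.
print(penalized([0.5, 0.5]))  # → 0.5
```

The penalty formulation is only one of several standard ways to handle the constraints in Eq. (1.1); it is used here because it keeps the example self-contained.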

Table 1.1 Classification of optimization problems

Different optimization methods (or techniques) are available in the literature to address the various types of optimization problems listed in Table 1.1. However, selecting a suitable optimization method for a given problem is a challenging task, as there are no general guidelines for algorithm selection. Moreover, there is no efficient general algorithm for solving non-deterministic polynomial-time hard (NP-hard) problems. In general, optimization methods can be classified into the following two types:

  1. Traditional (deterministic) methods: Traditional optimization methods start from a chosen initial solution and use specific deterministic rules for changing the solutions' position in the search space. Most of these methods utilize the gradient information of the objective function. For a given starting position, the solutions always follow the same path and converge to the same final position, irrespective of the number of runs. These methods provide a mathematical guarantee that a given optimization problem can be solved to a required level of accuracy within a finite number of steps. There exists ample literature on traditional optimization methods, with different methods capable of handling various types of optimization problems. Based on the type of problem, traditional optimization techniques may be identified as methods for solving linear programming problems (LPP), nonlinear programming problems (NLPP), and specialized programming problems. However, traditional methods sometimes fail. Usually, these methods rely on properties such as continuity, differentiability, smoothness, and convexity of the objective function and constraints (if any); the absence of any of these properties makes traditional methods incapable of handling the problem. Moreover, there are optimization problems for which no information about the objective function is available; these are referred to as black-box optimization problems. Traditional (deterministic) optimization methods also fail to handle such black-box problems.

    Combinatorial optimization problems such as the traveling salesman problem are non-deterministic polynomial-time hard (NP-hard). Traditional optimization methods are incapable of solving these NP-hard problems within polynomially bounded time and instead require exponential time, which makes them impractical to use. The failure of deterministic (conventional) methods inspired researchers to look for non-deterministic, unconventional methods that are statistically reliable, fast, and robust in dealing with a larger class of optimization problems. Stochastic methods are part of these unconventional methods and have partially proven their superiority over traditional methods in terms of robustness, computational cost-effectiveness, and speed. However, the wide applicability of stochastic methods comes at the cost of guaranteed reliability. Stochastic methods are discussed in detail in the next section.

  2. Stochastic (non-deterministic) methods: Stochastic or non-deterministic optimization methods contain inherent randomness and are iterative in nature. These methods use stochastic update equations based on different stochastic processes and probability distributions. The stochastic nature of these equations governs the path of the solutions in the search space: in different runs of the algorithm, a solution can follow different paths even from the same initial position.

    Stochastic optimization methods do not guarantee convergence to a fixed optimal position in the search space. Instead, they look for a near-optimal solution within a predefined number of iterations. A number N of independent runs is simulated to ensure statistical reliability; in general, N = 30 or N = 51 runs are used to support the claim of a near-optimal solution. The trade-off for sacrificing a guaranteed optimal solution is fast convergence, low computational cost, and lower time complexity. Random number generators (or pseudo-random number generators) play an important role in the success of stochastic methods. A brief classification of optimization techniques and their methods is illustrated in Fig. 1.1.
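The N-independent-runs protocol described above can be sketched as follows: a deliberately minimal random-search optimizer is repeated N = 30 times with different seeds, and summary statistics over the runs are reported. The optimizer, test function, and bounds are illustrative assumptions.

```python
import random
import statistics

def random_search(f, bounds, iters=500, rng=None):
    # toy stochastic method: keep the best of `iters` uniform samples
    rng = rng or random.Random()
    best_x = [rng.uniform(lo, hi) for lo, hi in bounds]
    best_f = f(best_x)
    for _ in range(iters):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        fx = f(x)
        if fx < best_f:              # keep the better position
            best_x, best_f = x, fx
    return best_f

def sphere(x):
    return sum(xi ** 2 for xi in x)

N = 30                                # independent runs, as in the text
results = [random_search(sphere, [(-5, 5)] * 2, rng=random.Random(seed))
           for seed in range(N)]

print(f"best : {min(results):.4f}")
print(f"mean : {statistics.mean(results):.4f}")
print(f"stdev: {statistics.stdev(results):.4f}")
```

Reporting best, mean, and standard deviation over the N runs is the usual way such statistical reliability claims are supported in the meta-heuristics literature.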

Fig. 1.1 Classification of optimization techniques

Stochastic methods are a broad area of study. These methods are based on different stochastic processes, and discussing all of them is beyond the scope of this book. An interested reader can refer to an advanced book such as Stochastic Optimization [3] to strengthen their knowledge of the subject. This book instead provides a concise account of stochastic techniques and focuses on meta-heuristic algorithms, particularly the sine cosine algorithm (SCA) [4]. Meta-heuristic algorithms are one class of stochastic methods, but before discussing them, let us first discuss heuristic algorithms.

The word heuristic means 'to find' or 'to discover by trial and error'. A heuristic technique, or simply a heuristic, is an experience-based approach that trades accuracy, optimality, or precision for speed, solving a problem faster and more efficiently. In layman's terms, a rule of thumb, an intelligent guess, an intuitive judgment, or common sense can be considered metaphors for the word heuristic. Random search algorithms, the divide-and-conquer strategy, the nearest neighbor heuristic, the savings algorithm, and the best-first search method are some examples of heuristic approaches. Heuristic algorithms are very specific in their search process and are problem specific.
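The nearest neighbor heuristic mentioned above can be made concrete on a small traveling salesman instance. The city coordinates below are made-up illustrative data.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbor_tour(cities, start=0):
    # Greedy rule of thumb: from the current city, always visit the
    # closest unvisited city. Fast, but not guaranteed optimal.
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        cur = tour[-1]
        nxt = min(unvisited, key=lambda j: dist(cities[cur], cities[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (1, 5), (5, 2), (6, 6), (2, 1)]
tour = nearest_neighbor_tour(cities)
print(tour)  # → [0, 4, 2, 3, 1]
```

The tour is produced quickly but may be longer than the optimal one, which is exactly the accuracy-for-speed trade-off that defines a heuristic.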

The prefix meta- means 'beyond' or 'higher level', and meta-heuristic algorithms are higher-level versions of heuristic algorithms. Meta-heuristic algorithms are advanced optimization algorithms, also known as modern optimization techniques. These algorithms utilize more information about the search process and little or no information about the problem itself; i.e., they are largely problem independent. Because of their negligible dependency on the form of the objective function, meta-heuristic algorithms are well equipped to handle complex optimization problems and are applicable to a wider class of problems. Meta-heuristics, or meta-heuristic algorithms, are fast, efficient, and robust in handling highly nonlinear, non-differentiable, and even black-box optimization problems. Low computational cost is another major advantage of using meta-heuristic algorithms.

The basic idea behind meta-heuristic algorithms is simple and easy to implement. Meta-heuristic algorithms start the search process by randomly initializing a finite set of representative solutions in the search space. These solutions are also referred to as particles, search agents, or individuals; these terms are used interchangeably throughout the text depending on the context. The finite set containing the representative solutions is referred to as the 'population'. The initial positions of the search agents are evaluated using the given objective function. The population then iteratively updates the positions of its search agents to look for the optimal solution in the search space. This position update mechanism can be considered the soul of a meta-heuristic algorithm.

In random search algorithms, search agents update their positions randomly and do not utilize any information from each other. In meta-heuristic algorithms, by contrast, information sharing between search agents is one of the most important components. The algorithm evaluates the position of each search agent using the objective function value and assigns it a fitness value: the fitness of a search agent is the value of the objective function at its position. Search agents lying near the optimum location have better fitness values, and agents far from the optimum have poorer ones. Better search agents communicate their positions to the other agents, which follow the direction of the better agents to improve their own fitness values.

Mathematically speaking, suppose \(S\subseteq \mathbb {R}^{D}\) is a D-dimensional subspace of \(\mathbb {R}^{D}\), and the population size is N. If \(X_i= (x_{i,1},x_{i,2},\ldots ,x_{i,D}) \in S\) is the current position vector of the ith (\(1\le i \le N\)) search agent in the search space S, then a simple position update mechanism can be described by Eq. (1.2):

$$\begin{aligned} X_{i}^{\text {new}} = X_{i}^{\text {curr}} + \overline{h_i} \end{aligned}$$
(1.2)

where \(\overline{h_i}\) is a D-dimensional step vector determining the step length and direction of the position update for the ith agent. The addition (+) in Eq. (1.2) is vector (component-wise) addition. The step vector \(\overline{h_i}\) produced by a meta-heuristic algorithm may contain components of the best search agent's position, the worst search agent's position, the mean of the positions, and some random scaling factors. For instance, the particle swarm optimizer (PSO) utilizes a position update mechanism similar to that in Eq. (1.2).
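A minimal sketch of the step-vector update in Eq. (1.2) is shown below, loosely modelled on the PSO idea: the step contains a random drift term plus a pull toward the best-known position. The weights and the loop length are illustrative assumptions, not taken from any particular algorithm's paper.

```python
import random

def update_position(x_curr, best, rng, inertia=0.5, attract=1.5):
    # build the step vector h_i of Eq. (1.2): random drift plus a
    # randomly scaled pull toward the best-known position
    h = [inertia * rng.uniform(-1, 1)
         + attract * rng.random() * (b - xc)
         for xc, b in zip(x_curr, best)]
    # component-wise (vector) addition, as in Eq. (1.2)
    return [xc + hc for xc, hc in zip(x_curr, h)]

rng = random.Random(42)
x = [3.0, -2.0]
best = [0.0, 0.0]          # assumed best-known position
for _ in range(20):
    x = update_position(x, best, rng)
print(x)  # position after 20 updates, pulled toward `best`
```

Real algorithms differ in how the step vector is assembled, but the additive structure of Eq. (1.2) is shared.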

Another major position update mechanism changes individual components of the position vector. Suppose \(X_i = (x_{i,1},x_{i,2},\ldots ,x_{i,D})\) is the position vector of the ith search agent in the search space S. If we replace some components \(x_{i,j}\), \(1\le j \le D\), by different values \(u_{i,j}\) with \(u_{i,j} \ne x_{i,j}\), the position of \(X_i\) changes. Similarly, applying a nontrivial permutation to the components of \(X_i\) also updates its position. Genetic algorithms are one class of algorithms utilizing this technique to update the positions of search agents. A hybrid of the two position update mechanisms can also be employed; for example, differential evolution (DE) exploits a combination of both. The latest developments in the field utilize more advanced versions of these position update mechanisms, although the underlying idea remains the same.
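The component-replacement and permutation mechanisms just described can be sketched as follows, in the spirit of GA-style mutation. The mutation rate, bounds, and seed are illustrative choices.

```python
import random

def mutate_components(x, bounds, rate, rng):
    # replace each component x_{i,j} with a fresh value u_{i,j}
    # with probability `rate`
    return [rng.uniform(*bounds) if rng.random() < rate else xj
            for xj in x]

def permute_components(x, rng):
    # a nontrivial permutation of the components also yields
    # a new position
    y = x[:]
    rng.shuffle(y)
    return y

rng = random.Random(7)
x = [1.0, 2.0, 3.0, 4.0]
print(mutate_components(x, (-5.0, 5.0), rate=0.5, rng=rng))
print(permute_components(x, rng))
```

Note that permutation preserves the multiset of component values, which is why it is the natural move for ordering problems such as the traveling salesman problem.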

In meta-heuristic algorithms, position update mechanisms are dynamic in nature and utilize information from the ongoing optimization process. Large step sizes (large changes in position) can hamper the convergence of the algorithm, while very small step sizes lead to stagnation and slow progress. Stagnation is the phase of a meta-heuristic algorithm in which the search agents lose their diversity and converge to a local optimal solution. Both extremes are harmful to any optimization algorithm, so achieving a fine balance between large and small steps is of paramount importance. This balancing process is referred to as 'exploitation versus exploration' or 'intensification versus diversification'.

In the exploitation phase, the algorithm uses very small step sizes to cover extensively the local region of the search space where the optimum may lie. Search agents make very small changes in their positions to scan this local region thoroughly; the disadvantage is slow convergence. Exploration, on the other hand, refers to the capability of the algorithm to cover a large portion of the search space efficiently and maintain diversity in the population of search agents; it can be considered searching on a global scale. Large step sizes make the search less prone to getting stuck in local optima and help in finding the region of the global optimum. The major disadvantage of a high exploration rate is that the search can skip over the global optimum and converge prematurely. An optimal balance between exploration and exploitation is therefore a very critical component of the algorithm.
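One common way to realize the exploration-to-exploitation transition described above is to shrink the step size over the iterations, so early steps are large (global search) and late steps are small (local refinement). The linear schedule and all parameter values below are standard illustrative choices, not a prescription from the text.

```python
import random

def step_size(t, t_max, s_max=2.0, s_min=0.01):
    # linear decay from s_max (exploration) to s_min (exploitation)
    return s_max - (s_max - s_min) * t / t_max

def optimize(f, dim, t_max=1000, rng=None):
    rng = rng or random.Random()
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx = f(x)
    for t in range(t_max):
        s = step_size(t, t_max)
        cand = [xi + rng.uniform(-s, s) for xi in x]  # perturb position
        fc = f(cand)
        if fc < fx:                                   # greedy acceptance
            x, fx = cand, fc
    return x, fx

x, fx = optimize(lambda v: sum(vi ** 2 for vi in v), dim=2,
                 rng=random.Random(0))
print(fx)  # close to 0 for the sphere function
```

Many published algorithms use exactly this pattern of a monotonically decreasing control parameter; SCA's r1 parameter, discussed later, plays the same role.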

The literature on meta-heuristic algorithms has grown significantly in the recent past, and various classifications are available. For instance, meta-heuristic algorithms can be categorized based on their source of inspiration, their country of origin, whether they originate from a natural or an artificial phenomenon, and whether they start with multiple solutions or a single solution [5]. For a good overview of the classification of meta-heuristic algorithms, an interested reader can refer to [6,7,8]. Based on the number of representative solutions maintained in the search space, meta-heuristic algorithms can be classified into two categories: population-based and single-solution-based. Population-based meta-heuristic algorithms begin with a set of random representative solutions, which are then improved iteratively until the termination criterion is satisfied. Some popular meta-heuristic algorithms are particle swarm optimization (PSO) [9], artificial bee colony (ABC) [10], sine cosine algorithm (SCA) [4], ant colony optimization (ACO) [11], differential evolution (DE) [12], genetic algorithms (GA) [13], gravitational search algorithm (GSA) [14], teaching–learning-based optimization (TLBO) [15], gray wolf optimization (GWO) [16], spider monkey optimization (SMO) [17], and many others. Single-solution-based algorithms generate a single solution and improve it until the termination condition is satisfied. Methods like simulated annealing (SA) [18], the noising method (NM) [19], tabu search (TS) [20], variable neighborhood search (VNS) [21], and the greedy randomized adaptive search procedure (GRASP) [22] fall under this category. Population-based meta-heuristic algorithms are preferred over single-solution-based algorithms because of their robust exploration capabilities: checking multiple points in the search space simultaneously saves time and resources and improves the probability of reaching the global optimum.

Population-based meta-heuristic algorithms can be studied under two major categories: evolutionary algorithms (EAs) and swarm intelligence (SI)-based algorithms. The underlying principles and working of these algorithms are similar, but their sources of inspiration are different. Brief details about these algorithms are given below:

  1. Evolutionary Algorithms: Evolutionary algorithms (EAs) are inspired by the natural evolutionary process. Their structure is based on the Darwinian theory of the biological evolution of species and the survival-of-the-fittest principle. In EAs, search agents (solutions) evolve iteratively using three major operators: selection, mutation, and crossover (or recombination). The family of evolutionary algorithms comprises genetic algorithms (GA), evolution strategies, differential evolution (DE), genetic programming (GP), biogeography-based optimization [23], evolutionary programming, etc.

  2. Swarm Intelligence (SI)-Based Algorithms: Beni and Wang [24] coined the phrase "swarm intelligence" (SI) to describe the cooperative behavior of robotic systems. SI is an important branch of artificial intelligence in which complex, autonomous, and decentralized systems are studied. A swarm can be described as a collection of simple entities that cooperate with each other to execute complex tasks, for example the collective behavior of social ants or the cooperation of honey bees. A swarm of simple autonomous agents interacts and, when aggregated, demonstrates intelligent traits such as the ability to make decisions and to adapt to change. Meta-heuristic algorithms in which autonomous agents work together to find the optimal solution without involving evolutionary operators are termed swarm intelligence (SI)-based algorithms. Some well-known algorithms in this category are PSO, ABC, ACO, GSA, SCA, and TSA.

In the mid-90s, EA and SI algorithms were studied under the single category of evolutionary computing because of their similarities, such as using a population of solutions and their stochastic nature, although the underlying motivations are different. In evolutionary algorithms, new solutions emerge and old solutions die during the optimization process, while in SI algorithms, old solutions are improved iteratively and no solution dies. Researchers noticed this difference, and consequently more research on swarm intelligence was published in international academic journals, making the field of SI-based algorithms more popular and more widely applied.

Meta-heuristic algorithms can also be categorized based on their source of inspiration from different fields of science, such as life science, physical science, and mathematics. The major categories are discussed below:

  1. Life Science-Based Algorithms: Life science is concerned with the study of living organisms, from single cells to human beings, plants, microorganisms, and animals. Meta-heuristic algorithms that take inspiration from species of birds, animals, fish, bacteria, viruses, plants, trees, and fungi, from human organs such as the kidney and heart, or from disease treatment methods such as chemotherapy come under this category. It can be further classified as fauna-based, flora-based, and organ-based [25]. A few examples are GWO, PSO, ABC, ACO, the artificial plant optimization algorithm [26], the root tree optimization algorithm [27], the chemotherapy science algorithm [28], the kidney-inspired algorithm [29], and the heart algorithm [30].

  2. Physical Science-Based Algorithms: Physical science includes physics, chemistry, astronomy, and earth science. Algorithms that imitate physical or chemical phenomena, such as electromagnetism, water movement, electric charges/ions, chemical reactions, gaseous particle movement, celestial bodies, and gravitational forces, are grouped under this category. Some popular physical science-based algorithms are black hole optimization [31], the crystal energy optimization algorithm [32], the ions motion optimization algorithm [33], the galaxy-based search algorithm [34], the gravitational search algorithm, simulated annealing, and the atmosphere clouds model [35].

  3. Social Science-Based Algorithms: Social science deals with the behavior of humans and the functioning of human colonies. It covers fields such as human geography, psychology, economics, political science, history, and sociology. Meta-heuristic algorithms in this category draw inspiration from humans' social and individual conduct. The principles of leadership, decision-making, economics, and political or competitive ideologies are some of the concepts that have served as sources of inspiration; some algorithms have even borrowed metaphors from how humans rule territories and run economic systems. Algorithms that fall under this category include the ideology algorithm [36], the greedy politics optimization algorithm [37], the parliamentary optimization algorithm [38], the imperialist competitive algorithm [39], the social emotional optimization algorithm [40], anarchic society optimization [41], the brain storm optimization algorithm [42], and teaching–learning-based optimization (TLBO) [15]. This category also includes algorithms inspired by activities or events introduced by humans, such as the soccer league competition algorithm [43], the league championship algorithm [44], and tug of war optimization [45].

  4. Mathematics-Based Algorithms: This category includes algorithms inspired by mathematical models and equations. Examples include the gradient-based optimizer (GBO) [46], Runge–Kutta optimization (RUN) [47], the tangent search algorithm (TSA) [48], the sine cosine algorithm (SCA), differential evolution (DE), and stochastic fractal search (SFS) [49].

Population-based meta-heuristic methods have been gaining increasing attention from researchers in the scientific community in recent years. These methods are efficient and cost-effective in solving complex problems. Their major advantages are summarized here:

  1. Population-based meta-heuristics are easy to implement and enable better exploration of the search space than single-solution-based algorithms.

  2. They initiate the search process with multiple randomly generated solutions in the search space. The presence of multiple solutions enables them to share information about the search space with each other and prevents premature convergence to a local optimal region.

  3. Since meta-heuristic frameworks follow general principles, population-based meta-heuristic algorithms are easily applicable to a wide variety of real-life optimization problems.

  4. Since meta-heuristics, in general, do not rely on assumptions about the problem formulation (such as the requirement that constraints or objective functions be linear, continuous, differentiable, or convex), they are more robust and optimization-friendly.

Modern optimization techniques like particle swarm optimization (PSO) [9], artificial bee colony (ABC) [10], differential evolution (DE) [12], firefly algorithm (FA) [50], ant colony optimization (ACO) [11], black hole optimization (BHO) [31], teaching–learning-based optimization (TLBO) [15], genetic algorithms (GA) [13], spider monkey optimization (SMO) [17], gravitational search algorithm (GSA) [14], gray wolf optimization (GWO) [16], and the sine cosine algorithm (SCA) [4] have emerged as popular methods for tackling challenging problems in both industry and academic research. The sine cosine algorithm (SCA) is a mathematics-based meta-heuristic algorithm that uses the trigonometric sine and cosine functions to update the positions of the search agents in the search space, and it has shown promising results on various optimization problems. SCA was introduced by Mirjalili [4] as a user-friendly, robust, effective, efficient, and easy-to-implement algorithm with decent capabilities in exploring and exploiting the search space. This book is dedicated to the study of the sine cosine algorithm and its applications. Its aim is to present a fair amount of information about the sine cosine algorithm that may be helpful for readers who wish to work in the field of meta-heuristic algorithms. The basic SCA algorithm, its variants, and its applications are discussed in the subsequent chapters of the book.
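As a preview of the algorithm the rest of the book studies, the core SCA position update from [4] can be sketched as a single-agent fragment: each component moves toward (or around) a destination point P via a sine or cosine term, with the amplitude r1 decreasing linearly to shift from exploration to exploitation. This is a simplified sketch, not a full implementation; the destination point, seed, and loop length are illustrative.

```python
import math
import random

def sca_update(x, p, t, t_max, a=2.0, rng=None):
    rng = rng or random.Random()
    r1 = a - t * a / t_max            # linearly decreasing amplitude
    new_x = []
    for xj, pj in zip(x, p):
        r2 = rng.uniform(0, 2 * math.pi)   # how far toward/past P
        r3 = rng.uniform(0, 2)             # random weight on P
        r4 = rng.random()                  # sine/cosine switch
        trig = math.sin(r2) if r4 < 0.5 else math.cos(r2)
        new_x.append(xj + r1 * trig * abs(r3 * pj - xj))
    return new_x

rng = random.Random(3)
x = [4.0, -3.0]
p = [0.0, 0.0]                        # best (destination) position
for t in range(100):
    x = sca_update(x, p, t, t_max=100, rng=rng)
print(x)  # final position after 100 SCA updates
```

In the full algorithm, P is the best position found by the whole population so far and the update is applied to every search agent; those details are developed in the following chapters.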

Practice Exercises

  1. Discuss the difference between traditional optimization algorithms and meta-heuristic algorithms.

  2. Describe the shortcomings of traditional optimization techniques.

  3. Write a short note on challenges in meta-heuristic algorithms.

  4. Discuss the difference between evolutionary algorithms and swarm intelligence algorithms.