Introduction

To solve real optimization problems, such as those arising in industrial and manufacturing applications, the problem must first be formulated as a theoretical one. Gonzalez and Sahni (1976) introduced one of the best-known hard combinatorial problems, the open shop scheduling problem (OSSP). The OSSP has several real-world applications, such as system-on-a-chip testing (Iyengar and Chakrabarty 2002), satellite-switched time-division multiple access (Dell’Amico and Martello 1996), packet routing (Suel 1995), the scheduling and wavelength assignment problem in optical networks based on wavelength-division-multiplexing technology (Bampis and Rouskas 2002), routing in optical transpose interconnection systems (Lucas 2010), and modeling communication schedules for routing in heterogeneous networks (Bhat et al. 2000).

The OSSP is classified as NP-hard (Gonzalez and Sahni 1976), which is why researchers have proposed a variety of solution methods. Exact methods include the polynomial-time algorithm proposed by Gonzalez and Sahni (1976) and the branch-and-bound method developed by Brucker et al. (1997); in general, however, exact methods can solve only small instances to optimality within reasonable time. Metaheuristics have proven effective at reaching high-quality, often optimal, solutions for problems such as the OSSP. Metaheuristics applied to the OSSP include simulated annealing (Liaw 1999a), the tabu search algorithm of Liaw (1999b), the genetic algorithm of Prins (2000), the extended genetic algorithm of Rahmani Hosseinabadi et al. (2018), the hybrid ant colony optimization of Blum (2005), the bee colony optimization of Huang and Lin (2011), and the particle swarm optimization of Sha and Hsu (2008).

This paper presents a new approach to solving the open shop scheduling problem, based on cat swarm optimization. To demonstrate the efficiency of the proposed method, the results obtained on a set of benchmark instances are compared with those reported in the literature. The rest of the paper is organized as follows: Section two describes and formulates the open shop scheduling problem, with an example. Section three presents the cat swarm optimization algorithm, its parameters, and its process. Section four describes the proposed adaptation of cat swarm optimization to the open shop scheduling problem. Section five reports and discusses the results obtained by applying the adapted algorithm to benchmark instances. Finally, a conclusion is presented.

Open shop scheduling problem

Presentation of the problem

The OSSP (Gonzalez and Sahni 1976) involves a collection of \(m\) machines \(M = \left\{ {M_{1} ,\ldots, M_{m} } \right\}\) and a collection of \(n\) jobs \(\left\{ {J_{1} ,\ldots, J_{n} } \right\}\). Each job \(J_{i}\) (\(i \in \left[ {1,n} \right]\)) consists of \(m\) operations \(J_{i} = \left\{ {o_{i 1} , o_{i 2} , \ldots , o_{i m} } \right\}\), and each operation \(o_{ij}\) of \(J_{i}\), executed on a dedicated machine \(M_{j} \left( {j \in \left[ {1,m} \right]} \right)\), must be processed for a given processing time \(p_{ij}\).

Equivalently, the OSSP consists of \(n\) jobs \(J\) to be processed on \(m\) machines \(M\), giving \(n \times m\) operations in total, \(O = \left\{ {o_{1 1} , o_{1 2} , \ldots , o_{n m} } \right\}\); each operation \(o_{ij} = \left( {m_{ij} , t_{ij} } \right)\) of job \(J_{i}\) must be executed on machine \(m_{ij} \in M\) for a given execution time \(t_{ij}\). The performance measure to be minimized is the total completion time of all operations, called the makespan.

Problematic assumptions

  • All operations must be processed.

  • Each machine can process at most one operation at a time, for the duration of that operation’s processing time.

  • Two operations of the same job cannot be processed simultaneously.

Formulation of the problem

Let’s consider n jobs J = {J1,…, Jn} and m machines {M1,…, Mm}; together the jobs comprise n × m operations O = \(\left\{ { o_{1 1} , o_{1 2} , \ldots , o_{n m} } \right\}\), and each operation \(o_{ij}\) of job \(J_{i}\) must be executed on machine \(j\) for a given execution time \(t_{ij}\). A solution is a schedule represented by a sequence of the n × m operations, numbered from 1 to n × m. To compute the makespan, the data are stored in an information matrix Minfo with n × m columns and four rows. The Minfo matrix describing each operation is shown in Fig. 1. Let’s use:

Fig. 1
figure 1

Informations matrix

Oi: The name (number) of operation i in the schedule (\(i \in \left[ {1 , \left( {n*m} \right)} \right]\)).

\(J_{{o_{i} }}\): The job of the selected operation oi.

\(M_{{o_{i} }}\): The machine on which operation oi must be processed.

\(T_{{o_{i} }}\): The processing time of operation oi.

Example

Let’s consider the following 3 × 3 OSSP, where n = 3, m = 3, J = {J1, J2, J3}, M = {1, 2, 3}, and for every Ji in J, Ji = {(mik, tik)} for \(k \in \left[ {1,3} \right]\):

$$\begin{array}{*{20}c} {J1 \, = \, \left\{ {\left( {3, \, 2} \right), \, \left( {1, \, 3} \right), \, \left( {2, \, 5} \right)} \right\}} \\ {J2 \, = \, \left\{ {\left( {1, \, 5} \right), \, \left( {3, \, 7} \right), \, \left( {2, \, 1} \right)} \right\}} \\ {J3 \, = \, \left\{ {\left( {1, \, 4} \right), \, \left( {2, \, 5} \right), \, \left( {3, \, 1} \right)} \right\}} \\ \end{array}$$

Let’s choose a random solution: x = {7, 5, 3, 2, 9, 1, 6, 4, 8}.

The representation of information matrix is shown in Fig. 2:

Fig. 2
figure 2

The schedule information matrix

The solution is presented as a Gantt chart in Fig. 3, which respects all OSSP assumptions; the total makespan of the solution x is:

Fig. 3
figure 3

Gantt chart

Makespan (x) = 13.
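
The makespan of this example can be checked programmatically. The sketch below assumes a common "semi-active" decoding of the solution vector: operations are taken in the order given by x, operation k of job i is numbered (i − 1)·m + k, and each operation starts as soon as both its machine and its job are free. This decoding is an assumption for illustration, but it reproduces the makespan of 13 above.

```python
# (machine, time) pairs for the 3x3 example above
jobs = [
    [(3, 2), (1, 3), (2, 5)],  # J1
    [(1, 5), (3, 7), (2, 1)],  # J2
    [(1, 4), (2, 5), (3, 1)],  # J3
]
m = 3  # number of machines

def makespan(solution, jobs, m):
    """Semi-active decoding: each operation starts as early as possible."""
    machine_free = {mach: 0 for mach in range(1, m + 1)}
    job_free = [0] * len(jobs)
    for op in solution:
        job = (op - 1) // m          # 0-based job index
        k = (op - 1) % m             # operation index within the job
        mach, t = jobs[job][k]
        start = max(machine_free[mach], job_free[job])
        machine_free[mach] = job_free[job] = start + t
    return max(machine_free.values())

x = [7, 5, 3, 2, 9, 1, 6, 4, 8]
print(makespan(x, jobs, m))  # -> 13
```

Note that a different operation order can give a much worse makespan under the same decoding, which is precisely what the metaheuristic searches over.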

Cat swarm optimization algorithm

The cat swarm optimization (CSO) algorithm is a computational intelligence algorithm inspired by the behavior of cats, introduced by Chu and Tsai (2006). The algorithm is divided into two modes: the seeking mode and the tracing mode. The seeking mode represents a cat at rest, which is how a cat spends the majority of its lifetime, while the tracing mode represents a cat hunting prey or any moving object. Every cat is characterized by its own position, its velocity, and a flag identifying whether it is in the seeking mode or the tracing mode.

The CSO algorithm proposed by Chu and Tsai (2006) has been improved by several researchers: using an average-inertia weight (Orouskhani et al. 2011), introducing adaptive parameter control (Wang et al. 2015), parallel cat swarm optimization (Tsai et al. 2008), adaptation to combinatorial optimization problems (Bouzidi and Riffi 2013), an improved version for the clustering problem (Razzaq et al. 2016), and an enhanced parallel cat swarm optimization based on the Taguchi method (Tsai et al. 2012). It was also extended to solve multi-objective problems by Pradhan and Panda (2012). These variants have been applied to difficult problems such as IIR system identification (Panda et al. 2011b), optimizing least-significant-bit substitution (Wang et al. 2012), optimal placement of multiple UPFCs for voltage stability enhancement under contingency (Kumar and Kalavathi 2014), direct and inverse modeling of plants (Panda et al. 2011a), single-bitmap block truncation coding of color images (Cui et al. 2013), linear antenna array synthesis (Pappula and Ghosh 2014), and simultaneous capacitor and DG allocation in radial distribution networks (Kawtar et al. 2015).

This paper presents an improved CSO algorithm for the OSSP; to demonstrate its efficiency, it is applied to a set of benchmark problems.

CSO to OSSP

To solve the OSSP with CSO, its operators and its elementary and global operations were adapted. The adaptations are described as follows:

Cat’s parameters

In the CSO algorithm, the solution is the global best solution (Gbest) found by the cats in the swarm. For each cat, the position represents a candidate solution, which for the OSSP is a schedule. The velocity is used to move from one position to another; in the OSSP, a new solution is obtained by applying a set of swaps to the current solution, so the velocity is represented by this set of swaps. The flag indicates the cat’s mode. To sum up, the operators of each cat in the swarm are:

SM and TM:

  • Position: the schedule solution, represented by a sequence of all operations.

  • Flag: the cat’s mode (seeking or tracing).

TM:

  • Velocity: a set of swap pairs (i, j) to be applied to a position, where i and j are two positions (indices) in the selected solution.

SM:

  • SMP: the number of copies made of a cat in the seeking mode.

  • CDC: the length of the mutation, as a percentage.

  • SRD: the first index in the selected solution vector.

  • SPC: a Boolean value indicating whether the current position is retained as a candidate.

Cat’s process

A metaheuristic is characterized by an intelligent combination of two principal concepts: exploration and exploitation. In the CSO metaheuristic, both concepts appear in each mode, and the two modes are combined intelligently through the mixture ratio (MR). The proposed CSO process follows the definition of Chu and Tsai (2006), with some modifications to handle the open shop scheduling problem; the adapted modes are described as follows:

Seeking mode

The seeking mode (SM) represents the cat at rest, yet alert, looking around its environment for its next move. The position is represented by a schedule vector, and the four parameters of this mode were adapted accordingly; the role of each is given in the table above.

The seeking mode steps can be described as follows:

  1. Make j copies of the present position of cat k, with j = SMP; if the value of SPC is true, let j = SMP − 1 and retain the current position as one of the candidates.

  2. Generate a random value of SRD.

  3. If the fitness values (FS) of the candidates are not all equal, calculate the selection probability of each candidate by Eq. (a); otherwise, the probability of each candidate defaults to 1.

  4. Perform the mutation and replace the current position.

$${\text{Pi}} = \frac{{\left| {{\text{FS}}_{i} - {\text{FS}}_{\hbox{min} } } \right|}}{{{\text{FS}}_{\hbox{max} } - {\text{FS}}_{\hbox{min} } }}\quad {\text{where}}\,0 < i < j$$
(a)

where FSi is the fitness of cati, FSmax is the maximum fitness in the swarm, and FSmin is the minimum fitness in the swarm.

Example

Let’s consider, in the SM, the position (solution) \(x = \left\{ {1,2,3,4,5,6,7,8,9,10,11,12} \right\}\) of size \(n = 12\), with \({\text{SMP}} = 5\), \({\text{SPC}} = {\text{True}}\), and an initial value \({\text{SRD}} = 2\).

Step 1: The program makes four copies of the selected cat position and also retains the selected position as a candidate, because SPC = True.

Step 2: For each copy, generate the CDC value randomly and replace the old values by applying a swap between position SRD and a second position, which is (CDC + SRD) if this sum does not exceed 12 (the total size of the problem); otherwise the second position wraps around the end of the vector.
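
The seeking-mode steps above can be sketched as follows. The fitness function, the random choice of CDC, and the roulette-wheel selection details are illustrative assumptions rather than the authors' exact implementation; the mutation mirrors the worked example (one swap between position SRD and position SRD + CDC, wrapping around the end of the vector).

```python
import random

def seeking_mode(position, fitness, smp=5, spc=True, rng=random):
    """One seeking-mode step for a permutation-encoded schedule (sketch)."""
    n = len(position)
    candidates = []
    if spc:                        # retain the current position as a candidate
        candidates.append(list(position))
    for _ in range(smp - 1 if spc else smp):
        cand = list(position)
        srd = rng.randrange(n)     # step 2: random first index
        cdc = rng.randrange(1, n)  # mutation length (assumed random here)
        j = (srd + cdc) % n        # wrap around the end of the vector
        cand[srd], cand[j] = cand[j], cand[srd]
        candidates.append(cand)
    # Steps 3-4: selection probability by Eq. (a); if all fitness values
    # are equal, every candidate gets probability 1.
    fs = [fitness(c) for c in candidates]
    fs_min, fs_max = min(fs), max(fs)
    if fs_max == fs_min:
        probs = [1.0] * len(candidates)
    else:
        probs = [abs(f - fs_min) / (fs_max - fs_min) for f in fs]
    # Roulette-wheel pick proportional to the probabilities.
    pick = rng.uniform(0, sum(probs))
    acc = 0.0
    for cand, p in zip(candidates, probs):
        acc += p
        if acc >= pick:
            return cand
    return candidates[-1]
```

The returned candidate replaces the cat's current position; any permutation-preserving mutation could be substituted for the single swap used here.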

figure a

The tracing mode

The tracing mode (TM) represents the cat in quick movement, chasing prey or any other moving object, according to its own velocity. This section describes the parameters, the process, and the elementary operations used in this mode.

Tracing mode parameters

The parameters used in this mode are:

X best :

The position of the cat with the best fitness value

X k :

The current position of catk

X′ k :

The new position of catk

V k :

The current velocity of catk

V′ k :

The new velocity of catk

w :

The inertia weight parameter

c :

A constant

r :

A random value in the range [0, 1]

Tracing mode process:

The process of TM is given as:

  1. Update the velocity of each catk according to the equation:

$$V^{\prime}_{k} = \, w*V_{k} + \, r \, * \, c \, * \, \left( {X_{\text{best}} - \, X_{k} } \right)$$
(b)

  2. Check whether the velocities exceed the maximum allowed velocity; if so, set them to the limit.

  3. Update the position of each catk according to the equation:

$$X^{\prime}_{k} = \, X_{k} + \, V^{\prime}_{k}$$
(c)
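
The tracing-mode update can be sketched with swap-sequence arithmetic: subtraction of positions extracts the swaps turning one schedule into another, multiplication scales a swap list, and addition applies it. The helper functions and the parameter values w and c below are illustrative assumptions, not the authors' exact implementation.

```python
import random

def apply_swaps(pos, swaps):
    """x + v: apply each (a, b) swap of values a and b, in order."""
    out = list(pos)
    for a, b in swaps:
        ia, ib = out.index(a), out.index(b)
        out[ia], out[ib] = out[ib], out[ia]
    return out

def swaps_between(src, dst):
    """dst - src: swap list that transforms src into dst."""
    cur, swaps = list(src), []
    for i in range(len(cur)):
        if cur[i] != dst[i]:
            swaps.append((cur[i], dst[i]))
            j = cur.index(dst[i])
            cur[i], cur[j] = cur[j], cur[i]
    return swaps

def scale(swaps, r):
    """r * v: repeat for the integer part, truncate for the decimal part."""
    if r <= 0 or not swaps:
        return []
    whole, frac = int(r), r - int(r)
    return swaps * whole + swaps[: int(frac * len(swaps))]

def tracing_step(x_k, v_k, x_best, w=0.7, c=2.0, rng=random):
    """One tracing-mode step: V' = w*V + r*c*(Xbest - Xk), X' = Xk + V'."""
    r = rng.random()
    v_new = scale(v_k, w) + scale(swaps_between(x_k, x_best), r * c)
    return apply_swaps(x_k, v_new), v_new
```

A velocity-length cap (step 2 of the process) can be added by truncating v_new before applying it.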

Elementary operations:

The elementary operations (addition, subtraction, and multiplication) used in the tracing mode to solve the OSSP are not the same as those defined by Chu and Tsai (2006) for continuous optimization problems. They follow the elementary operations defined for the PSO algorithm by Clerc (2004) to solve combinatorial optimization problems. These operations are performed on the velocity and the position of each cat in the TM.

Let x and x′ be two positions (schedules), and let a velocity v represent the list of swaps to perform.

For example, let X = {1, 2, 3, 4, 5} and v = {(2, 5), (3, 1), (4, 3)}.

Opposite of velocity:

Let |v| denote the size (number of swap pairs (i, j)) of the velocity v = {(ik, jk)[k: 0 → |v|]}.

The opposite is defined by ¬v = {(ik, jk)[k: |v| → 0]}, i.e., the same swaps taken in reverse order, with v + ¬v = ø.

Addition:

This operation is performed between a position x and a velocity v, in order to obtain a new position x′.

$$x \, + \, v = x^{\prime}$$

The addition, which represents a movement, applies the set of swaps in v to the position x to obtain the new position x′.

Example

$$x + v \, = \, \left\{ {4,5,1,3,2} \right\}$$

Subtraction (position–position):

This operation is performed between two positions to get a velocity.

$$x^{\prime} \, - \, x \, = \, v$$

The subtraction is the inverse of the addition operation (x′ − x = v ⇔ x + v = x′). Given two positions x and x′, the result is the list of swaps that, performed on x, yield x′. This list of swap pairs is the velocity v.

Example

Let x′ = x + v = {4, 5, 1, 3, 2} \(\Rightarrow\) x′ − x = v.

Multiplication:

This operation is performed between a real number r and a velocity v = (ik, jk)[k: 0 → |v|]; the result is a velocity. The possible cases, according to r, are:

  • If r = 0: r × v = ø.

  • If 0 < r ≤ 1: r × v = (ik, jk)[k: 0 → (r × |v|)], i.e., the first r × |v| swaps of v.

  • If r > 1: split r into its integer and decimal parts, r = n + x, where n is the integer part and x the decimal part, and apply the previous cases to each part.

  • If r < 0: r × v = (− r) × ¬v; since (− r) > 0, one of the previous cases applies.

Example

Let v = {(2, 5), (3, 1), (4, 3)} and r = 0.4 \(\Rightarrow\) r × v = {(2, 5)}.
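
The elementary operations can be implemented as below, under the assumption that each pair (i, j) swaps the *values* i and j in the schedule, which is the reading that reproduces the examples above.

```python
def add(x, v):
    """x + v = x': apply each value swap of v to x, in order."""
    out = list(x)
    for a, b in v:
        ia, ib = out.index(a), out.index(b)
        out[ia], out[ib] = out[ib], out[ia]
    return out

def opposite(v):
    """Opposite velocity: the same swaps taken in reverse order."""
    return list(reversed(v))

def subtract(xp, x):
    """x' - x = v, so that add(x, v) == xp."""
    cur, v = list(x), []
    for i in range(len(cur)):
        if cur[i] != xp[i]:
            v.append((cur[i], xp[i]))
            j = cur.index(xp[i])
            cur[i], cur[j] = cur[j], cur[i]
    return v

def multiply(r, v):
    """r x v, following the four cases of the text."""
    if r < 0:
        return multiply(-r, opposite(v))
    whole, frac = int(r), r - int(r)
    return v * whole + v[: int(frac * len(v))]

x = [1, 2, 3, 4, 5]
v = [(2, 5), (3, 1), (4, 3)]
print(add(x, v))          # -> [4, 5, 1, 3, 2]
print(multiply(0.4, v))   # -> [(2, 5)]
```

Adding v and then its opposite returns the original position, matching the property v + ¬v = ø.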

The complete mode process:

The flowchart in Fig. 4 describes the complete CSO process.

Fig. 4
figure 4

CSO process flowchart

Computational results and discussion

This section presents the results and discussion of the proposed CSO applied to benchmark instances of Taillard (1993) and Guéret and Prins (1999). The proposed adaptation is coded in Visual C++ and run on an Ultrabook with a 2.1 GHz (up to 2.59 GHz) Intel Core i7 processor and 8 GB of RAM. Each instance was run 10 times with a maximum runtime of 1 h.

Parameter tuning

The parameter values used in the CSO process are SMP and CDC in the seeking mode, w, r, and c in the tracing mode, and MR in the general process.

To remain faithful to the real-life behavior of cats, the SMP, CDC, MR, and c values, as well as the range of variation of the random value r, are left unchanged in this paper.

For the inertia weight parameter w, a fixed value was used, as in the adaptation of CSO to combinatorial problems (Bouzidi and Riffi 2014b); these values were analyzed and discussed in the application to the TSP (Bouzidi and Riffi 2013) and subsequently used to solve other combinatorial problems, such as the QAP (Riffi and Bouzidi 2014), the JSSP (Bouzidi and Riffi 2014a), and the FSSP (Bouzidi and Riffi 2015).

To summarize, the parameter values used are shown in Table 1.

Table 1 Used parameter′s values in CSO

Evaluation of the proposed algorithm

This section presents two tables collecting the results of applying the adapted CSO algorithm to the benchmark instances of Taillard (1993), in Table 2, and of Guéret and Prins (1999), in Table 3. For each instance, the number of jobs J and machines M is given (J × M); the best-known solution (BKS) is the best fitness found by existing OSSP methods for that instance; the best solution (BS) is the best found by the CSO algorithm over ten runs of the program, and an instance whose BS is lower than the BKS is marked by * after its name. To assess the efficiency of CSO, the relative percentage deviation (RPD) is calculated, and the average execution time (Aver_T) is reported in seconds.

Table 2 Result obtained by applying CSO to solve OSSP instances proposed by Taillard
Table 3 Result obtained by applying CSO to solve OSSP instances of Guéret and Prins

The RPD is calculated by:

$${\text{RPD = }}\frac{{{\text{BS}} - {\text{BKS}}}}{\text{BKS}}$$
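
As a toy illustration of the formula (the BS and BKS values below are hypothetical, and the result is multiplied by 100 when reported as a percentage):

```python
def rpd(bs, bks):
    """Relative percentage deviation: (BS - BKS) / BKS."""
    return (bs - bks) / bks

# Hypothetical values: best solution 1055 against best-known solution 1048.
print(rpd(1055, 1048))
```

An RPD of 0 means the best-known solution was matched; a negative RPD means a new best solution was found.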

Table 2 shows the obtained results by the application of proposed CSO to solve the Taillard (1993) benchmark instances.

Table 2 shows that CSO solves all the proposed Taillard instances with low to negligible RPD values, and that each solution is obtained in a reasonable execution time.

Table 3 shows the obtained results by the application of CSO to solve the OSSP benchmark instances proposed by Guéret and Prins (1999).

CSO also finds the optimal solution of many Guéret and Prins benchmark instances, with low RPD values and solutions obtained in reasonable execution times. Moreover, when the application runs for longer, the CSO algorithm yields better solutions on all selected benchmark instances: Table 4 shows that, given more time (more than 900 s), CSO can find the best-known optimal solutions. For each listed instance, Table 4 gives its size (number of jobs J and number of machines M), the best-known solution (BKS), and the best solution (BS) found by CSO; RPD1 is the relative percentage deviation with a maximum runtime of 900 s, RPD2 is the relative percentage deviation when the application runs beyond 900 s until it obtains the best solution, and Aver_T is the average execution time over ten runs, in seconds.

Table 4 Results in more than 900 s of CPU time

Comparison with other metaheuristics and discussion

This part compares the average RPD of seven methods, including that of the proposed CSO for the OSSP. The other methods are two hybrid variable neighborhood search methods, VNS with a curtailed-fashion local search (VNS (CLS)) and VNS with a greedy local search (VNS (GLS)), the two-phase solution method (TPSM), the genetic algorithm (GA), the genetic algorithm incorporating mutation (MGA) (Naderi and Zandieh 2014), and the electromagnetism algorithm (EA). All these methods were run on a PC with a 2.0 GHz Intel Core 2 Duo and 2 GB of RAM.

The average RPD obtained by each metaheuristic for the different problem sizes of the two well-known benchmarks (Taillard 1993; Guéret and Prins 1999) is presented as follows:

For the comparative study, a graphical representation provides a visual display that makes it easier to assess the collected average RPD results of the studied metaheuristics on the different benchmark instances. For this reason, the results in Table 5 were translated into two graphs: Fig. 5 shows the variation of the RPD of the seven methods for the different sizes of the Taillard benchmark instances, and Fig. 6 does the same for the Guéret and Prins benchmark instances.

Table 5 The average RPD obtained of each of the methods against two well-known benchmarks
Fig. 5
figure 5

Mean plots for metaheuristic versus the problem size of Taillard benchmark instances

Fig. 6
figure 6

Mean plots for metaheuristic versus the problem size of Guéret and Prins benchmark instances

Figure 5 shows that the CSO algorithm has the lowest RPD across the different problem sizes, which indicates that it is more efficient than the other methods.

Likewise, Fig. 6 shows that the CSO algorithm has the lowest RPD across the different problem sizes, confirming that CSO is more efficient than the other methods.

Conclusion

To conclude, this paper demonstrated the efficiency of the cat swarm optimization algorithm in solving a theoretical problem, the open shop scheduling problem, through its ability to find the best-known solutions for several benchmark instances and to find new best solutions. The comparison of the CSO results with those of several recent methods for the OSSP further confirmed the efficiency of the CSO algorithm. Future work will extend the proposed metaheuristic to various real applications based on the OSSP.