1 Introduction

In a PFSS problem, there are \( n \) jobs, each with \( m \) operations to be performed on \( m \) serial machines. All jobs have to follow the same machine order (1 → 2 → 3 → … → \( m \)) with the same job sequence on every machine. There are therefore \( n! \) possible job sequences in a PFSS problem. Figure 1 illustrates a solution to a PFSS problem instance consisting of 4 jobs and 4 machines. In this study, the PFSS problem is considered under the effects of learning and deterioration, and the performance criterion is makespan minimization. The time requirement for solving large-scale PFSS problems exactly is exceedingly high. Therefore, three well-known metaheuristic methods and a hybrid method called the population-based Tabu search algorithm with evolutionary strategies (TSPOP) are proposed. Taillard’s (1993) problem sets of 20, 50, and 100 jobs with 5, 10, and 20 machines are chosen to test the performance of the proposed methods.

Fig. 1

Permutation flow shop scheduling problem consisting of 4 jobs and 4 machines

The learning effect denotes a decrease in the initially determined processing times caused by the experience and expertise gained through continuous repetition of similar tasks on the machines or in the system. On the contrary, the deterioration effect denotes an increase in the initially determined processing times while jobs are waiting in the queue or being processed on the machines. Both effects have been widely studied in scheduling problems for more than 15 years. In most scheduling problems, processing times are considered constant, and researchers assume that the processing time of a job does not depend on internal factors of the workplace such as learning or deterioration. In reality, gaining experience and learning from the current task can increase a worker’s performance on the same or similar subsequent tasks. On the contrary, a predetermined and assumedly constant task duration can grow longer because of deterioration. Gupta and Gupta (1988) presented a well-known example of the deterioration effect. In this example, the temperature of ingots that are to be processed on a rolling machine must stay above a certain level, and if the temperature of any ingot drops below that level, the ingot must be drawn back and reheated up to the required temperature. This reheating process is an example of the deterioration effect.

Biskup (1999) introduced how a position-dependent learning effect can be incorporated into scheduling problems. Let \( P_{r} \) be the basic processing time of the job assigned to position \( r \) in the sequence; its actual processing time \( P_{[r]} \) is calculated as follows:

$$ P_{[r]} = P_{r} r^{a} , $$
(1)

where \( a \) is the learning effect coefficient for the scheduling environment \( ( - 1 < a < 0) \). Mosheiov (1991) showed that under a linear job deterioration effect, the actual processing time \( P_{[r]} \) of a job depends on its starting time and increases as that starting time increases. Let \( S_{[r]} \) be the starting time of the job at position \( r \); then \( P_{[r]} \) can be calculated as follows:

$$ P_{[r]} = P_{r} + BS_{[r]} , $$
(2)

where \( B \) is the linear deterioration effect coefficient for scheduling problems \( (0 < B < 1) \). Both effects can be applied to scheduling problems simultaneously as follows:

$$ P_{[r]} = \left( {P_{r} + BS_{[r]} } \right)r^{a} . $$
(3)
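
As a quick numerical illustration, the sketch below (in Python, with made-up processing times) evaluates Eq. (3) for a short single-machine sequence; the coefficients \( a = -0.8 \) and \( B = 0.1 \) are the values used later in the numerical experiments. It is only an illustration of the formula, not code from the paper.

```python
def actual_time(basic_time, position, start_time, a=-0.8, B=0.1):
    """Eq. (3): combined position-based learning and linear deterioration."""
    return (basic_time + B * start_time) * position ** a

# Illustrative single-machine sequence with made-up basic times.
start = 0.0
for r, p in enumerate([10.0, 8.0, 12.0], start=1):
    actual = actual_time(p, r, start)
    print(f"position {r}: basic {p:4.1f} -> actual {actual:6.2f}")
    start += actual  # the next job starts when this one completes
```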

For some single machine scheduling problems under the effects of learning and deterioration, the existence of polynomial-time algorithms such as the shortest processing time and the earliest due-date dispatching rules has been proven by researchers (Wang and Wang 2011; Wang 2007; Wang et al. 2008b; Cheng et al. 2008; Gordon et al. 2008; Yang and Kuo 2010). Even for some flow shop scheduling problems with special cases, the existence of polynomial-time algorithms has been proven (Wang et al. 2008a, b; Wang 2006). These special cases of the flow shop scheduling problem are an increasing series of dominating machines, a 2-machine environment, equal job processing times, and a fixed job in the first position of the first machine. Without these special cases, the PFSS problem under the effects of learning and deterioration remains NP-hard.

In this paper, we integrate two strong metaheuristics for combinatorial optimization problems and apply the proposed solution approach to the PFSS problem in which jobs are under the effects of learning and deterioration. The proposed algorithm uses the basic structure of Tabu search, but it searches for the best candidate in a solution population instead of improving a single current best candidate at each iteration. It also uses evolutionary strategies such as crossover and mutation operators to escape local optima and renew the solution population. Most hybrid algorithms combining TS and evolutionary strategies use the genetic algorithm (GA) as the main framework and use TS as a solution improvement tool. In contrast to those papers, we use evolutionary strategies to escape from local optima. Furthermore, we compare the proposed algorithm with some existing algorithms for PFSS problems.

2 Literature review

The PFSS problem with makespan minimization has been of interest to researchers for more than 40 years, and due to its complexity it is one of the most studied problems in the operations research literature. Several review papers exist, among them Fernandez-Viagas et al. (2017), Yenisey and Yagmahan (2014), Reza Hejazi and Saghafian (2005), and Framinan et al. (2002, 2004). The PFSS problem under learning and deterioration effects is expressed as \( F_{m} \left| {prmu,LE,DE} \right|C_{ \hbox{max} } \) in the notation of Graham et al. (1979). As far as we know, the most effective algorithms for PFSS without learning and deterioration effects have been variants of the iterated greedy (IG) algorithm.

Ruiz and Stützle (2007) proposed an iterated greedy algorithm (IG_RS) that applies two phases iteratively. The first phase, named destruction, eliminates some jobs from the incumbent solution, and the second phase, named construction, reinserts the eliminated jobs into the sequence by using the NEH construction heuristic. They also proposed using a local search technique within IG_RS. Experimental results in their study show that the proposed IG with local search (IG_RSLS) outperformed the state-of-the-art algorithms published for the PFSS problem until then. They also presented new optimum and best-known solutions for Taillard benchmark instances. Ruiz and Stützle (2008) proposed two iterated greedy algorithms for PFSS problems with sequence-dependent setup times for minimizing the makespan and total weighted tardiness. Another variant of the IG algorithm, named IGRIS, was proposed by Pan et al. (2008). This variant uses a new local search named reference insertion schema (RIS) instead of the LS proposed by Ruiz and Stützle (2007) and Taillard (1990). The RIS uses the reference permutation obtained from the NEH algorithm and removes/reinserts jobs from that reference list one by one to find better solutions. Both RIS and LS use Taillard’s speed-up schema to calculate the makespan or flowtime of a solution. The proposed IGRIS of Pan et al. (2008) outperformed the metaheuristics existing in the literature at that time. Pan et al. (2008) also proposed a discrete differential evolution (DDERLS) algorithm with RIS for PFSS problems with the makespan criterion. While a position is being found for a removed job in the local search phase, there can be many partial solutions (ties) having the same objective function value. These ties may lead the algorithm into a cycle, so they must be broken with a tiebreaking mechanism to increase solution quality. Kalczynski and Kamburowski (2008), Dong et al. (2008), Fernandez-Viagas and Framinan (2014), and Vasiljevic and Danilovic (2015) proposed new tiebreaking mechanisms for the problem. Fernandez-Viagas and Framinan (2014) presented a tiebreaking mechanism (TBFF) for the NEH, IGRIS, and IG_RSLS algorithms; their experimental study revealed that these algorithms with TBFF outperform their original versions. Rossi et al. (2016) developed a new heuristic, named Gx, by combining their heuristic with different tiebreaking and initial-order procedures found in the literature. Dubois-Lacoste et al. (2017) suggested optimizing the partial solution after the destruction phase of the classical IG algorithm; their new variant of the IG algorithm outperformed the existing IG algorithms. Fernandez-Viagas et al. (2017) used the proposed TBFF within many different heuristics and metaheuristics and compared those algorithms with each other for the same performance criterion. Their experimental study revealed that IGRIS and IG_RSLS with TBFF outperform other existing and promising metaheuristics, and they showed that the proposed TBFF and Taillard’s speed-up schema increase the solution quality of the algorithms. Fernandez-Viagas and Framinan (2019) proposed a best-of-breed combination (IGBOB) of recent IG variants and their components, inspired by the algorithms of Benavides and Ritt (2018), Dubois-Lacoste et al. (2017), and Fernandez-Viagas and Framinan (2014). Their IGBOB combines the initial solution and local search procedure of Benavides and Ritt (2018) and the local search for partial solutions proposed by Dubois-Lacoste et al. (2017) with their existing TBFF. Their experimental study revealed that IGBOB is the best-so-far algorithm for the problem.

Janiak and Portmann (1998) presented a genetic algorithm for PFSS problems with resource allocation for constrained resources such as energy, catalyzers, and raw materials in order to find a schedule that minimizes the makespan. Rajkumar and Shahabudeen (2009) proposed an improved GA including multi-crossover, multi-mutation, and hypermutation operators to solve PFSS problems with the makespan performance criterion. Nagano et al. (2008) proposed a constructive GA whose parameters are calibrated via design of experiments and which uses the Nawaz–Enscore–Ham (NEH) heuristic and a local search heuristic to define fitness values of solutions. Pasupathy et al. (2006) studied multi-objective PFSS problems with their proposed GA in order to find a Pareto-optimal solution for the makespan and total flowtime performance criteria. Their algorithm makes use of the principle of non-dominated sorting, coupled with a crowding distance metric used as a secondary criterion; this approach is intended to alleviate the problem of genetic drift in the GA methodology. Chen et al. (2012) presented a self-guided GA with a novel strategy that combines global statistical information collected from previous solutions with location information about individual solutions. One of the most prominent papers using simulated annealing (SA) in PFSS problems to minimize the makespan belongs to Osman and Potts (1989). Xiao et al. (2012) studied SA in PFSS problems with order acceptance and weighted tardiness when the objective is to maximize the total net profit with weighted tardiness penalties. Suresh and Mohanasundaram (2004) proposed an SA with a perturbation mechanism called segment random insertion, used to generate the neighborhood of a given sequence, for PFSS problems with the makespan and total flowtime performance criteria. Hybrid algorithms designed by combining the best parts of well-known metaheuristics or heuristics have also been studied for PFSS problems. Sun et al. (2015) proposed a GA based on SA in order to escape local optima and increase search efficiency. Lin et al. (2015) used a hybrid algorithm based on an evolutionary algorithm named the backtracking search algorithm (BSA) to solve PFSS problems with makespan minimization; their hybrid BSA includes crossover/mutation strategies and an SA mechanism. Laha and Chakraborty (2009) investigated PFSS problems with the makespan criterion and presented a new hybrid heuristic algorithm designed by combining elements from SA, NEH, and their previously published composite heuristic. Li et al. (2008) considered a multi-objective PFSS problem by proposing a hybrid algorithm based on particle swarm optimization (PSO), NEH, and SA; in their hybrid algorithm, different well-known heuristics are used to create better evolutionary search results and to evaluate the fitness of these results. Haq et al. (2010) compared two heuristics based on an artificial neural network (ANN) and GA for a PFSS problem where the objective is to minimize the makespan. One of their algorithms is an ANN–GA starting with a random population; the second also combines ANN and GA but uses the random insertion perturbation scheme (RIPS) and is named ANN–GA–RIPS. They showed that ANN–GA–RIPS outperforms ANN–GA. Zobolas et al. (2009) proposed a hybrid metaheuristic for a PFSS problem with makespan minimization consisting of three components: a greedy randomized constructive heuristic for initial population generation, a GA for solution evaluation, and a variable neighborhood search (VNS) to improve the population. Tseng and Lin (2010) considered a PFSS problem where the objective is to minimize the total flow time of the schedule and proposed a hybrid metaheuristic including a GA for global search and a Tabu search (TS) for local search.

The learning effect has been a hot topic among scheduling researchers for more than 15 years. However, fewer papers have focused on PFSS problems with learning effect consideration. He (2016) considered a PFSS problem with a general exponential learning effect when the objective is to minimize maximum lateness and proposed several heuristic methods. Lee and Chung (2013) proposed a branch-and-bound algorithm and two heuristic methods to find an optimum or near-optimum solution when the objective is to minimize the total tardiness of a PFSS problem under a learning effect. Chung and Tong (2012) considered a machine-based learning effect in the PFSS problem when the objective is to minimize the weighted sum of total completion time and makespan; they proposed a branch-and-bound algorithm for an optimum solution and two heuristic methods for near-optimum solutions. In another study, Chung and Tong (2011) considered the learning effect in the PFSS problem with makespan minimization by proposing a dominance theorem and a lower bound to accelerate a branch-and-bound algorithm seeking an optimal solution. Another study using a branch-and-bound algorithm to solve the PFSS problem with learning consideration was conducted by Wang and Zhang (2015). Qin et al. (2016) studied the position-dependent learning effect in the PFSS problem for different performance criteria such as makespan, total completion time, total weighted completion time, and maximum lateness by proposing a GA and a quantum differential evolutionary algorithm. Toksarı and Arık (2017) addressed performance criteria such as makespan, the sum of completion times, and the sum of weighted completion times on a single machine under a fuzzy learning effect with fuzzy processing times; they proposed a credibility-based chance-constrained programming approach for their MINLP and proved that these problems are solvable in polynomial time. Shiau et al. (2015) proposed a branch-and-bound algorithm and several GAs to obtain feasible solutions for a two-agent scheduling problem in a two-machine permutation flow shop with learning effects. Xu et al. (2016) investigated re-entrant permutation flow shop scheduling with a position-based learning effect to minimize the total completion time; they developed some heuristics and a GA to search for approximate solutions. Mustu and Eren (2018) proposed a GA, the kangaroo algorithm, and the variable neighborhood search algorithm for PFSS under a position-dependent learning effect. Shi and Wang (2019) investigated two-machine no-wait PFSS with common due window assignment, learning effect, and resource allocation. Geng et al. (2019) addressed the no-wait flow shop scheduling problem with simultaneous consideration of common due-date assignment, convex resource allocation, and learning effect in a two-machine setting. Wang et al. (2019a, b) investigated PFSS problems with truncated exponential sum-of-logarithm processing time-based and position-based learning effects. Wang et al. (2019a) investigated a position-weighted learning effect and job release dates in a single machine environment and proposed a branch-and-bound algorithm and heuristics for the problem.

The deterioration effect has also been studied in the scheduling literature. Yin and Kang (2015) studied the makespan criterion in the PFSS problem with proportional deterioration and showed that the problem is polynomially solvable for some special cases. Lee et al. (2014) investigated total tardiness minimization in the PFSS problem with deterioration consideration; they proposed a branch-and-bound algorithm and two metaheuristic methods, particle swarm optimization and SA. Wang and Wang (2013) considered a three-machine PFSS problem with deteriorating jobs in order to minimize the makespan and solved it with a branch-and-bound algorithm whose efficiency is increased by two heuristic methods. Bank et al. (2012) investigated a PFSS problem with deteriorating jobs and solved it with two different methods: particle swarm optimization with local search and SA. They showed that particle swarm optimization with local search outperforms SA in terms of solution quality, but SA takes less time to find a solution. Lee et al. (2009) addressed total completion time minimization in the PFSS problem and tested several well-known heuristics under several deterioration patterns, proposing a dominance rule and an efficient lower bound to increase search efficiency. Sun et al. (2019) investigated PFSS problems with simple linear deterioration where the objectives are to minimize the logarithm of the makespan, the total logarithm of the completion times, the total weighted logarithm of the completion times, and the sum of the quadratic job logarithms of the completion times; they proposed branch-and-bound algorithms for these problems. Wang and Liang (2019) considered a single machine group scheduling problem with deteriorating jobs and resource allocation.

Some papers have investigated the learning and deterioration effects simultaneously. As far as we know, the first paper investigating both effects simultaneously was by Wang (2006). Wang (2007, 2009) investigated several performance criteria for single machine scheduling problems under both effects and showed the existence of polynomial-time algorithms for these problems with or without special cases. Toksarı and Güner (2008) proposed a MINLP for a parallel machine scheduling problem under the effects of deterioration and learning where the objective is to minimize earliness/tardiness costs. Toksarı and Güner (2010) investigated a parallel machine scheduling problem under learning and deterioration effects with sequence-dependent setup times and a common due date and proved that the optimal solution is V-shaped. Arık and Toksarı (2018) investigated a multi-objective fuzzy parallel machine scheduling problem where the objectives are to minimize earliness cost, tardiness cost, and the cost of setting due dates. In their study, all parameters, such as processing times and the coefficients of learning and deterioration, as well as all decision variables except the binary ones, are in the form of fuzzy numbers. They proposed a local search algorithm to solve the problem and compared their method with fuzzy mathematical programming methods from the literature. Arık and Toksarı (2019) proposed a MINLP model for a fuzzy parallel machine scheduling problem under fuzzy job deterioration and learning effects with fuzzy processing times to minimize the fuzzy makespan by using possibilistic distributions of fuzzy parameters and possibilistic linear programming approaches. Lu (2016) considered no-idle permutation flow shop scheduling problems with a time-dependent learning effect and deteriorating jobs where the objectives are to minimize the makespan and the total completion time.

For combinatorial optimization problems, hybridizing two or more metaheuristics is a common approach for exploiting their specific advantages. For instance, GA offers a population-based stochastic search that helps escape from local optima, while TS performs a deterministic search that restricts the feasible neighborhood by excluding recently visited neighbors. There are several valuable hybrid approaches that combine GA and TS. Glover et al. (1995) used TS as a strategic oscillation in GA to allow effective transitions between feasible and infeasible regions. Abdinnour-Helm (1998) integrated TS into GA for the uncapacitated hub location problem. Liaw (2000) integrated TS into GA for the open shop scheduling problem where the objective is to minimize the makespan. Li et al. (2003) used TS in a classical GA for an assembly process planning problem. Jat and Yang (2011) proposed a two-phase hybrid algorithm for post-enrollment course timetabling; GA is used in the first phase to improve the solution population, and TS is used in the second phase to improve the quality of the best solution found by GA. Meeran and Morshed (2012) proposed a hybrid algorithm including GA and TS for job shop scheduling problems. Zhang et al. (2013) proposed a hybrid algorithm including GA and TS for a multi-objective dynamic job shop scheduling problem with random job arrivals and machine breakdowns. Palacios et al. (2015) proposed a genetic Tabu search algorithm for the fuzzy flexible job shop scheduling problem where the objective is to minimize the makespan; in their algorithm, TS is applied to all solutions in the population after the GA operations. Li and Gao (2016) proposed a hybrid solution approach including GA and TS for the flexible job shop scheduling problem, in which TS is likewise applied to all solutions in the population after the GA operations.

A PFSS problem requires a single job sequence, selected from \( n! \) possible alternatives, to be applied on all machines. Exact solution algorithms cannot be expected to solve these problems in polynomial time because the number of possible sequences grows factorially with the number of jobs. In this study, the IG_RSLS, IGRIS, DDERLS, and TSPOP methods are applied in order to find good approximate solutions quickly. Each of the investigated algorithms has advantages for solving combinatorial optimization problems. Each solution technique is executed on Taillard’s (1993) test problems consisting of 20, 50, and 100 jobs with 5, 10, and 20 machines. For most of Taillard’s (1993) test problems without learning and/or deterioration effects, the best makespans or upper bounds on the makespan are known. Since there are no published upper bounds for the PFSS problem under the effects of learning and deterioration, we solved some of Taillard’s (1993) test problems with a commercial solver. The results of the proposed algorithms are compared with the upper bounds we found in the numerical examples section.

3 Mathematical model

In this section, a MINLP model is introduced for permutation flow shop scheduling problems under the effects of learning and deterioration when the objective function is to minimize the makespan.


Indices

  • \( i \): job index, \( i = 1, \ldots ,n \)

  • \( j \): machine index, \( j = 1, \ldots ,m \)

  • \( r \): common position index on all machines, \( r = 1, \ldots ,n \)


Parameters

  • \( P_{i,j} \): basic processing time of job \( i \) on machine \( j \)

  • \( a \): learning effect coefficient

  • \( B \): deterioration effect coefficient


Decision variables

  • \( X_{i,r} \): 1 if job \( i \) is assigned to position \( r \) on all machines, 0 otherwise

  • \( P_{[r],j} \): actual processing time of the job assigned to position \( r \) on machine \( j \)

  • \( C_{[r],j} \): completion time of the job assigned to position \( r \) on machine \( j \)

  • \( S_{[r],j} \): starting time of the job assigned to position \( r \) on machine \( j \)

  • \( C_{ \hbox{max} } \): makespan of the schedule

Model

$$ {\text{Min}}\; z = C_{ \hbox{max} } $$
(4)
$$ \begin{aligned} & {\text{s.t.:}} \\ & C_{ \hbox{max} } \ge C_{[n],m} \\ \end{aligned} $$
(5)
$$ \mathop \sum \limits_{i}^{n} X_{i,r} = 1 \;\forall r $$
(6)
$$ \mathop \sum \limits_{r}^{n} X_{i,r} = 1 \;\forall i $$
(7)
$$ C_{[r],j} \ge S_{[r],j} + P_{[r],j} \;\forall r,j $$
(8)
$$ S_{[r],j} \ge C_{[r],j - 1} \;\forall r, \;j = 2, \ldots ,m $$
(9)
$$ S_{[r],j} \ge C_{[r - 1],j} \;\forall j,\; r = 2, \ldots ,n $$
(10)
$$ P_{[r],j} = \left( {\left( {\mathop \sum \limits_{i}^{n} X_{i,r} *P_{i,j} } \right) + B*S_{[r],j} } \right)*r^{a} \;\;\forall r,j $$
(11)
$$ C_{[0],1} = 0 $$
(12)
$$ C_{ \hbox{max} } \ge 0 $$
(13)
$$ C_{[r],j} , P_{[r],j} , S_{[r],j} \ge 0 $$
(14)
$$ X_{i,r} \in \{ 0,1\} . $$
(15)

The objective function (4) minimizes the makespan of the schedule. Constraint (5) ensures that the makespan is the maximum completion time of all jobs. Constraint (6) ensures that position \( r \) on all machines is occupied by exactly one job. Constraint (7) ensures that each job is assigned to exactly one position on all machines. Constraint (8) states that the completion time of the job assigned to a common position \( r \) on each machine is equal to or greater than the sum of its starting time and actual processing time. Constraint (9) states that the starting time of the job at position \( r \) on machine \( j \) is greater than or equal to the completion time of the job at the same position on the previous machine. Constraint (10) states that the starting time of the job at position \( r \) on machine \( j \) is greater than or equal to the completion time of the job at the previous position on the same machine. Constraint (11) gives the calculation of the actual processing time of the job assigned to position \( r \) on machine \( j \). It is required to determine which job is assigned to which position on which machine, so a mapping between positions and jobs is required. Since the relation between the processing time of the job at position \( r \) (\( P_{[r],j} \)) and the jobs’ basic processing times (\( P_{i,j} \)) makes the problem nonlinear, the proposed mathematical model is a mixed-integer nonlinear model; this mapping is expressed by Constraint (11). Constraint (12) states that all jobs are ready to be processed at the beginning. Constraints (13) and (14) state that the makespan, starting times, actual processing times, and completion times are greater than or equal to zero. Constraint (15) states that the decision variable \( X_{i,r} \) is binary.
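
To make the recursion behind constraints (8)–(12) concrete, the following Python sketch evaluates the makespan of a given permutation under the learning and deterioration effects. It is our own illustrative reading of the model (evaluating a fixed sequence), not code supplied with the paper, and the instance data are made up.

```python
def makespan(perm, P, a=-0.8, B=0.1):
    """Makespan of a job permutation; P[i][j] is the basic processing time
    of job i on machine j. Starting times follow constraints (9)-(10),
    actual processing times follow constraint (11)."""
    m = len(P[0])
    comp = [0.0] * m                      # completion times of the previous position
    for r, job in enumerate(perm, start=1):
        for j in range(m):
            start = max(comp[j], comp[j - 1] if j > 0 else 0.0)
            comp[j] = start + (P[job][j] + B * start) * r ** a
    return comp[-1]

# Tiny illustrative instance: 3 jobs, 2 machines (made-up values).
P = [[4.0, 3.0], [2.0, 5.0], [3.0, 3.0]]
print(makespan([0, 1, 2], P))
```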

4 Population-based Tabu search with evolutionary strategies

The Tabu search algorithm was introduced by Glover (1989, 1990) as a search strategy for solving combinatorial optimization problems whose applications range from graph theory and matroid settings to general pure and mixed-integer programming problems. Tabu search is a deterministic search algorithm that transforms one solution into another while preventing cycling among solutions. To avoid cycling, TS stays away from certain moves that create undesired neighborhoods; these moves or undesired solutions are listed in a short-term memory called the Tabu list. Although Tabu search was originally designed to improve a single current solution while avoiding cycling, this paper proposes a Tabu search with a population-based search and evolutionary strategies. There are many feasible solutions in the solution space, and most of them can be reached by simple moves between solutions. The proposed search method uses a solution population and searches for the best candidates by locally searching the population’s individuals. The algorithm then records and forbids the current solution with the help of a Tabu list in order to create better solutions. If the search is trapped in a local area and the solution population becomes ineffective for improving the solution, evolutionary strategies such as crossover and mutation take place to create a new solution population that helps to improve the current solution. The hybrid algorithms in the literature that combine GA and TS (Zhang et al. 2013; Palacios et al. 2015; Li and Gao 2016) generally use the main framework of GA, with evaluation, selection, crossover, and mutation operators, and then use the TS algorithm to improve the best solution obtained from the GA operators. In this paper, we use the main framework of the TS algorithm to improve the solution quality of individuals in the population and use evolutionary strategies such as crossover and mutation to escape local optima. Algorithm 1 shows the general schema of the proposed population-based Tabu search with evolutionary strategies (TSPOP).

Algorithm 1
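
The pseudocode itself is given in Algorithm 1; the following Python sketch is only our schematic reading of its overall flow. The helper names (initial_population, local_search_population, evolve, makespan) stand in for Algorithms 2–6 and the evaluation of a sequence, and are assumed to be defined elsewhere.

```python
import time

def tspop(instance, time_limit_ms, B=5, C=5, K=10, pc=0.85, pm=0.15):
    """Schematic outline of the TSPOP main loop: a population-based Tabu
    search whose population is renewed by evolutionary operators whenever
    the search stops improving."""
    population = initial_population(instance)                  # cf. Algorithm 2
    tabu_list = []
    best = min(population, key=makespan)                       # best solution so far
    k = 0                                                       # non-improvement counter
    deadline = time.monotonic() + time_limit_ms / 1000.0
    while time.monotonic() < deadline:
        incumbent = local_search_population(population, B, C, tabu_list)  # cf. Algorithms 3-5
        if makespan(incumbent) < makespan(best):
            best, k = incumbent, 0
        else:
            k += 1
        if k > K:                                               # trapped: evolutionary escape
            population = evolve(population, best, incumbent, pc, pm)      # cf. Algorithm 6
            k = 0
    return best
```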

The Initial_Population procedure in Algorithm 1 is designed to produce a solution population from which a good overall schedule can be developed. Algorithm 2 first produces \( n \) solutions; the number of solutions in the population is then decreased or increased to 60. The first positions of the job orders in these seed solutions cover every possible job: the first job of the first solution in the population is job #1, the first job of the second solution is job #2, and so on, so each job is assigned to the first position of one solution. Then, for the second position of each job order, the job that minimizes the total idle time of the machines is found and assigned. For instance, if there are 5 jobs \( j = \left\{ {1,2,3,4,5} \right\} \) and 3 machines \( k = \left\{ {1,2,3} \right\} \), we can produce 5 solutions for the population. These solutions are \( \pi_{1} = \left\{ {1,?,?,?,?} \right\} \), \( \pi_{2} = \left\{ {2,?,?,?,?} \right\} \), \( \pi_{3} = \left\{ {3,?,?,?,?} \right\} \), \( \pi_{4} = \left\{ {4,?,?,?,?} \right\} \) and \( \pi_{5} = \left\{ {5,?,?,?,?} \right\} \). The first positions of the solutions are now filled, and the second positions of the job orders are selected from the unassigned jobs so as to minimize the total idle time of the machines. For solution #1, \( \pi_{1} = \left\{ {1,?,?,?,?} \right\} \), the unassigned jobs are {2, 3, 4, 5}, and we select the job that minimizes the total idle time of the machines. If job #3 yields the minimum total idle time, then \( \pi_{1} = \left\{ {1,3,?,?,?} \right\} \). This continues until no unassigned job remains for any solution in the population. This procedure is based on the profile fitting procedure proposed by McCormick et al. (1989), which was originally designed for minimizing the cycle time of serial workstations in an assembly line with a blocking constraint; here, we use it to create the initial solution population. After the initial population is created, the solutions are ordered in increasing order of their makespan values, and the population size is adjusted to 60 by selecting the best 60 solutions. If the initial population contains only 20 solutions, these 20 solutions are placed directly in the population of 60 and the remaining 40 solutions are generated randomly; conversely, if the initial population contains 100 solutions, the best 60 solutions are placed directly in the population.

Algorithm 2
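
Below is a simplified sketch of the profile-fitting-style seeding described above: each seed sequence starts with a different job, and the remaining positions are filled by the unscheduled job that adds the least total machine idle time. The idle-time bookkeeping is our own simplified (and unoptimized) reading, not the exact procedure of Algorithm 2.

```python
def build_seed(first_job, P, a=-0.8, B=0.1):
    """One profile-fitting-style seed: fix first_job at position 1, then
    repeatedly append the unscheduled job that adds the least idle time."""
    n, m = len(P), len(P[0])

    def total_idle(sequence):
        comp, idle = [0.0] * m, 0.0
        for r, job in enumerate(sequence, start=1):
            for j in range(m):
                start = max(comp[j], comp[j - 1] if j > 0 else 0.0)
                idle += start - comp[j]        # time machine j waits before this job
                comp[j] = start + (P[job][j] + B * start) * r ** a
        return idle

    seq, remaining = [first_job], set(range(n)) - {first_job}
    while remaining:
        nxt = min(remaining, key=lambda job: total_idle(seq + [job]))
        seq.append(nxt)
        remaining.remove(nxt)
    return seq

# One seed per job, as in the Initial_Population description above;
# P would be the matrix of basic processing times.
# population = [build_seed(j, P) for j in range(len(P))]
```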

After the first population is created, the Local_Search_Population procedure in Algorithm 1 is applied to improve the solution quality of the first \( B \) solutions in the population. These solutions are individually sent to the Local_Search operator of the proposed algorithm. The basic idea of the proposed Local_Search_Population is to produce better neighborhoods that have a chance to become the best current candidate. The Local_Search_Population and Local_Search procedures are given in Algorithms 3 and 4.

Algorithm 3

The Local_Search procedure called in Algorithm 3 is designed to improve the incumbent solution. The basic idea of the proposed local search is to produce new neighborhoods that have a chance to become the best current candidate. To produce new neighborhoods of a solution, three different search operations are applied \( C \) (a predetermined number of local search iterations) times by selecting a random job from the current solution. Insertion, swapping, and double-swapping operations are applied, in that order, to the current solution. If a candidate solution is not in the Tabu list and its makespan is less than or equal to the incumbent solution’s makespan, the incumbent solution is replaced with this candidate. Insertion is one of the most widely used search operators for PFSS problems; the proposed Local_Search procedure selects a random job from the incumbent and tries to find a better solution by inserting that job into all possible positions. The swapping operation selects a random job from the incumbent solution and swaps its position with every other job in the solution. The double-swapping operator selects a random position \( r \), removes the jobs at positions \( r \) and \( r + 1 \) from the solution, and tries to find a better solution by reinserting them into all possible positions; after reinsertion, these two jobs are also swapped in search of a better solution. The length of the Tabu list is 100. When a new best makespan is found, this makespan and its schedule are added to the Tabu list. In the Local_Search procedure, solutions are replaced with their neighbors, so to avoid returning to previous solutions, the new better solution is added to the Tabu list by the Check_Tabu_List operator given in Algorithm 5. If the number of solutions in the Tabu list exceeds 100, the oldest solution in the list is removed. The Local_Search algorithm is given in Algorithm 4.

Algorithm 4
Algorithm 5
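
A compact sketch of two of the neighborhood moves described above (insertion and swap; double-swap is analogous and omitted for brevity), together with the non-worsening acceptance and a plain-list Tabu check, is given here. It assumes an `evaluate` function that computes the makespan under the learning/deterioration effects (like the one sketched in Sect. 3) and is only an illustration of the idea, not the exact Algorithms 4–5.

```python
import random

def insertion_neighbors(seq):
    """Move one randomly chosen job to every other position."""
    i = random.randrange(len(seq))
    job, rest = seq[i], seq[:i] + seq[i + 1:]
    return [rest[:p] + [job] + rest[p:] for p in range(len(rest) + 1)]

def swap_neighbors(seq):
    """Swap one randomly chosen job with every other job."""
    i = random.randrange(len(seq))
    neighbors = []
    for k in range(len(seq)):
        if k != i:
            s = list(seq)
            s[i], s[k] = s[k], s[i]
            neighbors.append(s)
    return neighbors

def local_search(seq, evaluate, tabu_list, C=5):
    """Accept a neighbor if it is not Tabu and does not worsen the makespan."""
    best, best_val = list(seq), evaluate(seq)
    for _ in range(C):
        for cand in insertion_neighbors(best) + swap_neighbors(best):
            val = evaluate(cand)
            if cand not in tabu_list and val <= best_val:
                best, best_val = cand, val
    return best
```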

Evolutionary strategies take place when the number of forbidden solutions and the number of iterations with no improvement exceed a certain number \( K \), as seen in Algorithm 1. This step is intended to escape local optima and produce a new solution population that may include new candidate solutions. To produce the new population, the previous population, the best solution found so far, and the incumbent solution (the best solution in the current population) are used, as seen in Algorithm 6. The crossover operator in this study is a two-point crossover, and the mutation operator is a swap mutation. The solution encoding is a permutation encoding, and each substring is defined by job indices. Other evolutionary operations such as evaluation and selection are not used because they increase the time required to obtain a solution. The crossover operation is therefore applied to pairs of solutions in the order {(\( \pi_{1} ,\pi_{2} \)), (\( \pi_{3} ,\pi_{4} \)), …, (\( \pi_{59} ,\pi_{60} \))} with a probability \( p_{c} \) by randomly selecting two crossover points. Figure 2 shows a two-point crossover and repair operation for a solution pair. The mutation operator randomly selects two jobs in the solution and inverts the substring between them with a probability \( p_{m} \), as seen in Fig. 3. The counter of local searches without improvement, \( k \), is reset to zero, and the new solution population is obtained from the best solution in memory and the incumbent solution via the crossover and mutation operators. The TSPOP algorithm runs until a predetermined stopping condition is met; in this study, the stopping condition for TSPOP is the total elapsed time in milliseconds.

Algorithm 6
Fig. 2

Two-point crossover and repair operations

Fig. 3

The mutation operation
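
Below is a sketch of the two evolutionary operators just described: a two-point crossover with a generic repair step that restores a valid permutation (the exact repair rule is the one illustrated in Fig. 2; ours is a standard stand-in), and the mutation of Fig. 3, which inverts the substring between two randomly chosen positions.

```python
import random

def two_point_crossover(parent1, parent2):
    """Copy the segment between two random cut points from parent1, then
    repair by filling the remaining positions with the missing jobs in the
    order they appear in parent2, so the child stays a permutation."""
    n = len(parent1)
    c1, c2 = sorted(random.sample(range(n + 1), 2))
    child = [None] * n
    child[c1:c2] = parent1[c1:c2]
    fill = iter(job for job in parent2 if job not in child)
    return [job if job is not None else next(fill) for job in child]

def inversion_mutation(seq, pm=0.15):
    """With probability pm, invert the substring between two random positions."""
    if random.random() < pm:
        i, j = sorted(random.sample(range(len(seq)), 2))
        seq = seq[:i] + seq[i:j + 1][::-1] + seq[j + 1:]
    return seq
```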

5 Numerical examples

Taillard’s (1993) test problems consisting of 20, 50, and 100 jobs with 5, 10, and 20 machines are used to show the performance of the metaheuristic methods. For all problems, the learning effect coefficient and the deterioration effect coefficient are − 0.8 and 0.1, respectively. When the time required to find an optimum solution of a combinatorial problem is exceedingly high, when the solver stops improving the solution, or when the optimality gap is reduced very slowly, it is reasonable to limit the execution time and relax the optimality requirement by using metaheuristic methods. Metaheuristic methods try to find optimal solutions, but mostly they yield near-optimal ones; therefore, parameter design in any metaheuristic is significant. The parameters of the proposed TSPOP algorithm are \( B \) (the predetermined number of solutions used in the Local_Search_Population procedure), \( C \) (the predetermined number of local search iterations in the Local_Search procedure), the length of the Tabu list, \( K \) (the maximum allowable number of iterations with no improvement in Algorithm 1), the crossover probability \( p_{c} \), and the mutation probability \( p_{m} \). In our preliminary experiments, we tried many combinations of these parameters and determined them as \( B \) = 5, \( C \) = 5, the length of the Tabu list = 100, \( K \) = 10, \( p_{c} \) = 0.85, and \( p_{m} \) = 0.15.

As rivals of the proposed TSPOP, we used two IG algorithms and a DDE algorithm for PFSS problems under the effects of learning and deterioration. The IG algorithm for PFSS problems was first proposed by Ruiz and Stützle (2007). The IG algorithm is a single-solution metaheuristic method in which the initial solution is obtained by using the well-known NEH heuristic. The IG algorithm for PFSS problems (IG_RS) applies two phases iteratively, named destruction and construction. In the destruction phase, some jobs are removed from the incumbent solution. After the destruction phase, the removed jobs are reinserted into the partial solution to construct a complete solution again (the new incumbent solution). Every time a removed job is inserted into the partial solution, a greedy selection is made among all possible positions at which the job can be inserted. In each iteration, a constant number \( d \) of jobs is removed and reinserted. When a candidate solution has been completed, an acceptance criterion decides whether the new solution replaces the incumbent solution. IG_RS uses a simulated annealing-like acceptance criterion with a constant temperature, calculated as follows:

$$ {\text{Temperature}} = T \cdot \frac{{\mathop \sum \nolimits_{i = 1}^{n} \mathop \sum \nolimits_{j = 1}^{m} p_{ij} }}{n \cdot m \cdot 10} $$
(16)

where \( T \) is the second parameter of IG_RS, adjusting the temperature of the simulated annealing-like acceptance criterion. After the destruction and construction phases of the algorithm, an optional insertion-based local search (LS) can be applied to increase the efficiency of the IG_RS algorithm. The LS operator randomly removes a job from the complete solution and reinserts it into all possible positions of the partial solution. If the LS operator finds a better objective function value while inserting the removed job at different positions, the job is placed at that position, and the procedure is repeated for another job; the process terminates when all jobs have been tried in all possible positions without improvement. The complexity of calculating the makespan or flowtime of a solution is \( O(nm) \), and if there are \( k \) possible positions after removing a job, this complexity increases to \( O(n^{2} m) \). Taillard (1990) proposed a mechanism, known as Taillard’s acceleration, with which the evaluation of the \( k \) subsequences can be done in \( O(nm) \), thus reducing the overall complexity of the heuristic to \( O(n^{2} m) \). Taillard’s acceleration can be used in any phase of IG algorithms such as NEH, destruction/construction, and local search. Of course, this acceleration schema works only when the performance criterion is the minimization of the makespan or flowtime of a schedule and there is no effect such as learning and/or deterioration. This variant of the IG algorithm is named IG_RSLS. IG_RSLS has two parameters (\( d \) and \( T \)), which Ruiz and Stützle (2007) suggested setting to \( d \) = 4 and \( T \) = 0.4 according to their parameter tuning. Another variant of the IG algorithm was proposed by Pan et al. (2008). This variant, named IGRIS, uses a referenced insertion schema (RIS) instead of the LS proposed by Ruiz and Stützle (2007). This version of the local search operator uses a referenced solution obtained from a heuristic such as NEH to determine which jobs are selected and removed from the complete solution; in the RIS operator, jobs are not extracted randomly but in the order given by the referenced permutation. Pan et al. (2008) suggested the same parameter setting (\( d \) = 4 and \( T \) = 0.4) for IGRIS as Ruiz and Stützle (2007). The essential difference between IG_RSLS and IGRIS is the local search procedure used: if the IG uses the LS operator for the PFSS problem under learning and deterioration effects, it is IG_RSLS; when it uses the RIS operator, it is IGRIS.
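
To make the destruction–construction loop and the acceptance rule of Eq. (16) concrete, here is a minimal sketch of one IG iteration. The local search step is omitted, `evaluate` is assumed to compute the makespan under the learning/deterioration effects, and the parameter values follow the settings quoted above (d = 4, T = 0.4); it is an illustration, not the authors' implementation.

```python
import math
import random

def constant_temperature(P, T=0.4):
    """Eq. (16): T * (sum of basic processing times) / (n * m * 10)."""
    n, m = len(P), len(P[0])
    return T * sum(sum(row) for row in P) / (n * m * 10)

def ig_iteration(incumbent, evaluate, d=4, temperature=1.0):
    """One destruction/construction step with the SA-like acceptance rule."""
    # Destruction: remove d randomly chosen jobs.
    removed = random.sample(incumbent, d)
    partial = [job for job in incumbent if job not in removed]
    # Construction: greedily reinsert each removed job at its best position.
    for job in removed:
        best_pos = min(range(len(partial) + 1),
                       key=lambda p: evaluate(partial[:p] + [job] + partial[p:]))
        partial.insert(best_pos, job)
    candidate = partial
    # Acceptance: keep improvements, otherwise accept with SA-like probability.
    delta = evaluate(candidate) - evaluate(incumbent)
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        return candidate
    return incumbent
```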

The differential evolution algorithm is a population-based solution method for continuous optimization problems. Due to the discrete structure of the PFSS problem, Pan et al. (2008) proposed a discrete differential evolution (DDE) algorithm for the problem. In the DDE algorithm, the target individual is represented by a permutation of jobs, and the best solution of the previous generation in the target population is perturbed in order to obtain the mutant individual and achieve the differential variation. The DDE algorithm uses a referenced local search (RLS) operator, based on the RIS, for the local search of individuals in the population. Pan et al. (2008) proposed the parameters of DDERLS as \( d \) = 4, a population size of 10, \( p_{c} \) = 0.80, and \( p_{m} \) = 0.20. The IG and DDE algorithms used in this study are not different from the original algorithms proposed by Ruiz and Stützle (2007) and Pan et al. (2008); the only difference is that the algorithms in this paper do not use Taillard’s acceleration schema to calculate the maximum completion time, because processing times are under the effects of learning and deterioration.

In the literature on the comparison of algorithms for PFSS problems, the execution time limit of the algorithms (in milliseconds) is determined by the formula \( t \cdot n \cdot m/2 \), where \( t \) is a constant, \( n \) is the number of jobs, and \( m \) is the number of machines. In this study, we used three different \( t \) values, \( t \in \left\{ {30,60,90} \right\} \), for the comparison. Since there are no published upper bounds for the PFSS problem under the effects of learning and deterioration, we solved some of Taillard’s (1993) test problems with a commercial solver. For the first 90 test instances (from 20 jobs with 5 machines to 100 jobs with 20 machines) of Taillard’s (1993) benchmark problems, the commercial solver software AIMMS was used with the MINLP model introduced in Sect. 3. While these problems were solved in AIMMS, the execution for each problem was limited to 1000 s or until the solution’s optimality gap reached 0.0002. All metaheuristic algorithms (with their original parameters) were coded in the C# programming language with an MS Access database and run on a standard desktop computer with an Intel i5 CPU and 8 GB RAM. The well-known performance measure used to evaluate a solution method’s performance for flow shop scheduling problems is the average relative percentage deviation (ARPD), defined as follows:

$$ {\text{ARPD}} = \frac{1}{R}\mathop \sum \limits_{i = 1}^{R} \frac{{(f_{i} - f_{\text{best}} ) \cdot 100}}{{f_{\text{best}} }} $$
(17)

where \( f_{i} \) is the objective function value of the proposed heuristic or metaheuristic method in the \( i{\text{th}} \) independent run, \( f_{\text{best}} \) is the best-known solution (optimum or upper bound of the optimum) for the problem, and \( R \) is the number of independent runs of the solution approach. \( R \) was set to 5 for all test problems. In this study, we used the solutions obtained with the AIMMS solver as the \( f_{\text{best}} \) values for the test instances. These solutions of the first 90 problems of Taillard’s (1993) benchmark set and all results obtained from the compared metaheuristics are available to readers upon request. Table 1 shows the ARPD values of the compared algorithms when \( t \) is set to 30 for the time limitation. Tables 2 and 3 show the ARPD values of the compared algorithms for \( t \) = 60 and \( t \) = 90, respectively.
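
For completeness, Eq. (17) and the \( t \cdot n \cdot m/2 \) time-limit rule in code form; the numbers in the usage lines are purely illustrative and are not results from the tables.

```python
def arpd(run_values, f_best):
    """Average relative percentage deviation over R independent runs (Eq. 17)."""
    return sum((f - f_best) * 100.0 / f_best for f in run_values) / len(run_values)

def time_limit_ms(n, m, t=30):
    """Stopping time in milliseconds: t * n * m / 2."""
    return t * n * m / 2

print(arpd([103.0, 101.5, 104.2, 102.0, 101.8], f_best=100.0))  # 2.5
print(time_limit_ms(20, 5, t=30))                               # 1500.0
```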

Table 1 ARPD values of compared algorithms where \( t \) = 30
Table 2 ARPD values of compared algorithms where \( t \) = 60
Table 3 ARPD values of compared algorithms where \( t \) = 90

The best ARPD values are marked in bold in Tables 1, 2 and 3 for each combination of the number of jobs and the number of machines. As seen from Tables 1, 2 and 3, the TSPOP algorithm obtains almost all of the best ARPD values for the test instances. Since the TSPOP algorithm has a mechanism to avoid cycling solutions in each execution, we checked how many times the proposed algorithm rejected a cycling solution over all test instances in the experiment with \( t = 30 \); the average ratio of avoided cycling solutions per problem is 9.73%. Thus, TSPOP does not reuse solutions that were already found and improved in previous iterations; instead, it generates new solutions that have a chance to become new better solutions by escaping from cycling solutions. For a better comparison, an ANOVA test was performed at a 95% confidence level. We tested the following factors: (1) the number of jobs (\( n \)), tested at three values: 20, 50, and 100; (2) the number of machines (\( m \)), tested at three values: 5, 10, and 20; (3) the type of method, tested at four variants: IG_RSLS, IGRIS, DDERLS, and TSPOP; and (4) the predetermined stopping criterion, tested at three variants: \( t \) = 30, \( t \) = 60, and \( t \) = 90. The details of the ANOVA test are given in Table 4. As seen from Table 4, all factors except the predetermined stopping criterion (\( t \cdot n \cdot m/2 \)) show a significant difference at the 95% confidence level because their p values are less than 0.05.

Table 4 ANOVA results for the comparison of solution methods

The ANOVA results in Table 4 show that there is a significant difference between the solution methods. For a more detailed comparison, the interval plot of ARPD values in Fig. 4 shows that the TSPOP algorithm yields lower ARPD values than the other algorithms. If we consider the ARPD values for each \( t \) value, \( t \in \left\{ {30,60,90} \right\} \), the interval plot of the ARPD values of each algorithm for each \( t \) value in Fig. 5 shows that the TSPOP algorithm outperforms the other algorithms for every \( t \) value.

Fig. 4

Interval plot of ARPD values obtained by solution approaches

Fig. 5

Interval plot of ARPD values obtained by solution approaches for each \( t \) value

For a more detailed comparison, Wilcoxon signed-rank tests with a 95% confidence level were performed between the proposed TSPOP algorithm and the other algorithms considering all \( t \) values (\( t \in \) {30, 60, 90}). The results of the Wilcoxon signed-rank tests are given in Table 5. As seen in Table 5, all p values are less than 0.05, so there are significant differences between TSPOP and each of the other algorithms for every \( t \) value. Therefore, we conclude that the proposed TSPOP algorithm clearly outperforms the IG_RSLS, IGRIS, and DDERLS algorithms under all predetermined stopping criteria for PFSS problems under the effects of learning and deterioration.

Table 5 Results of Wilcoxon signed-rank tests

6 Conclusion

In this study, PFSS problems under the effects of position-dependent learning and linear deterioration are studied when the objective is to minimize the makespan. A hybrid solution algorithm called the population-based Tabu search algorithm with evolutionary strategies (TSPOP) and well-known metaheuristic methods (IG_RSLS, IGRIS, and DDERLS) are used to solve PFSS problems under these effects. For the comparison of the solution approaches, some of Taillard’s (1993) benchmark problems under the effects of learning and deterioration are solved with a commercial solver, and these solutions are used as upper bounds in the comparison of the algorithms. The experimental results show that the proposed TSPOP outperforms the other existing algorithms when the objective is to minimize the makespan with jobs under learning and deterioration effects. For future research, the results in this study can be used as benchmarks for other metaheuristic methods for PFSS problems under the effects of position-dependent learning and linear deterioration. Furthermore, the proposed TSPOP algorithm can be adapted to sequence-dependent or flexible flow shop scheduling problems.