Introduction

Most manufacturing organizations try to implement ideas adopted from the Just in Time (JIT) philosophy, such as on-time delivery and minimum inventory (Salameh and Ghattas 2001). In this paper we introduce and formulate a JIT single machine scheduling problem with periodic preventive maintenance on the machine. Most scheduling problems assume that the machine is available without interruption, while in practice it may be unavailable due to causes such as breakdown or preventive maintenance. According to the British Standards Institution (BSI), “Maintenance is a combination of any actions to retain an item in, or restore it to an acceptable condition” (BSI 1984). Periodic preventive maintenance, a fundamental part of JIT production, consists of regular preventive measures that increase machine reliability and decrease the probability of breakdown during the manufacturing process. In some manufacturing processes, overuse of the tool can degrade the quality of the workpiece; therefore, when the workpiece is expensive or high accuracy is required, the tool is changed before it is fully worn. A well-known example is the printed circuit board manufacturing process, in which the drilling machine is one of the most important devices: not only must the machine stop for maintenance after a period of processing time, it must also stop to change the micro-drill after a fixed number of uses. Accordingly, the proposed model imposes a limit on the maximum number of jobs processed during each period. The objective of the proposed bi-objective mixed integer model is to minimize total earliness-tardiness costs and makespan simultaneously.

The machine unavailability problem has been investigated in the literature for causes such as machine breakdown, tool change, and preventive maintenance. Machine breakdown or loss of product quality becomes likely when a machine works unceasingly for a long time. To avoid this situation, preventive maintenance, which may be periodic or flexible, is conducted on the machine (Xu et al. 2015). In flexible maintenance, the earliest and latest start times are determined and the maintenance process is allowed to start within this window. Yang et al. were the first to study a single machine scheduling problem with flexible maintenance (Yang et al. 2002); they investigated the problem of minimizing makespan and proved the problem NP-hard. Qi et al. (1999) studied a problem in which multiple maintenance processes and jobs are to be scheduled on a single machine. They proposed heuristics as well as branch and bound based approaches to determine the sequence of jobs and the maintenance start times while minimizing total completion time. Furthermore, Chen proposed a mixed binary integer program and a heuristic to minimize mean flow time (Chen 2006). Wan (2014) investigated minimizing total earliness and tardiness in a single machine scheduling problem with a common due date for all jobs and a flexible maintenance. Luo et al. proposed polynomial algorithms for single machine scheduling with flexible maintenance under various objective functions (Luo et al. 2015). Low et al. studied single machine scheduling under two strategies: flexible maintenance, and changing the tool after a predetermined number of jobs has been processed on the machine (Low et al. 2010). Their model aimed at minimizing the makespan.

In addition, various scheduling studies address a periodic maintenance process. For example, Liao and Chen (2003) propose a branch and bound based algorithm to minimize maximum tardiness, and Chen (2006) proposes a heuristic to minimize mean flow time in a problem with periodic maintenance. Benmansour et al. investigated a JIT single machine scheduling problem with periodic maintenance in which the objective was to minimize maximum tardiness and maximum earliness (Benmansour et al. 2014); they also proposed a heuristic to solve the problem efficiently. On the other hand, in some cases such as Liao et al. (2007) and Yang et al. (2008), it is assumed that the machine must stop for maintenance after a fixed number of operations. Hsu et al. studied a single machine scheduling problem, with a makespan minimization objective, under two strategies, namely periodic maintenance and a limited number of operations during each period (Hsu et al. 2010). They proposed a two-stage binary integer program and two efficient heuristics, best fit butterfly (BBF) and best fit decreasing (DBF), for solving large scale problems. Ebrahimy Zade and Fakhrzad proposed a new mathematical model and a dynamic genetic algorithm (GA) to solve this problem (Ebrahimy Zade and Fakhrzad 2013). Computational results revealed that the solutions from the proposed dynamic GA were of better quality than those of BBF and DBF.

On-time delivery and minimum inventory costs are among the most important targets of a JIT manufacturing system. However, minimizing earliness and tardiness costs does not necessarily imply minimum inventory. Therefore, some studies, such as Behnamian (2014), Gao et al. (2014), Behnamian and Fatemi Ghomi (2014), Gao (2010), and Eren (2007), consider total earliness–tardiness and makespan minimization simultaneously; however, none of them considers preventive maintenance, despite its substantial role in the JIT philosophy. Table 1 compares some properties of the proposed model with the most related articles in the literature.

Table 1 Some extensions to the scheduling problem with maintenance process

Considering the above literature review, the main contributions of this article are as follows:

  1. Proposing a mathematical model for the JIT single machine scheduling problem with periodic maintenance.

  2. Simultaneous minimization of makespan, earliness, and tardiness in a JIT scheduling problem with periodic maintenance.

  3. Proposing three different multi-objective optimization algorithms to solve the problem.

The rest of this paper is organized as follows. The proposed mixed integer model is presented in the next section. Considering the computational complexity of the problem, we then propose three multi-objective optimization algorithms for solving it. Parameter tuning for each algorithm is described next, followed by a section on computational results; finally, conclusions and further research directions are provided.

Proposed model

In this section, we define the discussed problem formally. The problem consists of n nonresumable jobs, available at time zero, to be scheduled on a single machine. It is assumed that the machine has no ready time and that jobs are delivered to customers immediately after completion. Each job has a unique due date. A job delivered to the customer before its due date is called an early job; a job delivered after its due date is a tardy job. It is assumed that both earliness and tardiness are costly, although when earliness is desirable from the customer’s point of view, a negative cost may be applied in the model. Preventive maintenance is conducted on the machine after each fixed working period of length t. Moreover, to maintain the quality of the products, there is a limit on the maximum number of allowable jobs during each working period; after that, during the maintenance period, the tool is changed.

The first objective of the proposed model minimizes total earliness-tardiness. On the other hand, machine/operator idleness and unnecessary work in process (WIP) are costly to the manufacturing system, and the first objective does not capture them. For example, consider a problem in which the fixed processing period is 5, the fixed maintenance duration is 2, and the maximum number of jobs doable during each processing period is 3. The process time and due date for each job are presented in Table 2. Two different sequences, A and B, are shown in Fig. 1. As is evident, total earliness-tardiness for both sequences is 5, so they are equivalent from the first objective’s point of view. However, sequence B has less idle time, and consequently the total working time for the machine and operator is less than in sequence A, which results in a less expensive manufacturing system. For this purpose, the second objective of the proposed model minimizes the makespan.
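As an illustration, total earliness-tardiness and makespan for a given job sequence under periodic maintenance can be computed with a short simulation. The job data below are hypothetical (Table 2's actual values are not reproduced here); t = 5, m = 2, and k = 3 follow the example above.

```python
def evaluate(sequence, p, d, t=5, m=2, k=3):
    """Total earliness-tardiness and makespan for a job sequence
    under periodic maintenance: each working period lasts t time
    units, holds at most k jobs, and is followed by maintenance m."""
    time_in_period = 0   # processing time used in the current period
    jobs_in_period = 0   # jobs completed in the current period
    period_start = 0     # absolute start time of the current period
    et, makespan = 0, 0
    for i in sequence:
        # Start a new period if the job does not fit (jobs are
        # nonresumable) or if the job-count limit k is reached.
        if time_in_period + p[i] > t or jobs_in_period == k:
            period_start += t + m
            time_in_period, jobs_in_period = 0, 0
        time_in_period += p[i]
        jobs_in_period += 1
        c = period_start + time_in_period          # completion time
        et += max(0, d[i] - c) + max(0, c - d[i])  # earliness + tardiness
        makespan = c
    return et, makespan
```

For instance, with hypothetical process times [2, 3, 2, 1] and due dates [2, 5, 9, 10], the sequence [0, 1, 2, 3] meets every due date exactly, while swapping the first two jobs incurs a total earliness-tardiness of 5 with the same makespan.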

Table 2 Process time and due dates
Fig. 1 Two different sequences

The set of problem parameters and indices in the proposed model are as follows:

i: index for the jobs; i = 1, 2, …, n.

j: index for the position of a job within a period; j = 1, 2, …, k.

s: index for the periods.

p_i: process time for job i.

t: duration of a working period (between two consecutive maintenance periods).

m: fixed maintenance duration.

d_i: due date for job i.

α_i: earliness penalty coefficient for job i.

β_i: tardiness penalty coefficient for job i.

And the set of decision variables are as follows:

x_ijs: binary variable equal to 1 if job i is in the jth position of period s, and 0 otherwise.

C_ijs: completion time of job i in the jth position of period s.

T_i: tardiness of job i.

E_i: earliness of job i.

Since neither the number of periods nor the number of jobs in each period is predetermined, initially the maximum possible number of periods, \(l = \hbox{max} \left\{ {n,\left[ {\frac{{\hbox{max}\, d_{i} }}{t + m}} \right] + 1} \right\}\), is assumed, each period containing k positions for jobs. Thereafter, considering the duration of each period, the maximum number of applicable jobs in each period, and the objective functions, the jobs are arranged in the periods. Figure 2 presents a possible solution for a problem with n = 7 and k = 4.
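A minimal sketch of this upper bound on the number of periods, assuming the bracket in the formula denotes the integer part (floor):

```python
import math

def max_periods(n, due_dates, t, m):
    """Upper bound l on the number of periods: enough to hold all n
    jobs (one per period in the worst case) and to reach beyond the
    largest due date, where each period occupies t + m time units."""
    return max(n, math.floor(max(due_dates) / (t + m)) + 1)
```

For the example of Fig. 2 with n = 7 jobs, the job count dominates unless some due date lies far beyond 7(t + m).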

Fig. 2 A hypothetical solution

The proposed mixed integer nonlinear model is as follows:

$$\hbox{min} \sum\limits_{i} {\alpha_{i} E_{i} + \beta_{i} T_{i} }$$
(1)
$$\hbox{min} \,\mathop {\hbox{max} }\limits_{i,j,s} \left\{ {c_{ijs} } \right\}$$
(2)

Subject to:

$$\sum\limits_{j} {\sum\limits_{s} {x_{ijs} } } = 1\quad \forall i$$
(3)
$$\sum\limits_{i} {x_{ijs} } \le 1\quad \forall j,s$$
(4)
$$\sum\limits_{j} {\sum\limits_{i} {p_{i} } } x_{ijs} \le t\quad \forall s$$
(5)
$$\sum\limits_{i} {\sum\limits_{j} {x_{ijs} } } \le k\quad \forall s$$
(6)
$$c_{ijs} = (t + m)(s - 1)x_{ijs} + \left( {\sum\limits_{{i^{\prime } = 1}}^{n} {\sum\limits_{{j^{\prime } = 1}}^{j} {p_{{i^{\prime } }} } x_{{i^{\prime } j^{\prime } s}} } } \right)x_{ijs} \quad \forall i,j,s$$
(7)
$$T_{i} \ge \sum\limits_{j} {\sum\limits_{s} {c_{ijs} } } - d_{i} \quad \forall i$$
(8)
$$E_{i} \ge d_{i} - \sum\limits_{j} {\sum\limits_{s} {c_{ijs} } } \quad \forall i$$
(9)
$$x_{ijs} \in \left\{ {0,1} \right\},\quad c_{ijs} ,T_{i} ,E_{i} \ge 0$$
(10)

In the above formulation, Eq. (1) is the first objective of the proposed model, minimizing total earliness-tardiness, and Eq. (2) is the second objective, minimizing the makespan. Equation (3) guarantees that each job appears in exactly one position of one period. Constraints (4) guarantee that at most one job is located in each position of a period. Constraints (5) ensure that the total processing time of the jobs assigned to the same period does not exceed t. According to constraints (6), the number of jobs in each period is at most k. Equation (7) calculates the completion time of job i in the jth position of period s: its right-hand side adds the duration of the previous periods to the process times of the jobs up to and including position j in period s. For a given job i, tardiness and earliness are max{0, C_i − d_i} and max{0, d_i − C_i}, respectively. Constraints (8) and (9) provide lower bounds for tardiness (T_i) and earliness (E_i); these bounds are tight at optimality because both quantities are penalized in the first objective. According to (10), x_ijs is a binary variable and C_ijs, T_i, E_i are nonnegative variables.

Proposed solving algorithm

Due to the computational complexity of the problem, a multi-objective particle swarm optimization (MOPSO) algorithm is implemented. In addition to MOPSO, two other multi-objective optimization algorithms are used to compare the results.

MOPSO algorithm

Particle swarm optimization (PSO), first introduced in 1995 (Yousefi et al. 2013), is a population based stochastic optimization technique inspired by the social behavior of swarm members, such as bird flocking and fish schooling. PSO stores each particle's best previous solution, which keeps memory usage low and iterations fast. In addition, PSO has few parameters to adjust and has obtained remarkable results in the literature, which makes it a useful method to apply. Briefly, the algorithm works as follows: a particle is identified as the best particle in a neighborhood of particles based on its fitness, and all particles are then accelerated in the direction of this particle as well as in the direction of the best solutions they have found previously.

Consider a group of N particles searching for a global optimal solution in a D-dimensional space. The vectors X_i = (x_i1, x_i2, …, x_iD), V_i = (v_i1, v_i2, …, v_iD), and Pbest_i = (Pbest_i1, Pbest_i2, …, Pbest_iD) denote the position, velocity, and best personal position of each particle, respectively. Gbest is the leader position visited by the whole particle swarm population. In each iteration of PSO, the positions and velocities of the particles are updated according to Eqs. (11) and (12) and Fig. 3.

$$v^{t + 1}_{ij} = w \times v_{ij}^{t} + C_{1} \times r_{1} \times (Pbest^{t}_{ij} - x^{t}_{ij} ) + C_{2} \times r_{2} \times (leader^{t}_{ij} - x^{t}_{ij} )$$
(11)
$$x_{ij}^{t + 1} = x_{ij}^{t} + v_{ij}^{t + 1}$$
(12)
Fig. 3 Decoding and encoding solution

In Eqs. (11) and (12), \(v_{ij}^{t}\) and \(v_{ij}^{t + 1}\) are the velocities of particle i in the jth dimension at iterations t and t + 1, \(x_{ij}^{t}\) and \(x_{ij}^{t + 1}\) are the positions of particle i in the jth dimension at iterations t and t + 1, \(Pbest_{ij}^{t}\) is the best previous position of particle i in the jth dimension up to iteration t, and \(leader_{ij}^{t}\) (the global best position) is the leader position for particle i in the jth dimension at iteration t. \(C_{1}\) and \(C_{2}\) are the personal learning coefficient and the social learning factor, respectively. \(r_{1}\) and \(r_{2}\) are random numbers between 0 and 1, and \(w\) is an inertia factor usually in the range [0.8, 1.2].
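Under these definitions, one update of Eqs. (11) and (12) for a single particle can be sketched as follows (r1 and r2 are drawn once per particle here, matching the equations as written):

```python
import random

def pso_step(x, v, pbest, leader, w=1.0, c1=2.0, c2=2.0):
    """One velocity/position update per Eqs. (11)-(12) for a single
    particle; x, v, pbest, and leader are equal-length lists, one
    entry per dimension."""
    r1, r2 = random.random(), random.random()
    # Eq. (11): inertia + cognitive pull + social pull
    new_v = [w * vj + c1 * r1 * (pb - xj) + c2 * r2 * (ld - xj)
             for xj, vj, pb, ld in zip(x, v, pbest, leader)]
    # Eq. (12): move the particle by its new velocity
    new_x = [xj + vj for xj, vj in zip(x, new_v)]
    return new_x, new_v
```

The coefficient values above are illustrative defaults; the tuned values used in the experiments are reported in Table 4.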

Multi-objective PSO (MOPSO) is an extension of PSO proposed by Coello et al. (2004) for multi-objective optimization problems. MOPSO stores the non-dominated solutions in a ‘repository’, an external archive. The members of the repository do not dominate each other, and together they provide an approximation of the true Pareto frontier of the optimization problem. Repository members are updated by region based selection (a grid index). Furthermore, in MOPSO each particle chooses a solution from the repository as its leader by region based selection, instead of using a unique global best for all particles. The pseudocode of MOPSO is presented as follows:

figure a
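The repository update in the pseudocode above can be sketched as a non-dominated archive (two objectives, both minimized; the grid based pruning applied when the archive overflows is omitted for brevity):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_repository(repository, candidate):
    """Insert candidate unless some member dominates it; drop any
    members the candidate dominates, so the repository always stays
    mutually non-dominated."""
    if any(dominates(r, candidate) for r in repository):
        return repository
    return [r for r in repository if not dominates(candidate, r)] + [candidate]
```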

To encode a solution in the MOPSO algorithm, N random numbers in [0, 1] are used, where N is the number of jobs. Decoding is implemented by ordering these numbers increasingly: the jobs with the minimum and maximum numbers occupy the first and last positions, respectively. Figure 4 illustrates the process.
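A sketch of this random-key representation (function and variable names are illustrative):

```python
import random

def random_key_schedule(n, rng=random):
    """Encode a particle as n random keys in [0, 1] and decode it into
    a job sequence by sorting: the job with the smallest key comes
    first, and the job with the largest key comes last."""
    keys = [rng.random() for _ in range(n)]
    order = sorted(range(n), key=lambda i: keys[i])
    return keys, order
```

Because the keys are continuous, the standard PSO update of Eqs. (11) and (12) can be applied to them directly, and any particle always decodes to a valid permutation of the jobs.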

Fig. 4 Decoding and encoding solution

Non-dominated sorting genetic algorithm II (NSGA-II)

The NSGA-II algorithm was proposed by Deb, Pratap, Agarwal, and Meyarivan (Deb et al. 2002). It uses a ranking scheme called the fast non-dominated sorting approach, which requires a computational complexity of at most O(MN²) to rank the individuals, where M is the number of objectives and N is the population size.
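The fast non-dominated sorting step can be sketched as follows (minimization of all objectives; crowding distance, NSGA-II's other ranking ingredient, is omitted):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_non_dominated_sort(objs):
    """Deb et al.'s O(MN^2) ranking: front 0 holds the non-dominated
    points, front 1 those dominated only by front 0, and so on.
    objs is a list of objective tuples; returns lists of indices."""
    n = len(objs)
    dominated_by = [[] for _ in range(n)]  # S_p: points that p dominates
    counts = [0] * n                       # n_p: how many points dominate p
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if p == q:
                continue
            if dominates(objs[p], objs[q]):
                dominated_by[p].append(q)
            elif dominates(objs[q], objs[p]):
                counts[p] += 1
        if counts[p] == 0:
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in dominated_by[p]:
                counts[q] -= 1
                if counts[q] == 0:  # all its dominators are ranked
                    nxt.append(q)
        fronts.append(nxt)
        i += 1
    return fronts[:-1]
```

In this problem, each objective tuple would be (total earliness-tardiness, makespan) for one candidate schedule.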

The pseudocode of NSGA-II is presented as follows:

figure b

where P_m is the percentage of mutation and P_c is the percentage of crossover.

Non-dominated ranking genetic algorithms (NRGA)

Al Jadaan et al. (2008) presented NRGA by exchanging the selection strategy of NSGA-II from tournament selection to roulette wheel selection. The pseudocode of NRGA is presented as follows:

figure c
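The roulette wheel selection that distinguishes NRGA from NSGA-II can be sketched as follows; deriving each individual's wheel slice from its front index is an illustrative assumption here, since Al Jadaan et al.'s exact ranked-fitness formula may differ:

```python
import random

def roulette_select(ranked_population, rng=random):
    """Roulette-wheel selection used by NRGA in place of NSGA-II's
    tournament: individuals on better (lower-index) Pareto fronts get
    proportionally larger slices of the wheel. ranked_population is a
    list of (individual, front_index) pairs."""
    worst = max(front for _, front in ranked_population)
    weights = [worst - front + 1 for _, front in ranked_population]
    pick = rng.uniform(0, sum(weights))
    acc = 0
    for (ind, _), w in zip(ranked_population, weights):
        acc += w
        if pick <= acc:
            return ind
    return ranked_population[-1][0]  # guard against float round-off
```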

The PSO algorithm was developed for continuous search space optimization problems. Since we encode the search space of this problem continuously, PSO is an appropriate choice for this paper. Moreover, given the population based nature of MOPSO, we use two well-known population based algorithms, NRGA and NSGA-II, to verify and validate the MOPSO results.

Parameter tuning

Undoubtedly, the ability of the MOPSO, NRGA, and NSGA-II algorithms to attain better fitness function values depends significantly on their parameters. The main parameters of the PSO algorithm that should be calibrated are: the cognitive factor (C1), social factor (C2), swarm size (N-Particle), number of iterations (N-It), repository size (N-Rep), number of grids (N-Grid), and inertia factor (w). The main parameters of the NRGA and NSGA-II algorithms are: percentage of crossover (P_c), percentage of mutation (P_m), maximum iterations (Max-It), and population size (N-pop).

Therefore, in this section the parameters of all algorithms are calibrated using the Taguchi method (Fazel Zarandi et al. 2013). Instead of Fisher's factorial designs, Taguchi developed fractional factorial experiments (FFEs) to reduce the complexity of full factorial designs. The Taguchi method analyzes the results in two ways: (1) analysis of variance for experiments with a single replicate, and (2) the signal to noise ratio (S/N) for experiments with multiple replications, where noise (N) refers to uncontrollable factors and signal (S) to controllable factors. Since the approach with multiple replications performs better, the S/N ratio is applied in this research to analyze the solutions. For more information regarding the Taguchi method, see Taguchi et al. (2005).

To tune the proposed algorithms' parameters properly with the Taguchi method, a new response representing the overall quality of a solution is considered for the experiments. In Pareto based algorithms, two main goals are of interest: (1) convergence and (2) diversity. Mean ideal distance (MID) measures the convergence of the algorithms, i.e., how close the Pareto frontier members are to the ideal point (0, 0). Diversity measures the extension of the Pareto frontier, while spacing measures the standard deviation of the distances among solutions on the Pareto frontier. The Diversity and MID metrics, as representatives of the two multi-objective goals, are used to define the multi-objective coefficient of variation (MOCV) in Eq. (13) as follows:

$${\text{MOCV}} = \frac{\text{MID}}{\text{Diversity}}$$
(13)

By this definition, the two goals of Pareto-based algorithms are considered simultaneously as a single response, and more reliable outputs can be expected.
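For two objectives, the MID, diversity, and MOCV computations can be sketched as follows (the exact diversity metric used by the authors may differ; here it is taken as the distance between the extreme points of the front, an illustrative assumption):

```python
import math

def mid(front):
    """Mean ideal distance: average Euclidean distance of the Pareto
    points from the ideal point (0, 0); lower means better convergence."""
    return sum(math.hypot(f1, f2) for f1, f2 in front) / len(front)

def diversity(front):
    """Extent of the front: distance between its two extreme corners."""
    f1s = [f1 for f1, _ in front]
    f2s = [f2 for _, f2 in front]
    return math.hypot(max(f1s) - min(f1s), max(f2s) - min(f2s))

def mocv(front):
    """Multi-objective coefficient of variation, Eq. (13)."""
    return mid(front) / diversity(front)
```

A lower MOCV is better, since it combines small MID (good convergence) with large diversity (good spread).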

To conduct the Taguchi method more comprehensively, we implement three-level orthogonal array experiments with MOCV as the response. The purpose of these arrays is to find the level of each controllable parameter that yields the best response. Each problem in the orthogonal array is run three times and, since a solution with a lower MOCV is desired (smaller MID, larger diversity), the aim is to maximize the S/N ratio calculated by Eq. (14) (Sadeghi et al. 2014).

$$S/N = - 10\log \left( {\frac{1}{n}\sum\limits_{i = 1}^{n} {MOCV_{i}^{2} } } \right)$$
(14)

where MOCV_i, i = 1, 2, 3, is the response in the ith replication of the Taguchi experiment and n = 3 is the number of replications.
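A sketch of Eq. (14), assuming the conventional base-10 logarithm of Taguchi's smaller-is-better formula:

```python
import math

def signal_to_noise(mocv_values):
    """Smaller-is-better S/N ratio of Eq. (14) over the n replications:
    lower MOCV values give a higher (better) S/N."""
    n = len(mocv_values)
    return -10 * math.log10(sum(v * v for v in mocv_values) / n)
```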

Table 3 shows the experimental results of MOPSO with the L27 orthogonal array under different combinations of the parameters, where “1”, “2”, and “3” refer to the first, second, and third level of each parameter. Following Eq. (14), the table presents the S/N ratios as well. As can be seen from Fig. 5, the highest mean S/N is best; accordingly, Table 4 contains the optimal parameter values of the MOPSO algorithm. The same calculations are done for the NRGA and NSGA-II algorithms, and their optimal parameters, along with the corresponding levels, are presented in Table 5.

Table 3 S/N for three repetitions for MOPSO with orthogonal array L27
Fig. 5 The mean S/N plot for different levels of the MOPSO parameters

Table 4 Optimal parameters for MOPSO
Table 5 Optimal parameters for NSGA-II and NRGA

Results comparison and discussion

In this section, we study the performance of the algorithms (MOPSO, NRGA, and NSGA-II) on test problems; all algorithms are implemented in Matlab R2013a. It should be noted that, since there is no established benchmark library for this problem, all data in this paper have been generated randomly. However, the data are chosen to mirror real industrial conditions and can serve as a proxy for real cases. Twenty samples are generated, and six well-known metrics, MID, Max Spread, SNS, NPS, RAS, and Spacing (see Coello et al. 2007), are calculated for each algorithm and sample (see Tables 6, 7, 8). After obtaining the metric results, we evaluate and rank the algorithms with a multi-criteria decision making (MCDM) model: the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), one of the well-known multi-criteria decision making methods (Akhavan et al. 2015).

Table 6 The result metric for NSGA-II
Table 7 The result metric for NRGA
Table 8 The result metric for MOPSO

In this research, the metrics are used as criteria and algorithms are considered as alternatives. The goal is to prioritize alternatives based on criteria and select the algorithm which has the best performance.

In this model, first the average of the six metrics (criteria) over all problems is calculated, and the resulting matrix is normalized. After calculating the normalized decision matrix, the Euclidean distances of the alternatives from the positive and negative ideal solutions (\(d_{i}^{ + } ,d_{i}^{ - }\)) are calculated using Eqs. (15) and (16).

$$d_{i}^{ + } = \sqrt {\sum\nolimits_{j = 1}^{n} {(v_{ij} - v_{j}^{ + } } )^{2} } \quad i = 1,2, \ldots ,m$$
(15)
$$d_{i}^{ - } = \sqrt {\sum\nolimits_{j = 1}^{n} {(v_{ij} - v_{j}^{ - } } )^{2} } \quad i = 1,2, \ldots ,m$$
(16)

Finally, the relative closeness value of each alternative is calculated using Eq. (17). The alternative with the largest relative closeness value is selected as the best one.

$$Cl_{i} = \frac{{d_{i}^{ - } }}{{d_{i}^{ - } + d_{i}^{ + } }}\quad i = 1,2, \ldots ,m$$
(17)
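The TOPSIS steps of Eqs. (15)-(17) can be sketched as follows; the input is assumed to be the already weighted and normalized decision matrix, and `benefit[j]` marks criteria where larger is better (e.g., NPS) versus smaller is better (e.g., MID, Spacing):

```python
import math

def topsis(matrix, benefit):
    """Rank alternatives (rows) by relative closeness, Eqs. (15)-(17).
    matrix[i][j] is the weighted, normalized score of alternative i on
    criterion j; benefit[j] is True when larger values are better."""
    cols = list(zip(*matrix))
    pos = [max(c) if b else min(c) for c, b in zip(cols, benefit)]  # v+
    neg = [min(c) if b else max(c) for c, b in zip(cols, benefit)]  # v-
    closeness = []
    for row in matrix:
        d_pos = math.sqrt(sum((v - p) ** 2 for v, p in zip(row, pos)))  # Eq. (15)
        d_neg = math.sqrt(sum((v - q) ** 2 for v, q in zip(row, neg)))  # Eq. (16)
        closeness.append(d_neg / (d_neg + d_pos))                       # Eq. (17)
    return closeness
```

Here the rows would be the three algorithms and the columns the six performance metrics; the algorithm with the largest closeness value is ranked first.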

The average decision matrix, the normalized weighted decision matrix, the Euclidean distances of the alternatives, and their relative closeness values for MOPSO, NRGA, and NSGA-II are shown in Table 9. The results show that the MOPSO algorithm performs better than the others in solving the problems.

Table 9 The result of TOPSIS method for problem

Conclusion

This paper presented a bi-objective model for scheduling a JIT single machine with periodic preventive maintenance, in which total earliness-tardiness and makespan are minimized simultaneously. Furthermore, the proposed mixed integer model imposes a limit on the maximum number of allowable jobs in each period, which helps maintain product quality.

To solve the model, we proposed the NSGA-II, NRGA, and MOPSO algorithms. The parameters of these algorithms were tuned by the Taguchi method and, finally, six performance metrics were used to analyze the diversity and convergence of the proposed algorithms. Based on a TOPSIS analysis of these metrics, we showed that MOPSO outperforms the other algorithms on the metrics and exhibits better uniformity within the solutions of its Pareto curves.