Abstract
The identical parallel machine scheduling problem with controllable processing times is investigated in this research. Our focus is motivated by the adoption of the just-in-time (JIT) philosophy, with the objective of simultaneously minimizing total weighted tardiness and earliness as well as job compression/expansion costs. The optimal amounts of job compression/expansion and the job sequence on each machine are also determined. It is assumed that job processing times can vary within a given interval, i.e., each job may be compressed or expanded in return for a compression/expansion cost. A mixed integer linear programming (MILP) model for the considered problem is first proposed; thereafter, the optimal amounts of compression and expansion of job processing times in a known sequence are determined via a parallel net benefit compression–net benefit expansion (PNBC–NBE) heuristic. An intelligent water drops (IWD) algorithm, a recent swarm-based nature-inspired optimization method, is also adapted to solve this multi-criteria problem. A heuristic method and three metaheuristic algorithms are then employed to solve small- and medium- to large-size sample-generated instances. Computational results reveal that the proposed IWD-NN outperforms the other techniques and can reliably solve such complicated problems with satisfactory results.
Introduction
One of the main tasks in production scheduling is arranging jobs to meet the delivery dates demanded by clients, especially when they adopt a just-in-time (JIT) policy. Owing to the wide acceptance of the JIT philosophy in recent years, due date requirements have been studied extensively in scheduling problems, especially those with earliness–tardiness penalties. The JIT philosophy seeks to identify and eliminate waste such as defective products, waiting time, excess processing, inventory, transportation and movement (Wang 2006). Completing jobs before their due dates may result in additional costs, such as deterioration of perishable goods and holding costs for finished goods, while tardy jobs incur tardiness penalties such as lost sales, backlogging costs and loss of goodwill, or may require an additional stage in the production line. In such an environment, both earliness and tardiness are disadvantageous to manufacturers and customers and should be minimized.
The greater part of the research on earliness/tardiness (E/T) scheduling problems deals with a single machine, although the parallel machine scheduling problem is important and widely applicable. The majority of earlier studies on parallel machine scheduling have dealt with performance criteria such as mean flow time, mean tardiness, makespan and mean lateness. In line with the current trend toward the JIT policy, these traditional performance measures are no longer adequate. Instead, the emphasis has shifted toward E/T scheduling, taking earliness as well as tardiness into account (Baker and Scudder 1990). Baker and Scudder (1990) presented the first survey on E/T scheduling problems. The E/T problem has also been proven to be NP-hard (Sun and Wang 2003; Hall and Posner 1991).
In contrast to real-life situations, most classical scheduling models assume that job processing times are fixed, whereas in practice processing times depend on the amount of resources allocated, such as budget, facility capabilities and manpower. CNC machines are a widely used example in which job processing times can be controlled by setting the cutting speed and/or the feed rate. The idea of controllable processing times originates from project planning and control. The assumption is justified in situations where jobs can be accomplished in shorter or longer durations by increasing or decreasing the resources allocated to individual jobs. In fact, controllable processing time means that each job can be processed in a shorter or longer time, depending on its effect on the objective function, by increasing or reducing available resources such as equipment, energy, financial budget, subcontracting, overtime, fuel or human resources. When job processing times are controllable, the selected processing times affect both scheduling performance in the production process and manufacturing costs. For more applications of controllable processing times, the reader may refer to Kayvanfar et al. (2013).
Pinedo (2008) showed that parallel machine problems fall into the NP-hard category in many situations; consequently, a “good” solution obtained by a heuristic or metaheuristic algorithm in a shorter computational time is often attractive. Three major classes of metaheuristic techniques can be distinguished based on the way in which solutions are manipulated: (1) local search metaheuristics, which work on a single complete solution and iteratively enhance it via small changes; (2) constructive metaheuristics, which build solutions from their constituent parts; and (3) population-based metaheuristics, which iteratively combine solutions into new, higher quality ones. These categories are not mutually exclusive, however, and many metaheuristic techniques unite ideas from different categories; these are called hybrid metaheuristics (Sorensen and Glover 2013). A local search starts with a complete solution and attempts to improve it over time, while constructive algorithms build solutions from scratch by piecemeal appending of components to initially empty solutions. A swarm-based nature-inspired optimization method, the intelligent water drops (IWD) algorithm, is adapted in this paper to identical parallel machines. The IWD is based on the dynamics of river systems and the actions and reactions that occur among water drops in rivers, and it is one of the recently proposed metaheuristics in the field of swarm intelligence (Shah-Hosseini 2007). Shah-Hosseini (2012) demonstrated that the IWD is naturally suited to combinatorial optimization problems, and it has already been shown to possess the convergence-in-value property (Shah-Hosseini 2008).
In this research, we study the earliness/tardiness problem on identical parallel machines in which jobs have controllable processing times. Our objective is to simultaneously determine (1) the job sequence and (2) the optimal amounts of compression/expansion of job processing times on each machine (when the sequence is given), so that the total incurred costs, including tardiness and earliness penalties as well as compression/expansion costs, are minimized. The IWD algorithm is successfully customized to solve the considered problem. To the best of the authors’ knowledge, this is the first application of the IWD algorithm to parallel machine scheduling in which job processing times are controllable.
The rest of the paper is organized as follows: the next section surveys the related literature. The problem description is presented in Sect. 3. Section 4 details the intelligent water drops algorithm. In Sect. 5, we explain how the algorithm is applied to identical parallel machines. Experimental results are presented in Sect. 6, and finally, Sect. 7 gives conclusions and directions for future study.
Literature review
There are several papers in which earliness and tardiness criteria are studied simultaneously on parallel machines. Among them, Sivrikaya and Ulusoy (1999) developed a genetic algorithm (GA) approach to attack the scheduling of an independent set of jobs on parallel machines with earliness and tardiness penalties. Yi and Wang (2003) studied a model for scheduling grouped jobs on identical parallel machines with the objective of minimizing total tardiness and earliness penalties. Sun and Wang (2003) studied the problem of scheduling \(n\) jobs with a common due date and proportional early and tardy penalties on \(m\) identical parallel machines; they showed that the problem is NP-hard and proposed a dynamic programming algorithm to solve it. Omar and Teo (2006) considered scheduling jobs on a set of identical parallel machines with distinct due dates, processing times and early due date restrictions. Su (2009) addressed the identical parallel machine scheduling problem in which the total earliness and tardiness about a common due date are minimized subject to minimum total flow time. Kedad-Sidhoum et al. (2008) addressed the parallel machine scheduling problem in which jobs have distinct due dates with earliness and tardiness costs, and proposed new lower bounds for the problem. M’Hallah and Al-Khamis (2012) minimized the total weighted earliness and tardiness on parallel machines using an exact mixed integer programming (MIP) model for small-sized instances, whereas larger instances were solved by means of hybrid algorithms. Recently, Polyakovskiy and M’Hallah (2014) studied the weighted earliness–tardiness parallel machine problem where jobs have different processing times and distinct due dates; a MIP model was proposed, and a deterministic heuristic based on multi-agent systems was then employed to solve the problem.
From both theoretical and practical points of view, there is a considerable link between early/tardy (E/T) scheduling problems and the notion of controllable processing times: by controlling job processing times, earliness and tardiness can be decreased and the system therefore moves toward the JIT philosophy. Vickson (1980) conducted one of the first studies on controllable processing time scheduling, with the objective of minimizing the total flow time as well as the total processing cost incurred by compressing job processing times. Research on scheduling problems with controllable processing times and linear cost functions up to 1990 is surveyed by Nowicki and Zdrzalka (1990). Wan (2007) studied a non-preemptive single machine common due window scheduling problem where job processing times are controllable with linear costs and the due window is movable; the objective was to find a job sequence, a processing time for each job and a position of the common due window so as to minimize the total cost of weighted earliness/tardiness and processing time compression. A complete survey on scheduling with controllable processing times was provided by Shabtay and Steiner (2007). Kayvanfar et al. (2013) addressed a single machine scheduling problem with controllable processing times in which jobs can be either compressed or expanded up to a given bound, at a cost proportional to the reduction or increase. Yang et al. (2014) studied a parallel machine scheduling problem considering controllable processing times and rate-modifying activities simultaneously, assuming that job processing times can only be compressed by allocating a greater amount of a common resource to a job.
Their objective was to determine the optimal positions of the rate-modifying activities, the optimal job compressions and the optimal schedule so as to minimize the total incurred cost, which depends on the total completion time and the total job compression.
A bicriterion preemptive parallel machine scheduling problem was studied by Nowicki and Zdrzałka (1995) for jobs whose processing costs are linear functions of variable processing times. Gürel and Akturk (2007) studied identical parallel CNC machines with controllable processing times under a time/cost trade-off. In the non-identical parallel machine environment, Alidaee and Ahmadian (1993) studied this category of scheduling with linear compression cost functions to minimize (1) the total compression cost plus the total flow time, and (2) the total compression cost plus the weighted sum of earliness and tardiness penalties; they formulated both problems as transportation problems. In another work on non-identical parallel machines, Gürel et al. (2010) considered a setting in which processing times can be controlled at a certain compression cost; an anticipative approach was applied to form an initial schedule so that the limited capacity of production resources is used more effectively. Cheng et al. (1996) addressed an unrelated parallel machine scheduling problem with unrestrictively large due dates in which job processing times can only be compressed at an extra cost. Jansen and Mastrolilli (2004) studied the identical parallel machine makespan problem with controllable processing times in which each job’s processing time may only be compressed, in exchange for a compression cost. Shakhlevich and Strusevich (2008) provided a unified approach to preemptive scheduling problems with uniform parallel machines and controllable processing times; they demonstrated that the single-criterion problem of minimizing total compression cost, subject to the constraint that all due dates be met, can be formulated as maximizing a linear function over a generalized polymatroid. Li et al. (2011) addressed makespan minimization on identical parallel machines in which the processing times are linearly decreasing functions of the consumed resource. Turkcan et al. (2009) considered parallel CNC machines with the aim of minimizing manufacturing cost and total weighted earliness and tardiness when job processing times are controllable; in their study the parts have job-dependent earliness and tardiness penalties, due dates are distinct and idle time is allowed, but decompression costs are not considered. Leyvand et al. (2010) addressed parallel machine scheduling with controllable processing times, aiming to maximize the weighted number of jobs completed exactly at their due dates while minimizing the total resource allocation cost. Drobouchevitch and Sidney (2011) considered scheduling \(n\) identical non-preemptive jobs with a common due date on \(m\) uniform parallel machines; their objective was to determine an optimal value of the due date and an optimal allocation of jobs onto machines so as to minimize a total cost function comprising earliness, tardiness and due date values. Recently, Kayvanfar et al. (2014) studied unrelated parallel machines with the goal of minimizing earliness and tardiness penalties when job processing times are controllable, assuming that no idle time is allowed once a machine starts processing and that job due dates are distinct. To tackle that problem, a MIP model was proposed; in addition, two well-known population-based metaheuristics, the genetic algorithm (GA) and the imperialist competitive algorithm (ICA), as well as a hybrid algorithm called IIGNN comprising GA, ICA and ISETP (a heuristic technique), were employed for solving medium- to large-size instances.
The IWD algorithm has also been tested on several standard optimization benchmark problems and has proven able to find good solutions. Among such problems is the traveling salesman problem (TSP), solved by Shah-Hosseini (2007, 2009). Air robot path planning was addressed by Duan et al. (2008, 2009) in different environments. The \(n\)-queens puzzle and the multiple knapsack problem (MKP) were tackled in Shah-Hosseini (2009), yielding optimal and near-optimal solutions. Niu et al. (2012) used the IWD for solving job-shop scheduling problems and, in addition, proposed five schemes to improve the original IWD algorithm. Niu et al. (2013) addressed multi-objective job-shop scheduling (MOJSS) problems using the IWD, with the aim of finding the best compromise solutions (the Pareto non-dominated set) with respect to multiple criteria, i.e., makespan, tardiness and mean flow time of the schedules.
Besides the aforesaid areas, the IWD algorithm has been applied to optimization problems in various fields of research. It has been demonstrated that the IWD yields higher quality, or at least comparable, results relative to other well-known optimization techniques such as ant colony optimization (ACO) and particle swarm optimization (PSO) (Niu et al. 2013). Among such works, one could mention Kamkar et al. (2010), who solved the vehicle routing problem (VRP) in the field of transportation, distribution and logistics. Abbasy and Hosseini (2008) tackled the economic dispatch and emission dispatch problems in power systems. The selection of textural features for developing a precision irrigation system was addressed by Hendrawan and Murase (2011). Dariane and Sarani (2013) applied the IWD to the reservoir operation problem, a challenging problem in water resource systems; they concluded that the IWD algorithm not only finds relatively high-quality solutions but also overcomes the computational time deficiencies inherent in the ACO algorithm. The latter point is very important in large-sized instances with a great number of decision variables, where computational time is a limiting factor for the model.
Problem formulation
Suppose there is a set of \(n\) independent jobs \(N = \{J_{1}, J_{2},\ldots , J_{n}\}\) to be processed on \(m\) identical parallel machines. All machines are capable of processing all jobs, and each machine can handle at most one job at a time. All jobs are available at time zero, and each job is ultimately processed by exactly one machine. The normal processing time can be compressed by an amount \(x_{j}\) or expanded by an amount \(x_{j}^{\prime }\), incurring a unit cost of compression or expansion, respectively. Obviously, processing a job at its normal processing time, \(p_{j}\), incurs no additional processing cost. No idle time may be inserted into the schedule once a machine starts processing. Since all machines are identical, the processing time of each job is the same on every machine, and a job can be processed by any free machine. Preemption is not allowed, and the numbers of jobs and machines are fixed. Machine setup times and transportation times between machines are negligible. Machines are available throughout the scheduling period, i.e., no breakdowns are assumed.
A mixed integer linear programming (MILP) model for the considered problem is proposed in this section so as to minimize the total incurred costs, i.e., tardiness–earliness penalties as well as job compression/expansion costs. The processing times, due dates and maximum amounts of job compression and expansion are assumed to be integers.
Notations

Subscripts

\(N\): number of jobs
\(M\): number of machines
\(i, j, k\): job indices (\(i, j, k = 1, 2, \ldots, N\))
\(m\): machine index (\(m = 1, 2, \ldots, M\))

Input parameters

\(p_{j}\): normal processing time of job \(j\)
\(p_{j}^{\prime }\): crash (minimum allowable) processing time of job \(j\)
\(p_{j}^{\prime \prime }\): expanded (maximum allowable) processing time of job \(j\)
\(c_{j}\): unit cost of compression of job \(j\)
\(c_{j}^{\prime }\): unit cost of expansion of job \(j\)
\(\alpha _j\): earliness unit penalty of job \(j\)
\(\beta _j\): tardiness unit penalty of job \(j\)
\(d_{j}\): due date of job \(j\)
\(M\): a big positive arbitrary number

Decision variables

\(C_{j}\): completion time of job \(j\)
\(y_{jkm}\): 1 if and only if jobs \(j\) and \(k\) are processed on machine \(m\) and job \(j\) precedes job \(k\); 0 otherwise
\(E_{j}\): earliness of job \(j\), \(E_{j} = \max \{0, d_{j} - C_{j}\}\)
\(T_{j}\): tardiness of job \(j\), \(T_{j} = \max \{0, C_{j} - d_{j}\}\)
\(x_{j}\): compression amount of job \(j\), \(0 \le x_{j} \le L_{j}\)
\(x_{j}^{\prime }\): expansion amount of job \(j\), \(0 \le x_{j}^{\prime } \le L_{j}^{\prime }\)
\(L_{j}\): maximum compression amount of job \(j\), \(L_{j} = p_{j} - p_{j}^{\prime }\)
\(L_{j}^{\prime }\): maximum expansion amount of job \(j\), \(L_{j}^{\prime } = p_{j}^{\prime \prime } - p_{j}\)
Consider a non-preemptive parallel machine scheduling problem with \(N\) jobs on \(M\) identical parallel machines simultaneously available at time zero, such that \(N > M\). Associated with each job \(j\), \(j = 1,\ldots ,N\), are several parameters: \(p_{j}\), the normal processing time of job \(j\), which incurs no additional processing cost; \(d_{j}\), the due date of job \(j\); \(\beta _{j}\), the tardiness penalty per unit time if job \(j\) completes after its due date; and \(\alpha _{j}\), the earliness penalty per unit time if job \(j\) completes before \(d_{j}\). A unit cost of compression (\(c_{j}\)) or expansion (\(c'_{j}\)) is incurred if the processing time is reduced or increased by one time unit, respectively. Every job can be compressed or expanded up to its maximum allowed reduction or expansion level, \(L_{j}\) (\(L_{j} = p_{j} - p'_{j}\)) or \(L'_{j}\) (\(L'_{j} = p''_{j} - p_{j}\)), respectively. Note that each job may be either compressed or expanded, not both. The goal is to determine the job sequence as well as the optimal amounts of compression/expansion of job processing times on each machine simultaneously, so that the total earliness and tardiness penalties as well as the compression/expansion costs of jobs are minimized.
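The cost structure just described can be sketched in code. The following minimal Python sketch is an illustration only: the dictionary-based data layout, the function name `total_cost`, and the back-to-back timing rule (each machine starts at time zero with no inserted idle time, as assumed above) are our own conventions, not the paper's implementation.

```python
def total_cost(machines, p, d, alpha, beta, c, c2, x, x2):
    """Total earliness/tardiness plus compression/expansion cost.

    machines: list of job sequences, one per machine.
    p, d, alpha, beta, c, c2: per-job parameters (dicts keyed by job id).
    x, x2: chosen compression / expansion amounts per job.
    """
    cost = 0.0
    for seq in machines:
        t = 0.0  # machine clock; jobs run back to back from time zero
        for j in seq:
            t += p[j] - x[j] + x2[j]       # actual (controlled) processing time
            earliness = max(0.0, d[j] - t)  # E_j = max{0, d_j - C_j}
            tardiness = max(0.0, t - d[j])  # T_j = max{0, C_j - d_j}
            cost += (alpha[j] * earliness + beta[j] * tardiness
                     + c[j] * x[j] + c2[j] * x2[j])
    return cost
```

For instance, with one machine processing two jobs at their normal times, only jobs finishing off their due dates contribute to the cost.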
The mathematical model
Subject to
The objective function consists of four components: earliness penalties, tardiness penalties, and the costs of compressing and expanding jobs, which depend on the amounts of compression/expansion. Equality (2) ensures that dummy job 0 is processed prior to any other job on each machine, after which the other jobs can be processed. Constraint (3) ensures that at most one job immediately precedes another job on the same machine, and constraint (4) that at most one job immediately follows another job on each machine. In other words, constraints (3) and (4) together make sure that on the same machine there is only one job before and one job after any given job. Equation (5) enforces that every job is processed on some machine. Constraint (6) ensures that if both variables \(y_{jkm}\) and \(y_{kim}\) equal 1, then \(y_{ijm} = 0\); for instance, if job 2 is processed before job 3 and job 5 is processed after job 3, then job 2 obviously cannot be processed after job 5. This constraint thus plays a complementary role in producing correct job sequences. Constraint (7) defines the earliness and tardiness of job \(j\); a job can only be early or tardy, not both, which is why \(T_{j}\) and \(E_{j}\) cannot both take positive values simultaneously. Constraints (8) and (9) together guarantee that no idle time is inserted into the schedule once a machine starts processing and that no preemption occurs; they also ensure that machine \(m\) processes at most one job at any time. Constraints (10)–(13) bound the compression/expansion amount of each job. Constraints (14) and (15) impose the binary and non-negative integer requirements on the decision variables.
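Consistent with the four components and the E/T definitions above, the objective can be sketched as follows (a reconstruction in the paper's notation, not the numbered formulation itself):

\[
\min \; \sum_{j=1}^{N}\left(\alpha_j E_j + \beta_j T_j + c_j x_j + c'_j x'_j\right)
\]

with the standard linearization of constraint (7) and the bounds (10)–(13):

\[
E_j \ge d_j - C_j,\qquad T_j \ge C_j - d_j,\qquad E_j, T_j \ge 0,\qquad 0 \le x_j \le L_j,\qquad 0 \le x'_j \le L'_j .
\]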
Intelligent water drops algorithm
The intelligent water drops (IWD) algorithm is a population-based nature-inspired optimization algorithm inspired by the movement of natural water drops flowing in rivers. As pointed out earlier, the IWD can be employed to solve a large variety of optimization problems (Shah-Hosseini 2009). Although there are many kinds of obstacles in the way, water drops discover very good paths to a river's destination. A natural river often finds good paths among the many possible paths from its source (starting place) to its destination, which is often a sea, an ocean or a lake. These good paths (solutions) result from the actions and reactions occurring among the water drops, and between the water drops and the riverbed. Water drops affect their surrounding environment as they move through the riverbed toward the final destination, driven by the gravitational force of the earth. If there were no obstacles or barriers in the water drops' path, they would follow a direct path toward the destination, which is undoubtedly the shortest path from the starting place to the destination. Since in real situations there are numerous twists, turns and other types of obstacles in the river's path, the real path may differ from this ideal one. The constructed path, however, tends to be optimal with respect to the distance to the destination as well as the environmental constraints (Duan et al. 2008, 2009).
Each IWD has two main attributes: (1) the amount of soil it currently carries, soil(IWD), and (2) its current velocity, velocity(IWD). The velocity enables water drops to transfer soil from one place to another; faster water drops collect and move more soil from the riverbed. Parts of the riverbed from which soil is removed become deeper and attract more water, since they can hold a greater amount of it. The removed soil carried by the water drops is unloaded in slower parts of the riverbed. The amount of soil on each path affects the IWDs' soil collection and movement: each IWD holds soil and removes soil from its path as it moves through the environment. On a path carrying less soil, a water drop moves faster and can therefore gather more soil from that path, while a path with more soil has the opposite effect. As mentioned above, the amount of soil removed from a path is determined by the velocity of the IWD flowing over it. Conversely, the IWD velocity is also changed by the path: a path with little soil increases the IWD velocity more than a path with a remarkably large amount of soil.
In an environment there are often many paths from a given source to a desired destination, and the position of the destination may be known or unknown; in fact, the IWD environment depends on the problem at hand. If the position of the destination is known, the aim is evidently to discover the shortest (best) path from the source to the destination. When the destination is unknown, the goal is to discover the optimum destination in terms of cost or any other appropriate measure for the problem at hand. In the IWD algorithm, the movement of IWDs from source to destination proceeds in discrete finite-length time steps. When an IWD moves from one location to the next, its velocity increment is nonlinearly proportional to the inverse of the soil between the two locations. The IWD's soil increases as well, since some soil is removed from the path; this soil increment is inversely proportional to the time needed for the IWD to pass from its current location to the next one. The travel time between two locations is computed according to the physics of linear motion: the time taken is proportional to the distance between the two locations and inversely proportional to the velocity of the IWD.
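The velocity, time and soil updates described in this paragraph can be sketched as follows. This is a minimal illustration of the standard IWD update rules (velocity increment nonlinearly proportional to the inverse of the edge soil, travel time from linear motion, soil removal inversely proportional to the squared time); the constants `A_V, B_V, C_V, A_S, B_S, C_S` and the function name `move` are assumptions for illustration.

```python
# Illustrative static parameters (typical small positive constants).
A_V, B_V, C_V = 1.0, 0.01, 1.0   # velocity-update constants
A_S, B_S, C_S = 1.0, 0.01, 1.0   # soil-update constants

def move(velocity, drop_soil, path_soil, distance):
    """Update an IWD crossing one edge; returns the new velocity, the
    drop's new soil load, and the soil removed from that edge."""
    # Velocity grows nonlinearly with the inverse of the edge's soil.
    velocity += A_V / (B_V + C_V * path_soil ** 2)
    # Linear motion: time is distance over velocity.
    time = distance / max(velocity, 1e-9)
    # Soil removed is inversely proportional to the squared travel time.
    delta_soil = A_S / (B_S + C_S * time ** 2)
    drop_soil += delta_soil
    return velocity, drop_soil, delta_soil
```

A drop crossing a low-soil edge thus speeds up substantially and picks up a correspondingly larger amount of soil.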
In the IWD algorithm, the amount of soil on a path is interpreted as its hardness: a branch of the path is less desirable if it contains more soil than the other branches. This branch selection is executed via a probabilistic function that is inversely related to soil, so a water drop prefers an easier path to a harder one when it must choose among several branches on the way from the source to the destination. The IWD algorithm thus applies a parameterized probabilistic model to build solutions, and the parameter values are updated to increase the probability of constructing high-quality solutions.
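The inverse-probabilistic branch selection can be sketched as a roulette wheel over soil-based weights. The shift by the minimum soil (the role usually played by a function g(soil) in the IWD literature) and the small constant `EPS` follow the common formulation; the exact values and the function name `choose_next` are assumptions.

```python
import random

EPS = 0.0001  # avoids division by zero for soil-free edges

def choose_next(current, candidates, soil):
    """Pick the next node with probability inversely related to edge soil.

    soil: dict mapping (node, node) edges to their soil amount."""
    soils = [soil[(current, c)] for c in candidates]
    shift = min(0.0, min(soils))                  # make all soils non-negative
    weights = [1.0 / (EPS + s - shift) for s in soils]
    r = random.uniform(0.0, sum(weights))         # roulette-wheel draw
    acc = 0.0
    for cand, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return cand
    return candidates[-1]
```

Edges holding little soil receive far larger weights, so "easier" branches dominate the selection without the harder ones being excluded entirely.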
As pointed out earlier, the IWD has been proven to have the convergence-in-value property (Shah-Hosseini 2008), which means the algorithm can find a high-quality solution if the number of iterations is sufficiently large. The IWD algorithm is therefore appropriate for a large variety of practical problems such as the VRP, the TSP, robot path planning, and scheduling problems such as the parallel machine problem investigated in this research. Three reasons underline the necessity and significance of the IWD algorithm:

(I) It provides good-quality solutions using average parameter values.

(II) It has a higher convergence speed compared to other techniques.

(III) It is flexible in dynamic environments and incorporates pop-up threats easily.
As the IWDs flow through an environment, both their velocity and soil may change. The environment on which the IWDs move is assumed to be discrete and consists of \(N\) nodes (all possible assignments with \(k\) identical machines and \(w\) jobs) as well as \(E\) edges, forming a complete graph \((N, E)\). In this graph, each IWD travels from one node to another. Every two nodes are connected by an edge that holds a given amount of soil, which may vary according to the activities of the IWDs flowing in the environment. Each IWD gradually constructs its solution by traveling over the graph nodes along the edges until the solution is complete. An iteration of the algorithm is carried out by completing all IWDs' solutions. The set of elite (best) solutions of each iteration, \(S_{e}\), is then used to update the best solution attained so far, GBS. The amount of soil on the edges of \(S_{e}\) is also reduced according to the goodness of the solutions. The randomly generated set of IWDs (\(S_{r}\)) together with \(S_{e}\), altogether called \(S_{po}\) (of size \(N_{po}\)), is selected as the initial solution set of the local search method employed in this study. The soil of \(S_{e}\) and GBS are subsequently updated again. The algorithm then begins a new iteration with new IWDs, but with the same soils on the graph paths, and the whole process is repeated. Two stopping criteria can be defined in the IWD algorithm: (1) the GBS reaches the expected quality, or (2) the maximum number of iterations (MaxItr) is reached. There are also two types of parameters in the IWD: "static parameters" stay constant throughout the algorithm's run, while "dynamic" parameters are re-initialized after each iteration.
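The iteration structure described above (all drops construct solutions, the iteration-best solution reinforces its edges, GBS is updated, dynamic parameters are re-initialized) can be sketched as a skeleton. `construct_solution`, `quality` and `reinforce` are hypothetical placeholders for the problem-specific parts; the skeleton only fixes the control flow, not the paper's full procedure (e.g., the embedded local search is omitted).

```python
def iwd(construct_solution, quality, reinforce, n_drops=10, max_itr=100, rho=0.9):
    """Skeleton of one IWD run; returns the global-best solution (GBS).

    construct_solution(): builds one complete solution (one drop's tour).
    quality(sol): higher is better.
    reinforce(sol, rho): lowers soil on the elite solution's edges.
    """
    gbs, gbs_q = None, float("-inf")
    for _ in range(max_itr):
        # Every drop builds a complete solution in this iteration.
        drops = [construct_solution() for _ in range(n_drops)]
        elite = max(drops, key=quality)        # iteration-best set S_e (size 1 here)
        reinforce(elite, rho)                  # soil update on elite edges
        if quality(elite) > gbs_q:             # update the global best
            gbs, gbs_q = elite, quality(elite)
        # Dynamic parameters (drop soil / velocity) would be re-initialized here.
    return gbs
```

With trivial stand-ins for the three callbacks, the skeleton simply tracks the best value sampled over all iterations.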
The scheme used to choose the values of the static parameters of the IWD algorithm is the same as in Shah-Hosseini (2008) unless specified otherwise. The main steps of the intelligent water drops algorithm are presented in Fig. 1.
Intelligent water drops algorithm on identical parallel machines
An intelligent water drops algorithm is applied to the identical parallel machine scheduling problem in this research. The objective is to simultaneously assign and sequence \(n\) jobs on \(m\) machines so as to minimize the total earliness and tardiness of jobs as well as the compression/expansion cost of job processing times. Two main decisions must be made to solve the considered problem via the IWD algorithm: (1) determining the assignment of jobs to machines and (2) determining the job order, so as to obtain a good solution. The following subsections detail the steps outlined in Fig. 1.
Initialization
All parameters, both static and dynamic ones such as each edge's soil and each IWD's velocity, are first initialized. In the IWD algorithm, all IWDs receive the same initial velocity, and all edges are set to the same amount of initial soil.
Solution structure
To illustrate the construction of a feasible solution on the parallel machines, jobs are represented as squares in this study (which could also be regarded as node clusters). Each square has \(m\) nodes corresponding to the machines on which the job could be processed. Although the nodes within a square are not connected to each other, every node in a square is connected to all nodes in the other squares. A dummy node, which can be regarded as the start/end point of a water drop's tour, is defined and connected to every other node in the graph. The defined graph thus has \(n\) squares, \(m \times n\) nodes and one dummy node, and the total number of edges is \((nm)^{2}\). To build a solution, an artificial water drop travels to every square [i.e., visits exactly one node (machine) in that square] and then returns to the dummy node, which completes the tour on the defined graph. In such a construction, all jobs are assigned to machines and each job is processed on exactly one machine. The tour spans \(n + 1\) edges. Once a water drop completes a tour, the order in which the squares are visited yields the order of job assignment to the machines, and the node visited in each square identifies the machine to which the job is assigned. Figure 2b demonstrates such a solution construction for a schematic instance with six jobs and four machines.
The solution construction of the parallel machine scheduling problem in IWD (Fig. 2a) proceeds as follows. The water drop's tour (depicted by the dotted line) is (0), (1, 3), (2, 2), (6, 3), (5, 4), (4, 1), (3, 1) and (0), where the pair \((i, k)\) denotes (job, machine). The corresponding solution is: jobs 4 and 3 are processed on machine 1, in that order; only job 2 is assigned to machine 2; jobs 1 and 6 are processed on machine 3, in that order; finally, job 5 is processed on machine 4. This arrangement can be represented in other forms as well, i.e., the water drop's route may change provided the sequence of jobs processed on the same machine is preserved. For instance, suppose the water drop's tour is (0), (1, 3), (5, 4), (6, 3), (2, 2), (4, 1), (3, 1) and (0). The sequence of jobs 4 and 3 on machine 1 is unchanged, but job 5 is now processed on machine 4 earlier than in the previous tour. This interchangeability of sequences largely contributes to better solutions: by constituting different seeds, the embedded iterative local search is able to explore a wide area of the search space, which is a significant advantage.
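Decoding a completed tour into per-machine job sequences can be sketched as follows. This is a hypothetical illustration, not the paper's implementation; the tour is taken as a list of (job, machine) pairs in visiting order, matching the example tour above.

```python
# Decode an IWD tour into per-machine job sequences (illustrative sketch).

def decode_tour(tour, n_machines):
    """Return, for each machine, the ordered list of jobs assigned to it."""
    schedule = {k: [] for k in range(1, n_machines + 1)}
    for job, machine in tour:          # visiting order = assignment order
        schedule[machine].append(job)
    return schedule

# The example tour from the text: (1,3), (2,2), (6,3), (5,4), (4,1), (3,1)
tour = [(1, 3), (2, 2), (6, 3), (5, 4), (4, 1), (3, 1)]
print(decode_tour(tour, 4))
# machine 1 gets jobs 4 then 3; machine 2 gets job 2;
# machine 3 gets jobs 1 then 6; machine 4 gets job 5
```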
Selection of the next node
At each step, each IWD moves from the current node \(i\) to a next node \(j\). This is carried out using a roulette-wheel mechanism: the next node is selected randomly to build a solution (path), as described in Sect. 5.1, according to the probability distribution \(\mathrm{Pro}_i^\mathrm{IWD} (j)\):
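The displayed probability formula is not reproduced in this extraction. A sketch following the standard IWD selection rule of Shah-Hosseini (2008), where edges with less soil receive proportionally larger selection probability, might look like this (parameter value and names are illustrative):

```python
import random

EPS = 0.0001  # small positive constant of the standard IWD rule

def select_next_node(current, unvisited, soil):
    """Roulette-wheel choice: less soil on an edge -> higher probability."""
    min_soil = min(soil[current, j] for j in unvisited)

    def g(j):
        # shift soils so they are non-negative before inverting
        s = soil[current, j]
        return s if min_soil >= 0 else s - min_soil

    weights = [1.0 / (EPS + g(j)) for j in unvisited]
    total = sum(weights)
    r, acc = random.random() * total, 0.0
    for j, w in zip(unvisited, weights):
        acc += w
        if r <= acc:
            return j
    return unvisited[-1]
```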
Update velocity
The velocity of each IWD moving from node \(i\) to node \(j\) on the disjunctive graph is updated as follows:
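The displayed update equation is not reproduced here; a sketch of the standard IWD velocity update of Shah-Hosseini (2008), which gives larger velocity gains on edges with less soil, is (parameter values are typical, not taken from Table 2):

```python
A_V, B_V, C_V = 1.0, 0.01, 1.0   # static velocity parameters (illustrative)

def update_velocity(velocity, soil_ij):
    """Less soil on edge (i, j) -> larger velocity increase for the IWD."""
    return velocity + A_V / (B_V + C_V * soil_ij ** 2)
```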
This update affects the calculation of \(\Delta \mathrm{soil}_{i,j}^t\) in the next step, which eventually gives some paths a higher priority in future node selections.
Compute delta soil
For each IWD, the amount of soil loaded from the edge \((i,j)\) is computed as follows:
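The displayed formula is not reproduced here; a sketch of the standard IWD delta-soil computation (Shah-Hosseini 2008), in which the travel time depends on the edge undesirability \(\mathrm{Dist}_{i,j}\) and the IWD's velocity, is (parameter values are typical, not taken from Table 2):

```python
A_S, B_S, C_S = 1.0, 0.01, 1.0   # static soil parameters (illustrative)

def delta_soil(dist_ij, velocity):
    """Soil an IWD removes from edge (i, j); faster drops remove more."""
    time_ij = dist_ij / max(velocity, 1e-9)   # guard against zero velocity
    return A_S / (B_S + C_S * time_ij ** 2)
```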
where \(\mathrm{Dist}_{i,j}\) is the undesirability for an IWD to go from the current node \(i\) to the next node \(j\) in the generated tour, as depicted in Fig. 2. Let us describe this notion via an illustrative example. According to Fig. 2, suppose that so far only three jobs, i.e., jobs 1, 2 and 6, have been assigned, to machines 3, 2 and 3, respectively. To assign the unscheduled jobs, i.e., jobs 3, 4 and 5, there are 12 possible ways to continue a given IWD tour (4 machines \(\times\) 3 jobs). Now define \(U_{wk} = \left| C_{\max_{wk}} - d_w \right|\), where \(w\) denotes an unscheduled job and \(k\) the machine index. Here \(C_{\max_{wk}}\) is the maximum completion time of machine \(k\) when job \(w\) is assigned to it, and \(d_{w}\) is the due date of job \(w\). In the above example, \(j=12\), \(k=4\) and \(w=3\) are used to calculate \(\mathrm{Dist}_{ij}\).
Update edge soil and IWD soil
For each IWD, the soil of the edge traversed by that IWD and the soil carried by the IWD itself are updated:
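The displayed update rule is not reproduced here; a sketch of the standard IWD local soil update (Shah-Hosseini 2008), which this step follows, is (the parameter value is a typical choice, not taken from Table 2):

```python
RHO_N = 0.9   # local soil-updating parameter (illustrative value)

def update_soils(edge_soil, iwd_soil, dsoil):
    """Remove soil from the traversed edge and load it onto the IWD."""
    new_edge_soil = (1 - RHO_N) * edge_soil - RHO_N * dsoil
    new_iwd_soil = iwd_soil + dsoil
    return new_edge_soil, new_iwd_soil
```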
This updating acts as a learning mechanism exploited in subsequent generations to guide the IWDs toward higher-quality solutions. The information obtained through this updating step, together with the other steps, provides the guiding information that controls the diversification and intensification of the algorithm.
Evaluation of each solution in IWD
In this subsection, a calculation scheme is presented to evaluate fitness values, i.e., the goodness of the complete constructed solutions in the population with respect to the objective function considered in this study: total tardiness and earliness plus the job compression/expansion cost. Clearly, the higher the fitness value, the better the solution. Consequently, the paths traversed by better solutions have a greater chance of being visited by other IWDs in subsequent iterations, leading the population to converge toward the global maximum fitness. Among the different fitness-computation schemes in the literature, the following one is selected:
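Eq. (24) itself is not reproduced in this extraction. A minimal sketch of the fitness computation it describes, using unweighted earliness and tardiness only (weights and compression/expansion costs omitted for brevity, and the value of \(\psi\) chosen arbitrarily), might look like:

```python
def total_earliness_tardiness(schedule, p, d):
    """Unweighted total earliness + tardiness of per-machine job sequences.
    schedule: {machine: [jobs in order]}; p, d: processing times / due dates."""
    total = 0.0
    for jobs in schedule.values():
        t = 0.0
        for j in jobs:
            t += p[j]                  # completion time of job j on this machine
            total += abs(t - d[j])     # earliness or tardiness of job j
    return total

def fitness(schedule, p, d, psi=10 ** 6):
    """Fitness = large constant psi minus the objective value."""
    return psi - total_earliness_tardiness(schedule, p, d)
```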
where the fitness value of each individual is the difference between a large positive number (\(\psi\)) and the objective function. In Eq. (24), the objective function value equals the sum of the earliness and tardiness of the current solution/IWD.
Setup elite set
A set of elite solutions (\(S_{e}\)) of size \(N_{e}\), a proportion of the whole population, is constituted. This set is updated whenever a better solution is found.
Setup post optimization set
A solution set \(S_{po}\) is constituted to be used as the seed of the embedded local search procedure in the search for new potential local optima (Sect. 5.12). This set is composed of the elite solutions as well as some randomly chosen ones:
Update (GBS)
This step updates the global best solution, GBS, via the best solution of the iteration (the best solution in \(S_{e}\), called TS\(_{e}\)) as follows:
where \(q(\mathrm{GBS})\) is a quality function defined as the fitness of the given schedule, i.e., the smaller the earliness and tardiness, the better the fitness.
Global soil propagation
The soil of the edges included in the current elite IWDs' solutions (\(S_{e}\)) is updated:
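The displayed update is not reproduced here; a sketch following the standard IWD global soil update of Shah-Hosseini (2008), applied to each elite solution's edges, is (the parameter value is typical, not taken from Table 2):

```python
RHO_IWD = 0.9   # global soil-updating parameter (illustrative value)

def propagate_elite_soil(soil, elite_path, elite_iwd_soil, d):
    """Reinforce the edges of an elite solution; d = number of graph nodes."""
    for edge in elite_path:
        soil[edge] = ((1 + RHO_IWD) * soil[edge]
                      - RHO_IWD * elite_iwd_soil / (d - 1))
    return soil
```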
where \(d\) denotes the number of nodes in the predefined graph \((N,E)\). Global soil propagation speeds up the convergence of the IWDs toward the elite solutions found so far.
Embedded local search
Solutions completed through the preceding steps can be further improved by a local search procedure embedded in the proposed IWD algorithm. An iterative local search takes the solutions of \(S_{po}\) as initial solutions and tries to enhance their quality via three local search operators [local search (I)–(III)], applied as follows:

Step 1.
Apply Local search (I). If a better solution is found, then go to Step 1, otherwise go to Step 2.

Step 2.
Apply Local search (II). If a better solution is found, then go to Step 1, otherwise go to Step 3.

Step 3.
Apply Local search (III). If a better solution is found, then go to Step 1, otherwise terminate the local search procedure.
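The three-step loop above is a variable-neighborhood-descent-style scheme and can be sketched as follows; each operator is passed in as a callable that returns an improved solution or `None` (the callables are placeholders for local searches (I)–(III)):

```python
def iterative_local_search(solution, ls1, ls2, ls3):
    """Steps 1-3 above: restart from LS (I) whenever any operator improves."""
    improved = True
    while improved:
        improved = False
        for ls in (ls1, ls2, ls3):
            better = ls(solution)
            if better is not None:        # operator found an improvement
                solution = better
                improved = True
                break                     # go back to Step 1
    return solution                       # no operator improves: terminate
```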
Local search (I)—job swaps on a given machine. Every possible swap of two jobs on a given machine is investigated by this local search. The procedure, shown in Fig. 3, has a time complexity of \(O(m \cdot n^{2})\).
Local search (II)—job swaps between different machines. All swaps between jobs belonging to different machines are assessed. This local search, shown in Fig. 4, explores a wider range of solutions than local search (I); its time complexity is \(O(m^{2} \cdot n^{2})\).
Local search (III)—job insertion. This method seeks new solutions by transferring jobs from the machine with the highest objective function value (total earliness and tardiness plus job compression/expansion cost) to the machine with the lowest one. The procedure, shown in Fig. 5, has a time complexity of \(O(n^{2})\).
The steps described above are repeated iteratively according to Fig. 1 until a stop criterion is met. The pseudocode of the proposed IWD algorithm on identical parallel machines is shown in Fig. 6.
Applying PNBC–NBE on parallel machines
In this subsection, we employ the parallel net benefit compression–net benefit expansion (PNBC–NBE) technique proposed by Kayvanfar et al. (2014) to determine the optimal amounts of compression/expansion of job processing times. PNBC–NBE is an extended version of the net benefit compression–net benefit expansion (NBC–NBE) algorithm for a single machine proposed by Kayvanfar et al. (2013). The PNBC–NBE method exploits the net benefit of compression/expansion on parallel machines. Its logic is based on computing, for a one-time-unit change in a job's processing time, the difference between the decrease in total tardiness and the cost of compression on one hand, and the decrease in total earliness and the cost of expansion on the other. The objective function is reduced by compressing or expanding job processing times. Each job may be compressed or expanded, depending on its tardiness or earliness on a given machine, whenever doing so is economical, i.e., the compression or expansion cost is smaller than its benefit (the decrease in tardiness or earliness). In that case, the job is compressed or expanded and the objective function value decreases.
To obtain a suitable initial sequence on parallel machines under the JIT approach, we apply a heuristic called initial sequence based on the earliness–tardiness criterion on parallel machines (ISETP), following Kayvanfar et al. (2014). Since minimizing tardiness and earliness simultaneously on parallel machines is obviously hard, we employ ISETP to acquire an appropriate assignment of jobs to parallel machines.
Procedure 1. PNBC–NBE

1.
Assign the jobs on parallel machines using ISETP heuristic technique, as an initial sequence.

2.
Employ NBC–NBE (Kayvanfar et al. 2013) so as to find out the amount of reduction/expansion of jobs processing times.
Do Steps 3 to 5 until stop criterion is met:

3.
Swap jobs and their corresponding machines randomly in order to reduce the earliness and tardiness values.

4.
If the objective function value is reduced, accept the new sequence and go to Step 5; otherwise keep the previous sequence and go to Step 3.

5.
Apply NBC–NBE to find out whether a given job could be further compressed or else expanded. Go to Step 3.

6.
Determine the final sequence and calculate the final amount of compression/expansion of jobs processing times.
In Procedure 1, the stop criterion is defined as a maximum number of non-improvement iterations, set at 70.
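Steps 2–6 of Procedure 1 can be sketched as a hill-climbing loop. This is an illustrative skeleton, not the authors' implementation: `nbc_nbe` and `evaluate` are placeholder callables standing in for the NBC–NBE step and the objective function, and the swap move is simplified to a single random position exchange.

```python
import random

def random_swap(seq):
    """Step 3 (simplified): swap two random positions in the flat sequence."""
    seq = list(seq)
    i, j = random.sample(range(len(seq)), 2)
    seq[i], seq[j] = seq[j], seq[i]
    return seq

def pnbc_nbe(initial_seq, nbc_nbe, evaluate, max_no_improve=70):
    """Sketch of Steps 2-6 of Procedure 1 (nbc_nbe/evaluate are assumptions)."""
    seq = nbc_nbe(initial_seq)             # Step 2: set compressions/expansions
    best, stall = evaluate(seq), 0
    while stall < max_no_improve:          # stop criterion of Procedure 1
        cand = random_swap(seq)            # Step 3
        if evaluate(cand) < best:          # Step 4: accept if improved
            seq = nbc_nbe(cand)            # Step 5: re-apply NBC-NBE
            best, stall = evaluate(seq), 0
        else:
            stall += 1
    return seq                             # Step 6: final sequence
```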
Experimental evaluation
Test instances
A set of test examples is used to demonstrate the performance of the proposed heuristic techniques. All test instances are implemented in MATLAB 7.11.0 and run on a PC with a 3.4 GHz Intel\(^{{{\circledR }}}\) Core™ i7-2600 processor and 4 GB of RAM. We generate two categories of test examples: small-size, and medium- to large-size ones. The characteristics of the generated instances are defined in Table 1.
Here \(d_\mathrm{min} = \max\!\left(0, P\left(1 - \upsilon - \rho/2\right)\right)\) and \(P = \frac{1}{m}\sum_{j=1}^n p_j\). The expression for \(P\) aims at satisfying the criteria of scale invariance and regularity described by Hall and Posner (2001) for generating experimental scheduling instances. The two parameters \(\upsilon\) and \(\rho\) are the tardiness and range parameters, respectively. We generate instances for \(J\) varying in {10, 20, 50}, \(M \in \) {2, 3, 5}, \(\upsilon \in \) {0.2, 0.5, 0.8} and \(\rho \in \) {0.2, 0.5, 0.8}. For each quadruple (\(J\), \(M\), \(\upsilon\), \(\rho\)), five instances are generated, and each case is run 4 times with every method so as to guarantee the consistency of these techniques. In total, \(3^{4}\times 5 \times 4 = 1{,}620\) problems are generated for each method.
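A generator in this spirit can be sketched as follows. The due-date bounds \(d_{\min} = \max(0, P(1-\upsilon-\rho/2))\) and \(d_{\max} = P(1-\upsilon+\rho/2)\) follow the usual Hall–Posner-style window and are an assumption here (the extracted formula is garbled), and the processing-time range is illustrative, not taken from Table 1:

```python
import random

def generate_instance(n_jobs, n_machines, upsilon, rho, p_low=1, p_high=100):
    """Sample-instance sketch in the Hall-Posner style (bounds assumed)."""
    p = [random.randint(p_low, p_high) for _ in range(n_jobs)]
    P = sum(p) / n_machines                       # average machine load
    d_min = max(0, P * (1 - upsilon - rho / 2))   # assumed lower bound
    d_max = P * (1 - upsilon + rho / 2)           # assumed upper bound
    d = [random.uniform(d_min, d_max) for _ in range(n_jobs)]
    return p, d
```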
It should be pointed out that, contrary to the mathematical model, a two-phase approach is employed when solving the sample instances via PNBC–NBE, simulated annealing equipped with NBC–NBE (called SNN), genetic algorithm equipped with NBC–NBE (called GNN) and IWD equipped with NBC–NBE (called IWDNN). In other words, ISETP, SA, GA and IWD search for the best possible sequence on each machine in the first phase, and NBC–NBE then gives the optimal amounts of compression/expansion of the jobs on a given machine in the second phase. Among these techniques, ISETP and IWD are constructive and build the sequence by themselves, whereas SA and GA require initial sequence(s).
Small-size instances (problems with 10 jobs on 2, 3 and 5 machines) are solved optimally via Lingo and then compared with PNBC–NBE and the three other methods, i.e., SNN, GNN and IWDNN. Since the optimum solution cannot be found via Lingo for medium- to large-size instances, for those we compare only our proposed heuristic (PNBC–NBE) and the adopted metaheuristics (SNN, GNN and IWDNN).
Implementation details
Table 2 shows the parameters used for the proposed IWD in this study together with their values. The initial parameter values are set based on theoretical studies and similar reported research such as Shah-Hosseini (2008). In other words, general parameters such as \(a_{v}\), \(b_{v}\) and \(\varepsilon\) follow Shah-Hosseini (2008) and Niu et al. (2012), while the parameters related to the characteristics of the problem at hand are tuned experimentally. In some instances, larger values of certain parameters yield higher-quality solutions at the cost of longer computation time; consequently, trade-off values are obtained experimentally for these parameters. They are tuned through trial and error and by designing experiments with different parameter combinations.
Computational results
Our GA implementation is mainly based on the GA framework presented in Kayvanfar et al. (2014), i.e., its crossover, mutation and reproduction operators. The initial population is generated randomly, and "70 successive non-improvement iterations" is the stopping criterion. Also, 5 % of the best solutions in each generation are selected by the elitism mechanism.
The SA algorithm starts from a randomly generated initial solution \(x\) with cost \(l_{x}\). A neighbor \(y\) of this solution is generated, with cost \(l_{y}\). If \(\Delta l_{xy} = l_{y} - l_{x}\) is negative, the current solution \(x\) is replaced by \(y\); otherwise, the current solution is kept. A new neighbor is then generated, and a pure descent of this kind would terminate at a local minimum once no further improvement is found in the neighborhood of the current solution. SA therefore accepts all moves with \(\Delta l_{xy} < 0\), while moves with \(\Delta l_{xy} > 0\) are accepted with probability \(\Gamma = \exp(-\Delta l_{xy}/T)\), where \(T\) is the control parameter (temperature), set initially to 6,000 in this research. The temperature decreases during the search according to the cooling scheme \(T_{k+1} = \alpha \times T_{k}\) (\(k = 1, 2, \ldots\) and \(0 < \alpha < 1\)). A random number \(r\) is drawn from \(U(0,1)\) and compared with the acceptance probability: if \(\Gamma > r\), the worsening move is accepted; otherwise it is declined. A common approach for generating neighbors on parallel machines is to randomly exchange jobs within a machine or between machines; this approach is easy to apply and does not require any problem-specific information. The algorithm terminates when the temperature reaches 0.01 (the final temperature).
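The Metropolis acceptance rule and geometric cooling described above can be sketched as follows; the cooling rate `alpha` is illustrative, since its value is not stated in the text:

```python
import math
import random

def sa_accept(delta, temperature):
    """Accept improving moves always; worsening moves with prob exp(-delta/T)."""
    if delta < 0:
        return True
    return random.random() < math.exp(-delta / temperature)

def cool(temperature, alpha=0.95):
    """Geometric cooling scheme T_{k+1} = alpha * T_k (alpha illustrative)."""
    return alpha * temperature
```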
Since the optimum solution can be found for small-size problems, the percentage relative error (PRE) is employed as the performance measure for this class of instances.
where \(\mathrm{Alg}_\mathrm{sol}\) is the objective value obtained by the selected technique and \(\eta\) is the optimum value obtained by Lingo. In Table 3, the first two columns, "\(M\)" and "\(N\)", give the number of machines and the number of jobs of a test problem, respectively. The "\(\mathrm{PRE}_\mathrm{avg}\)" columns give the average gap, in percent, between the solution value obtained by each algorithm and the optimum solution value; this percentage reflects the quality of the solutions obtained by each method. The computational results for small-size and medium- to large-size instances are shown in Tables 3 and 4, respectively.
As can be seen in Table 3, on small-size cases all the employed algorithms lose quality as the number of machines increases; this is to be expected, since heuristics and metaheuristics generally yield better results on small problems than on large ones. The proposed IWDNN yields the best results among the applied algorithms, with only a 6.66 % average gap from the optimal solutions, which is entirely satisfactory. The largest average gap from the optimum is observed for PNBC–NBE. It is also noticeable that the computational time spent by all methods is far smaller than that of Lingo, which is a remarkable point.
To confirm the statistical validity of the results obtained for small instances and to determine the best algorithm, a design of experiments and an analysis of variance (ANOVA) are conducted, with the algorithm as the factor and PRE as the response variable. Normality, independence of residuals and homoscedasticity are the three conditions required for ANOVA. Figure 7 shows the corresponding means plot with honest significant difference (HSD) intervals at the 95 % confidence level; overlapping intervals would indicate no statistically significant difference among the means. According to Fig. 7, there is a clear statistically significant difference between the performances of the employed techniques.
Figure 8 shows the average deviation of all applied algorithms from the optimal solutions on small-size instances. For each algorithm, the average PRE for each category of sample-generated instances (2, 3 and 5 machines with 10 jobs) is depicted and compared.
The relative percentage deviation (RPD), defined as follows, is employed on medium- to large-size instances to evaluate the efficiency of the techniques relative to one another:
where \(\mathrm{Alg}_\mathrm{sol}\) is the objective value obtained by the selected heuristic and \(\mathrm{Min}_\mathrm{sol}\) is the best solution obtained for the instance. Obviously, lower values of PRE and RPD are preferable.
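The displayed formulas for the two measures are not reproduced in this extraction; a common definition consistent with the surrounding text is:

```python
def pre(alg_sol, optimum):
    """Percentage relative error against the Lingo optimum (small instances)."""
    return 100.0 * (alg_sol - optimum) / optimum

def rpd(alg_sol, min_sol):
    """Relative percentage deviation from the best obtained solution."""
    return 100.0 * (alg_sol - min_sol) / min_sol
```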
Here, MCPU time is the mean CPU time over all cases, measured in seconds. In Table 4, the first two columns, "\(M\)" and "\(N\)", give the number of machines and the number of jobs of a test problem, respectively.
As can be seen in Table 4, on medium- to large-size instances the proposed IWDNN outperforms all other techniques on average, with a 4.24 % average gap from the best obtained solutions. The average computational time spent by IWDNN is about 23 % larger than that of GNN. This occurs because constructive algorithms such as ACO and IWD build a complete solution in each iteration, while evolutionary techniques such as GA improve an existing solution (individual) in each iteration; note, however, that this does not always imply more computation time. Moreover, depending on the problem size, the employed IWD traverses more steps than the GA, which is why its computation time is larger.
Similarly to the small instances, to confirm the statistical validity of the results attained for medium- to large-size cases and to determine the best algorithm, a design of experiments and ANOVA are carried out, with the algorithm as the factor and RPD as the response variable. Figure 9 shows the corresponding means plot and HSD intervals at the 95 % confidence level. According to Fig. 9, there is an obvious statistically significant difference between the performances of the employed algorithms.
Also, Fig. 10 depicts the performance of each algorithm, i.e., the average deviation of each technique from the best obtained solutions on medium- to large-size instances. For each algorithm, the average RPD for each category of sample-generated instances (2, 3 and 5 machines with 20 and 50 jobs) is depicted and compared.
Based on the computational results, it can generally be said that the proposed IWDNN algorithm is an efficient method for solving such complicated problems on parallel machines, for both small-size and medium- to large-size instances.
Conclusions and future study
Minimizing total weighted tardiness and earliness as well as the job compression/expansion cost on identical parallel machines with distinct due dates has been successfully addressed in this study. No job preemption is allowed and no idle time may be inserted into the schedule. The job processing times are assumed to be controllable and may vary within a given interval; in other words, each processing time may be either compressed or expanded in exchange for a compression/expansion cost. The optimal amounts of job compression/expansion for a given sequence were determined using the PNBC–NBE heuristic so as to approach a just-in-time (JIT) policy. This paper contributes to the existing literature by proposing a mathematical model for identical parallel machines that can be solved optimally on small-size sample-generated instances. As a main contribution, an intelligent water drop (IWD) algorithm, a new swarm-based nature-inspired optimization method, was adopted to solve the considered problem; it mimics the way a natural river, among a great many possible paths from source to destination, eventually finds a very good path.
In addition to presenting the mathematical model and the population-based constructive algorithm, IWD, an attempt was made to improve the IWD algorithm via a local search method with improvement capability, which permits combining basic characteristics from a reference set of metaheuristics and aims at analyzing the usefulness of the resulting customizable algorithm on a difficult problem and its test instances. PNBC–NBE, a genetic algorithm equipped with NBC–NBE (GNN), simulated annealing equipped with NBC–NBE (SNN) and an IWD algorithm equipped with NBC–NBE (IWDNN) were then employed to solve all small-size and medium- to large-size instances. Contrary to the mathematical model, a two-phase approach is employed when solving the sample instances via PNBC–NBE, SNN, GNN and IWDNN: ISETP, SA, GA and IWD search for the best possible sequence on each machine in the first phase, and NBC–NBE then gives the optimal amounts of compression/expansion of the jobs on a given machine in the second phase. On small-size problems, where the optimal solution can be found via Lingo, the percentage relative error (PRE) was employed as the performance measure to assess the solution quality of the different algorithms; the relative percentage deviation (RPD) was used for the medium- to large-size instances.
The computational results revealed that the proposed IWDNN is a capable technique that can solve such JIT problems with acceptable outcomes on small-size and medium- to large-size instances in terms of average PRE and RPD, respectively.
In the authors' opinion, the IWD approach could be extended to other environments such as the flow shop. As another opportunity for future research, a ready time could be considered for the start of each job, which makes the problem considerably harder, yet more realistic. Solving the considered problem in a multi-objective setting would also be an interesting direction for future studies, attractive and closer to real-life situations.
References
Abbasy A, Hosseini SH (2008) Ant colony optimization-based approach to optimal reactive power dispatch: a comparison of various ant systems. IEEE, Piscataway
Alidaee B, Ahmadian A (1993) Two parallel machine sequencing problems involving controllable job processing times. Eur J Oper Res 70(3):335–341
Baker KR, Scudder GD (1990) Sequencing with earliness and tardiness penalties: a review. Oper Res 38:22–36
Cheng TCE, Chen ZL, Li CL (1996) Parallel-machine scheduling with controllable processing times. IIE Trans 28:177–180
Dariane AB, Sarani S (2013) Application of intelligent water drops algorithm in reservoir operation. Water Resour Manag 27(14):4827–4843
Drobouchevitch IG, Sidney JB (2011) Minimization of earliness, tardiness and due date penalties on uniform parallel machines with identical jobs. Comput Oper Res 39(9):1919–1926
Duan H, Liu S, Lei X (2008) Air robot path planning based on intelligent water drops optimization. In: IEEE international joint conference on neural networks (IJCNN 2008, IEEE World Congress on Computational Intelligence), pp 1397–1401
Duan H, Liu S, Wu J (2009) Novel intelligent water drops optimization approach to single UCAV smooth trajectory planning. Aerosp Sci Technol 13(8):442–449
Gürel S, Akturk MS (2007) Scheduling parallel CNC machines with time/cost tradeoff considerations. Comput Oper Res 34(9):2774–2789
Gürel S, Körpeoğlu E, Aktürk MS (2010) An anticipative scheduling approach with controllable processing times. Comput Oper Res 37(6):1002–1013
Hall NG, Posner ME (1991) Earliness–tardiness scheduling problems. I: weighted deviation of completion times about a common due date. Oper Res 39:836–846
Hall NG, Posner ME (2001) Generating experimental data for computational testing with machine scheduling applications. Oper Res 49:854–865
Hendrawan Y, Murase H (2011) Neuralintelligent water drops algorithm to select relevant textural features for developing precision irrigation system using machine vision. Comput Electron Agric 77:214–228
Jansen K, Mastrolilli M (2004) Approximation schemes for parallel machine scheduling problems with controllable processing times. Comput Oper Res 31(10):1565–1581
Kamkar I, Akbarzadeh-T MR, Yaghoobi M (2010) Intelligent water drops: a new optimization algorithm for solving the vehicle routing problem. In: 2010 IEEE international conference on systems, man and cybernetics (SMC 2010), 10–13 October 2010. IEEE, Piscataway, pp 4142–4146
Kayvanfar V, Komaki GHM, Aalaei A, Zandieh M (2014) Minimizing total tardiness and earliness on unrelated parallel machines with controllable processing times. Comput Oper Res 41:31–43
Kayvanfar V, Mahdavi I, Komaki GHM (2013) Single machine scheduling with controllable processing times to minimize total tardiness and earliness. Comput Ind Eng 65(1):166–175
Kedad-Sidhoum S, Solis YR, Sourd F (2008) Lower bounds for the earliness–tardiness scheduling problem on parallel machines with distinct due dates. Eur J Oper Res 189(3):1305–1316
Leyvand Y, Shabtay D, Steiner G, Yedidsion L (2010) Just-in-time scheduling with controllable processing times on parallel machines. J Comb Optim 19(3):347–368
Li K, Shi Y, Yang SL, Cheng BY (2011) Parallel machine scheduling problem to minimize the makespan with resource dependent processing times. Appl Soft Comput 11(8):5551–5557
M’Hallah R, Al-Khamis T (2012) Minimising total weighted earliness and tardiness on parallel machines using a hybrid heuristic. Int J Prod Res 50(10):2639–2664
Niu SH, Ong SK, Nee AYC (2012) An improved intelligent water drops algorithm for achieving optimal job-shop scheduling solutions. Int J Prod Res 50(15):4192–4205
Niu SH, Ong SK, Nee AYC (2013) An improved intelligent water drops algorithm for solving multi-objective job shop scheduling. Eng Appl Artif Intell 26(10):2431–2442
Nowicki E, Zdrzalka S (1990) A survey of results for sequencing problems with controllable processing times. Discrete Appl Math 26:271–287
Nowicki E, Zdrzałka S (1995) A bicriterion approach to preemptive scheduling of parallel machines with controllable job processing times. Discrete Appl Math 63(3):237–256
Omar MK, Teo SC (2006) Minimizing the sum of earliness/tardiness in identical parallel machines schedule with incompatible job families: An improved MIP approach. Appl Math Comput 181(2):1008–1017
Pinedo M (2008) Scheduling theory, algorithms, and systems, 3rd edn. Prentice Hall, Upper Saddle River
Polyakovskiy S, M’Hallah R (2014) A multiagent system for the weighted earliness tardiness parallel machine problem. Comput Oper Res 44:115–136
Shabtay D, Steiner G (2007) A survey of scheduling with controllable processing times. Discrete Appl Math 155:1643–1666
Shah-Hosseini H (2007) Problem solving by intelligent water drops. In: Proceedings of the IEEE congress on evolutionary computation (CEC 2007). Institute of Electrical and Electronics Engineers Computer Society, Singapore, pp 3226–3231
Shah-Hosseini H (2008) Intelligent water drops algorithm: a new optimization method for solving the multiple knapsack problem. Int J Intell Comput Cybern 1(2):193–212
Shah-Hosseini H (2009) The intelligent water drops algorithm: a nature-inspired swarm-based optimization algorithm. Int J Bio-Inspired Comput 1(1–2):71–79
Shah-Hosseini H (2012) An approach to continuous optimization by the intelligent water drops algorithm. Procedia Soc Behav Sci 32:224–229
Shakhlevich N, Strusevich V (2008) Preemptive scheduling on uniform parallel machines with controllable job processing times. Algorithmica 51(4):451–473
Sivrikaya F, Ulusoy G (1999) Parallel machine scheduling with earliness and tardiness penalties. Comput Oper Res 26:773–787
Sorensen K, Glover F (2013) Metaheuristics. In: Gass SI, Harris CM (eds) Encyclopedia of operations research and management science, 3rd edn. Springer, New York
Su LH (2009) Minimizing earliness and tardiness subject to total completion time in an identical parallel machine system. Comput Oper Res 36(2):461–471
Sun H, Wang G (2003) Parallel machine earliness and tardiness scheduling with proportional weights. Comput Oper Res 30(5):801–808
Turkcan A, Akturk MS, Storer RH (2009) Predictive/reactive scheduling with controllable processing times and earliness–tardiness penalties. IIE Trans 41(12):1080–1095
Vickson RG (1980) Two single machine sequencing problems involving controllable job processing times. AIIE Trans 12:258–262
Wan G (2007) Single machine common due window scheduling with controllable job processing times. Lect Notes Comput Sci 4616:279–290
Wang JB (2006) Single machine scheduling with common due date and controllable processing times. Appl Math Comput 174:1245–1254
Yang DL, Cheng TCE, Yang SJ (2014) Parallel-machine scheduling with controllable processing times and rate-modifying activities to minimise total cost involving total completion time and job compressions. Int J Prod Res 52(4):1133–1141
Yi Y, Wang DW (2003) Soft computing for scheduling with batch setup times and earliness–tardiness penalties on parallel machines. J Intell Manuf 14(3–4):311–322
Communicated by José Mario Martínez.
Kayvanfar, V., Zandieh, M. & Teymourian, E. An intelligent water drop algorithm to identical parallel machine scheduling with controllable processing times: a just-in-time approach. Comp. Appl. Math. 36, 159–184 (2017). https://doi.org/10.1007/s40314-015-0218-3
Keywords
 Controllable processing times
 Earliness and tardiness
 Intelligent water drops algorithm
 Identical parallel machines
Mathematics Subject Classification
 Primary 90B35
 Secondary 68M20