Discrete imperialist competitive algorithm for the resource-constrained hybrid flowshop problem with energy consumption

The resource-constrained hybrid flowshop problem (RCHFS) has been investigated thoroughly in recent years. However, the practical case that considers both resource constraints and energy consumption has rarely been studied. To address this issue, a discrete imperialist competitive algorithm (DICA) is proposed to minimize the makespan and energy consumption. In the proposed algorithm, each solution is first represented by a two-dimensional vector, in which one vector represents the scheduling sequence and the other the machine assignment. Then, a decoding method that considers resource allocation is designed. Finally, we combine DICA with the simulated annealing algorithm (SA) to improve the performance of the proposed approach. Furthermore, we tested the proposed algorithm on a randomly generated set of instances based on a real shop scheduling system and compared it with existing heuristic algorithms. The results confirm that the proposed algorithm can solve the RCHFS with high efficiency.


Introduction
The hybrid flowshop scheduling problem (HFSP) is a generalization of the conventional flowshop scheduling problem. Compared with other types of scheduling problems, the multi-process and multi-stage characteristics of HFSP are deemed more realistic. In a typical manufacturing industry, various dynamic events may occur in the actual production process, such as limited resources and machine breakdowns. Therefore, it is necessary to investigate the RCHFS, which takes limited resources into consideration. Moreover, as a major source of global warming, manufacturing activities are required to satisfy regulations on environmental protection and energy consumption. In this paper, the resource-constrained hybrid flowshop problem with energy consumption is studied; this problem has significant practical relevance. With regard to the conventional HFSP, many variants and solutions have been discussed by Ruiz et al. [1]. To solve the HFSP, several researchers have applied exact algorithms, such as the Lagrangian relaxation algorithm [2][3][4] and the branch and bound algorithm [5][6][7]. However, as the scale of the problem increases, exact algorithms become impractical; consequently, metaheuristic and heuristic algorithms have been used more widely to solve this problem. For example, the genetic algorithm (GA) has been applied by Behnamian et al. [8] and Dugardin et al. [9]. The simulated annealing algorithm (SA) has been adopted by Elmi and Topaloglu to solve the problem of multi-robot scheduling [10,11]. The tabu search algorithm has been considered for the two-stage hybrid flowshop problem by Figielska [12]. The discrete firefly algorithm has been proposed by Marichelvam to solve the bi-objective HFSP [13]. The imperialist competitive algorithm has been considered by Naderi and Yazdani to cope with the HFSP with sublots and setup times [14].
In addition, it should be mentioned that the hybrid artificial bee colony algorithm [15], the hybrid fruit fly optimization algorithm [16], the hybrid squirrel search [17], and others have also achieved good results in the field of HFSP. The RCHFS, defined as a variant of HFSP, has also been investigated by many researchers in recent years. The Lagrangian decomposition and coordination approach has been developed by Nishi [18]. The branch and bound algorithm was proposed by Sural et al. [19]. The approach combining the Bat algorithm and variable neighborhood search (BA-VNS) has been proposed by Pei et al. [20] for the dual resource-constrained scheduling problem. Li et al. have investigated the improved artificial bee colony algorithm (ABC) and the two-phase-based encoding mechanism [21][22][23]. The polynomial algorithm has been proposed by Cheng et al. [24] for different cases. The search algorithm based on GA has been proposed by Leu and Hwang, and sensitivity analysis has been performed accordingly [25].
In recent years, green scheduling in the manufacturing industry has become one of the main directions of research and development. Gao et al. [26] provide a comprehensive literature review of production scheduling for intelligent manufacturing systems with energy-related constraints and objectives. A unified framework for the automated design of production scheduling heuristics with genetic programming has been developed [27]. A genetic-SA has been adopted by Dai et al. [28] to achieve a significant advance in this field. The improved multi-objective evolutionary algorithm based on decomposition has also been considered [29]. A Pareto-based chemical-reaction optimization algorithm has been proposed by Li et al. [30]. A new concept of teaching-learning-based optimization (TLBO) has been considered by Lei et al. [31] to minimize total energy consumption and total tardiness. The multi-objective iterative greedy algorithm has been proposed by Ding et al. [32] to solve the bi-objective problem. In addition, a hybrid iterated greedy algorithm [33], an efficient multi-objective algorithm [34], and an improved Jaya algorithm [35] have been studied in the multi-objective field. Many experts have also applied energy-aware models [36][37][38], turn off/on schemes [39], different types of machines [40], and so on.
The main contributions of the present paper are as follows: (1) we considered resource constraints and energy consumption based on the HFSP and proposed the corresponding models; to the best of our knowledge, this is the first work to consider the HFSP with resource constraints and an energy consumption objective; (2) a novel DICA method was proposed to solve the RCHFS with energy consumption, together with encoding and decoding strategies adapted to this problem; and (3) DICA and SA were combined to improve the performance of the proposed algorithm.
The rest of this article is organized as follows. Section "Problem description" briefly describes the research problem. In Sect. "Proposed algorithm", we describe the canonical ICA and improved algorithms, encoding, decoding, and the local search process. Then, in Sect. "Numerical experiments", we discuss the experiments conducted on a randomly generated set of instances of the actual shop scheduling system and compare the obtained results with those of several other algorithms. Finally, Sect. "Conclusions" summarizes the overall conclusions.

Problem definition
The problem considered in the present study can be formally described as follows. There are n jobs that require processing in s stages, and at least one stage has two or more parallel machines. After a job completes processing in one stage, it can proceed to the next stage: if a machine there is available, processing starts immediately; otherwise, the job waits in a buffer of infinite capacity until a machine in that stage becomes idle. In each stage, a job can be processed on any of the parallel machines, and interruption is not allowed after processing begins.
We assume that there are h resources in the shop, comprising R_1, R_2, ..., R_h. The number of resources of each type is defined in advance. The process related to each machine is based on cooperation among different types of resources. Therefore, to start an operation, it is necessary to ensure that the required machine and resources are available simultaneously.
In addition, in the realistic HFSP, there is at least one stage with multiple machines having different processing capabilities, and each machine has two states, namely the processing state and the standby state. The amount of energy consumed in the two states differs. The objective is to minimize the makespan and energy consumption.
The notations used in this study are summarized below. The objective is to minimize the makespan subject to constraints (1)-(12). Equation (1) indicates the schedule objective. Equation (2) ensures that each priority position corresponds to only one job. Equation (3) ensures that each job has only one priority position. Equation (4) means that only one machine can process each job at any stage. Equation (5) represents the relationship between the completion time and the start time of a process at the same stage. Equation (6) means that the same job must complete the current process before proceeding to the next stage. Equation (7) indicates that the earlier a job appears in the scheduling arrangement of a stage, the earlier its processing begins. Equation (8) indicates that a job assigned later to the same machine at the same stage must wait until its predecessor has been processed. Equation (9) ensures that the consumed resources are within the limits. Equations (10)-(12) define the energy consumption, comprising the energy consumed while the machines are in the processing state and the energy consumed while they are in the idle state.

Illustrative example
In this subsection, we present the following example to illustrate the research problem and the solution method. There are five jobs and two stages. There is one machine in the first stage and two parallel machines in the second stage. The total number of resource types is set to two. Table 1 represents the processing time of each job in each stage. Table 2 outlines the resources and energy required by the machines in each stage. Here, Y indicates that the corresponding resources are required, and N denotes that the corresponding machines do not need the resources. Figure 1a represents the machine Gantt chart for the sample solution, in which the sequence of jobs in the first stage is J_1, J_2, J_3, J_4, J_5. It is evident that J_2 could enter the second stage immediately after the first stage is finished, as the second-stage machine is idle; however, J_2 has to be delayed until time 9, as resources of type R_1 are insufficient to start processing. In Fig. 1b, it can be seen that the occupancy of each resource type in each time period does not exceed the total number of available resources of that type; this proves that the solution is feasible. Table 3 illustrates the total consumption of resources during each time period.
For the example in Fig. 1a, the energy consumption unit is megajoules (MJ) and the time unit is seconds. M_1 processes the five jobs in the first stage; the processing time is 11; the processing energy consumption is 11 × 2.5 = 27.5; and there is no standby energy consumption. In the second stage, the processing time of M_2 is 9 and the standby time is 2, so the energy consumption of M_2 is 9 × 2 + 2 × 0.05 = 18.1; the processing time of M_3 is 4 with no standby time, so the energy consumption is 4 × 3 = 12. Finally, the total energy consumption for the example is 27.5 + 18.1 + 12 = 57.6.
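The energy accounting in this example can be reproduced with a short computation. A minimal sketch follows; the function and variable names are illustrative, while the power values (2.5, 2.0, and 3.0 MJ/s for processing; 0.05 MJ/s for standby) are those of the worked example above.

```python
def machine_energy(proc_time, proc_power, idle_time=0.0, idle_power=0.05):
    """Energy (MJ) consumed by one machine: processing plus standby."""
    return proc_time * proc_power + idle_time * idle_power

# Stage 1: M1 processes all five jobs for 11 s at 2.5 MJ/s, no standby.
e_m1 = machine_energy(11, 2.5)              # 27.5 MJ
# Stage 2: M2 processes for 9 s at 2.0 MJ/s, plus 2 s of standby.
e_m2 = machine_energy(9, 2.0, idle_time=2)  # 18.1 MJ
# Stage 2: M3 processes for 4 s at 3.0 MJ/s, no standby.
e_m3 = machine_energy(4, 3.0)               # 12.0 MJ

total = e_m1 + e_m2 + e_m3                  # 57.6 MJ
```

This matches the total energy consumption of 57.6 MJ computed above.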

Canonical ICA
Inspired by the imperialist colonial competition mechanism, a new swarm intelligence optimization algorithm called ICA was proposed by Gargari and Lucas in 2007 [41]. After that, ICA was applied widely to various practical optimization problems. For example, it was used by Zahra et al. [42] to find the optimal solution in a shorter time compared with the other existing algorithms. Lucas et al. [43] designed a low-speed, single-sided linear induction motor and demonstrated that ICA performed better than GA in this case. Niknam et al. [44] combined ICA with the K-means algorithm and successfully applied this hybrid method to the task of processing data clusters.
Recent research has demonstrated that ICA is an efficient algorithm for job shop scheduling problems, such as the flowshop and single-machine scheduling problems [45][46][47][48][49]. ICA has advantages in terms of its rapid convergence speed, and many improved algorithms have been proposed on its basis. For example, chaotic mapping has been applied by Bahrami et al. [50] to determine the changes in the directions of colonies in the assimilation operator. Lin et al. [51] have proposed applying perturbation to the ICA algorithm and replaced the relatively weak imperialist countries with artificial ones to enhance the information exchange between empires.
Similar to other evolutionary algorithms, ICA starts with a set of initial solutions called a population. Each individual in the population is denoted as a "chromosome" in GA and a "country" in ICA. The countries are divided into two groups: imperialist countries and colonies. Countries with better fitness are considered empires, and those with worse fitness become colonies of the empires. Competition occurs between the empires, and a powerful empire is likely to seize a colony from a weak one. The entire competition process lasts until only one empire remains, the maximum number of iterations of the algorithm is reached, or the optimal solution is found. The canonical ICA process involves producing the initial empires, assimilating the colonies, proceeding with the competition among the empires, and executing the disappearance of empires [52]. The overall procedure can be outlined as follows:

1.  Initialize the population of countries
2.  Construct an initial empire
3.  While the stopping criterion is not met, do
4.    For i = 1 to N_im, do
5.      Assimilate colonies by the mutated imperialist
6.      ...
7.    End for
8.    Update the imperialists
9.    Do an imperialist competitive strategy
10.   For i = 1 to N_im, do
11.     Do an imperialist development strategy
12.   End for
13.   Apply local search on the imperialist
14. End while

Empire initialization
In ICA, the individuals that constitute the population are denoted as countries. For a D-dimensional optimization problem, a country can be represented as follows:

country = [p_1, p_2, ..., p_D]  (13)

The initial population of countries, of size N_pop, is generated by the scheduler; the N_imp countries with the best fitness values are selected and set as empires, and the remaining N_col countries become colonies. The colonies are distributed among the empires according to the empires' power. The fitness values of the empires are standardized according to Eq. (14):

C_n = c_n - max_i {c_i}  (14)

where c_n is the fitness value of the nth empire, and C_n is its standardized fitness value. Based on the standardized fitness values, Eq. (15) defines the power of each empire:

P_n = | C_n / Σ_{i=1}^{N_imp} C_i |  (15)

where Σ_{i=1}^{N_imp} C_i is the total cost of all empires, and P_n is the normalized power of the nth imperialist. An empire with greater power obtains more colonies. Based on P_n, the number of colonies each empire should initially own is computed according to Eq. (16):

N.C._n = round(P_n × N_col)  (16)

where N.C._n is the number of colonies initially owned by the nth empire. According to this number, the colonies are randomly assigned to each empire. The initialization of all empires and colonies is then complete.
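The colony allocation of Eqs. (14)-(16) can be sketched in a few lines. This is a minimal illustration for a minimization problem; the names are illustrative, and a production version would need a guard for the degenerate case in which all empires have equal cost.

```python
def allocate_colonies(costs, n_col):
    """Eqs. (14)-(16): standardized cost, normalized power, and the
    number of colonies per empire (minimization: lower cost is better)."""
    c_max = max(costs)
    C = [c - c_max for c in costs]        # Eq. (14): C_n = c_n - max_i{c_i}
    total = sum(C)
    P = [abs(c / total) for c in C]       # Eq. (15): P_n = |C_n / sum_i C_i|
    return [round(p * n_col) for p in P]  # Eq. (16): N.C._n = round(P_n * N_col)

# Three empires with costs 1, 2, 3 dividing 6 colonies: the strongest
# (lowest-cost) empire receives the most colonies.
print(allocate_colonies([1.0, 2.0, 3.0], 6))
```

With the costs above, the standardized costs are [-2, -1, 0], the powers are [2/3, 1/3, 0], and the empires receive 4, 2, and 0 colonies, respectively.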

Colonial assimilation
In the middle of the eighteenth century, world imperialism continued to divide and plunder the colonies. To control the colonies in a better manner, powerful empires promoted their customs, culture, and political ideas in the colonies. Accordingly, the colonial assimilation stage of ICA is aimed at imitating this process. The concept of this assimilation process implies forcing all colonies within an empire to tend toward the local optimal solution within the empire. The process of assimilation is achieved by moving the colonies toward the empire. The corresponding movement process is represented in Fig. 2.
As illustrated in Fig. 2, a colony moves toward its empire by a distance x to a new location, where x is a uniformly distributed random variable:

x ~ U(0, β × d)

where β is a real number greater than 1, and d is the distance between the colony and the empire. The condition β > 1 is required to ensure that the colonies can approach the imperialist from different directions. A random deviation angle θ is added to the movement to perform a wider search around the empire; θ is a uniformly distributed random number:

θ ~ U(-γ, γ)

where γ is a parameter used to adjust the deviation from the original direction. The values of β and γ are typically set to 2 and π/4, respectively, as shown in Fig. 2.
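A rough sketch of the assimilation move is given below. It applies the uniform step per dimension and folds the deviation angle into a simple cosine factor, which is a simplification of the geometric two-dimensional move shown in Fig. 2; all names are illustrative assumptions.

```python
import math
import random

def assimilate(colony, imperialist, beta=2.0, gamma=math.pi / 4):
    """Move a colony toward its imperialist: per dimension, take a step
    x ~ U(0, beta*|d|) toward the imperialist, damped by cos(theta) with
    a random deviation theta ~ U(-gamma, gamma)."""
    new_pos = []
    for c, imp in zip(colony, imperialist):
        d = imp - c                              # signed distance
        x = random.uniform(0, beta * abs(d))     # step length
        step = math.copysign(x, d) if d else 0.0
        theta = random.uniform(-gamma, gamma)    # random deviation
        new_pos.append(c + step * math.cos(theta))
    return new_pos
```

Because beta > 1, the step can overshoot the imperialist, letting colonies approach it from both sides, as the text describes.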

Empire competition
In the imperialist competition stage, we need to select colonies of weak empires and move them to a powerful empire. Therefore, the problem also involves the task of identifying weak empires. In the present study, we propose two strategies that can be randomly selected to find a weak empire. First, the empires with fewer colonies can be treated as weak empires; we sort the empires by the number of colonies to identify the weak ones, shuffle their colonial order, and select selectNum colonies to transfer to the powerful empires. Second, we can sort the fitness values of the empires from small to large, select the least fit as the weak empires, and then shuffle the colonial order of the weak empires before moving some of their colonies to the powerful empires. The imperialist competition process is represented in Fig. 3.

Empire demise
After a period of time, all except the most powerful empire collapse, and all colonies fall under the control of this unique empire. An empire perishes when it loses all of its colonies, and the algorithm ends when only one empire is left or the maximum number of iterations is reached.

Problem encoding
We use a two-phase coding mechanism composed of a two-vector-based (TVB) representation and a machine Gantt-based solution (MGB) representation. We use these two representations because, in the earlier evolutionary stage, the two-vector-based representation can identify a promising search space with a wide search capacity, while in the later evolution stage, the solution representation based on a machine Gantt diagram encodes the detailed scheduling for each machine and allows searching the promising region in a sufficiently fine-grained manner.

TVB representation
Machine allocation and operation ordering are considered in the encoding process for the two-vector-based solution representation, as they are commonly used for the standard hybrid flowshop scheduling problems. The resource selection task is dynamically completed during the decoding process. The first vector is called the machine assignment vector, in which each element is used to identify the machine number assigned for the corresponding operation.
The scheduling vector is set as {1, 2, 3, 4, 5}, which defines the process order for the first stage. Each job is transferred to the next stage immediately after the current phase is completed, and the process starts when the assigned machines and resources are available. Therefore, the process order may change in the later stages because of unavailability of the machines and resources. Figure 4 presents the TVB solution representation example based on Fig. 1a.

MGB representation
The second type of solution representation, called the MGB representation, is designed on the basis of the current Gantt diagram for the machines, as illustrated in Fig. 1a. The solution requires a vector for each machine in each stage, giving the scheduling sequence of the operations assigned to it. For the example presented in Fig. 1a, the MGB solution representation is {1,2,3,4,5} and {{1,3,5}, {2,4}}. The first vector corresponds to the coding of the single machine in the first stage, and the second pair of vectors is the machine coding in the second stage.

Problem decoding
In the decoding process, we use the TVB and MGB solution representations to determine the start time for each operation and the suitable resources for each machine. The three following conditions need to be satisfied simultaneously to start the operation: The job arrives at the specified machine; the specified machine is available; and the resources required by the specified machine are available (Table 3).
In the subsequent stages, each job can be passed to the following stages immediately after its completion time in the current stage. If the assigned machine and the required resources are available, the operation can be scheduled. If the required resources are not available, the operation should be postponed until the availability of the required resources is confirmed.
In calculating the starting time B_{i,j,k} of operation O_{i,j}, we consider the following conditions: (1) the completion time of J_i in the previous stage, denoted as E_{i,j-1,k}; (2) the time I_k at which machine k becomes available; and (3) the latest time at which the resources required by machine k become available. Therefore, B_{i,j,k} can be computed as follows:

B_{i,j,k} = max{ E_{i,j-1,k}, I_k, AR_{i,k,r} }

where A_r is the time at which resource r ∈ R_h becomes available, and AR_{i,k,r} = max{ A_r } over all resources r required by machine k.

If O_{i,j} is the first operation of J_i, then the starting time B_{i,j,k} can be calculated as follows:

B_{i,j,k} = max{ I_k, AR_{i,k,r} }

After the completion of O_{i,j}, the resources R_{i,k,r} are consumed and then immediately released to be used by other operations. Therefore, the available times of the resources and the machine can be updated as follows:

A_r = E_{i,j,k},  I_k = E_{i,j,k}

After scheduling all operations, the makespan of the system can be calculated as follows:

C_max = max_i { E_{i,s,k} }
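The start-time rule can be sketched as follows. The function signature and the dictionary-based resource bookkeeping are illustrative assumptions, not the paper's data structures.

```python
def start_time(prev_completion, machine_avail, resource_avail, required):
    """Earliest start of an operation: the job must have finished its
    previous stage (E_{i,j-1,k}; pass 0.0 for a job's first operation),
    the assigned machine must be idle (I_k), and every resource the
    machine needs must be available (AR_{i,k,r}).

    resource_avail: dict mapping resource name -> availability time A_r
    required:       names of resources required by the machine
    """
    ar = max((resource_avail[r] for r in required), default=0.0)
    return max(prev_completion, machine_avail, ar)

# As in the illustrative example: J2 reaches stage 2 at time 4 and the
# machine is idle, but resource R1 is busy until time 9, so the start
# of the operation is postponed to time 9 (cf. Fig. 1a).
print(start_time(4.0, 0.0, {"R1": 9.0, "R2": 2.0}, ["R1"]))
```

When no resources are required, the rule reduces to the usual hybrid flowshop condition max(E_{i,j-1,k}, I_k).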

Local search for the TVB coding method
In the present study, we consider the representation of solutions based on the two vectors and use a local search strategy operator, which is described as follows.
Step 1. For the machine assignment vector, we randomly select an operation and assign it to a different available machine.
Step 2. For the scheduling vector, we randomly apply the swap and insert operators. The swap operator randomly selects two job numbers in the scheduling vector and swaps them, aiming to generate a different solution. The insert operator randomly selects two jobs in the scheduling vector, removes one of them, and re-inserts it at the position of the other.
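The swap and insert operators on the scheduling vector can be sketched as follows, with illustrative names; both return a new permutation of the jobs.

```python
import random

def swap_op(seq):
    """Swap two randomly chosen positions of the scheduling vector."""
    s = seq[:]
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def insert_op(seq):
    """Remove a randomly chosen job and re-insert it at another position."""
    s = seq[:]
    i, j = random.sample(range(len(s)), 2)
    job = s.pop(i)
    s.insert(j, job)
    return s
```

Both operators preserve the set of jobs, so any result is still a feasible scheduling vector.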

Local search for the MGB coding method
Considering the representation of the MGB solution, we use a new local search operator, which is described as follows.
Step 1. Calculate the completion time of each machine in the last stage.
Step 2. Identify the machines with the longest completion time and store them in a vector denoted as BM.
Step 3. Randomly select a machine Bm from BM and store all jobs processed on Bm in a vector denoted as BJ.
Step 4. Randomly select a job Bj in BJ; then, randomly select a stage j, and find the machine Mj designated to handle Bj in stage j.
Step 5. Delete job Bj from Mj, randomly allocate Bj to another machine in stage j, and find the best position for Bj on the newly allocated machine.

The improved ICA
In the canonical ICA, the empires remain fixed until they are swapped with colonies; this is considered a disadvantage of ICA. Therefore, we attempt to resolve this problem by hybridizing SA and ICA: a new operator called imperial improvement is added to the canonical ICA, which runs SA from the imperialist positions to improve their status. SA is a stochastic optimization algorithm inspired by the annealing process in metallurgy. Starting from a high initial temperature, SA stochastically searches the solution space for the global optimum of the objective function while the temperature parameter decreases. In other words, local optima can be escaped probabilistically, and the global optimum can eventually be reached. Analogously to natural annealing, SA starts from a "heated" (randomly generated) solution. At each step, the solution can move to one of the nearby locations in its neighborhood with a specific acceptance probability. The acceptance probability depends on a global parameter T, namely the temperature, which decreases as the iterations of the algorithm are executed.
The acceptance probability is formulated as a Boltzmann-like equation proposed by Kirkpatrick et al. [53]. Generally, if the current solution is represented by x and the newly created solution by x_new, the acceptance probability of x_new is defined as P(x, x_new, T). If x_new is better than x, it is accepted, meaning that P(x, x_new, T) = 1; otherwise, x_new is accepted if P(x, x_new, T) exceeds a random number drawn uniformly from the interval [0, 1]. The acceptance probability is defined as

P(x, x_new, T) = exp(-Δ / T)

where Δ is the difference between the fitness values of x_new and x, namely Δ = f(x_new) - f(x). Several cooling schemes can be applied; here, exponential cooling is considered. The temperature at iteration k of the algorithm is defined according to Eq. (24):

T_k = α × T_{k-1}  (24)

where 0 < α < 1 is the reduction rate of the temperature. Evidently, the smaller α is, the slower the temperature drops. The neighborhood operator used in SA is described as follows:

1. With a probability of 1/2, go to step 2; otherwise, go to step 3.

2. Select a random element in v_1 (the machine assignment vector) and replace it with a random number in the interval [0, 1]. Then, go to step 6.
3. With a probability of 1/2, go to step 4; otherwise, go to step 5.
4. Select two distinct elements in v_2 (the scheduling vector) and swap their locations. Then, go to step 6.
5. Select two distinct elements in v_2 and reverse the sequence between them (including themselves). Then, go to step 6.
6. Output the resulting neighbor solution.
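The SA acceptance rule and the exponential cooling scheme of Eq. (24) can be sketched as follows; the function names are illustrative, and minimization is assumed.

```python
import math
import random

def accept(delta, T):
    """SA acceptance rule: always accept an improvement; otherwise accept
    with probability exp(-delta / T), where delta = f(x_new) - f(x)."""
    if delta <= 0:
        return True
    return random.random() < math.exp(-delta / T)

def cool(T, alpha=0.9):
    """Exponential cooling scheme, Eq. (24): T_k = alpha * T_{k-1}."""
    return alpha * T
```

At a high temperature almost any neighbor is accepted; as T shrinks under repeated calls to cool, the chance of accepting a worsening move exp(-delta/T) tends to zero, so the search gradually becomes greedy.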

Experimental instances
To solve the RCHFS problem and verify the validity of the DICA algorithm, we randomly generated 20 large-scale test cases for RCHFS problems based on actual production data. The factor levels of each test problem were randomly generated. The test set of instances was classified into four categories according to the number of jobs (50, 100, 150, and 200). To test the efficiency of the proposed DICA algorithm in environments of different complexity, each category was divided into five sub-problems with different numbers of stages (ranging from 2 to 10). The instances can be found at: https://www.researchgate.net/publication/334735417_RCHFS-suanli. For example, instance 1 is characterized by 50 jobs, two stages, and nine parallel machines in the first stage. In the experiments, we set the empire competition selection rate to 0.2. In addition, Zandieh et al. [54] found that the robustness of the algorithm was better when the parameters were set as in Table 4.

Experimental parameters
We conducted a preliminary experiment to set the system parameters appropriately. The key parameters considered in the experiment were as follows: (1) the population size (P_s); (2) the probability of crossover (P_c), which determines the likelihood of each individual's crossover operation; and (3) the probability of mutation (P_m), which defines the probability of the mutation operation. (The SA parameters were fixed: the beta factor of SA was 1.9, the T_0 factor of SA was 500, and the maximum number of SA iterations was 20.) The Taguchi method of design of experiments (DOE) [55] was utilized to test the effects of these parameters on the performance of the proposed algorithm; the levels of the three critical parameters, determined according to the preliminary experiment, are shown in Table 5. An orthogonal array L16(4^3) was used to tune the parameters. The proposed algorithm was run 30 times for each independent combination of parameters. We then calculated the average RPI value obtained by the algorithm as the response variable; the results are presented in Table 6. As shown in Fig. 5, the algorithm performs best when P_s is set to 50, P_c to 0.6, and P_m to 0.05. In addition, we set the empire competition selection rate to 0.1 [41,42].

Performance compared with the exact solver CPLEX
To validate the proposed optimization model, we employed the exact solver IBM ILOG CPLEX 12.7 to solve the proposed model for fourteen randomly generated small-scale instances. In the exact solver, the maximum number of threads was 3, and the CPU time limit was set to 3 h for each run. The performance measure is the relative percentage increase (RPI), calculated as follows:

RPI = (C_c - C_b) / C_b × 100

where C_b is the best solution found by all the compared algorithms, while C_c is the best solution generated by a given algorithm. Table 7 shows the comparison between the DICA algorithm and the CPLEX solver. The first column gives the instance number, and the second column shows the problem scale (where 4-4-2 means that the instance includes 4 jobs, 4 machines, and 2 stages). We ran the DICA algorithm 10 times for each instance and took the minimum value. The last two columns show the RPI values of the two methods.
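The RPI measure can be computed as follows; the function name is illustrative.

```python
def rpi(c_algo, c_best):
    """Relative percentage increase of a solution over the best value
    found by any compared algorithm (0 means the algorithm found the
    best solution)."""
    return 100.0 * (c_algo - c_best) / c_best

print(rpi(110.0, 100.0))  # a solution 10% above the best has RPI = 10.0
print(rpi(100.0, 100.0))  # the best solution itself has RPI = 0.0
```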

Efficiency of the solution representation
To investigate the efficiency of the solution representation defined above, we considered a permutation-based representation to compare with the proposed algorithm. The parameter sets of the two considered algorithms were the same as those defined above. The main difference between the two algorithms in the comparison was the embedding of the two-level coding mechanism in DICA. Table 8 presents the detailed results obtained by the two algorithms after 30 independent runs. The instance name is presented in the first column of Table 8 and the scale in the second column. The RPI values obtained by DICA and DICAnd are presented in the following two columns. The results presented in Table 8 can be summarized as follows: (1) all optimal values were obtained by DICA, and DICAnd provided none of them; (2) the last row of the table also demonstrates that DICA performed better than DICAnd; and (3) better performance was achieved using the proposed two-phase encoding method. Table 9 shows the comparison of the average running results of our algorithm and the canonical ICA; we can see that DICA has a significant advantage. The advantages of the proposed two-phase coding method are as follows: (1) the two-vector-based coding method is adopted in the early stage of evolution, allowing the algorithm to rapidly locate the optimal search space and improve search efficiency; (2) in the second part of the evolution process, the proposed algorithm performs a fine-grained search at the optimal search position using the Gantt chart-based representation, thus improving the quality of the solution; and (3) the proposed algorithm uses the two-phase representation to balance exploration and exploitation. To determine whether the results were significantly different, we also performed a multivariate analysis of variance, which is often used to compare the performance of different heuristics.

Comparisons with other efficient algorithms
Finally, to assess the efficiency of the proposed DICA algorithm, we compared it with the discrete artificial bee colony (DABC) algorithm [56], LABC [57], and the hybrid ICA (HICA) [58]. We integrate energy consumption and makespan into one objective function by the weighted-sum approach, which is used by many researchers [59,60]. The total energy consumption and makespan are normalized using Eqs. (26) and (27), respectively, and the final weighted-sum objective function is presented as Eq. (28):

e = E / E_max  (26)

f = C_max / F_max  (27)

Obj = w × f + (1 - w) × e  (28)

where F_max and E_max are the maximum makespan and maximum energy consumption, respectively, and w is the weight reflecting the significance of the makespan, defined in the range 0 ≤ w ≤ 1; we set w to 0.8. Each algorithm was run 30 times on the same computer on the 20 test cases. The comparative experimental results are presented in Tables 10, 11, and 12. The instance name is presented in the first column. The second column presents the best value for each instance. The following three columns provide the best values obtained by each algorithm considered in the comparison. To intuitively compare the quality of the solutions obtained by the three algorithms, we calculated the percentage deviations of the solutions; the corresponding results are presented in the last three columns.
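The weighted-sum scalarization of Eqs. (26)-(28) can be sketched as follows. The names and the sample normalization bounds are illustrative, while w = 0.8 follows the experimental setting.

```python
def weighted_objective(makespan, energy, f_max, e_max, w=0.8):
    """Combine normalized makespan and normalized total energy
    consumption into a single objective with weight w."""
    f = makespan / f_max   # Eq. (27): normalized makespan
    e = energy / e_max     # Eq. (26): normalized total energy consumption
    return w * f + (1.0 - w) * e

# E.g., makespan 11 against a bound of 20, energy 57.6 MJ against a
# bound of 100 MJ: Obj = 0.8 * 0.55 + 0.2 * 0.576
print(weighted_objective(11.0, 57.6, 20.0, 100.0))
```

Because both terms are normalized to [0, 1], the two objectives remain comparable regardless of their original units.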
The results outlined in Tables 10, 11, and 12 can be summarized as follows: (1) the DICA algorithm obtained 11, 11, and 13 optimal solutions for the given examples, considerably more than the other considered algorithms; and (2) as presented in the last row, the average values and average percentage deviations were lower for DICA than for the other algorithms. In conclusion, the experimental results indicate that the proposed DICA algorithm is a more efficient method for solving the RCHFS problem than the other recently proposed algorithms. In addition, we randomly selected four instances: Instance1, Instance5, Instance11, and Instance17. Figures 6a-d record the convergence curves for these instances, showing the effectiveness of the proposed DICA algorithm; it can be clearly seen that the DICA algorithm converges most efficiently on each of them. Figure 7 shows a resource Gantt chart for Instance 6, which comprises 50 jobs and 10 machines in two stages and requires three resources, R_1, R_2, and R_3.

Comparative analysis
The conducted experimental comparison has led to the following conclusions with regard to the proposed algorithm: (1) The two-phase coding mechanism can be used to improve the search capacity of the algorithm in different evolution stages, thereby improving the performance of the algorithm; (2) the decoding process considers resource constraints to provide the feasible and efficient solution; and (3) the local search process can efficiently utilize the two-phase encoding mechanism to improve the search capacity of the algorithm.

Conclusions
In this paper, we proposed the DICA method for RCHFS problems with energy consumption. The main contributions of the present study are as follows: (1) we considered resource constraints and energy consumption based on the HFSP and proposed the corresponding models; to the best of our knowledge, this is the first work to consider the HFSP with resource constraints and an energy consumption objective; (2) a novel DICA method was proposed to solve the RCHFS with energy consumption, together with encoding and decoding strategies adapted to this problem; and (3) DICA and SA were combined to improve the performance of the proposed algorithm. The proposed algorithm was tested on problems of different scales and structures, and its effectiveness was verified by comparison with CPLEX and several efficient existing algorithms; the experimental results confirmed the efficiency of the proposed algorithm. Future research will mainly focus on the following issues: (1) applying the proposed algorithm to other types of scheduling problems, such as robot-related problems in a hybrid flowshop, distributed green hybrid flowshop scheduling problems, and distributed optimization problems with more realistic constraints; (2) considering two or more objectives and developing a multi-objective optimization algorithm to solve more realistic green intelligent optimization problems; and (3) solving large-scale scheduling problems, such as steelmaking scheduling, chemical production, or parking lot scheduling.