The goal of the conducted experiments was to investigate the following issues:

- the robustness of the ACO approach for the MS-RCPSP on the given dataset,
- the robustness of the various pheromone update methods,
- a comparison of HAntCO with the classical ACO approach and other (meta-)heuristics.
Therefore, we have compared the results obtained for the different pheromone update methods, as well as the results for the hybrids and the classical ACO approach. Furthermore, results for simple heuristic scheduling are provided as a reference for the ACO-based mechanisms.
The obtained results (project schedules) are described by duration ([days]) and performance cost ([currency units]). These two schedule properties have been used to compare the investigated methods.
iMOPSE dataset
Because we evaluate not only the project schedule duration but also its cost, we cannot use the standard PSPLIB benchmark dataset (Kolisch et al. 1996), which contains no information about task performance cost. Moreover, PSPLIB instances do not reflect the MS-RCPSP. This lack of benchmark data encouraged us to prepare the iMOPSE dataset, containing 36 artificially created project instances (Footnote 1) based on real-world instances obtained from an international enterprise. We encourage other researchers to use the iMOPSE dataset as a benchmark when investigating their approaches to the MS-RCPSP as defined here.
The dataset instances have been created on the basis of an analysis performed in cooperation with an experienced project manager from Volvo IT. We were not allowed to access real project data because of its sensitive character for the enterprise. However, we carried out a statistical analysis of real projects and then prepared artificial dataset instances according to its results, reflecting the most common project characteristics, such as the number of tasks, the number of resources, the variety of skill types in the enterprise, and the structure of the critical chain (the number of tasks involved in precedence relations).
The iMOPSE dataset summary is presented in Table 1. The created project instances form two groups: one with 100 tasks and one with 200 tasks. Within each group, instances vary in the number of available resources and in the complexity of the precedence relations. The number of resources in both groups was chosen to preserve a constant average resource load and average task-relation ratio across instances. Hence, for the 200-task instances, the numbers of available resources and precedence relations are twice as large as for the 100-task instances. The skill variety was set to 9 or 15 different skill types per project instance, while every resource disposes of exactly six different skill types. Because of the differing numbers of resources and relations, the scheduling complexity varies between projects.
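The per-instance characteristics summarized above can be captured as a small record; the field names and the example instance names below are our own illustration, not the iMOPSE file format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InstanceSummary:
    name: str
    tasks: int                    # 100 or 200 in iMOPSE
    resources: int
    precedence_relations: int
    skill_types: int              # 9 or 15 in iMOPSE
    skills_per_resource: int = 6  # fixed at six for every resource

    def avg_resource_load(self) -> float:
        # Tasks per resource; kept constant across the 100- and
        # 200-task groups by construction of the dataset.
        return self.tasks / self.resources
```

Doubling the number of tasks together with the number of resources and relations, as done for the 200-task group, leaves `avg_resource_load()` unchanged.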
Table 1 iMOPSE dataset summary
This dataset extends the datasets presented in Skowroński et al. (2013a, b) and Myszkowski et al. (2013), which is why some instances are named with the suffix Dx. This suffix marks instances that were created and presented in those earlier papers. Extending the dataset created the need for a clearer naming system; the suffix was therefore added as a reference to the previously created files, while keeping the naming convention adopted after the extension.
Experiments’ set-up
The experiments investigated the influence of the ACO parameter configurations on project duration and performance cost under three different weightings of the evaluation-function components: duration optimization (DO: \(w_\tau =1\), see Eq. 10), balanced optimization (BO: \(w_\tau =0.5\)) and cost optimization (CO: \(w_\tau =0\)). Because of the stochastic nature of ACO-based methods, each experiment for a given parameter configuration was repeated ten times. For the K–S test and the t test, each experiment was repeated 50 times (see Tables 9, 10). On the other hand, the deterministic character of the heuristics allowed us to obtain their results in a single run for every parameter configuration.
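The three modes differ only in the duration weight \(w_\tau\). Assuming a weighted-sum form of the evaluation function as suggested by Eq. 10, with duration and cost normalized to comparable scales (the normalization bounds here are illustrative assumptions), the modes can be sketched as:

```python
def normalize(value, lo, hi):
    # Map a raw duration or cost into [0, 1] given known bounds,
    # so days and currency units can be mixed in one sum.
    return (value - lo) / (hi - lo) if hi > lo else 0.0

def evaluate(duration, cost, w_tau, d_bounds, c_bounds):
    """Weighted-sum schedule evaluation (illustrative form of Eq. 10)."""
    d = normalize(duration, *d_bounds)
    c = normalize(cost, *c_bounds)
    return w_tau * d + (1.0 - w_tau) * c

# The three optimization modes differ only in w_tau:
MODES = {"DO": 1.0, "BO": 0.5, "CO": 0.0}
```

With `w_tau = 1` the cost term vanishes entirely (DO), with `w_tau = 0` only cost matters (CO), and `w_tau = 0.5` weighs both aspects equally (BO).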
The next step of the conducted experiments was to compare the results obtained from a random initial solution with those obtained by boosting the initial solution using the described hybrids. The initial solution is first produced by one of the above-mentioned heuristics and then passed to ACO as input, where it is made more favorable in the local search by enhancing the pheromone values along the path representing that solution. Within the HAntCO hybrid, we decided to use SLS(D) (Skowroński et al. 2013b) for DO mode and RS(A) (Skowroński et al. 2013b) for CO mode. After some code refactoring, we were able to tune our heuristics and obtain better solutions than those found in Skowroński et al. (2013b); this is why the heuristic results in this paper are slightly better than the results in Skowroński et al. (2013b) for the given dataset instances.
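The pheromone-boosting step can be sketched as follows; the matrix layout, the `boost` factor and the function name are illustrative assumptions, not the exact HAntCO implementation:

```python
def seed_pheromone(pheromone, initial_solution, p_init=1.5, boost=2.0):
    """Raise pheromone on the (task, resource) assignments taken from a
    heuristic schedule, so early ants favour that solution's neighbourhood.

    pheromone        -- dict mapping (task, resource) -> pheromone level
    initial_solution -- dict mapping task -> resource chosen by the
                        heuristic (e.g. SLS(D) for DO, RS(A) for CO)
    p_init           -- default pheromone level for untouched pairs
    boost            -- multiplier applied along the seeded path (assumed)
    """
    for task, resource in initial_solution.items():
        current = pheromone.get((task, resource), p_init)
        pheromone[(task, resource)] = current * boost
    return pheromone
```

Only the seeded pairs are raised above `p_init`, which biases, but does not fix, the ants' subsequent choices.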
To present the averaged results in detail (see Table 4), a standard deviation measure (\(\sigma \)) has been introduced and applied to each average value, presented as a percentage of the average. We have also added the best found solution for each method (see Table 2), which has been compared with the results obtained by the most promising heuristics described in Skowroński et al. (2013b).
Table 2 Comparison of the best obtained results for DO, BO and CO modes in classical ACO and selected heuristics from Skowroński et al. (2013b)
For both the best and the averaged results, the pheromone update methods have been compared, and the one that provided the best results (shortest duration in DO, smallest cost in CO and smallest evaluation-function value in BO) is reported in Tables 2 and 4. The notation used in the result tables is as follows: E (update ELITE pheromone method), A (update ALL), D (update DIFF). If more than one pheromone update method turned out to be the best and gave the same results, both are listed, separated by “/” (e.g., E/D: the update DIFF and update ELITE methods gave the same best results). In Table 2, the sign \(*\) indicates that all three methods provided the same result, regarded as the best.
All the results presented in the tables have been obtained for the following ACO parameter configuration: \(p=12\), \(\mu =0.1\), \(p_{\mathrm{init}}=1.5\), \(\alpha =1\), \(\delta =0.05\), \(p_{\mathrm{min}}=0.05\), \(h_{\mathrm{init}}=1\), \(\beta =0\), \(\gamma =150\), \(\sigma =30\), \(\psi =0.1\), \(\kappa _{\mathrm{init}}=20\). This configuration was regarded as the best, determined in previous parameter-tuning experiments. The same configuration was used for every pheromone update method (ALL, ELITE, DIFF) and every optimization mode (DO, BO, CO), for both the ACO and HAntCO approaches.
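For reproduction purposes, the configuration above can be kept as a plain mapping. The comments give the conventional ACO reading of each symbol and are our assumptions where this section does not define them; the values themselves are taken from the text:

```python
ACO_CONFIG = {
    "p": 12,             # number of ants (assumed reading)
    "mu": 0.1,           # evaporation rate (assumed reading)
    "p_init": 1.5,       # initial pheromone level
    "alpha": 1,          # pheromone influence exponent (conventional)
    "delta": 0.05,
    "p_min": 0.05,       # pheromone lower bound (assumed reading)
    "h_init": 1,         # initial heuristic value
    "beta": 0,           # heuristic influence exponent (conventional)
    "gamma": 150,
    "sigma": 30,
    "psi": 0.1,
    "kappa_init": 20,
}
```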
Experiments’ performance
The processing time varied with the update method used. For the ALL method, which can be regarded as the simplest, the processing time was relatively small (from 7 to 90 s, depending on the processed dataset instance). However, for the ELITE and DIFF methods, which are more complex because the ants must be sorted and the best/worst ones chosen, the processing time varied from 30 to 270 s per single execution on one CPU for a given parameter configuration (Footnote 2).
The best found results
The best results obtained by ACO for the CO and DO modes have been compared with the results obtained using the heuristics proposed in Skowroński et al. (2013b); this comparison is presented in Table 2. For each dataset instance and optimization mode, the best result has been chosen from among the pheromone update methods. The column named \(M\) for each optimization mode indicates which method provided the best result.
These best results have been compared with the heuristic results. Where possible, we omit the heuristic name to reduce the space taken by the table. For the heuristic results in CO, the SA heuristic name has been omitted without losing any important information, as the parameter configuration for that method is given in the table. For a more detailed description of those methods, please refer to Skowroński et al. (2013b).
When comparing the optimization modes between ACO and the heuristics, the better values are written in bold. If the key values (duration for DO or cost for CO) were equal for the ACO and heuristic approaches, the smaller value of the second aspect was used to choose the better solution. If both schedule properties turned out to be the same, both solutions are written in bold.
To determine the best result for BO mode, neither duration nor cost alone has been examined; instead, the evaluation-function value has been used. Furthermore, we could not strictly compare the BO results of ACO with the corresponding heuristic results, as no evaluation function was used to evaluate the heuristics.
A similar analysis has been performed for the best results of the investigated hybrid. The best HAntCO results are presented in Table 3. The most significant difference from the table of best results for classical ACO is that no BO mode is included, because the hybrid is activated only for DO or CO mode, depending on the heuristic selected for initialization.
Table 3 Best results obtained for HAntCO with various pheromone update methods in DO and CO optimization modes
Taking into account the results gathered in Table 3, we can conclude that the ELITE strategy for HAntCO generally provides better results than DIFF in DO mode: it provided better results in 26 cases (72 %). In CO, however, the DIFF strategy turned out to be more suitable than ELITE, providing better results in 9 cases (25 %), while ELITE was better in only one case (less than 3 %). In the remaining cases, both strategies gave the same best results. Interestingly, for DO no equal best results for both strategies were found.
Comparing the best HAntCO results (see Table 3) with the single-heuristic results (see Table 2), we can see that the hybrid of ACO with heuristics is more effective for DO than for CO mode. In most instances (89 %), HAntCO found a better solution than the simple heuristics or ACO.
Table 4 Averaged results obtained for classical ACO in various optimization modes
Averaged results
Averaged results obtained for the various pheromone update methods are presented in Table 4, in the same way as in Table 2. Table 4 also indicates the method that provided the best results (A, D, E, D/E). In contrast to Table 2, no comparison with averaged heuristic results is included, because the heuristics are deterministic and their result is obtained in a single run. On the other hand, a standard deviation measure (\(\sigma \)) has been introduced in Table 4 to indicate the variability of the obtained results; it is presented as a percentage of the average.
For the DO and CO modes, the smallest averaged project duration or project cost, respectively, determines the best pheromone update method. If the values of the given aspect are equal, the smaller value of the second aspect is used. If it is still impossible to determine which pheromone update method provides better solutions, the standard deviation of the more important aspect (duration for DO, cost for CO) is considered, and the method with the smaller standard deviation is regarded as better.
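This three-stage tie-breaking can be written as a sort key; the field names are illustrative:

```python
def ranking_key(result, mode):
    """Ordering used to pick the best pheromone update method.

    result -- dict with avg_duration, avg_cost, std_duration, std_cost
    mode   -- "DO" or "CO"
    Primary aspect first, then the secondary aspect, then the standard
    deviation of the primary aspect, mirroring the tie-breaking above.
    """
    if mode == "DO":
        return (result["avg_duration"], result["avg_cost"], result["std_duration"])
    return (result["avg_cost"], result["avg_duration"], result["std_cost"])

def best_method(results, mode):
    # results: dict of method name -> result dict;
    # min() applies the whole tie-break chain via tuple comparison.
    return min(results, key=lambda m: ranking_key(results[m], mode))
```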
We have also provided averaged results for the HAntCO approach, presented in Table 5. Analogously to the best HAntCO results, the averaged ones cover only the DO and CO modes. The averaged values are accompanied by standard deviations reflecting the variability of the obtained results. We also counted how many times one strategy beat the other in the averaged results. For DO, the ELITE strategy was better in 25 cases (69 %), while DIFF was better in the remaining ones. For CO, the DIFF strategy provided better results in 14 cases (39 %), while ELITE was better in only one case; for the remaining instances the averaged results were the same. This leads to the conclusion that in CO mode HAntCO searches the space in a very directed way and is unable to explore other parts of the solution space; in many cases this directed character of the search is independent of the applied pheromone update strategy.
Table 5 Averaged results obtained for HAntCO with various pheromone update methods in DO and CO optimization modes
To investigate the stability of HAntCO in comparison with classical ACO, we checked how many times a zero standard deviation was obtained in the conducted experiments. Those results are presented in Table 6. They show that the proposed hybrid approach is more directed and thus found the same solution in many more cases than classical ACO, whose stochastic nature allows it to explore the search space more widely.
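The stability counts of Table 6 reduce to counting zero-valued standard deviations over all instances; a minimal sketch, with the tolerance an assumption for floating-point data:

```python
def count_zero_sigma(sigma_values, tol=1e-12):
    """Count instances whose standard deviation is (numerically) zero,
    i.e. every repetition of the experiment returned the same result."""
    return sum(1 for s in sigma_values if abs(s) <= tol)
```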
Table 6 Number of 0-equal standard deviation measures for given pheromone update strategies and optimization modes
The most interesting results in Table 6 concern CO mode. For that mode, HAntCO found the same cost solution 21 times (58 %) for the ELITE strategy and 16 times (44 %) for DIFF, while the same duration solution was found 24 (67 %) and 25 (69 %) times, respectively.
Computational complexity
Our research has been extended by investigating the computational complexity of the compared methods. The complexity has been estimated using the number of potential assignments of resources to a given task as the dominant operation. As this value is constant throughout the optimization process and depends only on the initial skill constraints, we can estimate the complexity as the product of the average number of iterations and the number of possible assignments. The results of those computations are presented in Table 7.
Table 7 Average number of dominant operations (divided by \(10^3\)) during optimization process using investigated methods for given parameters’ configurations
As we set a constant number of iterations in most methods, such as TS, EA S and EA C, the complexity for those methods was easy to compute. For ACO and HAntCO, we took the average number of iterations over all optimization modes (DO, BO, CO) and pheromone update methods (ALL, ELITE, DIFF) as the value to be multiplied by the number of possible assignments.
Based on the results gathered in Table 7, we can see that ACO and HAntCO are the most computationally complex methods. However, the complexity of HAntCO is lower than that of classical ACO, because the number of iterations for the hybrids is generally smaller, as the search starts from a more directed place in the solution space.
The complexity of the heuristics has been computed as the number of possible assignments multiplied by 1, as the heuristic scheduling process consists of a single iteration. Moreover, heuristics are deterministic, so they always return the same result, obtained in a single run. Hence, a heuristic can be used as a quick tool to get a first impression of the optimization possibilities for a given dataset instance.
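Under the estimation scheme above, the complexity reduces to a single product; the function name and the example figures are illustrative, not values from Table 7:

```python
def dominant_operations(avg_iterations, possible_assignments):
    """Estimated number of dominant operations: each iteration considers
    every feasible (task, resource) assignment once.

    For the deterministic heuristics, avg_iterations == 1.
    """
    return avg_iterations * possible_assignments

# Illustrative comparison (made-up numbers): a heuristic does one pass,
# while an ACO run multiplies the same assignment count by its iterations.
heuristic_ops = dominant_operations(1, 4_000)
aco_ops = dominant_operations(120, 4_000)
```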
Results and discussion
For both the best and the averaged results of classical ACO, the ELITE pheromone update method turned out to be the best for DO mode, while the DIFF method was the most suitable for BO mode. However, no such straightforward conclusion can be drawn for CO mode: the DIFF method was the most suitable choice for the best obtained results, while DIFF and ELITE provided equally good averaged results.
We have also compared the pheromone update methods within the hybrids. For that approach, the ELITE method turned out to be the most suitable for DO, while any (\(*\)) of the proposed pheromone update methods was equally good for CO mode for most project instances. No difference between the pheromone update methods was observed in 15/36 (42 %) of the CO cases. This suggests that the pheromone update method is not as crucial here as it is for classical ACO, because the initial solution is preferred: the hybrid is more exploitation- than exploration-oriented.
We have also counted how many times the heuristics provided better results than the best ones obtained by the ACO approaches (see Table 2). For DO, the SLS heuristic was better 9 times (25 %), while for CO the SA or RS heuristics were better 18 times (50 %). This shows that the classical ACO approach proposed in this paper cannot be regarded as strictly better than the heuristic methods. However, combining it with a heuristic in the hybrid (HAntCO) approach usually gave much better results than any other investigated method, especially for CO mode.
An interesting fact is that, based on the provided results, DO mode is generally more stable than the others. This was deduced by counting the \(\sigma \) values greater than 10 % in Table 4. For DO there were no such values; for BO there were 3 (2 for the duration aspect and 1 for the cost aspect); and for CO there were 7, all for the duration aspect.
An interesting conclusion, holding for both the best and the averaged results, is that the DIFF strategy provided better solutions in DO mode mostly for dataset instances containing 200 tasks. The best results obtained by the DIFF strategy were better than those of ELITE in 9 of the 200-task instances (50 %), while ELITE provided only one better solution than DIFF (5 %). The averaged results obtained by DIFF were better in 12 cases (67 %), while ELITE again provided only one better solution than DIFF.
Comparing the best results obtained by ACO and HAntCO, it can be noticed that HAntCO outclasses classical ACO, whichever pheromone update method is used. For DO, the classical ACO approach was better than HAntCO in only 5 of 36 cases, while for CO, HAntCO was better than ACO for every project instance. In the averaged results, there are only 3 cases in which ACO is better than HAntCO, again only for DO; for CO, ACO was never better than HAntCO. This confirms the legitimacy of using hybrids, which are a robust way of boosting the optimization process.
To better assess the robustness of the classical ACO and HAntCO approaches, we compared the best results obtained for ACO with the best results obtained using other methods, such as EA (Skowroński et al. 2013a) and TS (Myszkowski et al. 2013). However, we had to separate the best results obtained for the DO and CO modes from those for BO mode, because no heuristic scheduling method has been proposed for BO. The comparison for the DO, BO and CO modes is presented in Table 8.
Table 8 Comparison of best obtained results for investigated methods in DO, BO and CO modes
This comparison was made only for project instances D1–D6, because only those were investigated in Skowroński et al. (2013a, b) and Myszkowski et al. (2013). The compared methods are Tabu Search (TS), specialized Evolutionary Algorithms (EA S), classic EA (EA C), classical ACO, HAntCO and heuristics (H).
Table 9 Comparison of averaged obtained results for investigated methods in DO and CO modes
The results presented in Table 8 show that both HAntCO and TS outclassed the other methods in DO mode, each obtaining the best results for half of the investigated project instances (D1, D2, D5, D6 for HAntCO and D3, D4 for TS). For CO mode, classical ACO was the best approach for the D2 and D3 instances, while HAntCO obtained the best results for the same instances plus D1. However, the most successful approach for these instances is the heuristic one, which obtained the best results in 5/6 cases.
The averaged results of the investigated methods are presented in Table 9. They differ slightly from the results in Table 8, as the methods are non-deterministic. However, the conclusions are very similar: HAntCO outperforms the other methods in almost every case, or the results are comparable. We performed additional statistical analysis to verify the quality of the presented method. The Kolmogorov–Smirnov (K–S) test was applied to investigate the normality of the distribution of the obtained results; it confirmed that the results of the used methods are normally distributed, so the t test can be used. Moreover, a sample size of around 50 supports the normality assumptions required for unpaired \(t\) tests (Flury 1997). We used a two-tailed t test with a 95 % confidence level (see the results in Table 10) for the best and second-best performing methods on the D1–D6 instances in the DO and CO modes.
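The t statistic behind such a comparison can be sketched in pure Python; this is the standard pooled-variance (equal-variance) form of the unpaired test, offered as an illustration rather than the exact code used in the experiments:

```python
from statistics import mean, variance

def unpaired_t(sample_a, sample_b):
    """Student's t statistic for two independent samples (pooled variance).

    With about 50 runs per method (df close to 98), the two-tailed 5 %
    critical value is roughly 1.984; |t| above it marks a statistically
    significant difference between the two methods' averaged results.
    """
    na, nb = len(sample_a), len(sample_b)
    pooled = ((na - 1) * variance(sample_a)
              + (nb - 1) * variance(sample_b)) / (na + nb - 2)
    return (mean(sample_a) - mean(sample_b)) / (pooled * (1 / na + 1 / nb)) ** 0.5
```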
We found that the HAntCO results are the best in most cases. Very interesting results were noticed for EA S, especially for the D5 instance (in DO and CO modes), where EA S gives the best (average) solutions.
Only in one case (the D3 instance, DO mode) does ACO give a better average solution, and that result is statistically significant. The statistical significance of the HAntCO results in CO mode comes mostly from the fact that HAntCO is directed by a heuristic that finds the best cost-oriented solution. Hence, the statistical significance of this method should mostly be investigated in DO mode. In that mode, the results obtained by HAntCO are statistically significant in 3 cases (D1, D2, D6), while the DO-oriented results obtained by ACO are statistically significant in only one case (D3). This additionally confirms the legitimacy of using the proposed hybrid rather than the classical ACO approach.

We have also investigated the results of several methods in BO mode. In this case, the classical ACO approach outclassed the rest of the examined methods and was the best choice in 5/6 cases. However, this enlarged the project schedule duration of the analyzed instances, making them mostly the longest among all the investigated methods. The EA with specialized genetic operators gave the smallest project cost in BO mode, being the best in 5/6 cases. An interesting fact is that the results obtained for ACO are completely different from those of the other methods, such as TS or EA. The conclusion could be that ACO searches the solution space in a totally different way from the above-mentioned methods; hence, combining those approaches into one could be effective and potentially give promising results.
Table 10 Results of the unpaired t test between the best and the second-best performing methods (for each instance D1–D6) based on Table 8 (heuristic H (Skowroński et al. 2013b) results not included as a part of HAntCO)