Performance of Selection Hyperheuristics on the Extended HyFlex Domains
Abstract
Selection hyperheuristics perform search over the space of heuristics by mixing and controlling a predefined set of low level heuristics for solving computationally hard combinatorial optimisation problems. Being reusable methods, they are expected to be applicable to multiple problem domains, hence to perform well in cross-domain search. HyFlex is a general purpose heuristic search API which separates the high level search control from the domain details, enabling rapid development and performance comparison of heuristic search methods, particularly hyperheuristics. In this study, the performance of six previously proposed selection hyperheuristics is evaluated on three recently introduced extended HyFlex problem domains, namely 0–1 Knapsack, Quadratic Assignment and MaxCut. The empirical results indicate the strong generalising capability of two adaptive selection hyperheuristics which perform well across the ‘unseen’ problems in addition to the six standard HyFlex problem domains.
Keywords
Metaheuristic · Parameter control · Adaptation · Move acceptance · Optimisation
1 Introduction
Many combinatorial optimisation problems are computationally difficult to solve and require methods that use sufficient knowledge of the problem domain. Such methods, however, cannot be reused for solving problems from other domains. On the other hand, researchers have been working on designing more general solution methods that aim to work well across different problem domains. Hyperheuristics have emerged as such methodologies and can be broadly categorised into two categories: generation hyperheuristics, which generate heuristics from existing components, and selection hyperheuristics, which select the most appropriate heuristic from a set of low level heuristics [3]. This study focuses on selection hyperheuristics.
A selection hyperheuristic framework operates on a single solution and iteratively selects a heuristic from a set of low level heuristics and applies it to the candidate solution. Then a move acceptance method decides whether to accept or reject the newly generated solution. This process is repeated until a termination criterion is satisfied. In [5], a range of simple selection methods are introduced, including Simple Random (SR), which randomly selects a heuristic at each step, and Random Descent, which works similarly to SR but applies the selected low level heuristic repeatedly until no additional improvement in the solution is observed. Most of the simple non-stochastic basic move acceptance methods are tested in [5], including All Moves (AM), which accepts all moves; Only Improving (OI), which accepts only improving moves; and Improving or Equal (IE), which accepts all non-worsening moves. Late acceptance [4] accepts a candidate solution if its quality is better than that of the solution obtained a specific number of steps earlier. More on selection hyperheuristics can be found in [3].
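The select-apply-accept cycle described above can be sketched as follows. This is a minimal illustration combining Simple Random selection with Improving or Equal acceptance on a toy minimisation problem; the function names and the ±1 heuristics are illustrative, not part of HyFlex:

```python
import random

def selection_hyperheuristic(initial, low_level_heuristics, accept, evaluate, iterations=500):
    """Generic single-point selection hyperheuristic loop: select a low level
    heuristic, apply it to the incumbent solution, then accept or reject."""
    current = initial
    current_cost = evaluate(current)
    for _ in range(iterations):
        heuristic = random.choice(low_level_heuristics)  # Simple Random (SR) selection
        candidate = heuristic(current)
        candidate_cost = evaluate(candidate)
        if accept(candidate_cost, current_cost):  # move acceptance decides
            current, current_cost = candidate, candidate_cost
    return current, current_cost

def improving_or_equal(candidate_cost, current_cost):
    """Improving or Equal (IE): accept all non-worsening moves (minimisation)."""
    return candidate_cost <= current_cost
```

For example, with two toy heuristics perturbing an integer by ±1 and the objective |x|, the loop steadily drives the solution towards 0.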
HyFlex [14] (Hyperheuristics Flexible framework) is a cross-domain heuristic search API, and HyFlex v1.0 is a software framework written in Java providing an easy-to-use interface for the development of selection hyperheuristic search algorithms, along with the implementation of several problem domains, each of which encapsulates problem-specific components such as the solution representation and low level heuristics. We will refer to HyFlex v1.0 as HyFlex from this point onward. HyFlex was initially developed to support the first Cross-domain Heuristic Search Challenge (CHeSC) in 2011^{1}. Initially, six minimisation problem domains were implemented within HyFlex [14]. The HyFlex problem domains have since been extended with three more: the 0–1 Knapsack Problem (KP), the Quadratic Assignment Problem (QAP) and MaxCut (MAC) [1]. In this study, we consider only the ‘unseen’ extended HyFlex problem domains to investigate the performance and generality of some previously proposed well performing selection hyperheuristics.
2 Selection Hyperheuristics for the Extended HyFlex Problem Domains
In this section, we provide a description of the selection hyperheuristic methods which are investigated in this study. These hyperheuristics use different combinations of heuristic selection and move acceptance methods.
Sequence-based selection hyperheuristic (SSHH) [10] is a relatively new method which aims to discover the best performing sequences of heuristics for improving upon an initially generated solution. A hidden Markov model (HMM) is employed to learn the optimum sequence lengths of heuristics. The hidden states in the HMM are replaced by the low level heuristics, and the observations are replaced by the sequence-based acceptance strategies (AS). A transition probability matrix is utilised to determine the movement between the hidden states, and an emission probability matrix is employed to determine whether a particular sequence of heuristics will be applied to the candidate solution or will be coupled with another low level heuristic. The move acceptance method used in [10] accepts all improving moves, and non-improving moves with an adaptive threshold. SSHH showed excellent performance across the CHeSC 2011 problem domains, achieving better overall performance than AdapHH, the winner of the challenge.
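The transition-matrix mechanism can be illustrated with a simplified sketch (hypothetical helper names; SSHH's actual model also learns emission probabilities and sequence lengths): the next low level heuristic is sampled from the row of the current one, and a transition is reinforced when its sequence proves successful.

```python
import random

def next_heuristic(current, transition, rng=random):
    """Sample the index of the next low level heuristic from the row of a
    transition probability matrix (simplified view of SSHH's hidden states)."""
    r, cumulative = rng.random(), 0.0
    for j, p in enumerate(transition[current]):
        cumulative += p
        if r < cumulative:
            return j
    return len(transition[current]) - 1  # guard against rounding error

def reinforce(transition, i, j, reward=1.0):
    """Reward the transition i -> j after an improving sequence, then
    renormalise the row so it remains a probability distribution."""
    transition[i][j] += reward
    total = sum(transition[i])
    transition[i] = [p / total for p in transition[i]]
```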
Dominance-based and random descent hyperheuristic (DRD) [16] is an iterated multi-stage hyperheuristic that hybridises dominance-based and random descent heuristic selection strategies, and uses a naïve move acceptance method which accepts improving moves, and non-improving moves with a given probability. The dominance-based stage uses a greedy-like method aiming to identify a set of ‘active’ low level heuristics, considering the trade-off between the delta change in the fitness and the number of iterations required to achieve that change. The random descent stage considers only the subset of low level heuristics recommended by the dominance-based stage. If the search stagnates, the dominance-based stage may kick in again to detect a new subset of active heuristics. The method performed relatively well in the MAX-SAT and 1D bin-packing problem domains, as reported in [16].
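The dominance-based idea can be sketched as a Pareto filter, assuming each heuristic is summarised by the improvement it achieved and the iterations it consumed (the exact bookkeeping in DRD differs; names are illustrative):

```python
def active_heuristics(stats):
    """Return the heuristics not dominated on (improvement, effort).
    h is dominated if some other heuristic achieves at least as much
    improvement in no more iterations, being strictly better on one axis.
    stats maps heuristic name -> (total improvement, iterations used)."""
    active = []
    for h, (imp_h, it_h) in stats.items():
        dominated = any(
            imp_g >= imp_h and it_g <= it_h and (imp_g > imp_h or it_g < it_h)
            for g, (imp_g, it_g) in stats.items() if g != h
        )
        if not dominated:
            active.append(h)
    return active
```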
Robinhood (round-robin neighbourhood) hyperheuristic [11] is an iterated multi-stage hyperheuristic. Robinhood contains three selection hyperheuristics which share the same heuristic selection method but differ in their move acceptance. The Robinhood heuristic selection allocates equal time to each low level heuristic and applies them one at a time to the incumbent solution in a cyclic manner during that time. The three move acceptance criteria employed by Robinhood are only improving, improving or equal, and an adaptive move acceptance method. The latter accepts all improving moves, while non-improving moves are accepted with a probability that changes adaptively throughout the search process. This selection hyperheuristic outperformed eight ‘standard’ hyperheuristics across a set of instances from the HyFlex problem domains. A detailed description of the Robinhood hyperheuristic can be found in [11].
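The round-robin time allocation can be sketched as follows (a minimal single-round illustration with hypothetical names; the acceptance criterion is passed in, mirroring the three variants above):

```python
import time

def robinhood_round(solution, heuristics, accept, evaluate, time_per_heuristic=0.005):
    """One Robinhood-style round: give every low level heuristic an equal
    time slot and apply it repeatedly to the incumbent during that slot."""
    cost = evaluate(solution)
    for heuristic in heuristics:  # cyclic order, equal budget each
        deadline = time.monotonic() + time_per_heuristic
        while time.monotonic() < deadline:
            candidate = heuristic(solution)
            candidate_cost = evaluate(candidate)
            if accept(candidate_cost, cost):
                solution, cost = candidate, candidate_cost
    return solution, cost
```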
Modified choice function (MCF) [6] uses an improved version of the traditional choice function (CF) heuristic selection method from [5] and has a better average performance than CF across the CHeSC 2011 competition problems. The basic idea of a choice function hyperheuristic is to choose the best low level heuristic at each iteration; hence, move acceptance is not needed and all moves are accepted. In the traditional CF method, each low level heuristic is assigned a score based on three factors: the recent effectiveness of the given heuristic (\(f_1\)), the recent effectiveness of consecutive pairs of heuristics (\(f_2\)), and the amount of time since the given heuristic was last used (\(f_3\)), where the factors are weighted by \(\alpha \), \(\beta \) and \(\delta \), respectively [5]. It was also stated in the CF study that the hyperheuristic was insensitive to these parameter settings when solving Sales Summit Scheduling problems, so they were fixed throughout the search. MCF extends CF by controlling the weights of the factors to improve its cross-domain performance [6]. In MCF, the weights for \(f_1\) and \(f_2\) are equal, as defined by the parameter \(\phi _t\), and the weight for \(f_3\) is set to \(1 - \phi _t\). \(\phi _t\) is controlled using a simple mechanism: if an improving move is made, then \(\phi _t = 0.99\); if a non-improving move is made, then \(\phi _t = \max \{\phi _{t-1} - 0.01, 0.01\}\).
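The weight control above translates directly into code. The score computation below is a simplified aggregation assuming the per-heuristic factor values \(f_1\), \(f_2\), \(f_3\) are already available:

```python
def update_phi(phi, improved):
    """MCF control of the intensification weight: reward an improving move
    with phi = 0.99, otherwise decay by 0.01 down to a floor of 0.01."""
    return 0.99 if improved else max(phi - 0.01, 0.01)

def mcf_score(f1, f2, f3, phi):
    """Score one low level heuristic: phi weights the recent-effectiveness
    factors f1 and f2 equally, (1 - phi) weights the elapsed-time factor f3."""
    return phi * f1 + phi * f2 + (1 - phi) * f3
```

At each step, the heuristic with the highest score is applied, after which `update_phi` adjusts the balance between intensification and diversification.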
Fuzzy late acceptance-based hyperheuristic (FLAHH) [8] was implemented for solving MAX-SAT problems and showed promising results. FLAHH utilises a fitness proportionate selection mechanism (RUA1-F1FPS) [7] as its heuristic selection method and uses late acceptance, whose list length is adaptively controlled using a fuzzy control system, as its move acceptance method. In RUA1-F1FPS, the low level heuristics are assigned scores which are updated based on acceptance of the candidate solution, as defined by the RUA1 scheme. A heuristic is chosen using a fitness proportionate (roulette wheel) selection mechanism utilising Formula 1 (F1) ranking scores (F1FPS). Each low level heuristic is ranked on its current score using F1 ranking and is assigned a selection probability proportional to its F1 rank. The fuzzy control system, as defined in [8], adapts the list length of the late acceptance move acceptance method at the start of each phase to promote intensification or diversification within the subsequent phase of the search, based on the amount of improvement over the current phase. The F1FPS scoring mechanism used in this study is the RUA1 method as used in [7, 8]. The parameters of the fuzzy system are the same as those used in [8], with the universe of discourse of the list length fuzzy sets \(U = [10000,30000]\), the initial list length of late acceptance \(L_0 = 10000\), and the number of phases equal to 50.
3 Empirical Results

The following performance indicators are used in the comparison:

- rank: the rank of a hyperheuristic with respect to \(\mu _{norm}\).
- \(\mu _{rank}\): each algorithm is ranked based on the median objective values it produces over 31 runs for each instance. The top algorithm is assigned rank 1, while the worst algorithm's rank equals the number of algorithms considered in the ranking. In case of a tie, the ranks are shared by taking the average. The ranks are then accumulated and averaged over all instances, producing \(\mu _{rank}\).
- \(\mu _{norm}\): the objective function values are normalised to the range [0, 1] using
$$\begin{aligned} norm(o,i) = \frac{o(i) - o_{best}(i)}{o_{worst}(i) - o_{best}(i)} \end{aligned}$$ (2)
where \(o(i)\) is the objective function value on instance i, \(o_{best}(i)\) is the best objective function value obtained by any method on instance i, and \(o_{worst}(i)\) is the worst. \(\mu _{norm}\) is the average normalised objective function value.
- best: the number of instances for which the hyperheuristic achieves the best median objective function value.
- worst: the number of instances for which the hyperheuristic delivers the worst median objective function value.
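The per-instance normalisation step can be sketched as a small helper (hypothetical name; it maps each method's objective value on one instance into [0, 1]):

```python
def normalised_objectives(results):
    """Normalise objective values on one instance: 0 for the best value any
    method achieved, 1 for the worst (minimisation; results maps method -> o(i))."""
    best, worst = min(results.values()), max(results.values())
    if worst == best:
        return {m: 0.0 for m in results}  # all methods tied on this instance
    return {m: (o - best) / (worst - best) for m, o in results.items()}
```

Averaging these values over all instances yields \(\mu _{norm}\) for each method.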
Table 1. The performance comparison of SSHH, DRD, Robinhood, MCF, FLAHH and SRGD over 31 runs for each instance. The best median values per instance are highlighted in bold. Based on the Mann-Whitney-Wilcoxon test for each pair of algorithms, SSHH versus X: SSHH > (<) X indicates that SSHH (X) is better than X (SSHH) and this performance difference is statistically significant with a confidence level of 95%, while SSHH \(\ge \) (\(\le \)) X indicates that there is no statistically significant difference between SSHH and X, but SSHH (X) is better than X (SSHH) on average.
Table 1 summarises the results. On KP, SSHH delivers the best median values for 8 instances, including 4 ties. Robinhood achieves the best median results on 5 instances, including a tie. SRGD, FLAHH and DRD show comparable performance. On the QAP problem domain, SRGD performs best on 6 instances and FLAHH also shows promising results, which indicates that simple selection methods are potentially well suited to QAP. SSHH ranks third on QAP based on the average rank. On MAC, SSHH clearly outperforms all other methods, followed by SRGD and then Robinhood; the remaining hyperheuristics perform relatively poorly, with MCF being the worst of the six. Overall, SSHH turns out to be the best, with \(\mu _{norm} = 0.16\) and \(\mu _{rank} = 2.28\). SRGD also shows promising performance, scoring second best. MCF consistently delivers weak performance on all instances of the three problem domains. Table 1 also provides the pairwise average performance comparison of SSHH versus DRD, Robinhood, MCF, FLAHH and SRGD based on the Mann-Whitney-Wilcoxon statistical test. SSHH performs significantly better than every other hyperheuristic on all MAC instances, except Robinhood, which performs better than SSHH on four out of ten instances. On the majority of the KP instances, SSHH is the best performing hyperheuristic. On QAP, SSHH performs poorly compared to FLAHH and SRGD, both of which produce significantly better results on almost all instances, while SSHH performs statistically significantly better than the remaining hyperheuristics.
Table 2. The performance comparison of SSHH, AdapHH, FSILS, NRFSILS, EPH, SRAM and SRIE.
4 Conclusion
A hyperheuristic is a search methodology designed to reduce the human effort in developing solution methods for multiple computationally difficult optimisation problems by automating the mixing and generation of heuristics. The goal of this study was to assess the level of generality of a set of selection hyperheuristics across three recently introduced HyFlex problem domains. The empirical results show that both AdapHH and SSHH perform better than the previously proposed algorithms across the problem domains included in the HyFlex extension set. Both adaptive algorithms embed different online learning mechanisms and indeed generalise well on the ‘unseen’ problems. It has also been observed that the choice of heuristic selection and move acceptance combination can lead to major performance differences across a diverse set of problem domains. This observation is aligned with previous findings in [2, 15].
References
1. Adriaensen, S., Ochoa, G., Nowé, A.: A benchmark set extension and comparative study for the HyFlex framework. In: Proceedings of the IEEE Congress on Evolutionary Computation, pp. 784–791 (2015)
2. Bilgin, B., Özcan, E., Korkmaz, E.E.: An experimental study on hyperheuristics and exam timetabling. In: Burke, E.K., Rudová, H. (eds.) PATAT 2006. LNCS, vol. 3867, pp. 394–412. Springer, Heidelberg (2007). doi:10.1007/978-3-540-77345-0_25
3. Burke, E.K., Gendreau, M., Hyde, M., Kendall, G., Ochoa, G., Özcan, E., Qu, R.: Hyperheuristics: a survey of the state of the art. J. Oper. Res. Soc. 64(12), 1695–1724 (2013)
4. Burke, E.K., Bykov, Y.: A late acceptance strategy in hill-climbing for exam timetabling problems. In: Proceedings of the 7th International Conference on the Practice and Theory of Automated Timetabling (PATAT 2008) (2008)
5. Cowling, P.I., Kendall, G., Soubeiga, E.: A hyperheuristic approach to scheduling a sales summit. In: Burke, E., Erben, W. (eds.) PATAT 2000. LNCS, vol. 2079, p. 176. Springer, Heidelberg (2001)
6. Drake, J.H., Özcan, E., Burke, E.K.: An improved choice function heuristic selection for cross domain heuristic search. In: Coello, C.A.C., Cutello, V., Deb, K., Forrest, S., Nicosia, G., Pavone, M. (eds.) PPSN 2012, Part II. LNCS, vol. 7492, pp. 307–316. Springer, Heidelberg (2012)
7. Jackson, W.G., Özcan, E., Drake, J.H.: Late acceptance-based selection hyperheuristics for cross-domain heuristic search. In: 13th UK Workshop on Computational Intelligence, pp. 228–235 (2013)
8. Jackson, W., Özcan, E., John, R.I.: Fuzzy adaptive parameter control of a late acceptance hyperheuristic. In: 14th UK Workshop on Computational Intelligence (UKCI), pp. 1–8 (2014)
9. Kendall, G., Mohamad, M.: Channel assignment optimisation using a hyperheuristic. In: Proceedings of the IEEE Conference on Cybernetic and Intelligent Systems, pp. 790–795 (2004)
10. Kheiri, A., Keedwell, E.: A sequence-based selection hyperheuristic utilising a hidden Markov model. In: Proceedings of the 2015 Genetic and Evolutionary Computation Conference, GECCO 2015, pp. 417–424. ACM, New York (2015)
11. Kheiri, A., Özcan, E.: A hyperheuristic with a round robin neighbourhood selection. In: Middendorf, M., Blum, C. (eds.) EvoCOP 2013. LNCS, vol. 7832, pp. 1–12. Springer, Heidelberg (2013)
12. Meignan, D.: An evolutionary programming hyperheuristic with co-evolution for CHeSC'11. In: The 53rd Annual Conference of the UK Operational Research Society (OR53) (2011)
13. Misir, M., Verbeeck, K., De Causmaecker, P., Vanden Berghe, G.: A new hyperheuristic implementation in HyFlex: a study on generality. In: Fowler, J., Kendall, G., McCollum, B. (eds.) Proceedings of the 5th Multidisciplinary International Scheduling Conference: Theory and Application (MISTA 2011), pp. 374–393 (2011)
14. Ochoa, G., Hyde, M., Curtois, T., Vazquez-Rodriguez, J.A., Walker, J., Gendreau, M., Kendall, G., McCollum, B., Parkes, A.J., Petrovic, S., Burke, E.K.: HyFlex: a benchmark framework for cross-domain heuristic search. In: Hao, J.-K., Middendorf, M. (eds.) EvoCOP 2012. LNCS, vol. 7245, pp. 136–147. Springer, Heidelberg (2012)
15. Özcan, E., Bilgin, B., Korkmaz, E.E.: A comprehensive analysis of hyperheuristics. Intell. Data Anal. 12(1), 3–23 (2008)
16. Özcan, E., Kheiri, A.: A hyperheuristic based on random gradient, greedy and dominance. In: Gelenbe, E., Lent, R., Sakellari, G. (eds.) Computer and Information Sciences II, pp. 557–563. Springer, London (2012)
Copyright information
Open Access This chapter is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, duplication, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, a link is provided to the Creative Commons license and any changes made are indicated.
The images or other third party material in this chapter are included in the work’s Creative Commons license, unless indicated otherwise in the credit line; if such material is not included in the work’s Creative Commons license and the respective action is not permitted by statutory regulation, users will need to obtain permission from the license holder to duplicate, adapt or reproduce the material.