Solving combinatorial bi-level optimization problems using multiple populations and migration schemes

Abstract

In many decision-making situations, we face a hierarchy between different optimization tasks. For instance, in production scheduling, evaluating the assignment of tasks to a machine requires determining their optimal sequencing on that machine. Such a situation is usually modeled as a Bi-Level Optimization Problem (BLOP), which consists in optimizing an upper-level (leader) task while having a lower-level (follower) optimization task as a constraint. In this way, evaluating any upper-level solution requires finding its corresponding (near-)optimal lower-level solution, which makes BLOP resolution very computationally costly. Evolutionary Algorithms (EAs) have proven their strength in solving BLOPs thanks to their insensitivity to the mathematical features of the objective functions, such as non-linearity, non-differentiability, and high dimensionality. In particular, EAs based on approximation techniques have proven effective for BLOPs; nevertheless, their application has been restricted to the continuous case, as most approaches approximate the lower-level optimum using classical mathematical programming and machine learning techniques. Motivated by this observation, we tackle in this paper the discrete case by proposing a Co-Evolutionary Migration-Based Algorithm, called CEMBA, that uses two populations in each level and a migration scheme, with the aim of considerably reducing the number of Function Evaluations (FEs) while ensuring good convergence towards the global optimum of the upper level. CEMBA has been validated on a set of bi-level combinatorial production-distribution planning benchmark instances. The statistical analysis of the obtained results shows the effectiveness and efficiency of CEMBA compared with existing state-of-the-art combinatorial bi-level EAs.



References

  1. Aiyoshi E, Shimizu K (1984) A solution method for the static constrained Stackelberg problem via penalty method. IEEE Trans Autom Control 29:1111–1114

  2. Angelo JS, Barbosa HJ (2015) A study on the use of heuristics to solve a bilevel programming problem. Int Trans Oper Res 22:861–882

  3. Angelo JS, Krempser E, Barbosa HJ (2013) Differential evolution for bilevel programming. In: IEEE congress on evolutionary computation, pp 470–477

  4. Aviso KB, Tan RR, Culaba AB, Cruz JB Jr (2010) Bi-level fuzzy optimization approach for water exchange in eco-industrial parks. Process Saf Environ Prot 88:31–40

  5. Bard JF, Falk JE (1982) An explicit solution to the multi-level programming problem. Comput Oper Res 9:77–100

  6. Bard JF, Moore JT (1990) A branch and bound algorithm for the bilevel programming problem. SIAM J Sci Stat Comput 11:281–292

  7. Bhattacharjee KS, Singh HK, Ray T (2016) Multi-objective optimization with multiple spatially distributed surrogates. J Mech Des 138:091401

  8. Boggs PT, Tolle JW (1995) Sequential quadratic programming. Acta Numer 4:1–51

  9. Calvete HI, Galé C, Oliveros M-J (2011) Bilevel model for production-distribution planning solved by using ant colony optimization. Comput Oper Res 38:320–327

  10. Casas-Ramírez M-S, Camacho-Vallejo J-F, Díaz JA, Luna DE (2017) A bi-level maximal covering location problem. Oper Res 20:827–855

  11. Chaabani A, Bechikh S, Said LB (2015) A co-evolutionary decomposition-based algorithm for bi-level combinatorial optimization. In: IEEE congress on evolutionary computation, pp 1659–1666

  12. Chaabani A, Bechikh S, Said LB (2017) A co-evolutionary decomposition-based chemical reaction algorithm for bi-level combinatorial optimization problems. Procedia Comput Sci 112:780–789

  13. Chao I-M, Golden BL, Wasil E (1993) A new heuristic for the multi-depot vehicle routing problem that improves upon best-known solutions. Am J Math Manag Sci 13:371–406

  14. Cheng C-B, Shih H-S, Chen B (2017) Subsidy rate decisions for the printer recycling industry by bi-level optimization techniques. Oper Res Int J 17:901–919

  15. Christofides N, Eilon S (1969) An algorithm for the vehicle-dispatching problem. J Oper Res Soc 20:309–318

  16. Cohen J (1977) Statistical power analysis for the behavioral sciences, vol 490. Academic Press, Boca Raton

  17. Colson B, Marcotte P, Savard G (2005) A trust-region method for nonlinear bilevel programming: algorithm and computational experience. Comput Optim Appl 30:211–227

  18. Cordeau J-F, Gendreau M, Laporte G (1997) A tabu search heuristic for periodic and multi-depot vehicle routing problems. Networks 30:105–119

  19. Das I, Dennis JE (1998) Normal-boundary intersection: a new method for generating the Pareto surface in nonlinear multicriteria optimization problems. SIAM J Optim 8:631–657

  20. Deb K, Goyal M (1996) A combined genetic adaptive search (GeneAS) for engineering design. Comput Sci Inform 26:30–45

  21. Deb K, Sinha A (2014) Evolutionary bilevel optimization (EBO). In: Proceedings of the companion publication of the annual conference on genetic and evolutionary computation, pp 857–876

  22. Dempe S, Kalashnikov VV, Kalashnykova N (2006) Optimality conditions for bilevel programming problems. In: Optimization with multivalued mappings. Springer, pp 3–28

  23. Derrac J, García S, Molina D, Herrera F (2011) A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol Comput 1:3–18

  24. Eiben AE, Smit SK (2011) Parameter tuning for configuring and analyzing evolutionary algorithms. Swarm Evol Comput 1:19–31

  25. Feng C-M, Wen C-C (2005) A fuzzy bi-level and multi-objective model to control traffic flow into the disaster area post earthquake. J East Asia Soc Transp Stud 6:4253–4268

  26. Fliege J, Vicente LN (2006) Multicriteria approach to bilevel optimization. J Optim Theory Appl 131:209–225

  27. Gendreau M, Marcotte P, Savard G (1996) A hybrid tabu-ascent algorithm for the linear bilevel programming problem. J Global Optim 8:217–233

  28. Gillett BE, Johnson JG (1976) Multi-terminal vehicle-dispatch algorithm. Omega 4:711–718

  29. Hansen P, Jaumard B, Savard G (1992) New branch-and-bound rules for linear bilevel programming. SIAM J Sci Stat Comput 13:1194–1217

  30. Hejazi SR, Memariani A, Jahanshahloo G, Sepehri MM (2002) Linear bilevel programming solution by genetic algorithm. Comput Oper Res 29:1913–1925

  31. Huang P-Q, Wang Y (2020) A framework for scalable bilevel optimization: identifying and utilizing the interactions between upper-level and lower-level variables. IEEE Trans Evol Comput 24:1150–1163

  32. Islam MM, Singh HK, Ray T (2017a) A surrogate assisted approach for single-objective bilevel optimization. IEEE Trans Evol Comput 21:681–696

  33. Islam MM, Singh HK, Ray T, Sinha A (2017b) An enhanced memetic algorithm for single-objective bilevel optimization problems. Evol Comput 25:607–642

  34. Moré JJ (1983) Recent developments in algorithms and software for trust region methods. In: Mathematical programming: the state of the art, pp 258–287

  35. Jeroslow RG (1985) The polynomial hierarchy and a simple model for competitive analysis. Math Program 32:146–164

  36. Jiang Y, Li X, Huang C, Wu X (2013) Application of particle swarm optimization based on CHKS smoothing function for solving nonlinear bilevel programming problem. Appl Math Comput 219:4332–4339

  37. Kirjner-Neto C, Polak E, Der Kiureghian A (1998) An outer approximations approach to reliability-based optimal design of structures. J Optim Theory Appl 98:1–16

  38. Koh A (2007) Solving transportation bi-level programs with differential evolution. In: IEEE congress on evolutionary computation, pp 2243–2250

  39. Küçükaydın H, Aras N, Altınel İK (2010) A hybrid tabu search heuristic for a bilevel competitive facility location model. In: International workshop on hybrid metaheuristics. Springer, pp 31–45

  40. Legillon F, Liefooghe A, Talbi E-G (2012) CoBRA: a cooperative coevolutionary algorithm for bi-level optimization. In: IEEE congress on evolutionary computation, pp 1–8

  41. Mathieu R, Pittard L, Anandalingam G (1994) Genetic algorithm based approach to bi-level linear programming. RAIRO-Oper Res 28:1–21

  42. Meng Q, Yang H, Bell MG (2001) An equivalent continuously differentiable model and a locally convergent algorithm for the continuous network design problem. Transp Res B Methodol 35:83–105

  43. Migdalas A (1995) Bilevel programming in traffic planning: models, methods and challenge. J Global Optim 7:381–405

  44. Oduguwa V, Roy R (2002) Bi-level optimisation using genetic algorithm. IEEE, p 322

  45. Potvin J-Y, Bengio S (1996) The vehicle routing problem with time windows part II: genetic search. INFORMS J Comput 8:165–172

  46. Sahin KH, Ciric AR (1998) A dual temperature simulated annealing approach for solving bilevel programming problems. Comput Chem Eng 23:11–25

  47. Sakawa M, Katagiri H, Matsui T (2012) Stackelberg solutions for fuzzy random bilevel linear programming through level sets and probability maximization. Oper Res Int J 12:271–286

  48. Shepherd S, Sumalee A (2004) A genetic algorithm based approach to optimal toll level and location problems. Netw Spatial Econ 4:161–179

  49. Sinha A, Lu Z, Deb K, Malo P (2020) Bilevel optimization based on iterative approximation of multiple mappings. J Heurist 26:151–185

  50. Sinha A, Malo P, Deb K (2013a) Evolutionary bilevel optimization. In: Proceedings of the 15th annual conference companion on genetic and evolutionary computation, pp 877–892

  51. Sinha A, Malo P, Deb K (2017a) Evolutionary algorithm for bilevel optimization using approximations of the lower level optimal solution mapping. Eur J Oper Res 257:395–411

  52. Sinha A, Malo P, Deb K (2017b) A review on bilevel optimization: from classical to evolutionary approaches and applications. IEEE Trans Evol Comput 22:276–295

  53. Sinha A, Malo P, Frantsev A, Deb K (2013b) Multi-objective Stackelberg game between a regulating authority and a mining company: a case study in environmental economics. In: IEEE congress on evolutionary computation, pp 478–485

  54. Sun C, Jin Y, Zeng J, Yu Y (2015) A two-layer surrogate-assisted particle swarm optimization algorithm. Soft Comput 19:1461–1475

  55. Sun H, Gao Z, Wu J (2008) A bi-level programming model and solution algorithm for the location of logistics distribution centers. Appl Math Model 32:610–616

  56. Talbi E-G (2013) Metaheuristics for bi-level optimization, vol 482. Springer, Berlin

  57. White DJ, Anandalingam G (1993) A penalty function approach for solving bi-level linear programs. J Global Optim 3:397–419

  58. Wright M (2005) The interior-point revolution in optimization: history, recent developments, and lasting consequences. Bull Am Math Soc 42:39–56

  59. Zhang Q, Li H (2007) MOEA/D: a multiobjective evolutionary algorithm based on decomposition. IEEE Trans Evol Comput 11:712–731

Corresponding author

Correspondence to Rihab Said.

Appendices

Appendix 1: A comparison between the Das-&-Dennis and the DSDM methods.

This first appendix illustrates the difference between the Das-&-Dennis (Das and Dennis 1998) and the DSDM (Chaabani et al. 2015) methods, as shown in Fig. 6. The Das-&-Dennis method can be used for continuous search spaces, but it is inapplicable in the discrete case. The DSDM method is a variant of the Das-&-Dennis method that generates a set of points in discrete search spaces. We must mention here that the Das-&-Dennis method generates its points in the objective space, while the DSDM method works in the decision space. Figure 6a presents the distribution of the reference points obtained with the Das-&-Dennis method for a 3-objective optimization problem (M = 3) and a spacing of \(\delta = 0.2\) (P = 5); in this setting, 21 reference points (H = 21) are generated on a normalized hyper-plane. The reference directions are the lines constructed from the origin to each of these reference points. Figure 6b illustrates the results obtained with the DSDM method for three decision variables whose domains are \(D_{x_{1}}\) = [0, 2, 5, 13], \(D_{x_{2}}\) = [4, 7, 9, 17], and \(D_{x_{3}}\) = [5, 8, 11, 16]. To generate the reference points with the DSDM method in a discrete space, a uniform spacing noted \(\delta _{i}\) is calculated for each decision variable as follows: \(\delta _{i} = max_{i}/P\), where i is the index of the decision variable and P is a fixed parameter based on the dimension of the problem. Rounding down, the obtained spacings for P = 3 are \(\delta _{1} = \lfloor 13/3 \rfloor = 4\), \(\delta _{2} = \lfloor 17/3 \rfloor = 5\), and \(\delta _{3} = \lfloor 16/3 \rfloor = 5\). After that, the range values \(R_{i}\) are generated for each decision variable by inserting the first value of its domain into the set \(R_{i}\), and then retaining each subsequent domain member whose distance from the last retained value is at least \(\delta _{i}\). Thus, the obtained range values are \(R_{1}\) = [0, 5, 13], \(R_{2}\) = [4, 9, 17], and \(R_{3}\) = [5, 11, 16]. The obtained solutions for this example are: (0,4,5), (0,9,11), (0,17,16), (5,4,5), (5,9,11), and (13,4,5).
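The range-value step above can be sketched in a few lines of Python. This is an illustrative reconstruction from the worked example, not the paper's implementation; in particular, the rounding-down of each spacing is inferred from the reported values rather than stated explicitly in the text.

```python
def dsdm_ranges(domains, P):
    """Sketch of the DSDM range-value computation for discrete domains.

    For each sorted domain, the spacing delta_i = max_i / P (assumed
    rounded down, to match the worked example); starting from the first
    domain value, keep every subsequent member whose distance from the
    last retained value is at least delta_i.
    """
    ranges = []
    for dom in domains:
        delta = max(dom) // P          # assumed floor rounding
        kept = [dom[0]]
        for v in dom[1:]:
            if v - kept[-1] >= delta:
                kept.append(v)
        ranges.append(kept)
    return ranges

# Reproduces the example: R1 = [0, 5, 13], R2 = [4, 9, 17], R3 = [5, 11, 16]
ranges = dsdm_ranges([[0, 2, 5, 13], [4, 7, 9, 17], [5, 8, 11, 16]], P=3)
```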

Fig. 6

Illustration of the difference between the Das-&-Dennis method and the DSDM method

Appendix 2: Illustration of the interaction process between the two levels in a BLOP

This second appendix illustrates in detail the interaction process between the upper level and the lower one in bi-level optimization. These two levels depend on each other because: (1) the upper-level solution vector \(x = (x_{u}, x_{l})\) cannot be evaluated without optimizing its corresponding solution subvector \(x_{l}\) and (2) the lower-level solution \(x_{l}\) cannot be evaluated without receiving its corresponding upper-level solution subvector \(x_{u}\) as a fixed parameter (Huang and Wang 2020; Sinha et al. 2020, 2017b). Hence, to precisely evaluate any upper-level solution \(x = (x_{u}, x_{l})\), we need to effectively approximate the optimal lower-level solution \(x_{l}^{*}\) corresponding to the upper-level subvector \(x_{u}\), which acts as a fixed parameter within the lower-level search process.

The sequence of steps required to compute the fitness function of an upper-level solution \(x = (x_{u}, x_{l})\), illustrated by Fig. 7, is as follows. First, the subvector of upper-level decision variables \(x_{u}\) is passed as a fixed parameter to the lower-level search algorithm. Second, the latter evolves a population of lower-level solutions (i.e., lower-level decision variable vectors) for a number of generations with the aim of approximating the optimal follower solution \(x_{l}^{*}\) corresponding to the parameter \(x_{u}\). Third, once the termination criterion of the lower-level algorithm is met, the obtained approximation of \(x_{l}^{*}\) is sent to the upper level, and the upper-level solution vector becomes \(x = (x_{u}, x_{l}^{*})\). In the example of Fig. 7, to evaluate the upper-level solution \(x^{A} = (x_{u}^{A}, x_{l}^{A})\), \(x_{u}^{A}\) is passed as a fixed parameter to the lower-level EA, which executes an evolutionary process to approximate the optimal lower-level solution \({x_{l}^{A}}^{*}\) corresponding to \(x_{u}^{A}\). Based on Fig. 7, the obtained approximation is the solution vector E. Finally, E is assigned to \(x_{l}^{A}\), and thus the upper-level solution becomes \(x^{A} = (x_{u}^{A}, E)\); its fitness function can now be computed. It is important to note that the better the approximation E is, the higher the precision of the fitness evaluation of \(x^{A} = (x_{u}^{A}, x_{l}^{A})\).

We conclude that the quality of any upper-level solution vector \(x = (x_{u}, x_{l})\) depends on two main factors: (1) the quality of the subvector \(x_{u}\), which depends on the values of its components (the upper-level variables), and (2) the quality of the obtained approximation of its corresponding optimal lower-level solution \(x_{l}\) (the lower-level variables). For this reason, the lower-level algorithm plays a crucial role in the precise evaluation of each upper-level solution's quality (upper fitness value). Put differently, the better the quality of \(x_{l}^{*}\) for a particular upper-level solution \(x = (x_{u}, x_{l})\), the higher the precision of the fitness computation of \(x = (x_{u}, x_{l})\). This precision allows the bi-level algorithm to compute the fitness values of the upper population with more exactitude and thus to detect promising search directions in the upper-level search space more effectively and efficiently. This is the key characteristic of the interaction process between the upper-level algorithm and the lower-level one in bi-level optimization. Conversely, a poor lower-level search process biases the upper-level fitness computations with noise, which significantly deteriorates the search process of the upper-level algorithm; in the extreme case, a very poor lower-level search can make the upper-level algorithm behave like a random search. There is therefore a need to evaluate the performance of the lower-level search methods of the compared algorithms. To do so, Legillon et al. (2012) proposed the direct rationality and the weighted rationality (defined in Sect. 4.3) as two metrics to evaluate the performance of a lower-level algorithm (cf. Algorithm 4). The former expresses the mean number of times the lower-level algorithm was able to improve the solutions with respect to the population of the previous generation, while the latter evaluates the mean quantity (magnitude) of the improvements. In summary, we conclude that: (1) the lower-level optimality approximation is considered in CEMBA as a necessary constraint that appears as one of the upper-level constraints and (2) the lower-level performance should be evaluated for all compared algorithms to assess the quality of its contribution to the precise evaluation of the upper-level solutions, and hence to the effective exploration of promising search directions in the upper-level search space.
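The nested evaluation just described can be sketched as follows. The objectives, domains, and the exhaustive follower scan below are invented for illustration (CEMBA uses evolutionary searches at both levels), but the control flow — fix x_u, optimize the follower, then score the leader on the completed vector — matches the three steps above.

```python
# Minimal sketch of bi-level evaluation on a toy discrete problem.
# The objectives and domains are illustrative assumptions only.

def follower_search(x_u, domain=range(10)):
    """Steps 1-2: with x_u fixed, approximate the follower optimum x_l*.
    An exhaustive scan stands in for the lower-level EA."""
    f_lower = lambda x_l: (x_l - x_u) ** 2   # assumed follower objective
    return min(domain, key=f_lower)

def evaluate_leader(x_u):
    """Step 3: complete the vector (x_u, x_l*) and score the leader."""
    x_l_star = follower_search(x_u)
    f_upper = x_u - 0.5 * x_l_star           # assumed leader objective
    return f_upper, (x_u, x_l_star)

# Every leader evaluation triggers a full lower-level run: this nesting
# is what makes BLOP resolution so costly in function evaluations.
best_x_u = max(range(10), key=lambda x_u: evaluate_leader(x_u)[0])
```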

Fig. 7

A general sketch of the evaluation of two upper-level solutions (A and B) in a BLOP

To empirically show the importance of the optimization of the lower-level objective function (while respecting its constraints), we have conducted a set of experiments using two versions of each of the two following algorithms: CEMBA and CODBA-CRO:

  1. CEMBA-WLO and CODBA-CRO-WLO, two variants of CEMBA and CODBA-CRO (respectively) that do not optimize the lower-level objective function (WLO stands for Without Lower-level Optimization); and

  2. The original CEMBA and CODBA-CRO, which do optimize the lower-level objective function.

The results obtained on the 23 bip instances and the 10 bipr ones are presented in Table 8. We observe that the results of the original CEMBA and CODBA-CRO are far better than those of CEMBA-WLO and CODBA-CRO-WLO on all test instances. This is explained by the importance of optimizing the lower-level objective function. Indeed, when the latter is not optimized (only the upper-level objective function is optimized, while respecting all inequality and equality constraints of both levels), the upper-level fitness computation becomes highly imprecise. More specifically, based on the illustrative example of Fig. 7, the subvector \(x_{l}^{A}\) of the upper-level solution \(x^{A}\) will be assigned a lower-level solution that is much poorer than E, i.e., far from the corresponding follower optimum. The solution \(x^{A}\) will then violate the lower-level optimality constraint, which induces an imprecise fitness computation for the upper-level solution \(x^{A} = (x_{u}^{A}, x_{l}^{A})\). When this happens for all upper-level population members, the EA is no longer able to guide the search towards the leader's global optimum, and its behavior becomes equivalent to a random search in the upper-level search space, which is the case for CEMBA-WLO and CODBA-CRO-WLO.
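The effect can be made concrete with a toy leader/follower pair (both objectives invented for illustration, not taken from the paper): when the follower response is optimized, leader fitness reflects the true bi-level objective, whereas a WLO-style evaluation with an unoptimized x_l injects noise into every leader score.

```python
import random

random.seed(0)  # reproducibility of the "unoptimized" follower choices

def follower_search(x_u, domain=range(10)):
    # Stand-in for the lower-level EA, with an assumed follower objective.
    return min(domain, key=lambda x_l: (x_l - x_u) ** 2)

def leader_fitness(x_u, x_l):
    return x_u - 0.5 * x_l  # assumed leader objective

# With lower-level optimization: each leader value is scored against the
# true follower response, so the ranking of leader solutions is reliable.
with_opt = [leader_fitness(x_u, follower_search(x_u)) for x_u in range(10)]

# WLO-style evaluation: x_l is never optimized (here: drawn at random),
# so the leader fitness values carry noise and the upper-level search
# degrades towards random search.
without_opt = [leader_fitness(x_u, random.choice(range(10)))
               for x_u in range(10)]
```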

Cite this article

Said, R., Elarbi, M., Bechikh, S. et al. Solving combinatorial bi-level optimization problems using multiple populations and migration schemes. Oper Res Int J (2021). https://doi.org/10.1007/s12351-020-00616-z

Keywords

  • Combinatorial bi-level optimization
  • Evolutionary algorithms
  • Computational cost
  • Population decomposition
  • Migration schemes