A minireview on preference modeling and articulation in multiobjective optimization: current status and challenges
Abstract
Evolutionary multiobjective optimization aims to provide a representative subset of the Pareto front to decision makers. In practice, however, decision makers are usually interested in only a particular part of the Pareto front of a multiobjective optimization problem. This is particularly true when the number of objectives becomes large. Over the past decade, preference-based multiobjective optimization has attracted increasing attention from both academia and industry due to its significance in both theory and practice. Significant progress has been made in the evolutionary multiobjective optimization and multicriteria decision-making communities, although many open issues remain to be addressed. This paper provides a concise review of preference-based multiobjective optimization, covering various preference modeling methods and existing preference-based optimization methods, as well as a brief discussion of the main future challenges.
Keywords
Multiobjective optimization; Preference modeling; Preference learning
Introduction
It is helpful for decision makers (DMs) to make their decisions if the whole Pareto optimal set is already known, because the whole set provides an overall picture of the distribution of Pareto optimal solutions. To obtain the entire Pareto front (PF), or more precisely, a representative subset of the PF, a large number of algorithms and methodologies have been designed in recent decades in both the traditional mathematical programming and evolutionary computation communities. Traditional mathematical programming methods such as the weighted aggregation methods [4] cannot identify the whole PF in a single run. Evolutionary algorithms (EAs), as population-based search methods, are believed to be well suited for solving multiobjective optimization problems (MOPs) in that they can achieve a set of nondominated solutions in one run. Multiobjective evolutionary algorithms (MOEAs) [5] have now become a mature tool for solving MOPs. Generally speaking, existing MOEAs can be divided into three categories according to their selection criteria, namely Pareto-, indicator-, and reference-based MOEAs [6, 7, 8], even though a number of MOEAs might fall into more than one category or employ additional selection criteria.
Pareto-based MOEAs employ Pareto dominance as their main selection criterion for convergence. Different diversity maintenance strategies are adopted in different Pareto-based MOEAs, such as the crowding distance in NSGA-II [9] and the environmental selection in SPEA2 [10]. However, it has been shown that Pareto-based MOEAs fail to solve many-objective optimization problems (MaOPs), which are defined as MOPs with more than three objectives [11], mainly because dominance comparison becomes less effective as the number of objectives increases for a limited population size [12].
Indicator-based MOEAs use a single indicator as the selection criterion in place of the Pareto dominance used in Pareto-based MOEAs. For example, \(I_{\epsilon +}\) [13, 14], the hypervolume [15], and R2 [16] have been applied in IBEA [13], HypE [17], and MOMBI [18], respectively.
Reference-based MOEAs decompose an MOP into a set of subproblems according to preassigned references, such as weights [19], reference points [20], reference vectors [21], and direction vectors [22, 23]. Different aggregation functions have been suggested to convert an MOP into a set of single-objective optimization problems, including the weighted sum [3], the Tchebycheff approach [3], and the penalty-based boundary intersection (PBI) approach [24].
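To make the three aggregation functions concrete, the following minimal Python sketch implements them for a minimization problem with ideal point \(z^{*}\). The function names and the default penalty parameter `theta` are our own illustrative choices, not taken from the cited works:

```python
import math

def weighted_sum(f, w):
    """Weighted-sum aggregation of objective vector f with weights w."""
    return sum(wi * fi for wi, fi in zip(w, f))

def tchebycheff(f, w, z_star):
    """Tchebycheff aggregation: worst weighted deviation from the ideal point z*."""
    return max(wi * abs(fi - zi) for wi, fi, zi in zip(w, f, z_star))

def pbi(f, w, z_star, theta=5.0):
    """Penalty-based boundary intersection: distance d1 along the reference
    direction w plus a penalty theta times the perpendicular distance d2."""
    norm_w = math.sqrt(sum(wi * wi for wi in w))
    diff = [fi - zi for fi, zi in zip(f, z_star)]
    d1 = abs(sum(di * wi for di, wi in zip(diff, w))) / norm_w
    d2 = math.sqrt(sum((di - d1 * wi / norm_w) ** 2
                       for di, wi in zip(diff, w)))
    return d1 + theta * d2
```

A decomposition-based MOEA evaluates one such scalar value per subproblem and minimizes it; the Tchebycheff and PBI functions differ mainly in how strongly they pull solutions toward the reference direction.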
Although a representative subset of the overall PF can be located by most MOEAs for two- or three-objective optimization problems, selecting the few solutions to be implemented is not trivial. The decision-making process becomes much harder for many-objective optimization problems, because human beings are believed to be able to handle only up to about seven criteria [25, 26, 27]. Therefore, the articulation of preferences is essential for solving MOPs [28], as it can guide optimization algorithms to find the most preferred solutions rather than the whole PF. To incorporate preferences into multiobjective optimization algorithms, both the modeling and the articulation of preferences must be considered. Generally, preferences can be involved at different stages of multiobjective optimization algorithms, and preference-based optimization methods can be classified into three categories: a priori, interactive, and a posteriori methods [28]. However, it is unclear which preferences can be effectively incorporated into MOEAs, and in many cases the user does not have a clear preference when little knowledge about the problem is available.
This paper offers a brief survey of preference modeling and articulation in multiobjective optimization. In section "Preference modeling methods", various preference modeling methods are summarized. Section "Preference-based optimization methods" gives an account of existing preference-based optimization methods. Future challenges in preference modeling and preference-guided multiobjective optimization are discussed in section "Challenges". Section "Conclusion" concludes this paper.
Preference modeling methods
Various preference models have been reported in the literature [29], which can be largely classified into goals, weights, reference vectors, preference relations, utility functions, outranking, and implicit preferences.
Goals
Weights
Reference vectors
Reference vectors or points express the expectation for, or the importance of, the objectives. Reference vectors and weights are similar in their aggregation functionality, although they have different physical meanings and, consequently, different influences on the search process. Usually, reference vectors represent the directions of solution vectors, whereas weights indicate the importance of the different objectives. Reference vectors lie in the objective space, whilst weights lie in the weight space. Because of this inherent connection, reference vectors and weights can be converted into each other. The reference vectors in RVEA [21] and the reference points in NSGA-III [20] are converted from uniformly distributed weights.
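The conversion mentioned above can be illustrated with a small sketch: uniformly distributed weights are generated with the simplex-lattice design and then normalized to unit length to serve as reference vectors, roughly as done in RVEA. The function names are ours, and `h` denotes the number of divisions per objective:

```python
from itertools import combinations

def simplex_lattice_weights(m, h):
    """Uniformly distributed weights on the (m-1)-simplex: enumerate all
    m-tuples of nonnegative integers summing to h (stars-and-bars)."""
    weights = []
    for dividers in combinations(range(h + m - 1), m - 1):
        prev, parts = -1, []
        for d in list(dividers) + [h + m - 1]:
            parts.append(d - prev - 1)  # number of "stars" in this gap
            prev = d
        weights.append([p / h for p in parts])
    return weights

def to_reference_vectors(weights):
    """Normalize each weight vector to unit length, as is done when
    weights are converted into reference vectors in the objective space."""
    refs = []
    for w in weights:
        norm = sum(wi * wi for wi in w) ** 0.5
        refs.append([wi / norm for wi in w])
    return refs
```

For m = 3 objectives and h = 2 divisions this yields the 6 weight vectors of the lattice, each summing to one, and 6 unit-length reference vectors.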
Table 1 Preference relations over objectives and their meanings

\(\prec \)  Less important       \(\succ \)  More important
\(\ll \)    Much less important  \(\gg \)    Much more important
\(\approx \)  Equally important  \(\#\)      Do not care
\(\lnot \)  Not important        !           Important
Neither the Tchebycheff nor the PBI method is well suited to PFs of all shapes [43]. Recently, different aggregation functions have been proposed for preferences expressed as reference vectors or weights. For example, adaptive scalarizing methods [44, 45, 46] change the aggregation function during the run of the MOEA, the Tchebycheff method has been used in a reversed form for convex PFs [47], and the PBI method has been inverted based on a nadir point [48].
Preference relation
DMs have different preferences over different objectives; thus, some objectives might not be equally important during the decision-making process [49, 50, 51]. Table 1 lists symbolic representations of the importance of objectives; with them, objectives can be sorted into a preferred order such as \(f_{1}\ge f_{2}\ge f_{3}\approx f_{4}\). With such a preference relation [52], the search can be narrowed down by converting the relation into weights; the method in [2], based on binary preferences, is one example. The main disadvantage is that the preference relation cannot handle nontransitivity. During the process of decision making, DMs gradually learn their preferences. The analytic hierarchy process (AHP) [53] uses pairwise comparisons to calculate priority scales based on the judgments of DMs, which might be inconsistent. AHP has been employed for decision making in various applications.
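A common way to turn AHP pairwise judgments into priority weights is the geometric-mean (row) approximation of the principal eigenvector. The sketch below is a generic illustration of that approximation, not the exact procedure of [53]:

```python
def ahp_priorities(pairwise):
    """Approximate AHP priority weights from a pairwise comparison matrix.
    pairwise[i][j] states how many times more important criterion i is
    than criterion j; the row geometric means are normalized to sum to 1."""
    n = len(pairwise)
    geo = []
    for row in pairwise:
        prod = 1.0
        for v in row:
            prod *= v
        geo.append(prod ** (1.0 / n))
    total = sum(geo)
    return [g / total for g in geo]
```

For a perfectly consistent matrix (entries \(w_i/w_j\)) this recovers the underlying weights exactly; for inconsistent DM judgments it yields a compromise ranking.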
Utility functions
Preferences can be characterized by utility functions [54, 55, 56], where the preference information is implicitly involved in the fitness function used to rank solutions [57, 58]. Unlike preference relations, a utility function orders solutions rather than objectives. For example, given N solutions \(\mathbf {x}_{1}\) to \(\mathbf {x}_{N}\), DMs are required to input their preferences over those solutions, for instance \(\mathbf {x}_{1}\prec _{\mathrm{pref}}\mathbf {x}_{2}\prec _{\mathrm{pref}} \ldots \prec _{\mathrm{pref}}\mathbf {x}_{N}\). Then, an imprecisely specified multiattribute utility theory (ISMAUT) formulation is employed to infer the relative importance of the objectives and modify the fitness function. However, utility functions are based on the strong assumption that all attributes of the preferences are independent, and are thus unable to handle nontransitivity [59, 60].
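A minimal illustration of ranking by a utility function is given below. The additive utility `u` is a hypothetical example with assumed weights; ISMAUT itself would instead infer such weights from the DM's ordering of solutions:

```python
def rank_by_utility(solutions, utility):
    """Sort solutions (objective vectors, minimization) by a utility
    function; here a lower utility value means 'more preferred'."""
    return sorted(solutions, key=utility)

# A hypothetical additive utility over two minimization objectives,
# weighting the first objective more heavily than the second.
u = lambda f: 0.7 * f[0] + 0.3 * f[1]
```

Ranking `[[1, 5], [2, 1], [4, 0]]` with `u` places `[2, 1]` first, since its weighted score (1.7) is the smallest.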
Outranking
Outranking [61] is a different way of ranking objective preferences that allows nontransitivity, which distinguishes it from the preference relation [62]. To construct an outranking [63], the preference and indifference thresholds for each objective are input to a preference ranking organization method for enrichment evaluations (PROMETHEE) [64]. Every pair of solutions is compared according to those thresholds, and a preference ranking is obtained for outranking-based methods to search for the preferred solutions [65]. However, outranking-based methods require many parameter settings, which becomes hard for DMs as the number of objectives increases [64].
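The threshold-based comparison can be sketched as follows, loosely following the linear PROMETHEE preference function with indifference threshold `q` and preference threshold `p`. This is a simplified sketch with a single pair of thresholds shared by all objectives, unlike the per-objective thresholds described above:

```python
def preference_degree(a, b, q, p):
    """Linear preference of value a over value b on one criterion
    (minimization): 0 below the indifference threshold q, 1 above the
    preference threshold p, and linear in between."""
    d = b - a  # positive when a is better (smaller)
    if d <= q:
        return 0.0
    if d >= p:
        return 1.0
    return (d - q) / (p - q)

def outranking_flow(solutions, q, p, weights):
    """Net flow of each solution: its weighted preference over all other
    solutions minus their weighted preference over it, averaged."""
    n, m = len(solutions), len(weights)
    flows = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            pi_ij = sum(weights[k] *
                        preference_degree(solutions[i][k], solutions[j][k], q, p)
                        for k in range(m))
            flows[i] += pi_ij
            flows[j] -= pi_ij
    return [f / (n - 1) for f in flows]
```

Solutions are then ranked by decreasing net flow; because each flow aggregates many pairwise comparisons, the resulting order tolerates some intransitivity in the raw pairwise preferences.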
Implicit preferences
In some cases, DMs have too little knowledge to articulate any sensible preferences. Nevertheless, some solutions on the PF are naturally preferred even when no problem-specific preference can be formulated. Those solutions can be detected based on the curvature of the PF [66]. For example, a knee point, around which a small improvement in any objective causes a large deterioration in the others, is always of interest to DMs as an implicitly preferred solution [67, 68, 69]. Examples include model selection in machine learning [70, 71] and sparse reconstruction [72].
There is no widely accepted definition of knee points, and specifying knee points is notoriously difficult in high-dimensional objective spaces. Existing approaches to identifying knee points can be divided into two categories: angle-based and distance-based approaches [68]. Angle-based approaches measure the angle between a solution and its two neighbors and locate the knee point according to the obtained angle [72]. Although angle-based approaches are straightforward, they can be applied only to bi-objective optimization problems. Distance-based approaches can handle problems with more than two objectives; they locate the knee point according to the distance to a predefined hyperplane [73].
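A minimal distance-based sketch for the bi-objective case is shown below: it takes the solution farthest, toward the ideal point, from the line through the two extreme points as the knee. This is one common distance-based definition among several, and the function name is ours:

```python
def knee_point_2d(front):
    """Distance-based knee detection on a bi-objective minimization front:
    return the point with the largest perpendicular distance, on the
    ideal-point side, from the line joining the two extreme points."""
    a = min(front, key=lambda f: f[0])  # extreme point: best first objective
    b = min(front, key=lambda f: f[1])  # extreme point: best second objective
    dx, dy = b[0] - a[0], b[1] - a[1]
    norm = (dx * dx + dy * dy) ** 0.5

    def dist(p):
        # Signed perpendicular distance; positive toward the ideal point.
        return (dy * (p[0] - a[0]) - dx * (p[1] - a[1])) / norm

    return max(front, key=dist)
```

On the front `[[0, 1], [0.2, 0.2], [0.5, 0.5], [1, 0]]` this picks `[0.2, 0.2]`, the bulge closest to the ideal point, as the knee.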
In addition to knee points, extreme points or the nadir point can serve as a special form of preference [74]. Extreme points are the solutions with the worst objective values on the PF, and the nadir point combines the worst objective values of the extreme points. With the extreme points or the nadir point, DMs acquire knowledge of the range of the PF and can input their preferences more accurately [75, 76, 77].
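Estimating the ideal and nadir points from a set of nondominated solutions is straightforward, as sketched below for minimization. Note that the true nadir point is defined over the whole PF, so taking the worst values over an available solution set only approximates it:

```python
def ideal_and_nadir(front):
    """Estimate the ideal point (best value per objective) and the nadir
    point (worst value per objective) from a nondominated set, assuming
    minimization of every objective."""
    m = len(front[0])
    ideal = [min(f[k] for f in front) for k in range(m)]
    nadir = [max(f[k] for f in front) for k in range(m)]
    return ideal, nadir
```

Together the two points bound the range of the PF, which DMs can use to scale their reference points or weights.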
Discussions
The above preference formulations share several similarities. For example, although weights and reference vectors are different concepts, weights are sometimes used as references, and vice versa. All the existing preference formulations are scalable to many objectives, but their complexity increases significantly. Although different preference models may have very different properties, they all describe objective importance or priority in their own ways, except that utility functions order the importance of solutions rather than of objectives.
DMs might articulate preferences with uncertainty. To model such uncertainty, small perturbations can be introduced into goal-, weight-, or reference vector-based methods. Fuzzy logic is thus a natural means for handling uncertainty in preferences [78, 79], and has been combined with reference points [35], weights [80], preference relations [81, 82], and outranking [63]. Preference relations, utility functions, and outranking are not strictly based on numerical objective importance, which allows uncertainty to a certain degree. DMs might also hold inconsistent preferences during the search. In such cases, goal-, weight-, and reference vector-based methods might fail, because they focus too heavily on the earlier preferences and may lose diversity. Preference relation and utility function based methods cannot handle preference inconsistency either; only outranking allows inconsistency in preferences to some degree. Furthermore, DMs can introduce inappropriate preferences, which might lead to infeasible solutions. No specific research has yet been dedicated to handling inappropriate preferences, and fuzzy preferences might provide a solution to this problem.
Preferencebased optimization methods

A priori methods In these methods, DMs input their preferences before the optimization starts. The main difficulty lies in the fact that DMs may have limited knowledge about the problem, so their preferences may be inaccurate or even misleading.

A posteriori methods In a posteriori methods, a set of representative Pareto optimal solutions is obtained using an optimization algorithm, from which DMs choose a small number of solutions according to their preferences. In comparison with a priori methods, DMs are able to better understand the tradeoff relationships between the objectives. Most existing MOEAs [83] belong to this category. It should be noted, however, that it becomes increasingly hard to obtain a representative solution set as the number of objectives increases [84].

Interactive methods Interactive methods [85, 86] enable DMs to articulate their preferences in the course of the optimization. DMs are allowed to modify their preferences, typically based on the domain knowledge acquired during the optimization [32, 38, 87]. With an increasing understanding of the problem as the optimization proceeds, DMs can fine-tune their preferences according to the solutions obtained in each iteration. With the revised preferences, the interactive methods search for new preferred solutions, which usually needs less computational cost than the a posteriori methods. Existing interactive methods each adopt only a single preference model, such as reference vectors [88, 89, 90, 91], weights [92, 93, 94, 95], preference relations [96, 97, 98], or utility functions [99].
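The interactive workflow described above can be sketched as a generic loop that alternates a preference-guided optimization step with a preference-elicitation step. Both callables and their names are hypothetical placeholders supplied by the caller, not part of any specific method from the literature:

```python
def interactive_optimize(step, elicit, reference, iterations=5):
    """Skeleton of an interactive method. `step(reference)` runs the
    optimizer for a while under the current preference (here a reference
    point) and returns candidate solutions; `elicit(candidates, reference)`
    shows them to the DM and returns a possibly revised preference."""
    for _ in range(iterations):
        candidates = step(reference)
        reference = elicit(candidates, reference)
    return reference
```

Any of the preference models above (reference vectors, weights, relations, utilities) could play the role of `reference` here; the loop structure is what all interactive methods share.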
Non-evolutionary preference-based optimization methods

In general, the multicriteria decision-making (MCDM) approaches in this category share the following expectations:

Only part of the nondominated solution set is expected to be found.

DMs are expected to understand the problem and to be able to provide reasonable preferences.

Satisfactory optimal solutions are expected to be output in the end.
The aggregation-based MCDM approaches are based on weights [104]. Decision making is then mathematically defined by Eq. (2), where m is the number of criteria and \(\mathbf {w}\) is the weight vector. For those approaches, DMs need a clear idea of how to set the weights. However, it is very hard for human beings to provide precise quantitative importance levels for different objectives, and in some cases good solutions cannot easily be distinguished from poor ones by Eq. (2).
Unlike the aggregation-based MCDM approaches, which rely on an explicit mathematical formula as a fixed preference, the synthesizing criterion-based approaches rely on implicit rules. For example, outranking and utility functions are two implicit and flexible preference models: outranking orders the objective preferences, while a utility function orders the solution preferences. So far, the elimination and choice expressing reality (ELECTRE) [105] and the preference ranking organization method for enrichment evaluations (PROMETHEE) [106] are the two main outranking approaches, while the utilities additives (UTA) methods [107] are utility function-based approaches [108].
In addition to the above-mentioned aggregation procedures and synthesizing criteria, fuzzy logic [109], decision rules [110], multiobjective mathematical programming [111], and objective classification [112] have been employed to improve the performance of MCDM approaches.
Evolutionary preference-based optimization methods
While non-evolutionary methods pay much attention to preference handling, most MOEAs focus on obtaining the whole solution set, as in the a posteriori methods. In this section, we discuss the a priori methods in MOEAs, which embed preferences into their fitness functions to narrow down the selection [113]. So far, goals, weights, reference vectors, and utility functions have been used to integrate preferences into MOEAs [29].
Challenges
Even though preferences have gained increasing attention recently and have been studied for decades, many issues remain to be addressed.
Preference adaptation for various formulations
As mentioned before, different preference models have been developed, and each existing preference-based MOEA is designed around a specific preference model. However, the preferences provided by DMs might come in different forms, so no single MOEA can deal with all types of preferences, making these algorithms less flexible in practice. It would therefore be very desirable if various preference models could be converted into a single one, so that any of them could be incorporated into a given preference-based MOEA. So far, little work has been reported on converting one preference model into another, with a few exceptions, e.g., preference relations are converted into weights in [2] and fuzzy preferences are turned into weights in [82]. Thus, it is necessary to develop a general framework for converting between different preference models, so that the advantages and disadvantages of the existing methods can be properly compared in terms of their ability to handle uncertainty and conflicts, as well as their robustness in obtaining preferred solutions.
Preference learning
Learning user preferences
Preferences play a very important role in MCDM. The preferences given by DMs are consistent to a certain degree, notwithstanding the fact that DMs might change their preferences while interacting with the optimizer. Thus, the system should be able to learn the preferences of DMs from historical data. Although many mature techniques in machine learning [132] and data mining [133] could help learn the preferences of DMs, little attention has been paid to this research topic, with a few exceptions such as [134], where the preferences of DMs are learned by training single or multiple surrogate models [135] using a semi-supervised learning algorithm. As the work in [134] indicates, a proper learning algorithm should be chosen, and attention should be paid to ensuring that the learned preferences can be incorporated into MOEAs.
Handling preference violation
Without sufficient information about the problem, DMs are likely to provide less reasonable or even misleading preferences. In some cases, no solutions can be found for certain preferences, for example when the Pareto front is discontinuous.
When there is a group of DMs, it should be taken into account that the preferences given by different group members might conflict with each other [136]. As pointed out in [36], the priority, independence, and unanimity of individual preferences need to be taken into account when using preferences from multiple DMs.
Psychological study
Decision making can be seen as a psychological construct involved in the selection among several alternative actions [137]. In some cases, the processing capacity of DMs is limited because they are overwhelmed by the results produced by decision-making systems [138]. To ensure that decision-making systems are compatible with the psychology of DMs, attention should be paid to the theory of decision making at the psychological level [139]. Experiments reported in [140] indicate that an improvement in forecasting performance can be achieved with the help of a psychological model. Therefore, we believe that a deeper understanding of the psychology of DMs would build a proper bridge between decision-making systems and DMs, which can further improve the efficiency of preference-based methods [25, 26, 57, 112].
Analysis of relationships between decision variables and objectives
Relationship between objectives
A conflict between two objectives means that an improvement in one objective deteriorates the other. The conflict might be global or local [141, 142, 143]: locally conflicting objectives conflict with each other in some regions but not in others. However, the existing research on objective reduction focuses on global redundancy between objectives [144, 145, 146], and little work has been conducted on locally conflicting objectives. Searching on locally redundant objectives wastes computational cost, and the results in [141] indicate that objective reduction approaches designed for problems with globally conflicting objectives can still improve the performance of MOEAs on problems with locally redundant objectives. Therefore, detecting locally conflicting objectives, reducing locally redundant objectives, and analyzing the effects of locally conflicting objectives on the PFs are of great interest. Moreover, analysis of the correlation between objectives can help cluster the objectives into a number of groups to simplify the representation shown to DMs, because human beings can handle only around seven objectives.
Several approaches can help DMs understand the relationship between objectives. In [147], objectives are divided into five classes to help DMs understand the tradeoffs. Self-organizing maps (SOMs) [148] have been shown to be promising in revealing the tradeoff relationships between objectives [149, 150]. Correlation is another effective tool for analyzing the relationship between objectives; different metrics have been proposed to measure the degree of correlation (both linear and nonlinear), such as the covariance, mutual information entropy [151], and the nonlinear correlation information entropy (NCIE) [152, 153]. Based on these relations, many mature data mining techniques can be employed to choose a subset of conflicting objectives and simplify the original problem, such as feature selection [146], principal component analysis (PCA) [154], and maximum variance unfolding (MVU) [155]. The Pareto corner search evolutionary algorithm (PCSEA) [145] is a recently proposed objective reduction approach: it searches only for the corners of the PF and then uses the obtained solutions to analyze the relationship between objectives and identify a subset of non-correlated objectives.
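A crude sketch of correlation-based objective reduction is given below: compute the pairwise Pearson correlation over a solution set and flag strongly positively correlated (hence nearly non-conflicting) objective pairs as reduction candidates. As discussed above, such a global test cannot detect merely local redundancy. Function names and the threshold are our own choices:

```python
def correlation_matrix(front):
    """Pearson correlation between every pair of objectives over a set of
    solutions (each solution is a list of objective values)."""
    m, n = len(front[0]), len(front)
    means = [sum(f[k] for f in front) / n for k in range(m)]

    def cov(i, j):
        return sum((f[i] - means[i]) * (f[j] - means[j]) for f in front) / n

    corr = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(m):
            denom = (cov(i, i) * cov(j, j)) ** 0.5
            corr[i][j] = cov(i, j) / denom if denom > 0 else 0.0
    return corr

def redundant_pairs(corr, threshold=0.95):
    """Objective pairs whose correlation exceeds the threshold: they move
    together over this solution set and are globally nearly redundant."""
    m = len(corr)
    return [(i, j) for i in range(m) for j in range(i + 1, m)
            if corr[i][j] >= threshold]
```

Negatively correlated pairs indicate conflict and must be kept, while a pair flagged as redundant suggests one of its objectives could be dropped without changing the dominance structure much.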
Knee points indicate the degree of conflict and are interesting to DMs when they do not have specific preferences [68]. Knee point detection is currently based on definitions tailored to two- or three-objective problems [73]; the definition of a knee point in MaOPs is not yet well established, because the degree of conflict might vary between different pairs of objectives. Sensitivity to changes in individual objectives may exist in particular regions of the PF, which can be considered partial knee points that are of interest to DMs.
Functional maps from decision variables to objectives
In real-world applications, noise or uncertainty is inevitable. In such situations, DMs prefer solutions that are robust against small changes in the decision variables [156, 157, 158]. There have been some discussions of robust multiobjective optimization [159, 160, 161, 162, 163, 164], but little research has studied robustness in decision making, except for measuring attractiveness by a categorical based evaluation technique (MACBETH) [165]. Analysis of the mapping from decision variables to objectives [166] helps the search for robust solutions in preference-based methods.
To analyze the mapping from decision variables to objectives, artificial neural networks (ANNs) [167, 168], Bayesian learning [169, 170], and estimation of distribution algorithms (EDAs) [171, 172] have been employed.
Benchmark design

Preference simulation It is necessary to simulate the preferences with artificial functions [176], where uncertainty and the response to the algorithm should also be taken into account.

Objective correlation Both globally and locally conflicting objectives should be included in the benchmarks.

Ground truth The true optimal solutions should be provided for assessing the performance.
Performance assessment
Several performance indicators for measuring the performance of MOEAs have been proposed, such as the generational distance (GD) [177], the inverted generational distance (IGD) [173], and the hypervolume [15]. However, few performance indicators are dedicated to the evaluation of preference-based methods; one exception is [178], which considers both dominance and the distance to the preferences. In addition, an ideal metric for preference-based methods should evaluate whether the obtained solutions truly reflect the preferences, regardless of the preference model used.
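The two distance-based indicators can be sketched as follows. This is one common formulation (arithmetic mean of Euclidean nearest-neighbor distances); several variants with different norms or averaging exist in the literature:

```python
def _dist(a, b):
    """Euclidean distance between two objective vectors."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def gd(approx, reference):
    """Generational distance: average distance from each obtained solution
    to its nearest point on the reference (true) front; measures convergence."""
    return sum(min(_dist(a, r) for r in reference) for a in approx) / len(approx)

def igd(approx, reference):
    """Inverted generational distance: average distance from each reference
    point to its nearest obtained solution; sensitive to both convergence
    and coverage of the front."""
    return sum(min(_dist(r, a) for a in approx) for r in reference) / len(reference)
```

For a preference-based method, the reference set would be restricted to the preferred region of the PF, which is one way the indicator in [178] adapts this idea.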
Visualization
Visualization plays an important role in the interaction between DMs and preference-based optimization methods. When the number of objectives is four or larger, visualization becomes a challenge. Existing approaches can be divided into three classes, namely parallel coordinate-, mapping-, and aggregation tree-based approaches [179].
Approaches based on parallel coordinates visualize individual solutions in a parallel coordinate system, in which parallel axes describe the values of all objectives. Parallel coordinates [1] use a polyline with vertices on the parallel axes, while a heatmap [180] uses color to present the values on the parallel axes. These approaches can only show the tradeoff between adjacent objectives.
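The polyline construction behind a parallel-coordinate plot can be sketched without any plotting library: objective k is mapped to x = k, and the objective value is min-max normalized over the plotted set to give y in [0, 1]. The function name is ours:

```python
def parallel_coordinates(front):
    """Vertices of the parallel-coordinate polyline for each solution:
    one (x, y) vertex per objective, with x the objective index and y the
    value min-max normalized over the plotted solution set."""
    m = len(front[0])
    lo = [min(f[k] for f in front) for k in range(m)]
    hi = [max(f[k] for f in front) for k in range(m)]
    lines = []
    for f in front:
        lines.append([(k, (f[k] - lo[k]) / (hi[k] - lo[k])
                       if hi[k] > lo[k] else 0.5)
                      for k in range(m)])
    return lines
```

Crossing polylines between two adjacent axes indicate conflict between those two objectives, which is exactly the adjacent-pair limitation noted above: reordering the axes changes which tradeoffs are visible.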
Other approaches adopt dimension reduction techniques that can preserve the Pareto dominance relationships among individuals both globally and locally, such as Sammon mapping [181], NeuroScale [182], radial coordinate visualization (RadViz) [183], SOMs [149, 184], and Isomap [185]. These approaches are not as straightforward as the parallel coordinate-based approaches for analyzing the tradeoff relationships between the objectives, and they are time-consuming.
Approaches based on the aggregation tree [186, 187] measure the harmony between objectives to visualize their relationships. However, this kind of approach cannot show individual solutions.
Most existing visualization tools are not straightforward for DMs to understand. Ideally, both dominance and preference relationships should be presented in the visualization. Moreover, DMs should be able to zoom into interesting regions to obtain more detailed information.
Conclusion
Since preference-based multiobjective optimization is strongly motivated by real-world applications, research interest in this area has increased in recent years. Indeed, preference modeling is also a common need in many areas of artificial intelligence in which decision making is involved [188, 189, 190]. It thus becomes clear that preference modeling and learning are important not only for decision making and evolutionary optimization, but also for artificial intelligence research.
In this paper, we have provided a concise review of research on preference modeling and preference-based optimization methods, and discussed the open issues in both. We emphasize that preference-based multiobjective optimization is of paramount practical significance, and that preferences must be incorporated in many-objective optimization, where obtaining a representative subset of the entire Pareto front is unlikely.
Acknowledgements
This work was supported in part by an EPSRC Grant (No. EP/M017869/1) on "Data-driven surrogate-assisted evolutionary fluid dynamic optimisation", in part by the Joint Research Fund for Overseas Chinese, Hong Kong and Macao Scholars of the National Natural Science Foundation of China (No. 61428302), and in part by the Honda Research Institute Europe.
References
 1.Fleming P, Purshouse R, Lygoe R (2005) Manyobjective optimization: An engineering design perspective. In: Evolutionary multicriterion optimization. Springer, New York, pp 14–32Google Scholar
 2.Parmee IC, Cvetković D, Watson AH, Bonham CR (2000) Multiobjective satisfaction within an interactive evolutionary design environment. Evol Comput 8(2):197–222Google Scholar
 3.Miettinen K (1999) Nonlinear multiobjective optimization. Springer, New YorkGoogle Scholar
 4.Steuer RE (1986) Multiple criteria optimization: theory, computation, and applications. Wiley, New YorkGoogle Scholar
 5.Zhou A, BoYang Q, Li H, Zhao SZ, Suganthan PN, Zhang Q (2011) Multiobjective evolutionary algorithms: a survey of the state of the art. Swarm Evol Comput 1(1):32–49Google Scholar
 6.Li B, Li J, Tang K, Yao X (2015) Manyobjective evolutionary algorithms: a survey. ACM Comput Surv 48(1):13CrossRefGoogle Scholar
 7.Wang H, Jin Y, Yao X (2017) Diversity assessment in manyobjective optimization. IEEE Trans Cybern 47(6):1510–1522CrossRefGoogle Scholar
 8.Wagner T, Beume N, Naujoks B (2007) Pareto, aggregation, and indicatorbased methods in manyobjective optimization. In: Evolutionary multicriterion optimization. Springer, New York, pp 742–756Google Scholar
 9.Deb K, Pratap A, Agarwal S, Meyarivan TAMT (2002) A fast and elitist multiobjective genetic algorithm: NSGAII. IEEE Trans Evol Comput 6(2):182–197CrossRefGoogle Scholar
 10. Zitzler E, Laumanns M, Thiele L (2001) SPEA2: improving the strength Pareto evolutionary algorithm. In: Proceedings of EUROGEN 2001: evolutionary methods for design, optimization and control with applications to industrial problems, pp 1–21
 11. Praditwong K, Yao X (2007) How well do multiobjective evolutionary algorithms scale to large problems? In: Proceedings of the Congress on Evolutionary Computation. IEEE, pp 3959–3966
 12. Ishibuchi H, Tsukamoto N, Nojima Y (2008) Evolutionary many-objective optimization: a short review. In: Proceedings of the Congress on Evolutionary Computation. IEEE, pp 2419–2426
 13. Zitzler E, Künzli S (2004) Indicator-based selection in multiobjective search. In: International Conference on Parallel Problem Solving from Nature. Springer, New York, pp 832–842
 14. Wang H, Jiao L, Yao X (2015) Two_Arch2: an improved two-archive algorithm for many-objective optimization. IEEE Trans Evol Comput 19(4):524–541
 15. Zitzler E, Thiele L (1999) Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach. IEEE Trans Evol Comput 3(4):257–271
 16. Brockhoff D, Wagner T, Trautmann H (2012) On the properties of the R2 indicator. In: Genetic and Evolutionary Computation Conference. ACM, New York, pp 465–472
 17. Bader J, Zitzler E (2011) HypE: an algorithm for fast hypervolume-based many-objective optimization. Evol Comput 19(1):45–76
 18. Gómez RH, Coello CAC (2013) MOMBI: a new metaheuristic for many-objective optimization based on the R2 indicator. In: Proceedings of the Congress on Evolutionary Computation. IEEE, pp 2488–2495
 19. Zhang Q, Li H (2007) MOEA/D: a multiobjective evolutionary algorithm based on decomposition. IEEE Trans Evol Comput 11(6):712–731
 20. Deb K, Jain H (2014) An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints. IEEE Trans Evol Comput 18(4):577–601
 21. Cheng R, Jin Y, Olhofer M, Sendhoff B (2016) A reference vector guided evolutionary algorithm for many-objective optimization. IEEE Trans Evol Comput 20(5):773–791. doi:10.1109/TEVC.2016.2519378
 22. Jiao L, Wang H, Shang R, Liu F (2013) A coevolutionary multiobjective optimization algorithm based on direction vectors. Inf Sci 228:90–112
 23. Liu HL, Gu F, Zhang Q (2014) Decomposition of a multiobjective optimization problem into a number of simple multiobjective subproblems. IEEE Trans Evol Comput 18(3):450–455
 24. Das I, Dennis JE (1998) Normal-boundary intersection: a new method for generating Pareto optimal points in nonlinear multicriteria optimization problems. SIAM J Optim 8(3):631–657
 25. Miller GA (1956) The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychol Rev 63(2):81
 26. Nisbett RE, Wilson TD (1977) Telling more than we can know: verbal reports on mental processes. Psychol Rev 84(3):231
 27. Slovic P, Lichtenstein S (1971) Comparison of Bayesian and regression approaches to the study of information processing in judgment. Organ Behav Hum Perform 6(6):649–744
 28. Thiele L, Miettinen K, Korhonen PJ, Molina J (2009) A preference-based evolutionary algorithm for multiobjective optimization. Evol Comput 17(3):411–436
 29. Hirsch C, Shukla PK, Schmeck H (2011) Variable preference modeling using multiobjective evolutionary algorithms. In: Evolutionary Multi-Criterion Optimization. Springer, New York, pp 91–105
 30. Gembicki FW (1974) Vector optimization for control with performance and parameter sensitivity indices. Ph.D. thesis, Case Western Reserve University, Cleveland, OH
 31. Wang R, Purshouse RC, Fleming PJ (2013) Preference-inspired coevolutionary algorithms for many-objective optimization. IEEE Trans Evol Comput 17(4):474–494
 32. Fonseca CM, Fleming PJ (1993) Genetic algorithms for multiobjective optimization: formulation, discussion and generalization. In: Proceedings of the International Conference on Genetic Algorithms, pp 416–423
 33. Deb K (1999) Solving goal programming problems using multiobjective genetic algorithms. In: Proceedings of the 1999 Congress on Evolutionary Computation, CEC 99, vol 1. IEEE, pp 77–84
 34. Wagner T, Trautmann H (2010) Integration of preferences in hypervolume-based multiobjective evolutionary algorithms by means of desirability functions. IEEE Trans Evol Comput 14(5):688–701
 35. Sakawa M, Kato K (2002) An interactive fuzzy satisficing method for general multiobjective 0–1 programming problems through genetic algorithms with double strings based on a reference solution. Fuzzy Sets Syst 125(3):289–300
 36. Coello CAC (2000) Handling preferences in evolutionary multiobjective optimization: a survey. In: Proceedings of the Congress on Evolutionary Computation, vol 1. IEEE, pp 30–37
 37. Phelps S, Köksalan M (2003) An interactive evolutionary metaheuristic for multiobjective combinatorial optimization. Manage Sci 49(12):1726–1738
 38. Köksalan M, Karahan I (2010) An interactive territory defining evolutionary algorithm: iTDEA. IEEE Trans Evol Comput 14(5):702–722
 39. Wang R, Purshouse RC, Fleming PJ (2015) Preference-inspired coevolutionary algorithms using weight vectors. Eur J Oper Res 243(2):423–441
 40. Wang R, Zhou Z, Ishibuchi H, Liao T, Zhang T (2016) Localized weighted sum method for many-objective optimization. IEEE Trans Evol Comput. doi:10.1109/TEVC.2016.2611642
 41. Branke J, Kaußler T, Schmeck H (2001) Guidance in evolutionary multiobjective optimization. Adv Eng Softw 32(6):499–507
 42. Branke J, Deb K (2005) Integrating user preferences into evolutionary multiobjective optimization. In: Knowledge incorporation in evolutionary computation. Springer, New York, pp 461–477
 43. Ma X, Zhang Q, Yang J, Zhu Z (2017) On Tchebycheff decomposition approaches for multiobjective evolutionary optimization. IEEE Trans Evol Comput. doi:10.1109/TEVC.2017.2704118
 44. Ishibuchi H, Sakane Y, Tsukamoto N, Nojima Y (2009) Adaptation of scalarizing functions in MOEA/D: an adaptive scalarizing function-based multiobjective evolutionary algorithm. In: Evolutionary Multi-Criterion Optimization. Springer, New York, pp 438–452
 45. Ishibuchi H, Sakane Y, Tsukamoto N, Nojima Y (2010) Simultaneous use of different scalarizing functions in MOEA/D. In: Genetic and Evolutionary Computation Conference. ACM, New York, pp 519–526
 46. Wang R, Zhang Q, Zhang T (2016) Decomposition-based algorithms using Pareto adaptive scalarizing methods. IEEE Trans Evol Comput 20(6):821–837
 47. Liu HL, Gu FQ, Cheung YM (2010) T-MOEA/D: MOEA/D with objective transform in multiobjective problems. In: International Conference of Information Science and Management Engineering (ISME), vol 2. IEEE, pp 282–285
 48. Sato H (2014) Inverted PBI in MOEA/D and its impact on the search performance on multi and many-objective optimization. In: Genetic and Evolutionary Computation Conference. ACM, New York, pp 645–652
 49. Haimes YY, Hall WA (1974) Multiobjectives in water resource systems analysis: the surrogate worth trade off method. Water Resour Res 10(4):615–624
 50. Brafman RI (2011) Relational preference rules for control. Artif Intell 175(7):1180–1193
 51. Zitzler E, Thiele L, Bader J (2008) SPAM: set preference algorithm for multiobjective optimization. In: International Conference on Parallel Problem Solving from Nature, vol 5199. Springer, New York, pp 847–858
 52. Jaimes AL, Coello CAC (2009) Study of preference relations in many-objective optimization. In: Genetic and Evolutionary Computation Conference. ACM, New York, pp 611–618
 53. Saaty TL (2008) Decision making with the analytic hierarchy process. Int J Serv Sci 1(1):83–98
 54. Feldman AM (1989) Preferences and utility. In: Welfare economics and social choice theory. Springer, New York, pp 9–22
 55. Jeantet G, Spanjaard O (2011) Computing rank dependent utility in graphical models for sequential decision problems. Artif Intell 175(7):1366–1389
 56. Pedro LR, Takahashi R (2011) Modeling decision-maker preferences through utility function level sets. In: Evolutionary Multi-Criterion Optimization. Springer, New York, pp 550–563
 57. Bana e Costa CA (ed) (2012) Readings in multiple criteria decision aid. Springer Science & Business Media, New York
 58. Greenwood GW, Hu X, D’Ambrosio JG (1996) Fitness functions for multiple objective optimization problems: combining preferences with Pareto rankings. In: Foundations of Genetic Algorithms (FOGA), pp 437–455
 59. White CC III, Sage AP, Dozono S (1984) A model of multiattribute decision-making and trade-off weight determination under uncertainty. IEEE Trans Syst Man Cybern 14(2):223–229
 60. Cvetković D, Parmee IC (2002) Preferences and their application in evolutionary multiobjective optimization. IEEE Trans Evol Comput 6(1):42–57
 61. Rekiek B, De Lit P, Pellichero F, L’Eglise T, Falkenauer E, Delchambre A (2000) Dealing with user’s preferences in hybrid assembly lines design. IFAC Proc Vol 33(17):989–994
 62. Waegeman W, De Baets B (2011) On the ERA ranking representability of pairwise bipartite ranking functions. Artif Intell 175(7):1223–1250
 63. Siskos J, Lombard J, Oudiz A (1986) The use of multicriteria outranking methods in the comparison of control options against a chemical pollutant. J Oper Res Soc 37(4):357–371
 64. Brans JP, Vincke P, Mareschal B (1986) How to select and how to rank projects: the PROMETHEE method. Eur J Oper Res 24(2):228–238
 65. Massebeuf S, Fonteix C, Kiss LN, Marc I, Pla F, Zaras K (1999) Multicriteria optimization and decision engineering of an extrusion process aided by a diploid genetic algorithm. In: Proceedings of the 1999 Congress on Evolutionary Computation, CEC 99, vol 1. IEEE, pp 14–21
 66. Shukla PK, Emmerich M, Deutz A (2013) A theoretical analysis of curvature-based preference models. In: International Conference on Evolutionary Multi-Criterion Optimization. Springer, New York, pp 367–382
 67. Rachmawati L, Srinivasan D (2009) Multiobjective evolutionary algorithm with controllable focus on the knees of the Pareto front. IEEE Trans Evol Comput 13(4):810–824
 68. Branke J, Deb K, Dierolf H, Osswald M (2004) Finding knees in multiobjective optimization. In: International Conference on Parallel Problem Solving from Nature. Springer, New York, pp 722–731
 69. Deb K, Gupta S (2011) Understanding knee points in bicriteria problems and their implications as preferred solution principles. Eng Optim 43(11):1175–1204
 70. Jin Y, Sendhoff B (2008) Pareto-based multiobjective machine learning: an overview and case studies. IEEE Trans Syst Man Cybern Part C Appl Rev 38(3):397–415
 71. Smith C, Jin Y (2014) Evolutionary multi-objective generation of recurrent neural network ensembles for time series prediction. Neurocomputing 143:302–311
 72. Li L, Yao X, Stolkin R, Gong M, He S (2014) An evolutionary multiobjective approach to sparse reconstruction. IEEE Trans Evol Comput 18(6):827–845
 73. Zhang X, Tian Y, Jin Y (2015) A knee point-driven evolutionary algorithm for many-objective optimization. IEEE Trans Evol Comput 19(6):761–776
 74. Wang H, He S, Yao X (2017) Nadir point estimation for many-objective optimization problems based on emphasized critical regions. Soft Comput 21(9):2283–2295
 75. Branke J, Deb K, Miettinen K, Słowiński R (2008) Multiobjective optimization: interactive and evolutionary approaches, vol 5252. Springer, New York
 76. Deb K, Kumar A (2007) Interactive evolutionary multiobjective optimization and decision-making using reference direction method. In: Genetic and Evolutionary Computation Conference. ACM, New York, pp 781–788
 77. Amiri M, Ekhtiari M, Yazdani M (2011) Nadir compromise programming: a model for optimization of multiobjective portfolio problem. Expert Syst Appl 38(6):7222–7226
 78. Voget S, Kolonko M (1998) Multidimensional optimization with a fuzzy genetic algorithm. J Heuristics 4(3):221–244
 79. Hadjali A, Mokhtari A, Pivert O (2012) Expressing and processing complex preferences in route planning queries: towards a fuzzy-set-based approach. Fuzzy Sets Syst 196:82–104
 80. Pirjanian P (1998) Multiple objective action selection and behavior fusion using voting. Ph.D. thesis, Department of Medical Informatics and Image Analysis, Institute of Electronic Systems, Aalborg University, Aalborg, Denmark
 81. Fodor JC, Roubens MR (1994) Fuzzy preference modelling and multicriteria decision support, vol 14. Springer Science & Business Media, New York
 82. Jin Y, Sendhoff B (2002) Fuzzy preference incorporation into evolutionary multiobjective optimization. In: Proceedings of the 4th Asia-Pacific Conference on Simulated Evolution and Learning, vol 1, pp 26–30
 83. Abraham A, Jain L (2005) Evolutionary multiobjective optimization. Springer, New York
 84. Khare V, Yao X, Deb K (2003) Performance scaling of multiobjective evolutionary algorithms. In: Evolutionary Multi-Criterion Optimization. Springer, New York, pp 376–390
 85. Miettinen K, Hakanen J, Podkopaev D (2016) Interactive nonlinear multiobjective optimization methods. In: Multiple criteria decision analysis. Springer, New York, pp 927–976
 86. Said LB, Bechikh S, Ghédira K (2010) The r-dominance: a new dominance relation for interactive evolutionary multicriteria decision making. IEEE Trans Evol Comput 14(5):801–818
 87. Deb K, Chaudhuri S (2005) I-EMO: an interactive evolutionary multiobjective optimization tool. In: Pattern Recognition and Machine Intelligence. Springer, New York, pp 690–695
 88. Miettinen K, Podkopaev D, Ruiz F, Luque M (2015) A new preference handling technique for interactive multiobjective optimization without trading-off. J Global Optim 63(4):633–652
 89. Miettinen K, Ruiz F (2016) NAUTILUS framework: towards trade-off-free interaction in multiobjective optimization. J Bus Econ 86(1–2):5–21
 90. Deb K, Chaudhuri S (2007) I-MODE: an interactive multiobjective optimization and decision-making using evolutionary methods. In: Evolutionary Multi-Criterion Optimization. Springer, New York, pp 788–802
 91. Sindhya K, Ruiz AB, Miettinen K (2011) A preference based interactive evolutionary algorithm for multiobjective optimization: PIE. In: Evolutionary Multi-Criterion Optimization. Springer, New York, pp 212–225
 92. Gong M, Liu F, Zhang W, Jiao L, Zhang Q (2011) Interactive MOEA/D for multiobjective decision making. In: Genetic and Evolutionary Computation Conference. ACM, New York, pp 721–728
 93. Ruiz AB, Luque M, Miettinen K, Saborido R (2015) An interactive evolutionary multiobjective optimization method: interactive WASF-GA. In: Evolutionary Multi-Criterion Optimization. Springer, New York, pp 249–263
 94. Ruiz AB, Luque M, Ruiz F, Saborido R (2015) A combined interactive procedure using preference-based evolutionary multiobjective optimization: application to the efficiency improvement of the auxiliary services of power plants. Expert Syst Appl 42(21):7466–7482
 95. Liu R, Wang R, Feng W, Huang J, Jiao L (2016) Interactive reference region based multiobjective evolutionary algorithm through decomposition. IEEE Access 4:7331–7346
 96. Battiti R, Passerini A (2010) Brain-computer evolutionary multiobjective optimization: a genetic algorithm adapting to the decision maker. IEEE Trans Evol Comput 14(5):671–687
 97. Branke J, Greco S, Słowiński R, Zielniewicz P (2009) Interactive evolutionary multiobjective optimization using robust ordinal regression. In: Evolutionary Multi-Criterion Optimization. Springer, New York, pp 554–568
 98. Deb K, Sinha A, Korhonen PJ, Wallenius J (2010) An interactive evolutionary multiobjective optimization method based on progressively approximated value functions. IEEE Trans Evol Comput 14(5):723–739
 99. Sinha A, Korhonen P, Wallenius J, Deb K (2014) An interactive evolutionary multiobjective optimization algorithm with a limited number of decision maker calls. Eur J Oper Res 233(3):674–688
 100. Chankong V, Haimes YY (1983) Multiobjective decision making: theory and methodology. North-Holland, New York
 101. Hwang CL, Masud ASM (1979) Multiple objective decision making: methods and applications. Springer, New York
 102. Sawaragi Y, Nakayama H, Tanino T (1985) Theory of multiobjective optimization, vol 176. Elsevier, Amsterdam
 103. Roy B (2005) Paradigms and challenges. In: Multiple criteria decision analysis: state of the art surveys. Springer, New York, pp 3–24
 104. Bouyssou D (1986) Some remarks on the notion of compensation in MCDM. Eur J Oper Res 26(1):150–160
 105. Figueira J, Mousseau V, Roy B (2005) ELECTRE methods. In: Multiple criteria decision analysis: state of the art surveys. Springer, New York, pp 133–153
 106. Brans JP, Mareschal B (2005) PROMETHEE methods. In: Multiple criteria decision analysis: state of the art surveys. Springer, New York, pp 163–186
 107. Siskos Y, Grigoroudis E, Matsatsinis NF (2005) UTA methods. In: Multiple criteria decision analysis: state of the art surveys. Springer, New York, pp 297–334
 108. Dyer JS (2005) MAUT: multiattribute utility theory. In: Multiple criteria decision analysis: state of the art surveys. Springer, New York, pp 265–292
 109. Meyer P, Roubens M (2005) Choice, ranking and sorting in fuzzy multiple criteria decision aid. In: Multiple criteria decision analysis: state of the art surveys. Springer, New York, pp 471–503
 110. Greco S, Matarazzo B, Słowiński R (2005) Decision rule approach. In: Multiple criteria decision analysis: state of the art surveys. Springer, New York, pp 507–555
 111. Ehrgott M, Wiecek MM (2005) Multiobjective programming. In: Multiple criteria decision analysis: state of the art surveys. Springer, New York, pp 667–708
 112. Larichev OI (1992) Cognitive validity in design of decision-aiding techniques. J Multi-Criteria Decis Anal 1(3):127–138
 113. Rachmawati L, Srinivasan D (2006) Preference incorporation in multiobjective evolutionary algorithms: a survey. In: Proceedings of the Congress on Evolutionary Computation. IEEE, pp 962–968
 114. Fonseca CM, Fleming PJ (1998) Multiobjective optimization and multiple constraint handling with evolutionary algorithms. I. A unified formulation. IEEE Trans Syst Man Cybern Part A Syst Hum 28(1):26–37
 115. Tan KC, Khor EF, Lee TH, Sathikannan R (2003) An evolutionary algorithm with advanced goal and priority specification for multiobjective optimization. J Artif Intell Res 18:183–215
 116. Yang X, Gen M (1994) Evolution program for bicriteria transportation problem. Comput Ind Eng 27(1):481–484
 117. Wilson PB, Macleod MD (1993) Low implementation cost IIR digital filter design using genetic algorithms. In: IEE/IEEE Workshop on Natural Algorithms in Signal Processing, vol 1, pp 1–4
 118. Wienke D, Lucasius C, Kateman G (1992) Multicriteria target vector optimization of analytical procedures using a genetic algorithm: part I. Theory, numerical simulations and application to atomic emission spectroscopy. Anal Chim Acta 265(2):211–225
 119. Quagliarella D, Vicini A (1997) Coupling genetic algorithms and gradient based optimization techniques. In: Genetic algorithms and evolution strategy in engineering and computer science: recent advances and industrial applications. Wiley, Hoboken, pp 289–309
 120. Wang R, Purshouse RC, Giagkiozis I, Fleming PJ (2015) The iPICEA-g: a new hybrid evolutionary multi-criteria decision making approach using the brushing technique. Eur J Oper Res 243(2):442–453
 121. Cheng R, Rodemann T, Fischer M, Olhofer M, Jin Y (2017) Evolutionary many-objective optimization of hybrid electric vehicle control: from general optimization to preference articulation. IEEE Trans Emerg Top Comput Intell 1(2):97–111
 122. Wierzbicki AP (1980) The use of reference objectives in multiobjective optimization. In: Multiple criteria decision making theory and application. Springer, New York, pp 468–486
 123. Korhonen PJ, Laakso J (1986) A visual interactive method for solving the multiple criteria problem. Eur J Oper Res 24(2):277–287
 124. Nikulin Y, Miettinen K, Mäkelä MM (2012) A new achievement scalarizing function based on parameterization in multiobjective optimization. OR Spectr 34(1):69–87
 125. Jaszkiewicz A, Słowiński R (1999) The ‘light beam search’ approach: an overview of methodology and applications. Eur J Oper Res 113(2):300–314
 126. Liu R, Wang X, Liu J, Fang L, Jiao L (2013) A preference multiobjective optimization based on adaptive rank clone and differential evolution. Nat Comput 12(1):109–132
 127. Deb K, Kumar A (2007) Light beam search based multiobjective optimization using evolutionary algorithms. In: Proceedings of the Congress on Evolutionary Computation. IEEE, pp 2125–2132
 128. Molina J, Santana LV, Hernández-Díaz AG, Coello CAC, Caballero R (2009) g-dominance: reference point based dominance for multiobjective metaheuristics. Eur J Oper Res 197(2):685–692
 129. Ishibuchi H, Tsukamoto N, Sakane Y, Nojima Y (2009) Hypervolume approximation using achievement scalarizing functions for evolutionary many-objective optimization. In: Proceedings of the Congress on Evolutionary Computation. IEEE, pp 530–537
 130. Ishibuchi H, Tsukamoto N, Sakane Y, Nojima Y (2010) Indicator-based evolutionary algorithm with hypervolume approximation by achievement scalarizing functions. In: Genetic and Evolutionary Computation Conference. ACM, New York, pp 527–534
 131. Chugh T, Sindhya K, Hakanen J, Miettinen K (2015) An interactive simple indicator-based evolutionary algorithm (I-SIBEA) for multiobjective optimization problems. In: Evolutionary Multi-Criterion Optimization. Springer, New York, pp 277–291
 132. Bishop CM (2006) Pattern recognition and machine learning. Springer, New York
 133. Han J, Kamber M, Pei J (2011) Data mining: concepts and techniques. Elsevier, Amsterdam
 134. Sun X, Gong D, Jin Y, Chen S (2013) A new surrogate-assisted interactive genetic algorithm with weighted semisupervised learning. IEEE Trans Cybern 43(2):685–698
 135. Jin Y (2011) Surrogate-assisted evolutionary computation: recent advances and future challenges. Swarm Evol Comput 1(2):61–70
 136. Arrow KJ (2012) Social choice and individual values, vol 12. Yale University Press, New Haven
 137. Janis IL, Mann L (1977) Decision making: a psychological analysis of conflict, choice, and commitment. Free Press, New York
 138. Ackoff RL (1967) Management misinformation systems. Manage Sci 14(4):B147
 139. Edwards W (1954) The theory of decision making. Psychol Bull 51(4):380
 140. Hoch SJ, Schkade DA (1996) A psychological approach to decision support systems. Manage Sci 42(1):51–64
 141. Wang H, Yao X (2016) Objective reduction based on nonlinear correlation information entropy. Soft Comput 20(6):2393–2407
 142. Freitas AR, Fleming PJ, Guimarães FG (2013) A non-parametric harmony-based objective reduction method for many-objective optimization. In: IEEE International Conference on Systems, Man, and Cybernetics. IEEE, pp 651–656
 143. de Freitas ARR, Fleming PJ, Guimarães FG (2015) Aggregation trees for visualization and dimension reduction in many-objective optimization. Inf Sci 298:288–314
 144. Brockhoff D, Zitzler E (2009) Objective reduction in evolutionary multiobjective optimization: theory and applications. Evol Comput 17(2):135–166
 145. Singh HK, Isaacs A, Ray T (2011) A Pareto corner search evolutionary algorithm and dimensionality reduction in many-objective optimization problems. IEEE Trans Evol Comput 15(4):539–556
 146. Jaimes AL, Coello CAC, Chakraborty D (2008) Objective reduction using a feature selection technique. In: Genetic and Evolutionary Computation Conference. ACM, New York, pp 673–680
 147. Hämäläinen JP, Miettinen K, Tarvainen P, Toivanen J (2003) Interactive solution approach to a multiobjective optimization problem in a paper machine headbox design. J Optim Theory Appl 116(2):265–281
 148. Kangas JA, Kohonen T, Laaksonen JT (1990) Variants of self-organizing maps. IEEE Trans Neural Netw 1(1):93–99
 149. Obayashi S, Sasaki D (2003) Visualization and data mining of Pareto solutions using self-organizing map. In: Evolutionary Multi-Criterion Optimization. Springer, New York, pp 796–809
 150. Nekolny B (2010) Contextual self-organizing maps for visual design space exploration. Master’s thesis, Iowa State University
 151. Maes F, Collignon A, Vandermeulen D, Marchal G, Suetens P (1997) Multimodality image registration by maximization of mutual information. IEEE Trans Med Imaging 16(2):187–198
 152. Wang Q, Shen Y, Zhang JQ (2005) A nonlinear correlation measure for multivariable data set. Phys D 200(3):287–295
 153. Wang H, Jin Y (2017) Efficient nonlinear correlation detection for decomposed search in evolutionary multiobjective optimization. In: Proceedings of the Congress on Evolutionary Computation. IEEE, pp 649–656
 154. Deb K, Saxena DK (2005) On finding Pareto-optimal solutions through dimensionality reduction for certain large-dimensional multiobjective optimization problems. Technical report, Indian Institute of Technology Kanpur
 155. Saxena DK, Duro JA, Tiwari A, Deb K, Zhang Q (2013) Objective reduction in many-objective optimization: linear and nonlinear algorithms. IEEE Trans Evol Comput 17(1):77–99
 156. Deb K, Gupta H (2006) Introducing robustness in multiobjective optimization. Evol Comput 14(4):463–494
 157. Jin Y, Sendhoff B (2003) Trade-off between performance and robustness: an evolutionary multiobjective approach. In: Fonseca CM, Fleming PJ, Zitzler E, Thiele L, Deb K (eds) Evolutionary Multi-Criterion Optimization, EMO 2003. Lecture Notes in Computer Science, vol 2632. Springer, Berlin, pp 237–251
 158. Jin Y, Tang K, Yu X, Sendhoff B, Yao X (2013) A framework for finding robust optimal solutions over time. Memet Comput 5(1):3–18
 159. Deb K, Gupta H (2005) Searching for robust Pareto-optimal solutions in multiobjective optimization. Lect Notes Comput Sci 3410:150–164
 160. Sülflow A, Drechsler N, Drechsler R (2007) Robust multiobjective optimization in high dimensional spaces. In: Evolutionary Multi-Criterion Optimization. Springer, New York, pp 715–726
 161. Gunawan S, Azarm S (2005) Multiobjective robust optimization using a sensitivity region concept. Struct Multidiscip Optim 29(1):50–60
 162. Li M, Azarm S, Aute V (2005) A multiobjective genetic algorithm for robust design optimization. In: Genetic and Evolutionary Computation Conference. ACM, New York, pp 771–778
 163. Lim D, Ong YS, Jin Y, Sendhoff B, Lee BS (2006) Inverse multiobjective robust evolutionary optimization. Genet Program Evol Mach 7(4):383–404
 164. Greco S, Słowiński R, Figueira JR, Mousseau V (2010) Robust ordinal regression. In: Trends in multiple criteria decision analysis. Springer, New York, pp 241–283
 165. Bana e Costa CA, De Corte JM, Vansnick JC (2005) On the mathematical foundation of MACBETH. Springer, New York
 166. Wang H, Jiao L, Shang R, He S, Liu F (2015) A memetic optimization strategy based on dimension reduction in decision space. Evol Comput 23(1):69–100
 167. Adra SF, Dodd TJ, Griffin IA, Fleming PJ (2009) Convergence acceleration operator for multiobjective optimization. IEEE Trans Evol Comput 13(4):825–847
 168. Gaspar-Cunha A, Vieira A (2004) A hybrid multiobjective evolutionary algorithm using an inverse neural network. In: Hybrid Metaheuristics, pp 25–30
 169. Laumanns M, Ocenasek J (2002) Bayesian optimization algorithms for multiobjective optimization. In: International Conference on Parallel Problem Solving from Nature. Springer, New York, pp 298–307
 170. Khan N, Goldberg DE, Pelikan M (2002) Multiobjective Bayesian optimization algorithm. Urbana 51:684–684
 171. Larrañaga P, Lozano JA (2002) Estimation of distribution algorithms: a new tool for evolutionary computation, vol 2. Springer Science & Business Media, New York
 172. Hauschild M, Pelikan M (2011) An introduction and survey of estimation of distribution algorithms. Swarm Evol Comput 1(3):111–128
 173. Zitzler E, Deb K, Thiele L (2000) Comparison of multiobjective evolutionary algorithms: empirical results. Evol Comput 8(2):173–195
 174. Deb K, Thiele L, Laumanns M, Zitzler E (2002) Scalable multiobjective optimization test problems. In: Proceedings of the 2002 Congress on Evolutionary Computation, CEC 2002, vol 1. IEEE, pp 825–830
 175. Huband S, Hingston P, Barone L, While L (2006) A review of multiobjective test problems and a scalable test problem toolkit. IEEE Trans Evol Comput 10(5):477–506
 176. López-Ibáñez M, Knowles J (2015) Machine decision makers as a laboratory for interactive EMO. In: Evolutionary Multi-Criterion Optimization. Springer, New York, pp 295–309
 177. Van Veldhuizen DA (1999) Multiobjective evolutionary algorithms: classifications, analyses, and new innovations. Technical report, DTIC Document
 178. Yu G, Zheng J, Li X (2015) An improved performance metric for multiobjective evolutionary algorithms with user preferences. In: Proceedings of the Congress on Evolutionary Computation. IEEE, pp 908–915
 179. He Z, Yen GG (2016) Visualization and performance metric in many-objective optimization. IEEE Trans Evol Comput 20(3):386–402
 180. Pryke A, Mostaghim S, Nazemi A (2007) Heatmap visualization of population based multiobjective algorithms. In: Evolutionary Multi-Criterion Optimization. Springer, New York, pp 361–375
 181. Valdés JJ, Barton AJ (2007) Visualizing high dimensional objective spaces for multiobjective optimization: a virtual reality approach. In: Proceedings of the Congress on Evolutionary Computation. IEEE, pp 4199–4206
 182. Lowe D, Tipping M (1996) Feedforward neural networks and topographic mappings for exploratory data analysis. Neural Comput Appl 4(2):83–95
 183. Hoffman P, Grinstein G, Marx K, Grosse I, Stanley E (1997) DNA visual and analytic data mining. In: Proceedings of Visualization. IEEE, pp 437–441
 184. Chen S, Amid D, Shir OM, Limonad L, Boaz D, Anaby-Tavor A, Schreck T (2013) Self-organizing maps for multiobjective Pareto frontiers. In: IEEE Pacific Visualization Symposium (PacificVis). IEEE, pp 153–160
 185. Tenenbaum JB, De Silva V, Langford JC (2000) A global geometric framework for nonlinear dimensionality reduction. Science 290(5500):2319–2323
 186. Silva R, Salimi A, Li M, Freitas ARR, Guimarães FG, Lowther DA (2016) Visualization and analysis of tradeoffs in many-objective optimization: a case study on the interior permanent magnet motor design. IEEE Trans Magn 52(3):1–4
 187. Freitas ARR, Silva RCP, Guimarães FG (2014) On the visualization of tradeoffs and reducibility in many-objective optimization. In: Genetic and Evolutionary Computation Conference. ACM, New York, pp 1091–1098
 188. Pigozzi G, Tsoukiàs A, Viappiani P (2016) Preferences in artificial intelligence. Ann Math Artif Intell 77(3–4):361–401
 189. Goldsmith J, Junker U (2009) Preference handling for artificial intelligence. AI Mag 29(4):9
 190. Domshlak C, Hüllermeier E, Kaci S, Prade H (2011) Preferences in AI: an overview. Artif Intell 175(7):1037–1052
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.