
Complex & Intelligent Systems

Volume 3, Issue 4, pp 233–245

A mini-review on preference modeling and articulation in multi-objective optimization: current status and challenges

  • Handing Wang
  • Markus Olhofer
  • Yaochu Jin
Open Access
Survey and State of the Art

Abstract

Evolutionary multi-objective optimization aims to provide a representative subset of the Pareto front to decision makers. In practice, however, decision makers are usually interested in only a particular part of the Pareto front of the multi-objective optimization problem. This is particularly true when the number of objectives becomes large. Over the past decade, preference-based multi-objective optimization has attracted increasing attention from both academia and industry due to its significance in both theory and practice. Significant progress has been made in evolutionary multi-objective optimization and multi-criteria decision communities, although many open issues still remain to be addressed. This paper provides a concise review on preference-based multi-objective optimization, including various preference modeling methods and existing preference-based optimization methods, as well as a brief discussion of the main future challenges.

Keywords

Multi-objective optimization · Preference modeling · Preference learning

Introduction

Most real-world optimization problems in science, engineering and even daily life need to take into account multiple and often conflicting criteria [1, 2]. Such problems are known as multi-objective optimization problems (MOPs), which can be formulated mathematically as follows:
$$\begin{aligned} \min _{x\in \Omega } F(\mathbf {x})=\{f_{1}(\mathbf {x}),\ldots ,f_{m}(\mathbf {x})\}, \end{aligned}$$
(1)
where m is the number of objectives, \(\mathbf {x}\) is an n-dimensional decision vector, and the feasible region is defined by \(\Omega \). For two arbitrary solutions \(\mathbf {x}^{1},\mathbf {x}^{2}\in \Omega \), \(\mathbf {x}^{1}\) is said to dominate \(\mathbf {x}^{2}\) (denoted as \(\mathbf {x}^{1}\preceq \mathbf {x}^{2}\)) if \(f_{i}(\mathbf {x}^{1})\le f_{i}(\mathbf {x}^{2})\) for all \(i=1,\ldots ,m\) and \(F(\mathbf {x}^{1})\ne F(\mathbf {x}^{2})\). A solution \(\mathbf {x}^{*}\in \Omega \) that is not dominated by any other feasible solution in \(\Omega \) is called a Pareto optimal solution [3]. Typically, a set of Pareto optimal solutions exists for the MOP described in Eq. (1); this set of \(\mathbf {x}^{*}\) is called the Pareto set (PS), and the set of the corresponding objective vectors \(F(\mathbf {x^{*}})\) is called the Pareto front (PF).
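As a concrete illustration of this dominance relation, the following minimal Python sketch (the function name and array layout are our own choices) checks whether one objective vector dominates another under minimization:

```python
import numpy as np

def dominates(f1, f2):
    """Return True if objective vector f1 dominates f2 (minimization):
    f1 is no worse in every objective and differs in at least one."""
    f1, f2 = np.asarray(f1), np.asarray(f2)
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))

# (1, 2) dominates (2, 2), but (1, 2) and (2, 1) are mutually non-dominated.
print(dominates([1, 2], [2, 2]))  # True
print(dominates([1, 2], [2, 1]))  # False
```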

It is helpful for decision makers (DMs) to make their decisions if the whole Pareto optimal set is already known, because the whole set provides an overall picture of the distribution of Pareto optimal solutions. To obtain the entire PF, or more precisely, a representative subset of the PF, a large number of algorithms and methodologies have been designed in recent decades in both the traditional mathematical programming and evolutionary computation communities. Traditional mathematical programming methods such as the weighted aggregation methods [4] cannot identify the whole PF in a single run. Evolutionary algorithms (EAs), as population-based search methods, are believed to be well suited for solving MOPs in that they can obtain a set of non-dominated solutions in one run. Multi-objective evolutionary algorithms (MOEAs) [5] have now become a mature tool for solving MOPs. Generally speaking, existing MOEAs can be divided into three categories according to their selection criteria, namely Pareto-, indicator-, and reference-based MOEAs [6, 7, 8], even though a number of MOEAs might fall into more than one category or employ additional selection criteria.

Pareto-based MOEAs employ Pareto dominance as their main selection criterion for convergence. Different diversity maintenance strategies are adopted in different Pareto-based MOEAs, such as the crowding distance in NSGA-II [9] and environmental selection in SPEA2 [10]. However, it has been shown that Pareto-based MOEAs fail to solve many-objective optimization problems (MaOPs), defined as MOPs with more than three objectives [11], mainly because the dominance comparison becomes less effective as the number of objectives increases for a limited population size [12].

Indicator-based MOEAs use a single indicator as the selection criterion to replace the Pareto dominance in Pareto-based MOEAs. \(I_{\epsilon +}\) [13, 14], Hypervolume [15], and R2 [16] have been applied in IBEA [13], HypE [17], and MOMBI [18], respectively.

Reference-based MOEAs decompose an MOP into a set of sub-problems according to the pre-assigned references, such as weights [19], reference points [20], reference vectors [21], and direction vectors [22, 23]. Different aggregation functions have been suggested to convert an MOP into a set of single-objective optimization problems, including weighted sum [3], Tchebycheff approach [3], and penalty-based boundary intersection (PBI) approach [24].

Although a representative subset of the overall PF can be located by most MOEAs for two- or three-objective optimization problems, selecting a few solutions to be implemented is not trivial. The decision-making process becomes much harder for many-objective optimization problems, because human beings are believed to be able to handle only up to about seven criteria [25, 26, 27]. Therefore, articulation of preferences is essential for solving MOPs [28]: it can guide optimization algorithms to find the most preferred solutions rather than the whole PF. To incorporate preferences into multi-objective optimization algorithms, the modeling and articulation of preferences must be considered. Generally, preferences can be involved at different stages of multi-objective optimization algorithms, and preference-based optimization methods can be classified into three categories: a priori, interactive, and a posteriori methods [28]. However, it is unclear which types of preferences can be effectively incorporated into MOEAs, and in many cases the user does not have a clear preference when little knowledge about the problem is available.

This paper offers a brief survey on preference modeling and articulation in multi-objective optimization. In section “Preference modeling methods”, various preference modeling methods are summarized. Section “Preference-based optimization methods” gives an account of existing preference-based optimization methods. Future challenges in preference modeling and preference guided multi-objective optimization are discussed in section “Challenges”. Section “Conclusion” concludes this paper.

Preference modeling methods

Various preference models have been reported in the literature [29], which can be largely classified into goals, weights, reference vectors, preference relations, utility functions, outranking, and implicit preferences.

Goals

The most straightforward way to articulate preferences is to provide goal information [30], as shown in Fig. 1. Usually, users have targets for the different objectives [31, 32]. Goals thus act as additional criteria in multi-objective optimization, providing a ranking that reflects the preference information [33, 34]. In interactive approaches [35], DMs need to provide a goal point for a tentative efficient solution in each iteration. However, when DMs have no a priori knowledge about the problem, they might set unreasonable goals that mislead the search process [36].
Fig. 1

Modeling preferences in terms of goals. In the figure, the star denotes a goal specified by the DM, whereas the points illustrate the optimal solutions that may be found by an optimization algorithm based on the goal

Weights

DMs can assign different levels of importance to different criteria using weights \(\mathbf {w}=\{w_1,\ldots ,w_m\}\), which form a vector in the weight space, as Fig. 2 shows. With the weights, multiple objectives can be converted into a single-objective function using an aggregation function [37, 38, 39]. The two most popular aggregation functions are the weighted sum [40] shown in Eq. (2) and the Tchebycheff approach [3] described in Eq. (3), where \(f_{i}\) is the i-th objective and \(w_{i}\) is the i-th weight. In Fig. 2, the dotted line is the contour of the aggregation function g, which indicates the convergence tendency of the search under the specific weight \(\mathbf {w}\). The authors of [41, 42] modified the dominance relation using a weight vector. However, similar to goals, it is hard for DMs to provide accurate weights without a full understanding of the characteristics of the problem.
$$\begin{aligned} g^{ws}=\sum _{i=1}^{m}w_{i}f_{i} \end{aligned}$$
(2)
$$\begin{aligned} g^{te}=\max _{1 \le i \le m}\{w_{i}f_{i}\} \end{aligned}$$
(3)
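To make Eqs. (2) and (3) concrete, the sketch below evaluates both aggregation functions for a given weight vector; it mirrors Eq. (3) exactly as printed, without the ideal-point shift that some Tchebycheff variants include:

```python
import numpy as np

def weighted_sum(f, w):
    # Eq. (2): g_ws = sum_i w_i * f_i
    return float(np.dot(w, f))

def tchebycheff(f, w):
    # Eq. (3): g_te = max_i w_i * f_i
    return float(np.max(np.asarray(w) * np.asarray(f)))

f = [0.3, 0.8]   # objective values of one solution
w = [0.5, 0.5]   # weights supplied by the DM
print(weighted_sum(f, w), tchebycheff(f, w))  # 0.55 0.4
```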
Fig. 2

Modeling preferences using weights, where \(\mathbf {w}\) is a weight vector and the dotted lines are contour lines of the aggregation function g. The contour lines show the convergence tendency of an optimization algorithm on the aggregation function g

Reference vectors

Reference vectors or points express the DM's expectations for, or the importance of, the objectives. Reference vectors and weights are similar in their aggregation functionality, although they have different physical meanings and, consequently, different influences on the search process. Usually, reference vectors represent desired directions of solution vectors in the objective space, whereas weights indicate the importance of the different objectives; reference vectors live in the objective space, whilst weights live in the weight space. Because of this inherent connection, reference vectors and weights can be converted into each other. The reference vectors in RVEA [21] and the reference points in NSGA-III [20] are converted from uniformly distributed weights.
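Such a conversion can be as simple as normalizing each weight vector to unit length so that it becomes a direction in the objective space (a sketch with arbitrarily chosen weights, mirroring the way RVEA derives its initial reference vectors from simplex-lattice weights):

```python
import numpy as np

W = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])   # weight vectors
V = W / np.linalg.norm(W, axis=1, keepdims=True)     # unit reference vectors
print(V.round(3))
```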

Take the PBI approach [24] in Fig. 3 as an example, which is the fitness function in NSGA-III [20]. The relationship between a solution and a reference vector v is described by two distances, where \(d_{1}\) is the projection distance onto v and \(d_{2}\) is the perpendicular distance to v. With \(d_{1}\) accounting for convergence and \(d_{2}\) promoting diversity, PBI selects solutions based on Eq. (4), where \(\theta \) is the penalty factor. The recently proposed angle penalized distance (APD) in RVEA [21] adopts the acute angle between the reference vector and the solution vector in place of the Euclidean distance, as shown in Eq. (5), where \(p(\alpha )\) is a penalty function related to the angle \(\alpha \). It has been shown that angles provide a more scalable measure of diversity in high-dimensional spaces.
$$\begin{aligned} g^{\mathrm{PBI}}=d_{1}+\theta d_{2} \end{aligned}$$
(4)
$$\begin{aligned} g^{\mathrm{APD}}=(1+p(\alpha ))|d| \end{aligned}$$
(5)
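A minimal sketch of the PBI value in Eq. (4): \(d_1\) is the length of the projection of the objective vector onto the unit reference vector, and \(d_2\) is the distance to that projection (the penalty factor value is illustrative):

```python
import numpy as np

def pbi(f, v, theta=5.0):
    """Eq. (4): d1 (projection distance, convergence) plus
    theta times d2 (perpendicular distance, diversity)."""
    f, v = np.asarray(f, float), np.asarray(v, float)
    v = v / np.linalg.norm(v)               # normalize the reference vector
    d1 = float(np.dot(f, v))                # projection distance
    d2 = float(np.linalg.norm(f - d1 * v))  # perpendicular distance
    return d1 + theta * d2

print(pbi([0.6, 0.8], [1.0, 1.0]))  # ~1.697
```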
Fig. 3

The PBI approach decomposes the distance |d| into two orthogonal distances \(d_1\) and \(d_2\), while the APD approach replaces the distance penalty with the angle \(\alpha \)

Table 1

Preference relations

Relation | Meaning | Relation | Meaning
\(\prec \) | Less important | \(\succ \) | More important
\(\ll \) | Much less important | \(\gg \) | Much more important
\(\approx \) | Equally important | \(\#\) | Do not care
\(\lnot \) | Not important | ! | Important

Neither the Tchebycheff nor the PBI method is well suited to PFs of all shapes [43]. Recently, different aggregation functions have been proposed for preferences expressed as reference vectors or weights. For example, the adaptive scalarizing methods in [44, 45, 46] change the aggregation function during the run of the MOEA, the Tchebycheff method has been used in a reversed form for convex PFs [47], and the PBI method has been inverted based on a nadir point [48].

Preference relation

DMs have different preferences on different objectives; thus, some objectives might not be equally important during the process of decision making [49, 50, 51]. Table 1 lists symbols representing the relative importance of objectives, with which the objectives can be sorted into a preferred order such as \(f_{1}\ge f_{2}\ge f_{3}\approx f_{4}\). With such a preference relation [52], the search can be narrowed down by converting the relation into weights; the method in [2] is one example based on binary preferences, and a naive conversion is sketched below. The main disadvantage is that the preference relation cannot handle non-transitivity. During the process of decision making, DMs gradually learn their preferences. The analytic hierarchy process (AHP) [53] calculates priority scales from pairwise comparisons based on judgements from DMs, which might be inconsistent. AHP has been employed for decision making in various applications.
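The conversion from a preference relation to weights can be realized in many ways; the rank-sum scheme below is a deliberately naive illustration, not the specific method of [2]:

```python
def rank_to_weights(order):
    """Turn a total importance order over m objectives into weights.

    `order` lists objective indices from most to least important,
    e.g. [0, 1, 2, 3] for f1 >= f2 >= f3 >= f4.  Rank-sum weighting
    is used here purely for illustration."""
    m = len(order)
    weights = [0.0] * m
    total = m * (m + 1) / 2
    for rank, obj in enumerate(order):
        weights[obj] = (m - rank) / total
    return weights

print(rank_to_weights([0, 1, 2, 3]))  # [0.4, 0.3, 0.2, 0.1]
```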

Utility functions

Preferences can be characterized by utility functions [54, 55, 56], where the preference information is implicitly involved in the fitness function to rank solutions [57, 58]. Unlike preference relations, a utility function sorts solutions rather than objectives. For example, given N solutions \(\mathbf {x}_{1}\) to \(\mathbf {x}_{N}\), DMs are required to express their preferences over those solutions, e.g., \(\mathbf {x}_{1}\prec _{\mathrm{pref}}\mathbf {x}_{2}\prec _{\mathrm{pref}} \ldots \prec _{\mathrm{pref}}\mathbf {x}_{N}\). Then, an imprecisely specified multi-attribute utility theory (ISMAUT) formulation is employed to infer the relative importance of the objectives and modify the fitness function accordingly. However, utility functions rest on the strong assumption that all attributes of the preferences are independent, and are therefore unable to handle non-transitivity [59, 60].
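In the spirit of ISMAUT, weights consistent with a DM's ranking of solutions can be sought as a linear feasibility problem if a linear value function is assumed; the toy sketch below is our own construction, not the formulation of [58]:

```python
import numpy as np
from scipy.optimize import linprog

# DM's ranking (best first) over three solutions' objective vectors.
F = np.array([[0.2, 0.8], [0.5, 0.5], [0.9, 0.3]])

# Find w >= 0 with sum(w) = 1 and w.F[k] <= w.F[k+1] for consecutive
# ranked pairs (minimization), i.e. (F[k] - F[k+1]) . w <= 0.
A_ub = F[:-1] - F[1:]
res = linprog(c=np.zeros(2), A_ub=A_ub, b_ub=np.zeros(len(A_ub)),
              A_eq=np.ones((1, 2)), b_eq=[1.0], bounds=[(0, None)] * 2)
print(res.x)  # one weight vector consistent with the DM's ranking
```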

Outranking

Outranking [61] is a different way of ranking objective preferences that allows non-transitivity, distinguishing it from the preference relation [62]. To construct an outranking [63], preference and indifference thresholds for each objective are required as inputs, as in the preference ranking organization method for enrichment evaluations (PROMETHEE) [64]. Every pair of solutions is compared according to those thresholds, and the resulting preference ranking is used by outranking-based methods to search for the preferred solutions [65]. However, outranking-based methods require many parameter settings, which becomes hard for DMs as the number of objectives increases [64].
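A highly simplified, PROMETHEE-style pairwise comparison with an indifference threshold q and a preference threshold p (linear generalized criterion, equal criterion weights; all names are our own) might look as follows:

```python
def preference_degree(a, b, q, p):
    """Degree to which value a is preferred to b on one criterion
    (minimization), rising linearly from 0 at the indifference
    threshold q to 1 at the preference threshold p."""
    d = b - a                   # positive if a is better (smaller)
    if d <= q:
        return 0.0              # indifferent
    if d >= p:
        return 1.0              # strict preference
    return (d - q) / (p - q)    # partial preference

# Compare two solutions on two objectives with per-objective thresholds.
a, b = [0.2, 0.9], [0.5, 1.0]
q, p = [0.05, 0.05], [0.25, 0.25]
pi_ab = sum(preference_degree(ai, bi, qi, pi)
            for ai, bi, qi, pi in zip(a, b, q, p)) / len(a)
print(pi_ab)  # 0.625: overall degree to which a outranks b
```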

Implicit preferences

In some cases, DMs have too little knowledge to articulate any sensible preferences. Nevertheless, some solutions on the PF are naturally preferred even if no problem-specific preference can be proposed. Those solutions can be detected based on the curvature of the PF [66]. For example, a knee point, around which a small improvement in any objective causes a large degeneration of the others, is always of interest to DMs as an implicitly preferred solution [67, 68, 69]. Examples include model selection in machine learning [70, 71] and sparse reconstruction [72].

There is no widely accepted definition of knee points, and identifying them is notoriously difficult in high-dimensional objective spaces. Existing approaches to identifying knee points can be divided into two categories: angle- and distance-based approaches [68]. Angle-based approaches measure the angle between a solution and its two neighbors and search for the knee point according to the obtained angle [72]. Although angle-based approaches are straightforward, they can only be applied to bi-objective optimization problems. Distance-based approaches can handle problems with more than two objectives; they search for the knee point according to the distance to a pre-defined hyperplane [73].
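As an illustration of the distance-based idea, the sketch below marks as the knee the point of a bi-objective front (sorted by the first objective) that lies farthest from the straight line through the two extreme points; the actual definitions in [68, 73] differ in detail:

```python
import numpy as np

def knee_index(front):
    """Index of the knee of a 2-D front sorted by the first objective:
    the point with maximum perpendicular distance to the line
    connecting the two extreme points."""
    front = np.asarray(front, float)
    p0, p1 = front[0], front[-1]            # extreme points
    line = (p1 - p0) / np.linalg.norm(p1 - p0)
    rel = front - p0
    proj = np.outer(rel @ line, line)       # projections onto the line
    dist = np.linalg.norm(rel - proj, axis=1)
    return int(np.argmax(dist))

front = [[0.0, 1.0], [0.2, 0.3], [0.6, 0.15], [1.0, 0.0]]
print(knee_index(front))  # 1: the most "bulging" point toward the origin
```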

In addition to knee points, extreme points or the nadir point can serve as a special form of preference [74]. Extreme points are the solutions with the worst objective values on the PF, and the nadir point is a combination of the extreme points. With the extreme points or the nadir point, DMs can acquire knowledge of the range of the PF and thus input their preferences more accurately [75, 76, 77].
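Given an approximation of the PF, rough estimates of the ideal and nadir points can be read off the non-dominated set (a sketch; dedicated estimation methods such as [74] are more reliable when the set is small or incomplete):

```python
import numpy as np

front = np.array([[0.0, 1.0], [0.4, 0.4], [1.0, 0.0]])  # non-dominated set
ideal = front.min(axis=0)   # best value per objective
nadir = front.max(axis=0)   # worst value per objective over the set
print(ideal, nadir)         # [0. 0.] [1. 1.]
```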

Discussions

The above formulations of preferences share several similarities. For example, although weights and reference vectors are different concepts, weights are sometimes used as references, and vice versa. All the existing preference formulations are scalable to many objectives, but their complexity increases significantly with the number of objectives. Although different preference models may have very different properties, they all describe objective importance or priority in their own ways, except that utility functions sort the importance of solutions rather than objectives.

DMs might articulate preferences with uncertainty. To model such uncertainty, small perturbations can be introduced into goal-, weight-, or reference vector-based methods. Fuzzy logic can likewise serve as a natural means of handling uncertainty in preferences [78, 79], e.g., for reference points [35], weights [80], preference relations [81, 82], and outranking [63]. Preference relations, utility functions, and outranking are not strictly based on numerical objective importance, which tolerates uncertainty to a certain degree. DMs might also hold inconsistent preferences during the search. In such cases, goal-, weight-, and reference vector-based methods might fail, because they focus too much on the previous preferences and may lose diversity; preference relation- and utility function-based methods cannot handle preference inconsistency either. Only outranking allows inconsistency in preferences to some degree. Furthermore, DMs can introduce inappropriate preferences, which might lead to infeasible solutions. There has not been any specific research dedicated to handling inappropriate preferences, and fuzzy preferences might provide a solution to this problem.

Preference-based optimization methods

The existing preference-based optimization methods can be classified into three categories according to the time when preferences are incorporated, i.e., a priori, interactive, and a posteriori methods [28]:
  • A priori methods  In these methods, DMs need to input their preferences before the optimization starts. The main difficulty lies in the fact that DMs may have limited knowledge about the problem, so their preferences may be inaccurate or even misleading.

  • A posteriori methods  In a posteriori methods, a set of representative Pareto optimal solutions is obtained using an optimization algorithm, from which DMs choose a small number of solutions according to their preferences. In comparison with the a priori methods, DMs are able to better understand the trade-off relationships between the objectives in the a posteriori methods. Most existing multi-objective evolutionary algorithms (MOEAs) [83] belong to this category. It should be noted, however, that it becomes increasingly hard to obtain a representative solution set as the number of objectives increases [84].

  • Interactive methods  Interactive methods [85, 86] enable DMs to articulate their preferences in the course of optimization. In interactive methods, DMs are allowed to modify their preferences, typically based on the domain knowledge acquired during the optimization [32, 38, 87]. With an increasing understanding of the problem as the optimization proceeds, DMs can fine-tune their preferences according to the solutions obtained in each iteration. With the revised preferences, the interactive methods search for new preferred solutions, which usually incurs less computational cost than the a posteriori methods. In the existing interactive methods, only a single preference model is adopted at a time, such as reference vectors [88, 89, 90, 91], weights [92, 93, 94, 95], preference relations [96, 97, 98], or utility functions [99].

Non-evolutionary preference-based optimization methods

Traditional multiple criteria decision making (MCDM) methodologies are non-evolutionary and usually involve a certain type of preference information. During the MCDM process, the following assumptions hold [4, 100, 101, 102]:
  • Only part of the non-dominated solution set is expected to be found.

  • DMs are expected to understand the problem and are able to provide reasonable preferences.

  • Satisfactory optimal solutions are expected as the final output.

According to [103], classical MCDM approaches can be divided into two types: aggregation procedures and synthesizing criteria.

The aggregation-based MCDM approaches are based on weights [104]. Decision making is then mathematically defined by Eq. (2), where m is the number of criteria and \(w_{i}\) is the weight of the i-th criterion. For those approaches, DMs need to have a clear idea of how to set the weights. However, it is very hard for human beings to provide precise quantitative importance levels for different objectives, and in some cases good solutions cannot easily be distinguished from poor ones by Eq. (2).

Unlike the aggregation-based MCDM approaches, which encode a fixed preference in an explicit mathematical formula, the synthesizing criterion-based approaches are based on implicit rules. For example, outranking and utility functions are two implicit and flexible preference models: outranking sorts the objective preferences, whereas a utility function sorts the solution preferences. So far, the elimination and choice expressing reality (ELECTRE) methods [105] and the preference ranking organization method for enrichment evaluations (PROMETHEE) [106] are the two main outranking approaches, while the utilities additives (UTA) methods [107] are utility function-based approaches [108].

In addition to the above-mentioned aggregation procedures and synthesizing criteria, fuzzy logic [109], decision rules [110], multi-objective mathematical programming [111], and objective classification [112] have been employed to improve the performance of MCDM approaches.

Evolutionary preference-based optimization methods

While non-evolutionary methods pay much attention to preference handling, most MOEAs focus on obtaining the whole solution set, as a posteriori methods do. In this section, we discuss the a priori methods in MOEAs, which embed preferences into their fitness functions to narrow down the selection [113]. So far, goals, weights, reference vectors, and utility functions have been used to integrate preferences into MOEAs [29].

As mentioned in section “Preference modeling methods”, goals are the most straightforward preferences for MOEAs, and different formulations have been used to incorporate them into existing MOEAs. For example, the algorithm in [114] treats goal preferences as constraints via Eq. (6), where \(g_i\) is the goal for the i-th objective. One issue with this approach is that no solution can be found if the goals are set unreasonably by DMs for MOPs with a discontinuous PF. To address this issue, an algorithm was proposed in [115] that divides the constraints into hard and soft constraints according to the priority of the objectives.
$$\begin{aligned} f_i(\mathbf{{x}})<g_i \end{aligned}$$
(6)
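Treating goals as constraints per Eq. (6) amounts to filtering out solutions that violate any goal; a minimal sketch:

```python
import numpy as np

def satisfies_goals(F, g):
    """Boolean mask of solutions whose every objective beats its goal,
    i.e. f_i(x) < g_i for all i, as in Eq. (6)."""
    return np.all(np.asarray(F) < np.asarray(g), axis=1)

F = [[0.2, 0.9], [0.6, 0.4], [0.9, 0.2]]   # three candidate solutions
print(satisfies_goals(F, [0.7, 0.5]))      # [False  True False]
```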
In fact, existing reference-based MOEAs can naturally be seen as preference-based MOEAs that assign preferences uniformly distributed in the whole objective space and decompose one MOP into a number of single-objective optimization problems. Preferences in those algorithms are represented by different models, such as weights in MOEA/D [19], direction vectors in DVCMOA [22] and MOEA/D-M2M [23], reference vectors in RVEA [21], and reference points in NSGA-III [20]. So far, a majority of preference-based MOEAs are based on weights [37, 38, 116, 117, 118, 119, 120]. The second most widely used preference model in MOEAs is reference vectors: the algorithms reported in [20, 21, 76] model preferences by reference vectors or points. Most recently, preference articulation methods based on reference points, reference vectors, and weights have been examined and compared on a hybrid electric vehicle control problem [121].
Another popular means of incorporating preferences into MOEAs is the achievement scalarizing function (ASF) [28, 122, 123, 124]. Its formulation is shown in Eq. (7), where \(\mathbf {w}\) is a weight vector and \(\mathbf {z}\) is a reference point. The light beam search [125, 126] projects a beam of light from a reference point onto the PF, resulting in a small neighborhood on the PF. To increase the robustness of preference-based MOEAs to DM input, the light beam is used in [127] to replace the reference point in the ASF [128]. Moreover, the ASF has been employed to approximate the hypervolume [34, 129, 130]. Based on the ASF, an interactive MOEA termed I-SIBEA [131] has been proposed, which selects new solutions according to a weighted hypervolume.
$$\begin{aligned} {g^{\mathrm{ASF}}} = \mathop {\max }\limits _{1 \le i \le m} ({w_i}{f_i} - {z_i}) \end{aligned}$$
(7)
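Eq. (7) can be evaluated directly; the sketch below mirrors the printed form and notes the classical Wierzbicki variant in a comment:

```python
import numpy as np

def asf(f, w, z):
    """Achievement scalarizing function as printed in Eq. (7).

    Note: the classical Wierzbicki ASF uses w_i * (f_i - z_i);
    the two coincide when z is the origin."""
    f, w, z = map(np.asarray, (f, w, z))
    return float(np.max(w * f - z))

print(asf(f=[0.6, 0.8], w=[0.5, 0.5], z=[0.0, 0.0]))  # 0.4
```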
Several utility function-based MOEAs have also been developed. For example, an algorithm was presented in [58], which might be the first MOEA that implicitly involves the preferences in the fitness function using a utility function. To guide MOEAs toward preferred solutions, robust ordinal regression is employed to approximate the utility in [97].

Challenges

Even though preferences have been studied for decades and have recently gained increasing attention, many issues remain to be addressed in the future.

Preference adaptation for various formulations

As mentioned before, various preference models have been developed, and each existing preference-based MOEA is designed for one specific preference model. However, the preferences provided by DMs might come in different forms, so no single MOEA is able to deal with all types of preferences, making these methods less flexible in practice. It would therefore be very desirable if various preference models could be converted into a single model that can then be incorporated into one preference-based MOEA. So far, not much work has been reported on converting one preference model into another, with a few exceptions, e.g., preference relations are converted into weights in [2] and fuzzy preferences are turned into weights in [82]. Thus, it is necessary to develop a general framework for converting between preference models so that the advantages and disadvantages of the existing methods can be properly compared in terms of their ability to handle uncertainty and conflicts, as well as their robustness in obtaining preferred solutions.

Preference learning

Learning user preferences

Preferences play a very important role in MCDM. Preferences given by DMs are consistent to a certain degree, notwithstanding the fact that DMs might change their preferences while interacting with the optimizer. Thus, the system should be able to learn the preferences of DMs from historical data. Although there are many mature techniques in machine learning [132] and data mining [133] that can help learn the preferences of DMs, little attention has been paid to this research topic, with a few exceptions such as [134], where the preferences of DMs are learned by training single or multiple surrogate models [135] using a semi-supervised learning algorithm. As the work in [134] indicates, a proper learning algorithm should be chosen, and attention should be paid to whether the learned preferences can be incorporated into MOEAs.

Handling preference violation

Without sufficient information about the problem, DMs are likely to provide less reasonable or even misleading preferences. In some cases, no solutions can be found for certain preferences, for example when the Pareto front is discontinuous.

When there is a group of DMs, it should be taken into account that the preferences given by different group members might conflict with each other [136]. As pointed out in [36], the priority, independence, and unanimity of individual preferences need to be taken into account when using preferences from multiple DMs.

Psychological study

Decision making can be seen as a psychological construct in the selection among several alternative actions [137]. In some cases, the processing capacity of DMs is limited and they are overwhelmed by the results produced by decision-making systems [138]. To ensure that decision-making systems are compatible with the psychology of DMs, attention should be paid to the theory of decision making at the psychological level [139]. Experiments reported in [140] indicate that forecasting performance can be improved with the help of a psychological model. Therefore, we believe that a deeper understanding of the psychology of DMs would build a proper bridge between decision-making systems and DMs, which could further improve the efficiency of preference-based methods [25, 26, 57, 112].

Analysis of relationships between decision variables and objectives

Relationship between objectives

Conflict between two objectives means that improving one objective deteriorates the other. The conflict might be global or local [141, 142, 143]: locally conflicting objectives conflict in some regions of the objective space but not in others. However, the existing research on objective reduction focuses on global redundancy between objectives [144, 145, 146], and little work has been conducted on locally conflicting objectives. Searching along locally redundant objectives wastes computational cost, yet the results in [141] indicate that objective reduction approaches designed for problems with globally conflicting objectives can still improve the performance of MOEAs on problems with locally redundant objectives. Therefore, detecting locally conflicting objectives, reducing locally redundant objectives, and analyzing the effects of locally conflicting objectives on the PF are of great interest. Moreover, analysis of the correlation between objectives can help group the objectives to simplify the representation shown to DMs, because human beings can only handle around seven objectives.

Several approaches can be used to help DMs understand the relationship between objectives. In [147], the objectives are divided into five classes to help DMs understand the trade-offs. Self-organizing maps (SOMs) [148] have been shown to be promising in revealing the tradeoff relationships between objectives [149, 150]. Correlation is another effective tool for analyzing the relationship between objectives: different metrics have been proposed to measure the degree of correlation (both linear and non-linear), such as covariance, mutual information entropy [151], and the non-linear correlation information entropy (NCIE) [152, 153]. Based on these relations, many mature data mining techniques can be employed to choose a subset of conflicting objectives to simplify the original problem, such as feature selection [146], principal component analysis (PCA) [154], and maximum variance unfolding (MVU) [155]. The Pareto corner search evolutionary algorithm (PCSEA) [145] is a recently proposed objective reduction approach: it searches only the corners of the PF and then uses the obtained solutions to analyze the relationship between objectives and identify a subset of non-correlated objectives.
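As a simple starting point for such correlation analysis, the linear correlation matrix between objectives can be computed over an obtained solution set; in the synthetic example below, f1 and f2 conflict (correlation near -1), while f3 is redundant with f1 (correlation near +1) and could be dropped:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(200)
F = np.column_stack([x,                                    # f1
                     1.0 - x,                              # f2: conflicts with f1
                     x + 0.1 * rng.standard_normal(200)])  # f3: redundant with f1

# Correlation between objective columns (variables are columns).
print(np.corrcoef(F, rowvar=False).round(2))
```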

Knee points reflect the degree of conflict and are interesting to DMs when they do not have specific preferences [68]. Existing knee point detection builds on definitions tailored to two- or three-objective problems [73]; a definition of knee points for MaOPs is not yet well established, because the degree of conflict might vary between different pairs of objectives. Sensitivity to changes in individual objectives may exist in some particular regions of the PF, which can be considered partial knee points that are of interest to DMs.

Functional maps from decision variables to objectives

In real-world applications, noise and uncertainty are inevitable. In such situations, DMs prefer solutions that are robust against small changes in the decision variables [156, 157, 158]. There have been some discussions on robust multi-objective optimization [159, 160, 161, 162, 163, 164], but little research has studied robustness in decision making, except for measuring attractiveness by a categorical based evaluation technique (MACBETH) [165]. Analysis of the mapping from decision variables to objectives [166] helps the search for robust solutions in preference-based methods.

To analyze the mapping from decision variables to objectives, artificial neural networks (ANNs) [167, 168], Bayesian learning [169, 170], and estimation of distribution algorithms (EDAs) [171, 172] have been employed.

Benchmark design

So far, several MOP test suites have been proposed, such as the ZDT [173], DTLZ [174], and WFG [175] problems. However, no benchmark problems have been proposed specifically for testing preference-based optimization methods. Thus, it is very desirable to design MOP test suites tailored to evaluating the performance of preference-based MOEAs. To design preference-based MOP benchmarks, the following aspects need to be considered.
  • Preference simulation  It is necessary to simulate the preferences with artificial functions [176], where uncertainty and the response to the algorithm should also be taken into account.

  • Objective correlation  Both global and local conflicts should be designed in the benchmark.

  • Ground truth  The true optimal solutions should be provided for assessing the performance.

Performance assessment

Several performance indicators for measuring the performance of MOEAs have been proposed, such as the generational distance (GD) [177], the inverted generational distance (IGD) [173], and the hypervolume [15]. However, few performance indicators are dedicated to the evaluation of preference-based methods, one exception being [178], which considers both dominance and the distance to the preferences. In addition, an ideal metric for preference-based methods should evaluate whether the obtained solutions truly reflect the preferences, regardless of the preference model used.
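For reference, IGD averages, over a set of points sampled from the true PF, each point's distance to its nearest obtained solution; a preference-based variant could restrict the reference set to the preferred region. A minimal sketch:

```python
import numpy as np

def igd(reference_front, obtained):
    """Inverted generational distance: mean distance from each point of
    the reference front to its nearest obtained solution (lower is better)."""
    R = np.asarray(reference_front, float)
    A = np.asarray(obtained, float)
    d = np.linalg.norm(R[:, None, :] - A[None, :, :], axis=2)  # (|R|, |A|)
    return float(d.min(axis=1).mean())

R = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]   # sampled true PF
A = [[0.1, 0.95], [0.55, 0.5]]             # obtained solutions
print(igd(R, A))
```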

Visualization

Visualization plays an important role in the interaction between DMs and preference-based optimization methods. When the number of objectives is four or larger, visualization becomes a challenge. Existing approaches can be divided into three classes, namely parallel coordinate-, mapping-, and aggregation tree-based approaches [179].

The approaches based on parallel coordinates visualize individual solutions in a parallel coordinate system, in which one parallel axis describes the values of each objective. Parallel coordinates [1] use a polyline with vertices on the parallel axes, while heatmaps [180] use color to present the values on the parallel axes. Those approaches can only show the trade-off between two adjacent objectives.
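A basic parallel-coordinates view of a small solution set takes only a few lines of matplotlib (axis normalization and styling omitted):

```python
import matplotlib.pyplot as plt
import numpy as np

F = np.array([[0.2, 0.9, 0.5, 0.1],
              [0.6, 0.4, 0.3, 0.8],
              [0.9, 0.2, 0.7, 0.4]])     # 3 solutions, 4 objectives

for row in F:                            # one polyline per solution
    plt.plot(range(F.shape[1]), row, marker="o")
plt.xticks(range(F.shape[1]), [f"f{i+1}" for i in range(F.shape[1])])
plt.ylabel("objective value")
plt.title("Parallel coordinates")
plt.show()
```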

Other approaches adopt dimensionality reduction techniques that can preserve the Pareto dominance relationship among individuals both globally and locally, such as Sammon mapping [181], NeuroScale [182], radial coordinate visualization (RadViz) [183], SOM [149, 184], and Isomap [185]. These approaches are not as straightforward as the parallel coordinate-based approaches for analyzing the tradeoff relationships between the objectives, and they are time-consuming.

Approaches based on the aggregation tree [186, 187] measure the harmony between objectives to visualize their relationships. However, this kind of approach cannot show individual solutions.

Most existing visualization tools are not straightforward for DMs to understand. Ideally, both dominance and preference relationship should be presented in the visualization. Moreover, DMs should be able to zoom in interesting regions to get more detailed information.


Conclusion

Since preference-based multi-objective optimization is strongly motivated by real-world applications, research interest in this area has increased in recent years. Indeed, preference modelling is also a common need in many areas of artificial intelligence in which decision making is involved [188, 189, 190]. It thus becomes clear that preference modelling and learning are important not only for decision making and evolutionary optimization, but also for artificial intelligence research.

In this paper, we have provided a concise review of research on preference modelling and preference-based optimization methods and discussed the open issues in both areas. We emphasize that preference-based multi-objective optimization is of paramount practical significance and that preferences must be incorporated into many-objective optimization, where obtaining a representative subset of the entire Pareto front is less likely.


Acknowledgements

This work was supported in part by an EPSRC Grant (No. EP/M017869/1) on “Data-driven surrogate-assisted evolutionary fluid dynamic optimisation”, in part by the Joint Research Fund for Overseas Chinese, Hong Kong and Macao Scholars of the National Natural Science Foundation of China (No. 61428302), and in part by the Honda Research Institute Europe.

References

  1. 1.
    Fleming P, Purshouse R, Lygoe R (2005) Many-objective optimization: An engineering design perspective. In: Evolutionary multi-criterion optimization. Springer, New York, pp 14–32Google Scholar
  2. 2.
    Parmee IC, Cvetković D, Watson AH, Bonham CR (2000) Multiobjective satisfaction within an interactive evolutionary design environment. Evol Comput 8(2):197–222Google Scholar
  3. 3.
    Miettinen K (1999) Nonlinear multiobjective optimization. Springer, New YorkGoogle Scholar
  4. 4.
    Steuer RE (1986) Multiple criteria optimization: theory, computation, and applications. Wiley, New YorkGoogle Scholar
  5. 5.
    Zhou A, Bo-Yang Q, Li H, Zhao S-Z, Suganthan PN, Zhang Q (2011) Multiobjective evolutionary algorithms: a survey of the state of the art. Swarm Evol Comput 1(1):32–49Google Scholar
  6. 6.
    Li B, Li J, Tang K, Yao X (2015) Many-objective evolutionary algorithms: a survey. ACM Comput Surv 48(1):13CrossRefGoogle Scholar
  7. 7.
    Wang H, Jin Y, Yao X (2017) Diversity assessment in many-objective optimization. IEEE Trans Cybern 47(6):1510–1522CrossRefGoogle Scholar
  8. 8.
    Wagner T, Beume N, Naujoks B (2007) Pareto-, aggregation-, and indicator-based methods in many-objective optimization. In: Evolutionary multi-criterion optimization. Springer, New York, pp 742–756Google Scholar
  9. 9.
    Deb K, Pratap A, Agarwal S, Meyarivan TAMT (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6(2):182–197CrossRefGoogle Scholar
  10. 10.
    Zitzler E, Laumanns M, Thiele L (2001) SPEA2: Improving the strength Pareto evolutionary algorithm. In: Proceedings of EUROGEN 2001. Evolutionary methods for design, optimization and control with applications to industrial problems. Citeseer, pp 1–21Google Scholar
  11. 11.
    Praditwong K, Yao X (2007) How well do multi-objective evolutionary algorithms scale to large problems. In: Proceedings of the Congress on Evolutionary Computation. IEEE, pp 3959–3966Google Scholar
  12. 12.
    Ishibuchi H, Tsukamoto N, Nojima Y (2008) Evolutionary many-objective optimization: a short review. In: Proceedings of the Congress on Evolutionary Computation. IEEE, pp 2419–2426Google Scholar
  13. 13.
    Zitzler E, Künzli S (2004) Indicator-based selection in multiobjective search. In: International Conference on Parallel Problem Solving from Nature. Springer, New York, pp 832–842Google Scholar
  14. 14.
    Wang H, Jiao L, Yao X (2015) Two_Arch2: an improved two-archive algorithm for many-objective optimization. IEEE Trans Evol Comput 19(4):524–541CrossRefGoogle Scholar
  15. 15.
    Zitzler E, Thiele L (1999) Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach. IEEE Trans Evol Comput 3(4):257–271CrossRefGoogle Scholar
  16. 16.
    Brockhoff D, Wagner T, Trautmann H (2012) On the properties of the R2 indicator. In: The genetic and evolutionary computation conference. ACM, New York, pp 465–472Google Scholar
  17. 17.
    Bader J, Zitzler E (2011) HypE: an algorithm for fast hypervolume-based many-objective optimization. Evol Comput 19(1):45–76CrossRefGoogle Scholar
  18. 18.
    Gómez RH, Coello CAC (2013) MOMBI: a new metaheuristic for many-objective optimization based on the R2 indicator. In: Proceedings of the Congress on Evolutionary Computation. IEEE, pp 2488–2495Google Scholar
  19. 19.
    Zhang Q, Li H (2007) MOEA/D: a multiobjective evolutionary algorithm based on decomposition. IEEE Trans Evol Comput 11(6):712–731CrossRefGoogle Scholar
  20. 20.
    Deb K, Jain H (2014) An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part i: solving problems with box constraints. IEEE Trans Evol Comput 18(4):577–601CrossRefGoogle Scholar
  21. 21.
    Cheng R, Jin Y, Olhofer M, Sendhoff B (2016) A reference vector guided evolutionary algorithm for many-objective optimization. IEEE Trans Evol Comput 20(5):773–791. doi: 10.1109/TEVC.2016.2519378
  22. 22.
    Jiao L, Wang H, Shang R, Liu F (2013) A co-evolutionary multi-objective optimization algorithm based on direction vectors. Inf Sci 228:90–112zbMATHCrossRefMathSciNetGoogle Scholar
  23. 23.
    Liu H-L, Fangqing G, Zhang Q (2014) Decomposition of a multiobjective optimization problem into a number of simple multiobjective subproblems. IEEE Trans Evol Comput 18(3):450–455CrossRefGoogle Scholar
  24. 24.
    Dennis J, Das I (1998) Normal-boundary intersection: a new method for generating Pareto optimal points in nonlinear multicriteria optimization problems. SIAM J Optim 8(3):631–657zbMATHCrossRefMathSciNetGoogle Scholar
  25. 25.
    Miller GA (1956) The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychol Rev 63(2):81CrossRefGoogle Scholar
  26. 26.
    Nisbett RE, Wilson TD (1977) Telling more than we can know: verbal reports on mental processes. Psychol Rev 84(3):231Google Scholar
  27. 27.
    Slovic P, Lichtenstein S (1971) Comparison of Bayesian and regression approaches to the study of information processing in judgment. Organ Behav Human Perform 6(6):649–744CrossRefGoogle Scholar
  28. 28.
    Thiele L, Miettinen K, Korhonen PJ, Molina J (2009) A preference-based evolutionary algorithm for multi-objective optimization. Evol Comput 17(3):411–436Google Scholar
  29. 29.
    Hirsch C, Shukla PK, Schmeck H (2011) Variable preference modeling using multi-objective evolutionary algorithms. In: Evolutionary multi-criterion optimization. Springer, New York, pp 91–105Google Scholar
  30. 30.
    Gembicki FW (1974) Vector optimization for control with performance and parameter sensitivity indices. Ph.D. thesis, Ph.D. Thesis, Case Western Reserve Univ., Cleveland, OhioGoogle Scholar
  31. 31.
    Wang R, Purshouse RC, Fleming PJ (2013) Preference-inspired coevolutionary algorithms for many-objective optimization. IEEE Trans Evol Comput 17(4):474–494Google Scholar
  32. 32.
    Fonseca CM, Fleming PJ (1993) Genetic algorithms for multiobjective optimization: Formulation discussion and generalization. In: Proceedings of the International Conference on Genetic Algorithms, vol 93. Citeseer, pp 416–423Google Scholar
  33. 33.
    Deb K (1999) Solving goal programming problems using multi-objective genetic algorithms. In: Proceedings of the 1999 congress on evolutionary computation, CEC 99, vol 1. IEEE, pp 77–84Google Scholar
  34. 34.
    Wagner T, Trautmann H (2010) Integration of preferences in hypervolume-based multiobjective evolutionary algorithms by means of desirability functions. IEEE Trans Evol Comput 14(5):688–701CrossRefGoogle Scholar
  35. 35.
    Sakawa M, Kato K (2002) An interactive fuzzy satisficing method for general multiobjective 0–1 programming problems through genetic algorithms with double strings based on a reference solution. Fuzzy Sets Syst 125(3):289–300zbMATHCrossRefMathSciNetGoogle Scholar
  36. 36.
    Coello CAC (2000) Handling preferences in evolutionary multiobjective optimization: a survey. In: Proceedings of the Congress on Evolutionary Computation, vol 1. IEEE, pp 30–37Google Scholar
  37. 37.
    Phelps S, Köksalan M (2003) An interactive evolutionary metaheuristic for multiobjective combinatorial optimization. Manage Sci 49(12):1726–1738zbMATHCrossRefGoogle Scholar
  38. 38.
    Köksalan M, Karahan I (2010) An interactive territory defining evolutionary algorithm: iTDEA. IEEE Trans Evol Comput 14(5):702–722CrossRefGoogle Scholar
  39. 39.
    Wang R, Purshouse RC, Fleming PJ (2015) Preference-inspired co-evolutionary algorithms using weight vectors. Eur J Oper Res 243(2):423–441Google Scholar
  40. 40.
    Wang R, Zhou Z, Ishibuchi H, Liao T, Zhang T (2016) Localized weighted sum method for many-objective optimization. IEEE Trans Evol Comput. doi: 10.1109/TEVC.2016.2611642
  41. 41.
    Branke J, Kaußler T, Schmeck H (2001) Guidance in evolutionary multi-objective optimization. Adv Eng Softw 32(6):499–507zbMATHCrossRefGoogle Scholar
  42. 42.
    Branke J, Deb K (2005) Integrating user preferences into evolutionary multi-objective optimization. In: Knowledge Incorporation in Evolutionary Computation. Springer, New York, pp 461–477Google Scholar
  43. 43.
    Ma X, Zhang Q, Yang J, Zhu Z (2017) On Tchebycheff decomposition approaches for multi-objective evolutionary optimization. IEEE Trans Evol Comput. doi: 10.1109/TEVC.2017.2704118
  44. 44.
    Ishibuchi H, Sakane Y, Tsukamoto N, Nojima Y (2009) Adaptation of scalarizing functions in MOEA/D: an adaptive scalarizing function-based multiobjective evolutionary algorithm. In: Evolutionary multi-criterion optimization. Springer, New York, pp 438–452Google Scholar
  45. 45.
    Ishibuchi H, Sakane Y, Tsukamoto N, Nojima Y (2010) Simultaneous use of different scalarizing functions in MOEA/D. In: The genetic and evolutionary computation conference. ACM, New York, pp 519–526Google Scholar
  46. 46.
    Wang R, Zhang Q, Zhang T (2016) Decomposition-based algorithms using Pareto adaptive scalarizing methods. IEEE Trans Evol Comput 20(6):821–837CrossRefGoogle Scholar
  47. 47.
    Liu HL, Gu FQ, Cheung YM (2010) T-MOEA/D: MOEA/D with objective transform in multi-objective problems. In: Information Science and Management Engineering (ISME), International Conference of, vol 2. IEEE, pp 282–285Google Scholar
  48. 48.
    Sato H (2014) Inverted PBI in MOEA/D and its impact on the search performance on multi and many-objective optimization. In: The Genetic and Evolutionary Computation Conference. ACM, New York, pp 645–652Google Scholar
  49. 49.
    Haimes YY, Hall WA (1974) Multiobjectives in water resource systems analysis: the surrogate worth trade off method. Water Resour Res 10(4):615–624Google Scholar
  50. 50.
    Brafman RI (2011) Relational preference rules for control. Artif Intell 175(7):1180–1193Google Scholar
  51. 51.
    Zitzler E, Thiele L, Bader J (2008) SPAM: set preference algorithm for multiobjective optimization. In: International Conference on Parallel Problem Solving from Nature, vol 5199. Springer, New York, pp 847–858Google Scholar
  52. 52.
    Jaimes AL, Coello CAC (2009) Study of preference relations in many-objective optimization. In: The genetic and evolutionary computation conference. ACM, New York, pp 611–618Google Scholar
  53. 53.
    Thomas L (2008) Saaty. Decision making with the analytic hierarchy process. Int J Serv Sci 1(1):83–98Google Scholar
  54. 54.
    Feldman AM (1989) Preferences and utility. In: Welfare economics and social choice theory. Springer, New York, pp 9–22Google Scholar
  55. 55.
    Jeantet G, Spanjaard O (2011) Computing rank dependent utility in graphical models for sequential decision problems. Artif Intell 175(7):1366–1389zbMATHCrossRefMathSciNetGoogle Scholar
  56. 56.
    Pedro LR, Takahashi R (2011) Modeling decision-maker preferences through utility function level sets. In: Evolutionary multi-criterion optimization. Springer, New York, pp 550–563Google Scholar
  57. 57.
    Costa CAC (2012) Readings in multiple criteria decision aid. Springer Science & Business Media, New YorkGoogle Scholar
  58. 58.
    Greenwood GW, Hu X, D’Ambrosio JG (1996) Fitness functions for multiple objective optimization problems: Combining preferences with Pareto rankings. In: FOGA, vol 96, pp 437–455Google Scholar
  59. 59.
    White CC III, Sage AP, Dozono S (1984) A model of multiattribute decisionmaking and trade-off weight determination under uncertainty. IEEE Trans Syst Man Cybern 14(2):223–229Google Scholar
  60. 60.
    Cvetković D, Parmee IC (2002) Preferences and their application in evolutionary multiobjective optimization. IEEE Trans Evol Comput 6(1):42–57Google Scholar
  61. 61.
    Rekiek B, De Lit P, Pellichero F, L’Eglise T, Falkenauer E, Delchambre A (2000) Dealing with user’s preferences in hybrid assembly lines design. IFAC Proc Vol 33(17):989–994Google Scholar
  62. 62.
    Waegeman W, De Baets B (2011) On the ERA ranking representability of pairwise bipartite ranking functions. Artif Intell 175(7):1223–1250zbMATHCrossRefMathSciNetGoogle Scholar
  63. 63.
    Siskos J, Lombard J, Oudiz A (1986) The use of multicriteria outranking methods in the comparison of control options against a chemical pollutant. J Oper Res Soc 37(4):357–371Google Scholar
  64. 64.
    Brans JP, Vincke P, Mareschal B (1986) How to select and how to rank projects: the PROMETHEE method. Eur J Oper Res 24(2):228–238Google Scholar
  65. 65.
    Massebeuf S, Fonteix C, Kiss LN, Marc I, Pla F, Zaras K (1999) Multicriteria optimization and decision engineering of an extrusion process aided by a diploid genetic algorithm. In: Proceedings of the 1999 congress on evolutionary computation, CEC 99, vol 1. IEEE, pp 14–21Google Scholar
  66. 66.
    Shukla PK, Emmerich M, Deutz A (2013) A theoretical analysis of curvature based preference models. In: International Conference on Evolutionary Multi-Criterion Optimization. Springer, New York, pp 367–382Google Scholar
  67. 67.
    Rachmawati L, Srinivasan D (2009) Multiobjective evolutionary algorithm with controllable focus on the knees of the Pareto front. IEEE Trans Evol Comput 13(4):810–824CrossRefGoogle Scholar
  68. 68.
    Branke J, Deb K, Dierolf H, Osswald M (2004) Finding knees in multi-objective optimization. In: International Conference on Parallel Problem Solving from Nature. Springer, New York, pp 722–731Google Scholar
  69. 69.
    Deb K, Gupta S (2011) Understanding knee points in bicriteria problems and their implications as preferred solution principles. Eng Optim 43(11):1175–1204CrossRefMathSciNetGoogle Scholar
  70. 70.
    Jin Y, Bernhard S (2008) Pareto-based multi-objective machine learning: An overview and case studies. IEEE Trans Syst Man Cybern Part C Appl Rev 38(3):397–415CrossRefGoogle Scholar
  71. 71.
    Smith C, Jin Y (2014) Evolutionary multi-objective generation of recurrent neural network ensembles for time series prediction. Neurocomputing 143:302–311CrossRefGoogle Scholar
  72. 72.
    Li L, Yao X, Stolkin R, Gong M, He S (2014) An evolutionary multiobjective approach to sparse reconstruction. IEEE Trans Evol Comput 18(6):827–845CrossRefGoogle Scholar
  73. 73.
    Zhang X, Tian Y, Jin Y (2015) A knee point-driven evolutionary algorithm for many-objective optimization. IEEE Trans Evol Comput 19(6):761–776CrossRefGoogle Scholar
  74. 74.
    Wang H, He S, Yao X (2017) Nadir point estimation for many-objective optimization problems based on emphasized critical regions. Soft Comput 21(9):2283–2295CrossRefGoogle Scholar
  75. 75.
    Branke J, Deb K, Miettinen K, Slowiński R (2008) Multiobjective optimization. Interactive and evolutionary approaches, vol 5252. Springer, New YorkGoogle Scholar
  76. 76.
    Deb K, Kumar A (2007) Interactive evolutionary multi-objective optimization and decision-making using reference direction method. In: The genetic and evolutionary computation conference. ACM, New York, pp 781–788Google Scholar
  77. 77.
    Amiri M, Ekhtiari M, Yazdani M (2011) Nadir compromise programming: a model for optimization of multi-objective portfolio problem. Expert Syst Appl 38(6):7222–7226CrossRefGoogle Scholar
  78. 78.
    Voget S, Kolonko M (1998) Multidimensional optimization with a fuzzy genetic algorithm. J Heuristics 4(3):221–244zbMATHCrossRefGoogle Scholar
  79. 79.
    Hadjali A, Mokhtari A, Pivert O (2012) Expressing and processing complex preferences in route planning queries: towards a fuzzy-set-based approach. Fuzzy Sets Syst 196:82–104CrossRefMathSciNetGoogle Scholar
  80. 80.
    Pirjanian P (1998) Multiple objective action selection and behavior FPcsion using voting. Ph.D. thesis, Department of Medical Informatics and Image Analysis, Institute of Electronic Systems, Aalborg University, Aalborg, DenmarkGoogle Scholar
  81. 81.
    Fodor JC, Roubens MR (1994) Fuzzy preference modelling and multicriteria decision support, vol 14. Springer Science & Business Media, New YorkGoogle Scholar
  82. 82.
82. Jin Y, Sendhoff B (2002) Fuzzy preference incorporation into evolutionary multi-objective optimization. In: Proceedings of the 4th Asia-Pacific conference on simulated evolution and learning, vol 1, pp 26–30
83. Abraham A, Jain L (2005) Evolutionary multiobjective optimization. Springer, New York
84. Khare V, Yao X, Deb K (2003) Performance scaling of multi-objective evolutionary algorithms. In: Evolutionary multi-criterion optimization. Springer, New York, pp 376–390
85. Miettinen K, Hakanen J, Podkopaev D (2016) Interactive nonlinear multiobjective optimization methods. In: Multiple criteria decision analysis. Springer, New York, pp 927–976
86. Said LB, Bechikh S, Ghédira K (2010) The r-dominance: a new dominance relation for interactive evolutionary multicriteria decision making. IEEE Trans Evol Comput 14(5):801–818
87. Deb K, Chaudhuri S (2005) I-EMO: an interactive evolutionary multi-objective optimization tool. In: Pattern recognition and machine intelligence. Springer, New York, pp 690–695
88. Miettinen K, Podkopaev D, Ruiz F, Luque M (2015) A new preference handling technique for interactive multiobjective optimization without trading-off. J Global Optim 63(4):633–652
89. Miettinen K, Ruiz F (2016) NAUTILUS framework: towards trade-off-free interaction in multiobjective optimization. J Bus Econ 86(1–2):5–21
90. Deb K, Chaudhuri S (2007) I-MODE: an interactive multi-objective optimization and decision-making using evolutionary methods. In: Evolutionary multi-criterion optimization. Springer, New York, pp 788–802
91. Sindhya K, Ruiz AB, Miettinen K (2011) A preference based interactive evolutionary algorithm for multi-objective optimization: PIE. In: Evolutionary multi-criterion optimization. Springer, New York, pp 212–225
92. Gong M, Liu F, Zhang W, Jiao L, Zhang Q (2011) Interactive MOEA/D for multi-objective decision making. In: The genetic and evolutionary computation conference. ACM, New York, pp 721–728
93. Ruiz AB, Luque M, Miettinen K, Saborido R (2015) An interactive evolutionary multiobjective optimization method: interactive WASF-GA. In: Evolutionary multi-criterion optimization. Springer, New York, pp 249–263
94. Ruiz AB, Luque M, Ruiz F, Saborido R (2015) A combined interactive procedure using preference-based evolutionary multiobjective optimization: application to the efficiency improvement of the auxiliary services of power plants. Expert Syst Appl 42(21):7466–7482
95. Liu R, Wang R, Feng W, Huang J, Jiao L (2016) Interactive reference region based multi-objective evolutionary algorithm through decomposition. IEEE Access 4:7331–7346
96. Battiti R, Passerini A (2010) Brain-computer evolutionary multiobjective optimization: a genetic algorithm adapting to the decision maker. IEEE Trans Evol Comput 14(5):671–687
97. Branke J, Greco S, Słowiński R, Zielniewicz P (2009) Interactive evolutionary multiobjective optimization using robust ordinal regression. In: Evolutionary multi-criterion optimization. Springer, New York, pp 554–568
98. Deb K, Sinha A, Korhonen PJ, Wallenius J (2010) An interactive evolutionary multiobjective optimization method based on progressively approximated value functions. IEEE Trans Evol Comput 14(5):723–739
99. Sinha A, Korhonen P, Wallenius J, Deb K (2014) An interactive evolutionary multi-objective optimization algorithm with a limited number of decision maker calls. Eur J Oper Res 233(3):674–688
100. Chankong V, Haimes YY (1983) Multiobjective decision making: theory and methodology. North Holland, New York
101. Hwang CL, Masud ASM (1979) Multiple objective decision making: methods and applications. Springer, New York
102. Torokhti A, Howlett P (1985) Theory of multiobjective optimization, vol 176. Elsevier, Amsterdam
103. Roy B (2005) Paradigms and challenges. In: Multiple criteria decision analysis: state of the art surveys. Springer, New York, pp 3–24
104. Bouyssou D (1986) Some remarks on the notion of compensation in MCDM. Eur J Oper Res 26(1):150–160
105. Figueira J, Mousseau V, Roy B (2005) ELECTRE methods. In: Multiple criteria decision analysis: state of the art surveys. Springer, New York, pp 133–153
106. Brans JP, Mareschal B (2005) PROMETHEE methods. In: Multiple criteria decision analysis: state of the art surveys. Springer, New York, pp 163–186
107. Siskos Y, Grigoroudis E, Matsatsinis NF (2005) UTA methods. In: Multiple criteria decision analysis: state of the art surveys. Springer, New York, pp 297–334
108. Dyer JS (2005) MAUT: multiattribute utility theory. In: Multiple criteria decision analysis: state of the art surveys. Springer, New York, pp 265–292
109. Meyer P, Roubens M (2005) Choice, ranking and sorting in fuzzy multiple criteria decision aid. In: Multiple criteria decision analysis: state of the art surveys. Springer, New York, pp 471–503
110. Greco S, Matarazzo B, Słowiński R (2005) Decision rule approach. In: Multiple criteria decision analysis: state of the art surveys. Springer, New York, pp 507–555
111. Ehrgott M, Wiecek MM (2005) Multiobjective programming. In: Multiple criteria decision analysis: state of the art surveys. Springer, New York, pp 667–708
112. Larichev OI (1992) Cognitive validity in design of decision-aiding techniques. J Multi-Criteria Decis Anal 1(3):127–138
113. Rachmawati L, Srinivasan D (2006) Preference incorporation in multi-objective evolutionary algorithms: a survey. In: Proceedings of the congress on evolutionary computation. IEEE, pp 962–968
114. Fonseca CM, Fleming PJ (1998) Multiobjective optimization and multiple constraint handling with evolutionary algorithms. I: a unified formulation. IEEE Trans Syst Man Cybern Part A Syst Hum 28(1):26–37
115. Tan KC, Khor EF, Lee TH, Sathikannan R (2003) An evolutionary algorithm with advanced goal and priority specification for multi-objective optimization. J Artif Intell Res 18:183–215
116. Yang X, Gen M (1994) Evolution program for bicriteria transportation problem. Comput Ind Eng 27(1):481–484
117. Wilson PB, Macleod MD (1993) Low implementation cost IIR digital filter design using genetic algorithms. IEE/IEEE Workshop Nat Algorithms Signal Process 1:1–4
118. Wienke D, Lucasius C, Kateman G (1992) Multicriteria target vector optimization of analytical procedures using a genetic algorithm. Part I: theory, numerical simulations and application to atomic emission spectroscopy. Anal Chim Acta 265(2):211–225
119. Quagliarella D, Vicini A (1997) Coupling genetic algorithms and gradient based optimization techniques. In: Genetic algorithms and evolution strategy in engineering and computer science: recent advances and industrial applications. Wiley, Hoboken, pp 289–309
120. Wang R, Purshouse RC, Giagkiozis I, Fleming PJ (2015) The iPICEA-g: a new hybrid evolutionary multi-criteria decision making approach using the brushing technique. Eur J Oper Res 243(2):442–453
121. Cheng R, Rodemann T, Fischer M, Olhofer M, Jin Y (2017) Evolutionary many-objective optimization of hybrid electric vehicle control: from general optimization to preference articulation. IEEE Trans Emerg Top Comput Intell 1(2):97–111
122. Wierzbicki AP (1980) The use of reference objectives in multiobjective optimization. In: Multiple criteria decision making theory and application. Springer, New York, pp 468–486
123. Korhonen PJ, Laakso J (1986) A visual interactive method for solving the multiple criteria problem. Eur J Oper Res 24(2):277–287
124. Nikulin Y, Miettinen K, Mäkelä MM (2012) A new achievement scalarizing function based on parameterization in multiobjective optimization. OR Spectr 34(1):69–87
125. Jaszkiewicz A, Słowiński R (1999) The ‘light beam search’ approach: an overview of methodology and applications. Eur J Oper Res 113(2):300–314
126. Liu R, Wang X, Liu J, Fang L, Jiao L (2013) A preference multi-objective optimization based on adaptive rank clone and differential evolution. Nat Comput 12(1):109–132
127. Deb K, Kumar A (2007) Light beam search based multi-objective optimization using evolutionary algorithms. In: Proceedings of the congress on evolutionary computation. IEEE, pp 2125–2132
128. Molina J, Santana LV, Hernández-Díaz AG, Coello CAC, Caballero R (2009) g-dominance: reference point based dominance for multiobjective metaheuristics. Eur J Oper Res 197(2):685–692
129. Ishibuchi H, Tsukamoto N, Sakane Y, Nojima Y (2009) Hypervolume approximation using achievement scalarizing functions for evolutionary many-objective optimization. In: Proceedings of the congress on evolutionary computation. IEEE, pp 530–537
130. Ishibuchi H, Tsukamoto N, Sakane Y, Nojima Y (2010) Indicator-based evolutionary algorithm with hypervolume approximation by achievement scalarizing functions. In: The genetic and evolutionary computation conference. ACM, New York, pp 527–534
131. Chugh T, Sindhya K, Hakanen J, Miettinen K (2015) An interactive simple indicator-based evolutionary algorithm (I-SIBEA) for multiobjective optimization problems. In: Evolutionary multi-criterion optimization. Springer, New York, pp 277–291
132. Bishop CM (2006) Pattern recognition and machine learning. Springer, New York
133. Han J, Kamber M, Pei J (2011) Data mining: concepts and techniques. Elsevier, Amsterdam
134. Sun X, Gong D, Jin Y, Chen S (2013) A new surrogate-assisted interactive genetic algorithm with weighted semi-supervised learning. IEEE Trans Cybern 43(2):685–698
135. Jin Y (2011) Surrogate-assisted evolutionary computation: recent advances and future challenges. Swarm Evol Comput 1(2):61–70
136. Arrow KJ (2012) Social choice and individual values, vol 12. Yale University Press, New Haven
137. Janis IL, Mann L (1977) Decision making: a psychological analysis of conflict, choice, and commitment. Free Press, New York
138. Ackoff RL (1967) Management misinformation systems. Manag Sci 14(4):B-147
139. Edwards W (1954) The theory of decision making. Psychol Bull 51(4):380
140. Hoch SJ, Schkade DA (1996) A psychological approach to decision support systems. Manag Sci 42(1):51–64
141. Wang H, Yao X (2016) Objective reduction based on nonlinear correlation information entropy. Soft Comput 20(6):2393–2407
142. Freitas AR, Fleming PJ, Guimaraes F (2013) A non-parametric harmony-based objective reduction method for many-objective optimization. In: IEEE international conference on systems, man, and cybernetics. IEEE, pp 651–656
143. de Freitas ARR, Fleming PJ, Guimarães FG (2015) Aggregation trees for visualization and dimension reduction in many-objective optimization. Inf Sci 298:288–314
144. Brockhoff D, Zitzler E (2009) Objective reduction in evolutionary multiobjective optimization: theory and applications. Evol Comput 17(2):135–166
145. Singh HK, Isaacs A, Ray T (2011) A Pareto corner search evolutionary algorithm and dimensionality reduction in many-objective optimization problems. IEEE Trans Evol Comput 15(4):539–556
146. Jaimes AL, Coello CAC, Chakraborty D (2008) Objective reduction using a feature selection technique. In: The genetic and evolutionary computation conference. ACM, New York, pp 673–680
147. Hämäläinen JP, Miettinen K, Tarvainen P, Toivanen J (2003) Interactive solution approach to a multiobjective optimization problem in a paper machine headbox design. J Optim Theory Appl 116(2):265–281
148. Kangas JA, Kohonen TK, Laaksonen JT (1990) Variants of self-organizing maps. IEEE Trans Neural Netw 1(1):93–99
149. Obayashi S, Sasaki D (2003) Visualization and data mining of Pareto solutions using self-organizing map. In: Evolutionary multi-criterion optimization. Springer, New York, pp 796–809
150. Nekolny B (2010) Contextual self-organizing maps for visual design space exploration. Master's thesis, Iowa State University
151. Maes F, Collignon A, Vandermeulen D, Marchal G, Suetens P (1997) Multimodality image registration by maximization of mutual information. IEEE Trans Med Imaging 16(2):187–198
152. Wang Q, Shen Y, Zhang JQ (2005) A nonlinear correlation measure for multivariable data set. Phys D 200(3):287–295
153. Wang H, Jin Y (2017) Efficient nonlinear correlation detection for decomposed search in evolutionary multi-objective optimization. In: Proceedings of the congress on evolutionary computation, pp 649–656
154. Deb K, Saxena DK (2005) On finding Pareto-optimal solutions through dimensionality reduction for certain large-dimensional multi-objective optimization problems. Technical report, Indian Institute of Technology Kanpur
155. Saxena DK, Duro JA, Tiwari A, Deb K, Zhang Q (2013) Objective reduction in many-objective optimization: linear and nonlinear algorithms. IEEE Trans Evol Comput 17(1):77–99
156. Deb K, Gupta H (2006) Introducing robustness in multi-objective optimization. Evol Comput 14(4):463–494
157. Jin Y, Sendhoff B (2003) Trade-off between performance and robustness: an evolutionary multiobjective approach. In: Fonseca CM, Fleming PJ, Zitzler E, Thiele L, Deb K (eds) Evolutionary multi-criterion optimization, EMO 2003. Lecture notes in computer science, vol 2632. Springer, Berlin, pp 237–251
158. Jin Y, Tang K, Xin Y, Sendhoff B, Yao X (2013) A framework for finding robust optimal solutions over time. Memet Comput 5(1):3–18
159. Deb K, Gupta H (2005) Searching for robust Pareto-optimal solutions in multi-objective optimization. Lect Notes Comput Sci 3410:150–164
160. Sülflow A, Drechsler N, Drechsler R (2007) Robust multi-objective optimization in high dimensional spaces. In: Evolutionary multi-criterion optimization. Springer, New York, pp 715–726
161. Gunawan S, Azarm S (2005) Multi-objective robust optimization using a sensitivity region concept. Struct Multidiscip Optim 29(1):50–60
162. Li M, Azarm S, Aute V (2005) A multi-objective genetic algorithm for robust design optimization. In: The genetic and evolutionary computation conference. ACM, New York, pp 771–778
163. Lim D, Ong Y-S, Jin Y, Sendhoff B, Lee BS (2006) Inverse multi-objective robust evolutionary optimization. Genet Progr Evol Mach 7(4):383–404
164. Greco S, Słowiński R, Figueira JR, Mousseau V (2010) Robust ordinal regression. In: Trends in multiple criteria decision analysis. Springer, New York, pp 241–283
165. Bana e Costa CA, De Corte JM, Vansnick JC (2005) On the mathematical foundation of MACBETH. Springer, New York
166. Wang H, Jiao L, Shang R, He S, Liu F (2015) A memetic optimization strategy based on dimension reduction in decision space. Evol Comput 23(1):69–100
167. Adra SF, Dodd TJ, Griffin IA, Fleming PJ (2009) Convergence acceleration operator for multiobjective optimization. IEEE Trans Evol Comput 13(4):825–847
168. Gaspar-Cunha A, Vieira A (2004) A hybrid multi-objective evolutionary algorithm using an inverse neural network. In: Hybrid metaheuristics, pp 25–30
169. Laumanns M, Ocenasek J (2002) Bayesian optimization algorithms for multi-objective optimization. In: International conference on parallel problem solving from nature. Springer, New York, pp 298–307
170. Khan N, Goldberg DE, Pelikan M (2002) Multi-objective Bayesian optimization algorithm. IlliGAL Report No. 2002009, University of Illinois at Urbana-Champaign
171. Larranaga P, Lozano JA (2002) Estimation of distribution algorithms: a new tool for evolutionary computation, vol 2. Springer Science & Business Media, New York
172. Hauschild M, Pelikan M (2011) An introduction and survey of estimation of distribution algorithms. Swarm Evol Comput 1(3):111–128
173. Zitzler E, Deb K, Thiele L (2000) Comparison of multiobjective evolutionary algorithms: empirical results. Evol Comput 8(2):173–195
174. Deb K, Thiele L, Laumanns M, Zitzler E (2002) Scalable multi-objective optimization test problems. In: Proceedings of the 2002 congress on evolutionary computation, CEC 2002, vol 1. IEEE Computer Society, pp 825–830
175. Huband S, Hingston P, Barone L, While L (2006) A review of multiobjective test problems and a scalable test problem toolkit. IEEE Trans Evol Comput 10(5):477–506
176. López-Ibáñez M, Knowles J (2015) Machine decision makers as a laboratory for interactive EMO. In: Evolutionary multi-criterion optimization. Springer, New York, pp 295–309
177. Van Veldhuizen DA (1999) Multiobjective evolutionary algorithms: classifications, analyses, and new innovations. Technical report, DTIC Document
178. Yu G, Zheng J, Li X (2015) An improved performance metric for multiobjective evolutionary algorithms with user preferences. In: Proceedings of the congress on evolutionary computation. IEEE, pp 908–915
179. He Z, Yen GG (2016) Visualization and performance metric in many-objective optimization. IEEE Trans Evol Comput 20(3):386–402
180. Pryke A, Mostaghim S, Nazemi A (2007) Heatmap visualization of population based multi objective algorithms. In: Evolutionary multi-criterion optimization. Springer, New York, pp 361–375
181. Valdés JJ, Barton AJ (2007) Visualizing high dimensional objective spaces for multi-objective optimization: a virtual reality approach. In: Proceedings of the congress on evolutionary computation. IEEE, pp 4199–4206
182. Lowe D, Tipping M (1996) Feed-forward neural networks and topographic mappings for exploratory data analysis. Neural Comput Appl 4(2):83–95
183. Hoffman P, Grinstein G, Marx K, Grosse I, Stanley E (1997) DNA visual and analytic data mining. In: Proceedings of visualization. IEEE, pp 437–441
184. Chen S, Amid D, Shir OM, Limonad L, Boaz D, Anaby-Tavor A, Schreck T (2013) Self-organizing maps for multi-objective Pareto frontiers. In: IEEE Pacific visualization symposium (PacificVis). IEEE, pp 153–160
185. Tenenbaum JB, De Silva V, Langford JC (2000) A global geometric framework for nonlinear dimensionality reduction. Science 290(5500):2319–2323
186. Silva R, Salimi A, Li M, Freitas ARR, Guimarães FG, Lowther DA (2016) Visualization and analysis of tradeoffs in many-objective optimization: a case study on the interior permanent magnet motor design. IEEE Trans Magn 52(3):1–4
187. Freitas ARR, Silva RCP, Guimarães FG (2014) On the visualization of trade-offs and reducibility in many-objective optimization. In: The genetic and evolutionary computation conference. ACM, New York, pp 1091–1098
188. Pigozzi G, Tsoukiàs A, Viappiani P (2016) Preferences in artificial intelligence. Ann Math Artif Intell 77(3–4):361–401
189. Goldsmith J, Junker U (2009) Preference handling for artificial intelligence. AI Mag 29(4):9
190. Domshlak C, Hüllermeier E, Kaci S, Prade H (2011) Preferences in AI: an overview. Artif Intell 175(7):1037–1052

Copyright information

© The Author(s) 2017

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

1. Department of Computer Science, University of Surrey, Guildford, UK
2. Honda Research Institute Europe, Offenbach, Germany