Abstract
The Dominance-based Rough Set Approach (DRSA) is an innovative preference learning approach. It takes as input a set of objects (learning set) described with respect to a collection of condition and decision attributes, and generates a set of if-then decision rules. Initial versions of dominance-based rough set approximation methods assume a single decision maker. Furthermore, the proposed extensions to group decision making mainly use an input-oriented aggregation strategy, which requires a high level of agreement between the decision makers. In this paper, we propose an output-oriented aggregation strategy to coherently combine different sets of decision rules obtained from different decision makers. The proposed aggregation algorithm is illustrated using real-world data relative to a business school admission where two decision makers are involved. Results show that the aggregation algorithm is able to reproduce the individual assignments of students with a very limited preferential information loss.
Keywords
 Rough set approximation
 Decision rule
 Group decision making
 Rules aggregation
 Output aggregation strategy
1 Introduction
The Dominance-based Rough Set Approach (DRSA) [7] is an extension of Rough Set Theory [10] intended to deal with multicriteria sorting problems. The DRSA takes a set of assignment examples (learning set) and generates a collection of if-then decision rules as output. The conventional DRSA assumes a single decision maker, while several real-world decision problems need to take into account the presence of multiple decision makers. Different group decision making extensions to DRSA have been proposed in the literature, including [1,2,3,4,5,6, 8, 11, 12].
For instance, the authors in [1] and [8] extend the concepts of the DRSA to deal with decision tables having multiple decision attributes, thus allowing comprehensive collective decision rules to be generated. In [4] we introduced an aggregation algorithm, based on the majority principle and supporting the veto effect, allowing consensual decision rules to be inferred. A more advanced version of the aggregation algorithm of [4] is proposed in [3]. In [5, 6], the authors use the Dempster-Shafer theory of evidence to combine individual rules provided by the DRSA.
However, all these approaches rely on an input-oriented aggregation strategy, which requires a high level of agreement between the decision makers. In this paper, we propose an output-oriented aggregation strategy to coherently combine different sets of decision rules obtained from different decision makers. The proposed aggregation algorithm is illustrated using real-world data relative to a business school admission where two decision makers are involved. Results show that the aggregation algorithm is able to reproduce the individual assignments of students with a very low preferential information loss.
The rest of the paper is structured as follows. Section 2 sets the background. Section 3 deals with rules matching and overlapping. Section 4 details the aggregation algorithm. Section 5 provides an illustrative application. Section 6 concludes the paper.
2 Background
2.1 Notations and Basic Assumptions
Information about decision objects is often represented in terms of an information table where rows correspond to objects and columns to attributes. The information table S is a 4-tuple \({<}U,Q,V,f{>}\) where: U is a finite set of objects, Q is a finite set of attributes, \(V=\bigcup _{q\in Q} V_q\), where \(V_q\) is the domain of attribute q, and \(f: U \times Q \rightarrow V\) is an information function defined such that \(f(x,q) \in V_q, \forall q\in Q, \forall x\in U\). The set of attributes Q is often divided into a subset C of condition attributes and a subset D of decision attributes. In this case, S is called a decision table.
The domains of the condition attributes are supposed to be ordered in decreasing or increasing preference. Such attributes are called criteria. We assume that the preference is increasing with the value of \(f(\cdot ,q)\) for every \(q\in C\). We also assume that the set of decision attributes D is a singleton \(\{d\}\). Decision attribute d makes a partition of U into a finite number of decision classes \(\mathbf{Cl} =\{Cl_t, t\in T\}\), \(T=\{0,\cdots ,n\}\), such that each \(x\in U\) belongs to one and only one class in \(\mathbf{Cl} \). Further, we assume that the classes are preference-ordered, i.e. for all \(r,s\in T\) such that \(r>s\), the objects from \(Cl_r\) are preferred to the objects from \(Cl_s\).
2.2 Rough Approximation
In DRSA the represented knowledge is a collection of upward unions \(Cl_t^\ge \) and downward unions \(Cl_t^\le \) of classes defined as follows: \(Cl_t^\ge =\cup _{s\ge t} Cl_s\) and \(Cl_t^\le =\cup _{s\le t}Cl_s\). The assertion “\(x\in Cl_t^{\ge }\)” means that “x belongs to at least class \(Cl_t\)” while the assertion “\(x\in Cl_t^{\le }\)” means that “x belongs to at most class \(Cl_t\)”. The basic idea of DRSA is to replace the indiscernibility relation used in conventional Rough Set Theory with a dominance relation. Let \(P\subseteq C\) be a subset of condition attributes. The dominance relation \(\varDelta _P\) associated with P is defined for each pair of objects x and y as follows:
$$\begin{aligned} x\varDelta _P y \iff f(x,q)\succeq f(y,q),\ \forall q\in P. \end{aligned}$$(1)
In the definition above, the symbol “\(\succeq \)” should be replaced with “\(\preceq \)” for condition attributes which are ordered according to decreasing preferences. To each object \(x\in U\), we associate two sets: (i) the P-dominating set \(\varDelta _P^+(x)=\{y\in U: y\varDelta _P x\}\) containing the objects that dominate x, and (ii) the P-dominated set \(\varDelta _P^-(x)=\{y\in U:x\varDelta _P y\}\) containing the objects dominated by x.
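The two sets above can be computed directly from their definitions. The following is a minimal sketch (not from the paper), assuming all criteria in P are gain-type, i.e. preference increases with the attribute value, and objects are represented as dictionaries of criterion values:

```python
# Sketch: P-dominating and P-dominated sets over a toy universe.
# Assumes every criterion in P is gain-type (larger value preferred).

def dominates(y, x, P):
    """True iff y Delta_P x, i.e. y is at least as good as x on every q in P."""
    return all(y[q] >= x[q] for q in P)

def dominating_set(x, U, P):
    """Delta_P^+(x): the objects of U that dominate x."""
    return [y for y in U if dominates(y, x, P)]

def dominated_set(x, U, P):
    """Delta_P^-(x): the objects of U dominated by x."""
    return [y for y in U if dominates(x, y, P)]

# Toy universe of three objects described on two criteria.
U = [{"q1": 3, "q2": 2}, {"q1": 2, "q2": 2}, {"q1": 1, "q2": 1}]
P = ["q1", "q2"]
```

For the middle object, `dominating_set(U[1], U, P)` returns the first two objects and `dominated_set(U[1], U, P)` the last two, since dominance is reflexive.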
Then, the P-lower and P-upper approximations of \(Cl_t^\ge \) with respect to P are defined as follows:

\(\underline{P} (Cl_t^{\ge })=\{x \in U:\varDelta _{P}^{+}(x)\subseteq Cl_t^{\ge }\}\),

\(\bar{P}(Cl_t^{\ge })= \{x\in U: \varDelta _{P}^{-}(x)\cap Cl_{t}^{\ge }\ne \emptyset \}\).
Analogously, the P-lower and P-upper approximations of \(Cl_t^\le \) with respect to P are defined as follows:

\(\underline{P} (Cl_t^{\le })=\{x \in U:\varDelta _{P}^{-}(x)\subseteq Cl_t^{\le }\}\),

\(\bar{P}(Cl_t^{\le })= \{x\in U:\varDelta _{P}^{+}(x)\cap Cl_{t}^{\le }\ne \emptyset \}\).
The lower approximations group the objects which certainly belong to class unions \(Cl_t^{\ge }\) (resp. \(Cl_t^{\le }\)). The upper approximations group the objects which could belong to \(Cl_t^{\ge }\) (resp. \(Cl_t^{\le }\)).
The P-boundaries of \(Cl_t^\ge \) and \(Cl_t^\le \) are defined as follows:

\(Bn_P(Cl_t^{\ge })=\bar{P}(Cl_t^{\ge })\setminus \underline{P}(Cl_{t}^{\ge })\),

\(Bn_P(Cl_t^{\le })=\bar{P}(Cl_t^{\le })\setminus \underline{P}(Cl_t^{\le })\).
The boundaries group objects that can neither be ruled in nor out as members of class \(Cl_{t}\).
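The approximations and boundary of an upward union can be sketched in a few lines. The code below is illustrative only (the objects and labels are toy data, not the paper's), again assuming gain-type criteria:

```python
# Sketch: P-lower / P-upper approximations and boundary of Cl_t^>=.

def dominates(y, x, P):
    """y Delta_P x on gain-type criteria."""
    return all(y[q] >= x[q] for q in P)

def lower_upper_geq(t, objects, P):
    """Approximations of the upward union Cl_t^>= (labels stored under 'cl')."""
    cl_geq = {i for i, o in enumerate(objects) if o["cl"] >= t}
    lower, upper = set(), set()
    for i, x in enumerate(objects):
        dom_plus = {j for j, y in enumerate(objects) if dominates(y, x, P)}
        dom_minus = {j for j, y in enumerate(objects) if dominates(x, y, P)}
        if dom_plus <= cl_geq:   # every object dominating x lies in Cl_t^>=
            lower.add(i)
        if dom_minus & cl_geq:   # some object dominated by x lies in Cl_t^>=
            upper.add(i)
    return lower, upper

objects = [
    {"q1": 3, "q2": 3, "cl": 2},
    {"q1": 2, "q2": 2, "cl": 1},
    {"q1": 2, "q2": 2, "cl": 2},  # same profile as above, different class
    {"q1": 1, "q2": 1, "cl": 1},
]
lower, upper = lower_upper_geq(2, objects, ["q1", "q2"])
boundary = upper - lower
```

The two objects with identical profiles but different classes end up in the boundary, which is exactly the inconsistency the rough approximation is meant to isolate.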
2.3 Decision Rules
The approximations of upward and downward unions of classes can serve to induce a set of if-then decision rules relating condition and decision attributes. There are five basic types of decision rules:

Certain decision rules. These rules are generated from the lower approximation of the union of classes \(Cl_t^{\le }\) or \(Cl_t^{\ge }\). A decision rule of this type has one of the following structures:

Type 1: if \(f(x, q_1)\le r_1\wedge \cdots \wedge f(x, q_m)\le r_m\) then \(x \in Cl_t^\le \)

Type 2: if \(f(x, q_1)\ge r_1\wedge \cdots \wedge f(x, q_m)\ge r_m\) then \(x \in Cl_t^\ge \)
where \((r_1,\cdots ,r_m) \in (V_{q_1}\times \cdots \times V_{q_m})\).


Possible decision rules. These rules are generated from the upper approximation of the union of classes \(Cl_t^{\le }\) or \(Cl_t^{\ge }\). A decision rule of this type has one of the following structures:

Type 3: if \(f(x, q_1)\le r_1\wedge \cdots \wedge f(x, q_m)\le r_m\) then \(x \text{ could } \text{ belong } \text{ to } Cl_t^\le \)

Type 4: if \(f(x, q_1)\ge r_1\wedge \cdots \wedge f(x, q_m)\ge r_m\) then \(x \text{ could } \text{ belong } \text{ to } Cl_t^\ge \)
where \((r_1,\cdots ,r_m) \in (V_{q_1}\times \cdots \times V_{q_m})\).


Approximate rules. These rules are generated from the boundaries. A decision rule of this type has the following structure:

Type 5: if \(f(x, q_1)\ge r_1\wedge \cdots \wedge f(x, q_m)\ge r_m\wedge f(x, q_{m+1})\le r_{m+1}\wedge \cdots \wedge f(x,q_{p})\le r_{p}\) then \(x \in Cl_s\cup Cl_{s+1}\cup \cdots \cup Cl_t\)
where \((r_1,\cdots ,r_p) \in (V_{q_1}\times \cdots \times V_{q_p})\).

Only the first two types are considered in the rest of this paper.
The most popular rule induction algorithm for DRSA is DOMLEM [9], which generates a minimal set of rules.
3 Decision Rules Matching and Overlapping
3.1 Basic Definitions
A decision rule R is defined as a collection of elementary conditions and a conclusion. Let R.C denote the set of conditions of rule R, \(R.C_i\) denote a member of this set, and R.N denote the cardinality of this set. Let R.D denote the conclusion associated with rule R. Each decision rule R is also characterized by its type R.T.
An elementary condition \(C_i\) is defined by an attribute Q, an operator O and a right-hand member H. Let \(C_i.Q\), \(C_i.O\) and \(C_i.H\) denote these three elements.
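One possible encoding of this rule model (our reading, not the paper's implementation) is a pair of small data classes mirroring the notation R.C, R.N, R.D, R.T and \(C_i.Q\), \(C_i.O\), \(C_i.H\):

```python
# Sketch of the rule representation of Section 3.1. Names mirror the
# paper's notation; the example rule below is hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Condition:
    Q: str    # attribute name
    O: str    # operator: "<=" for Type 1 rules, ">=" for Type 2 rules
    H: float  # right-hand member (threshold)

@dataclass
class Rule:
    C: frozenset  # set of Condition objects (R.C)
    D: int        # conclusion: class index t of Cl_t^<= or Cl_t^>= (R.D)
    T: int        # rule type, 1 or 2 (R.T)

    @property
    def N(self):
        """Cardinality of the condition set (R.N)."""
        return len(self.C)

# Hypothetical Type 2 rule: "if f(x,q1) >= 15 and f(x,q2) >= 12
# then x belongs to Cl_2^>=".
r = Rule(C=frozenset({Condition("q1", ">=", 15), Condition("q2", ">=", 12)}),
         D=2, T=2)
```

Making `Condition` frozen lets conditions live in sets, which matches the set-based definitions used in the matching section below.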
3.2 Conditions Matching
Definition 1
(Conditions equality). Let \(C_k\) and \(C_l\) be two conditions of the same type. Then, \(C_k\) is equal to \(C_l\) (denoted \(C_k=C_l\)) iff:
Definition 2
(Type 1 conditions inclusion). Let \(C_k\) and \(C_l\) be two conditions of Type 1 decision rules. Then, \(C_k\) is included in \(C_l\) (denoted \(C_k\subseteq C_l\)) iff:
Definition 3
(Type 2 conditions inclusion). Let \(C_k\) and \(C_l\) be two conditions of Type 2 decision rules. Then, \(C_k\) includes \(C_l\) (denoted \(C_k\supseteq C_l\)) iff:
3.3 Decision Rules Matching
The equality between two decision rules is defined as follows.
Definition 4
(Decision rules equality). Let \(R_i\) and \(R_j\) be two decision rules of the same type. Then, \(R_i\) is equal to \(R_j\) (denoted \(R_i=R_j\)) iff:
Definition 5
(Decision rules Type 1 full inclusion). Let \(R_i\) and \(R_j\) be two decision rules of Type 1. Then, \(R_i\) is fully included in \(R_j\) (denoted \(R_i\subseteq R_j\)) iff:
This definition implicitly ensures that rules \(R_i\) and \(R_j\) must have the same cardinality, i.e., \(R_i.N=R_j.N\).
Definition 6
(Decision rules Type 1 partial inclusion). Let \(R_i\) and \(R_j\) be two decision rules of Type 1. Then, \(R_i\) is partially included in \(R_j\) (denoted \(R_i\subset R_j\)) iff:
The last condition (i.e., \(R_i.N<R_j.N\)) in this definition ensures that the cardinality of rule \(R_i\) must be strictly less than that of \(R_j\).
Definition 7
(Decision rules Type 2 full inclusion). Let \(R_i\) and \(R_j\) be two decision rules of Type 2. Then, \(R_i\) is fully included in \(R_j\) (denoted \(R_i\supseteq R_j\)) iff:
This definition implicitly ensures that rules \(R_i\) and \(R_j\) must have the same cardinality, i.e., \(R_i.N=R_j.N\).
Definition 8
(Decision rules Type 2 partial inclusion). Let \(R_i\) and \(R_j\) be two decision rules of Type 2. Then, \(R_i\) is partially included in \(R_j\) (denoted \(R_i\supset R_j\)) iff:
The last condition (i.e., \(R_i.N<R_j.N\)) in this definition ensures that the cardinality of rule \(R_i\) must be strictly less than that of \(R_j\).
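Since the formal bodies of Definitions 5-8 compare rules condition-by-condition, a sketch may help fix ideas. The code below is our reading of the Type 2 case (Definitions 7-8), under the assumption that a Type 2 condition on attribute q with threshold h covers another condition on the same q whose threshold is at least h; conditions are modelled as plain (attribute, threshold) pairs:

```python
# Sketch of full vs partial inclusion between Type 2 rules.
# A condition is a pair (attribute, threshold); rules are sets of conditions.

def t2_condition_includes(ck, cl):
    """ck includes cl: same attribute, and ck's threshold is no stricter."""
    return ck[0] == cl[0] and ck[1] <= cl[1]

def t2_fully_included(ri, rj):
    """Same cardinality, and each condition of ri covered by one of rj."""
    if len(ri) != len(rj):
        return False
    return all(any(t2_condition_includes(ci, cj) for cj in rj) for ci in ri)

def t2_partially_included(ri, rj):
    """Strictly fewer conditions in ri, each covered by one of rj."""
    if len(ri) >= len(rj):
        return False
    return all(any(t2_condition_includes(ci, cj) for cj in rj) for ci in ri)

# Hypothetical rules: ri = "q1 >= 10", rj = "q1 >= 12 and q2 >= 8".
ri = {("q1", 10)}
rj = {("q1", 12), ("q2", 8)}
```

Here `ri` is partially but not fully included in `rj`: it has strictly fewer conditions, and its single condition `q1 >= 10` is implied by `q1 >= 12`.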
3.4 Overlapping Decision Rules
Let \(R_i\) be a Type 1 decision rule and \(R_j\) be a Type 2 decision rule. Although these rules are of different types, they may or may not share some parts of their conditions and/or decisions. Four basic cases can be distinguished: (i) \(R_i\) and \(R_j\) are fully disjoint; (ii) \(R_i\) and \(R_j\) have overlapped conditions but their decisions are disjoint; (iii) \(R_i\) and \(R_j\) have overlapped decisions but their conditions are disjoint; and (iv) \(R_i\) and \(R_j\) have overlapped conditions and overlapped decisions.
Definition 9
(Decision rules with overlapped conditions). Let \(R_i\) be a Type 1 decision rule and \(R_j\) be a Type 2 decision rule. Decision rules \(R_i\) and \(R_j\) have overlapped conditions, denoted \((R_i.C\cap R_j.C)\ne \emptyset \), iff:
Definition 10
(Decision rules with overlapped decisions). Let \(R_i\) be a Type 1 decision rule and \(R_j\) be a Type 2 decision rule. Decision rules \(R_i\) and \(R_j\) have overlapped decisions, denoted \((R_i.D\cap R_j.D)\ne \emptyset \), iff: \(R_i.D\preceq R_j.D\).
Definition 11
(Fully overlapped decision rules). Let \(R_i\) be a Type 1 decision rule and \(R_j\) be a Type 2 decision rule. Decision rules \(R_i\) and \(R_j\) are fully overlapped, denoted \((R_i\cap R_j)\ne \emptyset \), iff:
Definition 12
(Disjoint decision rules). Let \(R_i\) be a Type 1 decision rule and \(R_j\) be a Type 2 decision rule. Decision rules \(R_i\) and \(R_j\) are disjoint, denoted \((R_i\cap R_j)=\emptyset \), iff:
4 Decision Rules Aggregation
Let \(H=\{1,\cdots ,i,\cdots ,h\}\) be a set of decision makers and \(\varPi _i\) the set of decision rules obtained by decision maker \(i\in H\). Let \(\varPi \) be the union of all decision rules of the h decision makers: \(\varPi =\cup _{i=1}^h \varPi _i\).
The aggregation algorithm contains two steps: (i) transformation of overlapping rules, and (ii) elimination of redundant decision rules. Steps (i) and (ii) may be inverted without affecting the final result. However, in this paper, we maintain the order given above for two reasons. First, the alternative (i.e., computing the minimal cover first and then transforming the overlapping rules) requires an additional step to recompute the minimal cover after the transformation operation, since the latter may introduce new redundant rules. Second, and as a consequence, the computing time would automatically increase. The only shortcoming of the order adopted in this paper is that step (i) considers both redundant and non-redundant rules, which may have a minor effect on the overall computing time.
The aggregation algorithm takes the set \(\varPi \) of all decision rules and generates a minimal set of non-redundant decision rules.
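The two-step pipeline can be sketched at a high level as follows. The transformation and redundancy predicates are placeholders standing in for the constructions of Sections 3 and 4; the interval-based toy rules at the bottom are purely illustrative:

```python
# High-level sketch of the aggregation pipeline:
# step (i) transforms overlapping rules, step (ii) drops redundant ones.

def aggregate(rules, transform_overlaps, is_redundant):
    rules = transform_overlaps(list(rules))   # step (i)
    kept = []
    for r in rules:                           # step (ii): keep first non-redundant
        if not any(is_redundant(r, k) for k in kept):
            kept.append(r)
    return kept

# Toy instantiation: rules are intervals (lo, hi); no overlap transform;
# a rule is redundant if fully included in an already-kept interval.
identity = lambda rs: rs
included = lambda r, k: k[0] <= r[0] and r[1] <= k[1]
result = aggregate([(0, 5), (1, 3), (2, 8)], identity, included)
```

In the toy run, `(1, 3)` is dropped as fully included in `(0, 5)`, while `(2, 8)` survives because it is not covered by any kept rule.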
4.1 Step 1: Transformation of Overlapping Decision Rules
Case 1. \({{\varvec{R}}}_i\) and \({\varvec{R}}_j\) Are Disjoint Decision Rules. This situation is graphically illustrated by Fig. 1. In this figure, we assume that all condition attributes have the same scale and that \(1<\alpha<\beta <n\). As shown in Fig. 1, the constraints defined by the conditions of rules \(R_i\) and \(R_j\) are totally disjoint. For instance, condition \(C_2\) says that \(f(x,A_s)\le r_\beta \) while condition \(C_{2'}\) says that \(f(x,A_s)\ge r_n\). It is easy to see that there is no intersection between the two constraints defined by \(C_{2}\) and \(C_{2'}\). The same remark holds for the other conditions and for the decisions of \(R_i\) and \(R_j\).
In this situation, there is no overlap between decision rules \(R_i\) and \(R_j\) and it is reasonable to maintain both of them (if they are not overlapped by other rules).
Case 2. \({\varvec{R}}_i\) and \({\varvec{R}}_j\) Have Overlapped Conditions and Decisions. This situation is graphically illustrated by Fig. 2. In this figure, we assume that all condition attributes have the same scale and that \(1<\alpha<\beta <n\). As shown in Fig. 2, the constraints defined by the conditions of rules \(R_i\) and \(R_j\) overlap. For instance, condition \(C_2\) says that \(f(x,A_s)\le r_n\) while condition \(C_{2'}\) says that \(f(x,A_s)\ge r_\beta \). It is easy to see that there is an intersection between the two constraints defined by \(C_{2}\) and \(C_{2'}\) (since \(\beta <n\)). The same remark holds for the other conditions and for the decisions of \(R_i\) and \(R_j\).
To reduce the interval-based assignments of decision objects, we propose to replace rules \(R_i\) and \(R_j\) by three fully disjoint decision rules as follows:

\(R_a\): with the same structure and type as rule \(R_i\) but the Right Hand Side (RHS) of conditions and the decision are those of rule \(R_j\);

\(R_b\): with the same structure and type as rule \(R_j\) but the RHS of conditions and the decision are those of rule \(R_i\);

\(R_c\): the RHS of conditions are of the form \([R_j.C_k.H,R_i.C_k.H]\) and the decision is of the form \([R_j.D,R_i.D]\).
The last decision rule is of a composite type since the RHS of the conditions and the decision are interval-based.
Case 3. \({\varvec{R}}_i\) and \({\varvec{R}}_j\) Have Overlapped Conditions. This situation is graphically illustrated by Fig. 3. In this figure, we assume that all condition attributes have the same scale and that \(1<\alpha<\beta <n\). As shown in Fig. 3, the constraints defined by the conditions of rules \(R_i\) and \(R_j\) overlap but the decisions do not. For instance, condition \(C_2\) says that \(f(x,A_s)\le r_n\) while condition \(C_{2'}\) says that \(f(x,A_s)\ge r_\beta \). It is easy to see that there is an intersection between the two constraints defined by \(C_{2}\) and \(C_{2'}\) (since \(\beta <n\)). The same remark holds for the other conditions. On the contrary, the decision parts of rules \(R_i\) and \(R_j\) are totally disjoint.
To reduce the interval-based assignments of decision objects, we propose to replace rules \(R_i\) and \(R_j\) with two certain and more precise decision rules as follows:

\(R_a\): with the same structure and type as rule \(R_i\) but the RHS of conditions and the decision are those of rule \(R_j\);

\(R_b\): with the same structure and type as rule \(R_j\) but the RHS of conditions and the decision are those of rule \(R_i\);
We mention that a third situation may arise, concerning the assignment of objects for which the RHS of the different conditions are in the range \([r_\alpha ,r_\beta ]\) (see Fig. 3). In this case, there is a contradiction between the initial decisions:

by \(R_i\), objects with RHS in \([r_\alpha ,r_\beta ]\) should be assigned to \(Cl_k^\le \), and

by \(R_j\), objects with RHS in \([r_\alpha ,r_\beta ]\) should be assigned to \(Cl_t^\ge \).
Since \(k<t\), objects should be assigned either to \(Cl_k^\le \) or to \(Cl_t^\ge \). However, to avoid conflicting assignments, we opted not to include an additional rule as in the previous case.
Case 4. \({\varvec{R}}_i\) and \({\varvec{R}}_j\) Have Overlapped Decisions. This situation is graphically illustrated by Fig. 4. In this figure, we assume that all condition attributes have the same scale and that \(1<\alpha<\beta <n\). As shown in Fig. 4, the constraints defined by the conditions of rules \(R_i\) and \(R_j\) are totally disjoint while their decisions overlap.
To reduce the interval-based assignments of decision objects, we propose to replace rules \(R_i\) and \(R_j\) with two certain and more precise decision rules as follows:

\(R_a\): like rule \(R_i\) but the decision is as rule \(R_j\);

\(R_b\): like rule \(R_j\) but the decision is as rule \(R_i\);
4.2 Step 2: Elimination of Redundant Decision Rules
The objective of this step is to eliminate (i) redundant decision rules and (ii) rules fully included in other rules. In the second case, two options are possible: remove either the more general rule or the less general one. Both solutions may lead to preferential information loss. To minimize this loss, we rely on the following measures. Let \(R_a\) and \(R_b\) be two redundant decision rules. Let \([[R_a]]\) and \([[R_b]]\) be the sets of decision objects supporting decision rules \(R_a\) and \(R_b\), respectively. Then, we define the following two measures:

Information loss:
$$\begin{aligned} IL(R_a,R_b)= & {} {\left\{ \begin{array}{ll} 0,&{} \hbox {if }R_a\subseteq R_b, \\ \frac{|[[R_b]]\setminus [[R_a]]|}{|[[R_b]]|}, &{} \hbox {otherwise.} \\ \end{array}\right. } \end{aligned}$$(2)

\(IL(R_a,R_b)\) measures the information loss when decision rule \(R_b\) is removed.

Precision loss:
$$\begin{aligned} PL(R_a,R_b)= & {} 1 - IL(R_a,R_b). \end{aligned}$$(3)
These two measures vary in opposite directions and can be used to decide which of decision rules \(R_a\) and \(R_b\) should be removed, through a trade-off between information loss and precision loss.
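The two measures of Equations (2)-(3) can be sketched directly, with \([[R]]\) represented as a set of identifiers of supporting learning-set objects. The support sets below are illustrative, not the paper's data:

```python
# Sketch of the information-loss and precision-loss measures.
# supp_a, supp_b play the role of [[Ra]] and [[Rb]]; the flag signals
# the full-inclusion case Ra ⊆ Rb, where removing Rb loses nothing.

def information_loss(supp_a, supp_b, a_included_in_b=False):
    """IL(Ra, Rb): share of Rb's support not covered by Ra."""
    if a_included_in_b:
        return 0.0
    return len(supp_b - supp_a) / len(supp_b)

def precision_loss(supp_a, supp_b, a_included_in_b=False):
    """PL(Ra, Rb) = 1 - IL(Ra, Rb)."""
    return 1.0 - information_loss(supp_a, supp_b, a_included_in_b)

supp_a = {1, 2, 3, 4}
supp_b = {3, 4, 5, 6, 7}
il = information_loss(supp_a, supp_b)  # 3 of Rb's 5 supporters are lost
```

With these toy supports, removing \(R_b\) loses the three objects supported only by \(R_b\), giving IL = 0.6 and PL = 0.4.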
5 Application
To partially illustrate the proposed algorithm, we consider real-world data relative to a business school admission where two decision makers (denoted DM1 and DM2 in the rest of the paper) are involved. The learning set is composed of 175 objects (students in this case). A randomly selected extract from the learning set is given in Table 1. In this table, the decisions ‘A’ and ‘R’ stand for ‘accepted’ and ‘rejected’, respectively. The comparison of the individual assignments shows that the decision makers disagree on 40 (22.86%) students.
We then applied the DRSA twice to approximate this learning set, using the assignments given by DM1 and by DM2. The application of the DOMLEM rule induction algorithm on the obtained approximations leads to two collections of decision rules, given in Table 2 (for DM1) and Table 3 (for DM2).
This illustrative example uses only two decision classes. Accordingly, there is no overlap between decision rules of different types, and only the second step will be applied to aggregate the decision rules. A careful examination of Table 2 and Table 3 shows that there are three cases of redundancy: (i) Rule 1.9 and Rule 2.18; (ii) Rule 1.12 and Rule 2.11; and (iii) Rule 1.13 and Rule 2.16. The result of the application of Equations (2) and (3) on these pairs of decision rules is summarized in Table 4. Based on these results and to reduce information loss, decision rules 2.18, 2.11 and 1.13 should be removed.
We then applied the remaining decision rules to classify all the students. Results show that the obtained collective assignments match the initial assignments of DM1 for about 96.2% of the students and those of DM2 for about 92.3% of the students. Thus, DM1 and DM2 need to discuss only a very limited number of conflicting situations (instead of the 40 initial conflicting situations).
6 Conclusion
We proposed an output-oriented aggregation strategy to coherently combine different sets of decision rules obtained from different decision makers. The proposed aggregation algorithm was illustrated using real-world data relative to a business school admission. An important aspect of the proposed approach is that the consensus between decision makers [13] is computed using objective preference information. In the future, we intend first to apply the proposed aggregation algorithm to other datasets, especially those with non-binary decision attributes and with more decision makers. We also intend to study the behavior of the aggregation algorithm on large datasets. Finally, we intend to design new measures to evaluate information loss, precision loss and information redundancy.
References
Bi, W.J., Chen, X.H.: An extended dominancebased rough set approach to group decision. In: Guizani, M., Chen, H.H., Zhang, X. (eds.) Proceedings of the International Conference on Wireless Communications, Networking and Mobile Computing (WiCom 2007), Shanghai, China, pp. 5753–5756, 21–25 September 2007
Blaszczynski, J., Greco, S., Slowinski, R.: Multi-criteria classification - a new scheme for application of dominance-based decision rules. Eur. J. Oper. Res. 181(3), 1030–1044 (2007)
Chakhar, S., Ishizaka, A., Labib, A., Saad, I.: Dominancebased rough set approach for group decisions. Eur. J. Oper. Res. 251(1), 206–224 (2016)
Chakhar, S., Saad, I.: Dominancebased rough set approach for groups in multicriteria classification. Decis. Support Syst. 54(1), 372–380 (2012)
Chen, Y., Hipel, K., Kilgour, D.: A decision rule aggregation approach to multiple criteria group decision support. In: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (SMC 2008), pp. 2514–2518. Institute of Electrical and Electronics Engineers (IEEE), Singapore, 12–15 October 2008
Chen, Y., Kilgour, D., Hipel, K.: A decision rule aggregation approach to multiple criteriamultiple participant sorting. Group Decis. Negot. 21, 727–745 (2012)
Greco, S., Matarazzo, B., Slowinski, R.: Rough sets theory for multicriteria decision analysis. Eur. J. Oper. Res. 129(1), 1–47 (2001)
Greco, S., Matarazzo, B., Słowiński, R.: Dominancebased rough set approach to decision involving multiple decision makers. In: Greco, S., et al. (eds.) RSCTC 2006. LNCS (LNAI), vol. 4259, pp. 306–317. Springer, Heidelberg (2006). https://doi.org/10.1007/11908029_33
Greco, S., Matarazzo, B., Slowinski, R., Stefanowski, J.: An algorithm for induction of decision rules consistent with the dominance principle. In: Ziarko, W., Yao, Y. (eds.) RSCTC 2000. LNCS (LNAI), vol. 2005, pp. 304–313. Springer, Heidelberg (2001). https://doi.org/10.1007/3-540-45554-X_37
Pawlak, Z.: Rough Sets: Theoretical Aspects of Reasoning About Data. Kluwer Academic Publishers, Dordrecht (1991)
Saad, I., Chakhar, S.: Multicriteria methodology based on majority principle for collective identification of company’s valuable knowledge. Knowl. Manag. Res. Practice 10(4), 380–391 (2012)
Xu, Z.: Multipleattribute group decision making with different formats of preference information on attributes. IEEE Trans. Syst. Man Cybern. Part B: Cybern. 37(6), 1500–1511 (2007)
Zhang, H., Dong, Y., Chiclana, F., Yu, S.: Consensus efficiency in group decision making: a comprehensive comparative study and its optimal design. Eur. J. Oper. Res. 275(2), 580–598 (2019)
© 2020 Springer Nature Switzerland AG

Cite this paper
Saad, I., Chakhar, S. (2020). Decision Rule Aggregation Approach to Support Group Decision Making. In: Morais, D., Fang, L., Horita, M. (eds) Group Decision and Negotiation: A Multidisciplinary Perspective. GDN 2020. Lecture Notes in Business Information Processing, vol 388. Springer, Cham. https://doi.org/10.1007/978-3-030-48641-9_12
Print ISBN: 978-3-030-48640-2. Online ISBN: 978-3-030-48641-9.