## 1 Introduction

The Dominance-based Rough Set Approach (DRSA) [7] is an extension of Rough Set Theory [10] intended to deal with multicriteria sorting problems. The DRSA takes a set of assignment examples (a learning set) as input and generates a collection of if-then decision rules as output. The conventional DRSA assumes a single decision maker, while many real-world decision problems need to take into account the presence of multiple decision makers. Different group decision making extensions of DRSA have been proposed in the literature, including [1,2,3,4,5,6,8,11,12].

For instance, the authors in [1] and [8] extend the concepts of the DRSA to deal with decision tables having multiple decision attributes, thus allowing comprehensive collective decision rules to be generated. In [4] we introduced an aggregation algorithm, based on the majority principle and supporting the veto effect, allowing consensual decision rules to be inferred. A more advanced version of the aggregation algorithm of [4] is proposed in [3]. In [5, 6], the authors use the Dempster-Shafer theory of evidence to combine individual rules provided by the DRSA.

However, all these approaches rely on an input-oriented aggregation strategy, which requires a high level of agreement between the decision makers. In this paper, we propose an output-oriented aggregation strategy to coherently combine different sets of decision rules obtained from different decision makers. The proposed aggregation algorithm is illustrated using real-world data relative to business school admission, where two decision makers are involved. Results show that the aggregation algorithm is able to reproduce the individual assignments of students with very low preferential information loss.

The rest of the paper is structured as follows. Section 2 sets the background. Section 3 deals with rule matching and overlapping. Section 4 details the aggregation algorithm. Section 5 provides an illustrative application. Section 6 concludes the paper.

## 2 Background

### 2.1 Notations and Basic Assumptions

Information about decision objects is often represented in terms of an information table where rows correspond to objects and columns to attributes. The information table S is a 4-tuple $${<}U,Q,V,f{>}$$ where: U is a finite set of objects, Q is a finite set of attributes, $$V=\bigcup _{q\in Q} V_q$$, where $$V_q$$ is the domain of attribute q, and $$f: U \times Q \rightarrow V$$ is an information function such that $$f(x,q) \in V_q, \forall q\in Q, \forall x\in U$$. The set of attributes Q is often divided into a subset C of condition attributes and a subset D of decision attributes. In this case, S is called a decision table.

The domains of the condition attributes are supposed to be ordered according to decreasing or increasing preference. Such attributes are called criteria. We assume that the preference is increasing with the value of $$f(\cdot ,q)$$ for every $$q\in C$$. We also assume that the set of decision attributes D is a singleton $$\{d\}$$. The decision attribute d partitions U into a finite number of decision classes $$\mathbf{Cl} =\{Cl_t, t\in T\}$$, $$T=\{0,\cdots ,n\}$$, such that each $$x\in U$$ belongs to one and only one class in $$\mathbf{Cl}$$. Further, we assume that the classes are preference-ordered, i.e. for all $$r,s\in T$$ such that $$r>s$$, the objects from $$Cl_r$$ are preferred to the objects from $$Cl_s$$.

### 2.2 Rough Approximation

In DRSA the represented knowledge is a collection of upward unions $$Cl_t^\ge$$ and downward unions $$Cl_t^\le$$ of classes defined as follows: $$Cl_t^\ge =\cup _{s\ge t} Cl_s$$ and $$Cl_t^\le =\cup _{s\le t}Cl_s$$. The assertion “$$x\in Cl_t^{\ge }$$” means that “x belongs to at least class $$Cl_t$$” while assertion “$$x\in Cl_t^{\le }$$” means that “x belongs to at most class $$Cl_t$$”. The basic idea of DRSA is to replace the indiscernibility relation used in the conventional Rough Set Theory with a dominance relation. Let $$P\subseteq C$$ be a subset of condition attributes. The dominance relation $$\varDelta _P$$ associated with P is defined for each pair of objects x and y as follows:

\begin{aligned} x\varDelta _P y\Leftrightarrow f(x,q)\succeq f(y,q), \forall q\in P. \end{aligned}
(1)

In the definition above, the symbol “$$\succeq$$” should be replaced with “$$\preceq$$” for condition attributes which are ordered according to decreasing preferences. To each object $$x\in U$$, we associate two sets: (i) the P-dominating set $$\varDelta _P^+(x)=\{y\in U: y\varDelta _P x\}$$ containing the objects that dominate x, and (ii) the P-dominated set $$\varDelta _P^-(x)=\{y\in U:x\varDelta _P y\}$$ containing the objects dominated by x.
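As a quick illustration, the dominance relation of Eq. (1) and the two associated sets can be sketched as follows. This is a minimal Python sketch, assuming gain-type (increasing-preference) criteria and objects encoded as attribute-value dictionaries; all names are illustrative, not part of the original formulation.

```python
# Dominance relation of Eq. (1), assuming all criteria in P are gain-type.
def dominates(x, y, P):
    """x Delta_P y: f(x,q) >= f(y,q) for every criterion q in P."""
    return all(x[q] >= y[q] for q in P)

def dominating_set(x, U, P):
    """P-dominating set Delta_P^+(x): objects y in U with y Delta_P x."""
    return {i for i, y in U.items() if dominates(y, x, P)}

def dominated_set(x, U, P):
    """P-dominated set Delta_P^-(x): objects y in U with x Delta_P y."""
    return {i for i, y in U.items() if dominates(x, y, P)}
```

For example, with `U = {'a': {'q1': 3, 'q2': 2}, 'b': {'q1': 2, 'q2': 2}, 'c': {'q1': 1, 'q2': 3}}` and `P = ['q1', 'q2']`, object `a` dominates `b` while `c` does not, since `c` loses on `q1`.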

Then, the P-lower and P-upper approximations of $$Cl_t^\ge$$ with respect to P are defined as follows:

• $$\underline{P} (Cl_t^{\ge })=\{x \in U:\varDelta _{P}^{+}(x)\subseteq Cl_t^{\ge }\}$$,

• $$\bar{P}(Cl_t^{\ge })= \{x\in U: \varDelta _{P}^{-}(x)\cap Cl_{t}^{\ge }\ne \emptyset \}$$.

Analogously, the P-lower and P-upper approximations of $$Cl_t^\le$$ with respect to P are defined as follows:

• $$\underline{P} (Cl_t^{\le })=\{x \in U:\varDelta _{P}^{-}(x)\subseteq Cl_t^{\le }\}$$,

• $$\bar{P}(Cl_t^{\le })= \{x\in U:\varDelta _{P}^{+}(x)\cap Cl_{t}^{\le }\ne \emptyset \}$$.

The lower approximations group the objects which certainly belong to class unions $$Cl_t^{\ge }$$ (resp. $$Cl_t^{\le }$$). The upper approximations group the objects which could belong to $$Cl_t^{\ge }$$ (resp. $$Cl_t^{\le }$$).

The P-boundaries of $$Cl_t^\ge$$ and $$Cl_t^\le$$ are defined as follows:

• $$Bn_P(Cl_t^{\ge })=\bar{P}(Cl_t^{\ge })-\underline{P}(Cl_{t}^{\ge })$$,

• $$Bn_P(Cl_t^{\le })=\bar{P}(Cl_t^{\le })-\underline{P}(Cl_t^{\le })$$.

The boundaries group objects that can neither be ruled in nor out as members of class $$Cl_{t}$$.
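The approximations and boundaries above can be computed directly from the dominating and dominated sets. The following is a hedged sketch for an upward union $$Cl_t^\ge$$, under the same simplifying assumptions (numeric gain-type criteria, illustrative names):

```python
def dominates(x, y, P):
    """Gain-type dominance: f(x,q) >= f(y,q) for every q in P."""
    return all(x[q] >= y[q] for q in P)

def lower_upper_upward(U, cls, t, P):
    """P-lower and P-upper approximations of the upward union Cl_t^>=.

    U:   object id -> criterion values; cls: object id -> class index.
    """
    union = {i for i in U if cls[i] >= t}
    lower, upper = set(), set()
    for i in U:
        dom_plus = {j for j in U if dominates(U[j], U[i], P)}   # Delta_P^+(i)
        dom_minus = {j for j in U if dominates(U[i], U[j], P)}  # Delta_P^-(i)
        if dom_plus <= union:   # every dominating object belongs to the union
            lower.add(i)
        if dom_minus & union:   # some dominated object belongs to the union
            upper.add(i)
    return lower, upper
```

The boundary is then `upper - lower`; objects in it (typically those involved in inconsistencies with the dominance principle) can be neither ruled in nor out.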

### 2.3 Decision Rules

The approximations of upward and downward unions of classes can serve to induce a set of if-then decision rules relating condition and decision attributes. There are five basic types of decision rules:

• Certain decision rules. These rules are generated from the lower approximation of the union of classes $$Cl_t^{\le }$$ or $$Cl_t^{\ge }$$. A decision rule of this type has one of the following structures:

• Type 1: if $$f(x, q_1)\le r_1\wedge \cdots \wedge f(x, q_m)\le r_m$$ then $$x \in Cl_t^\le$$

• Type 2: if $$f(x, q_1)\ge r_1\wedge \cdots \wedge f(x, q_m)\ge r_m$$ then $$x \in Cl_t^\ge$$

where $$(r_1,\cdots ,r_m) \in (V_{q_1}\times \cdots \times V_{q_m})$$.

• Possible decision rules. These rules are generated from the upper approximation of the union of classes $$Cl_t^{\le }$$ or $$Cl_t^{\ge }$$. A decision rule of this type has one of the following structures:

• Type 3: if $$f(x, q_1)\le r_1\wedge \cdots \wedge f(x, q_m)\le r_m$$ then $$x \text{ could } \text{ belong } \text{ to } Cl_t^\le$$

• Type 4: if $$f(x, q_1)\ge r_1\wedge \cdots \wedge f(x, q_m)\ge r_m$$ then $$x \text{ could } \text{ belong } \text{ to } Cl_t^\ge$$

where $$(r_1,\cdots ,r_m) \in (V_{q_1}\times \cdots \times V_{q_m})$$.

• Approximate rules. These rules are generated from the boundaries. A decision rule of this type has the following structure:

• Type 5: if $$f(x, q_1)\le r_1\wedge \cdots \wedge f(x, q_m)\le r_m\wedge f(x, q_{m+1})\ge r_{m+1}\wedge \cdots \wedge f(x,q_{p})\ge r_{p}$$ then $$x \in Cl_s\cup Cl_{s+1}\cup \cdots \cup Cl_t$$

where $$(r_1,\cdots ,r_p) \in (V_{q_1}\times \cdots \times V_{q_p})$$.

Only the first two types of rules are considered in the rest of this paper.

The most popular rule induction algorithm for DRSA is DOMLEM [9], which generates a minimal set of rules.
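For illustration, Type 1 and Type 2 certain rules, and the way a rule matches an object, might be encoded as follows. This is a hedged sketch; the class and field names merely mirror the notation R.T, R.C, R.N, R.D introduced in Sect. 3.1 and are not the paper's data structure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Condition:
    Q: str     # attribute (criterion) name
    O: str     # operator: '<=' for Type 1, '>=' for Type 2
    H: float   # right-hand member

@dataclass
class Rule:
    T: int         # rule type: 1 or 2
    C: tuple       # elementary conditions
    D: int         # decision: class index t of Cl_t^<= (Type 1) or Cl_t^>= (Type 2)

    @property
    def N(self):
        """Number of elementary conditions."""
        return len(self.C)

    def matches(self, x):
        """True when object x (attribute -> value) satisfies every condition."""
        ops = {'<=': lambda a, b: a <= b, '>=': lambda a, b: a >= b}
        return all(ops[c.O](x[c.Q], c.H) for c in self.C)
```

For example, the Type 2 rule "if $$f(x, math)\ge 12$$ then $$x \in Cl_2^\ge$$" becomes `Rule(T=2, C=(Condition('math', '>=', 12.0),), D=2)`.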

## 3 Decision Rules Matching and Overlapping

### 3.1 Basic Definitions

A decision rule R is defined as a collection of elementary conditions and a conclusion. Let R.C denote the set of conditions of rule R and $$R.C_i$$ denote each member of this set. Let R.N denote the cardinality of this set and R.D denote the conclusion associated with rule R. Each decision rule R is also characterized by its type R.T.

An elementary condition $$C_i$$ is defined by an attribute Q, an operator O and a right-hand member H. Let $$C_i.Q$$, $$C_i.O$$ and $$C_i.H$$ denote these three elements.

### Definition 1

(Conditions equality). Let $$C_k$$ and $$C_l$$ be two conditions of the same type. Then, $$C_k$$ is equal to $$C_l$$ (denoted $$C_k=C_l$$) iff:

$$\left\{ \begin{array}{l} C_k.Q=C_l.Q\\ C_k.O=C_l.O\\ C_k.H=C_l.H\\ \end{array} \right\} \Rightarrow (C_k=C_l)$$

### Definition 2

(Type 1 conditions inclusion). Let $$C_k$$ and $$C_l$$ be two conditions of Type 1 decision rules. Then, $$C_k$$ is included in $$C_l$$ (denoted $$C_k\subseteq C_l$$) iff:

$$\left\{ \begin{array}{l} C_k.Q=C_l.Q\\ C_k.O=C_l.O\\ C_k.H\preceq C_l.H\\ \end{array} \right\} \Rightarrow (C_k\subseteq C_l)$$

### Definition 3

(Type 2 conditions inclusion). Let $$C_k$$ and $$C_l$$ be two conditions of Type 2 decision rules. Then, $$C_k$$ includes $$C_l$$ (denoted $$C_k\supseteq C_l$$) iff:

$$\left\{ \begin{array}{l} C_k.Q=C_l.Q\\ C_k.O=C_l.O\\ C_k.H\succeq C_l.H\\ \end{array} \right\} \Rightarrow (C_k\supseteq C_l)$$
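Assuming a numeric value scale (so that "$$\preceq$$" is plain $$\le$$), Definitions 1-3 reduce to simple predicates over (Q, O, H) triples; an illustrative sketch:

```python
# Conditions are modeled as (Q, O, H) triples, e.g. ('math', '<=', 12.0).

def cond_equal(ck, cl):
    """Definition 1: same attribute, operator and right-hand member."""
    return ck == cl

def cond_included_type1(ck, cl):
    """Definition 2: ck subseteq cl for Type 1 ('<=') conditions."""
    qk, ok, hk = ck
    ql, ol, hl = cl
    return qk == ql and ok == ol == '<=' and hk <= hl

def cond_includes_type2(ck, cl):
    """Definition 3: ck supseteq cl for Type 2 ('>=') conditions."""
    qk, ok, hk = ck
    ql, ol, hl = cl
    return qk == ql and ok == ol == '>=' and hk >= hl
```

Intuitively, $$f(x,q)\le 2$$ is included in $$f(x,q)\le 3$$ because every object satisfying the former satisfies the latter; dually, $$f(x,q)\ge 5$$ includes $$f(x,q)\ge 3$$.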

### 3.3 Decision Rules Matching

The equality between two decision rules is defined as follows.

### Definition 4

(Decision rules equality). Let $$R_i$$ and $$R_j$$ be two decision rules of the same type. Then, $$R_i$$ is equal to $$R_j$$ (denoted $$R_i=R_j$$) iff:

$$\left\{ \begin{array}{l} R_i.T=R_j.T\\ \forall k\exists l (R_i.C_k=R_j.C_l)\quad 1\le k\le R_i.N\\ \forall m\exists n (R_j.C_m=R_i.C_n)\quad 1\le m\le R_j.N\\ R_i.D=R_j.D \end{array}\right\} \Rightarrow (R_i=R_j)$$

### Definition 5

(Decision rules Type 1 full inclusion). Let $$R_i$$ and $$R_j$$ be two decision rules of Type 1. Then, $$R_i$$ is fully included in $$R_j$$ (denoted $$R_i\subseteq R_j$$) iff:

$$\left\{ \begin{array}{l} R_i.T=R_j.T=Type~1\\ \forall k\exists l (R_i.C_k\subseteq R_j.C_l)\quad 1\le k\le R_i.N\\ \forall m\exists n (R_i.C_n\subseteq R_j.C_m)\quad 1\le m\le R_j.N\\ R_i.D\preceq R_j.D \end{array}\right\} \Rightarrow (R_i\subseteq R_j)$$

This definition implicitly ensures that rules $$R_i$$ and $$R_j$$ must have the same cardinality, i.e., $$R_i.N=R_j.N$$.

### Definition 6

(Decision rules Type 1 Partial inclusion). Let $$R_i$$ and $$R_j$$ be two decision rules of Type 1. Then, $$R_i$$ is partially included in $$R_j$$ (denoted $$R_i\subset R_j$$) iff:

$$\left\{ \begin{array}{l} R_i.T=R_j.T=Type~1\\ \forall k\exists l (R_i.C_k\subseteq R_j.C_l)\quad 1\le k\le R_i.N\\ R_i.D\preceq R_j.D\\ R_i.N< R_j.N\\ \end{array} \right\} \Rightarrow (R_i\subset R_j)$$

The last condition (i.e., $$R_i.N<R_j.N$$) in this definition ensures that the cardinality of rule $$R_i$$ must be strictly less than that of $$R_j$$.

### Definition 7

(Decision rules Type 2 full inclusion). Let $$R_i$$ and $$R_j$$ be two decision rules of Type 2. Then, $$R_i$$ is fully included in $$R_j$$ (denoted $$R_i\supseteq R_j$$) iff:

$$\left\{ \begin{array}{l} R_i.T=R_j.T=Type~2\\ \forall k\exists l (R_i.C_k\supseteq R_j.C_l)\quad 1\le k\le R_i.N\\ \forall m\exists n (R_i.C_n\supseteq R_j.C_m)\quad 1\le m\le R_j.N\\ R_i.D\succeq R_j.D\\ \end{array} \right\} \Rightarrow (R_i\supseteq R_j)$$

This definition implicitly ensures that rules $$R_i$$ and $$R_j$$ must have the same cardinality, i.e., $$R_i.N=R_j.N$$.

### Definition 8

(Decision rules Type 2 partial inclusion). Let $$R_i$$ and $$R_j$$ be two decision rules of Type 2. Then, $$R_i$$ is partially included in $$R_j$$ (denoted $$R_i\supset R_j$$) iff:

$$\left\{ \begin{array}{l} R_i.T=R_j.T=Type~2\\ \forall k\exists l (R_i.C_k\supseteq R_j.C_l)\quad 1\le k\le R_i.N\\ R_i.D\succeq R_j.D\\ R_i.N< R_j.N\\ \end{array} \right\} \Rightarrow (R_i\supset R_j)$$

The last condition (i.e., $$R_i.N<R_j.N$$) in this definition ensures that the cardinality of rule $$R_i$$ must be strictly less than that of $$R_j$$.
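Definitions 5 and 6 (and, symmetrically, Definitions 7 and 8 with "$$\ge$$" in place of "$$\le$$") can be sketched as predicates over condition lists. A minimal illustration for Type 1 rules, under a numeric value scale; rules are given as a list of (Q, '<=', H) triples plus a class index:

```python
def cond_incl(ck, cl):
    """Definition 2: Type 1 condition inclusion."""
    return ck[0] == cl[0] and ck[1] == cl[1] == '<=' and ck[2] <= cl[2]

def fully_included(ri_c, ri_d, rj_c, rj_d):
    """Definition 5: conditions pair off in both directions and
    Ri's decision precedes Rj's (so implicitly len(ri_c) == len(rj_c))."""
    return (all(any(cond_incl(ck, cl) for cl in rj_c) for ck in ri_c)
            and all(any(cond_incl(cn, cm) for cn in ri_c) for cm in rj_c)
            and ri_d <= rj_d)

def partially_included(ri_c, ri_d, rj_c, rj_d):
    """Definition 6: one-sided pairing plus strictly fewer conditions."""
    return (all(any(cond_incl(ck, cl) for cl in rj_c) for ck in ri_c)
            and ri_d <= rj_d
            and len(ri_c) < len(rj_c))
```

For example, the rule "if $$f(x,q_1)\le 2$$ then $$x\in Cl_1^\le$$" is partially included in "if $$f(x,q_1)\le 3\wedge f(x,q_2)\le 5$$ then $$x\in Cl_2^\le$$".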

### 3.4 Overlapping Decision Rules

Let $$R_i$$ be a Type 1 decision rule and $$R_j$$ be a Type 2 decision rule. Although these rules are of different types, they may or may not share some parts of their conditions and/or decisions. Four basic cases can be distinguished: (i) $$R_i$$ and $$R_j$$ are fully disjoint; (ii) $$R_i$$ and $$R_j$$ have overlapped conditions but their decisions are disjoint; (iii) $$R_i$$ and $$R_j$$ have overlapped decisions but their conditions are disjoint; and (iv) $$R_i$$ and $$R_j$$ have overlapped conditions and overlapped decisions.

### Definition 9

(Decision rules with overlapped conditions). Let $$R_i$$ be a Type 1 decision rule and $$R_j$$ be a Type 2 decision rule. Decision rules $$R_i$$ and $$R_j$$ have overlapped conditions, denoted $$(R_i.C\cap R_j.C)\ne \emptyset$$, iff:

$$\left\{ \begin{array}{l} \forall C_k\in R_i.C, \exists C_l\in R_j.C (C_k.Q=C_l.Q)\wedge \\ \quad (C_k.O\ne C_l.O)\wedge (C_k.H\ge C_l.H) 1\le k\le R_i.N\\ \forall C_m\in R_j.C, \exists C_n\in R_i.C (C_n.Q=C_m.Q)\wedge \\ \quad (C_n.O\ne C_m.O)\wedge (C_m.H\le C_n.H) 1\le m\le R_j.N\\ \end{array} \right\} \Rightarrow (R_i.C\cap R_j.C)\ne \emptyset$$

### Definition 10

(Decision rules with overlapped decisions). Let $$R_i$$ be a Type 1 decision rule and $$R_j$$ be a Type 2 decision rule. Decision rules $$R_i$$ and $$R_j$$ have overlapped decisions, denoted $$(R_i.D\cap R_j.D)\ne \emptyset$$, if and only if $$R_i.D\succeq R_j.D$$.

### Definition 11

(Fully overlapped decision rules). Let $$R_i$$ be a Type 1 decision rule and $$R_j$$ be a Type 2 decision rule. Decision rules $$R_i$$ and $$R_j$$ are fully overlapped, denoted $$(R_i\cap R_j)\ne \emptyset$$, iff:

$$\left\{ \begin{array}{l} \forall C_k\in R_i.C, \exists C_l\in R_j.C (C_k.Q=C_l.Q)\wedge \\ \quad \quad \quad (C_k.O\ne C_l.O)\wedge (C_k.H\ge C_l.H)\\ \forall C_m\in R_j.C, \exists C_n\in R_i.C (C_n.Q=C_m.Q)\wedge \\ \quad \quad \quad (C_n.O\ne C_m.O)\wedge (C_m.H\le C_n.H)\\ R_i.D\succeq R_j.D \end{array} \right\} \Rightarrow (R_i\cap R_j)\ne \emptyset$$

### Definition 12

(Disjoint decision rules). Let $$R_i$$ be a Type 1 decision rule and $$R_j$$ be a Type 2 decision rule. Decision rules $$R_i$$ and $$R_j$$ are disjoint, denoted $$(R_i\cap R_j)=\emptyset$$, iff:

$$\left\{ \begin{array}{l} \forall C_k\in R_i.C, \exists C_l\in R_j.C (C_k.Q=C_l.Q)\wedge \\ \quad \quad \quad (C_k.O\ne C_l.O)\wedge (C_k.H<C_l.H)\\ \forall C_l\in R_j.C, \exists C_k\in R_i.C (C_l.Q=C_k.Q)\wedge \\ \quad \quad \quad (C_l.O\ne C_k.O)\wedge (C_l.H> C_k.H)\\ R_i.D\prec R_j.D \\ \end{array} \right\} \Rightarrow (R_i\cap R_j)=\emptyset$$
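Under the simplifying assumption that both rules constrain the same attributes on a numeric scale, the overlap tests between a Type 1 rule $$R_i$$ and a Type 2 rule $$R_j$$ reduce to threshold comparisons; an illustrative sketch (the encoding is hypothetical, not the paper's):

```python
def conditions_overlap(ri_c, rj_c):
    """Overlapped conditions (cf. Definition 9).

    ri_c: attribute -> H for the '<=' conditions of a Type 1 rule;
    rj_c: attribute -> H for the '>=' conditions of a Type 2 rule.
    Attribute by attribute, [.., H_i] meets [H_j, ..] iff H_i >= H_j.
    """
    return set(ri_c) == set(rj_c) and all(ri_c[q] >= rj_c[q] for q in ri_c)

def decisions_overlap(ri_d, rj_d):
    """Overlapped decisions: Cl_{ri_d}^<= meets Cl_{rj_d}^>= iff rj_d <= ri_d."""
    return rj_d <= ri_d
```

Full overlap then amounts to `conditions_overlap(...) and decisions_overlap(...)`, and disjointness to the strict negation of both comparisons.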

## 4 Decision Rules Aggregation

Let $$H=\{1,\cdots ,i,\cdots ,h\}$$ be a set of decision makers and $$\varPi _i$$ the set of decision rules obtained by decision maker $$i\in H$$. Let $$\varPi$$ be the union of all decision rules of the h decision makers: $$\varPi =\cup _{i=1}^h \varPi _i$$.

The aggregation algorithm contains two steps: (i) transformation of overlapping rules, and (ii) elimination of redundant decision rules. We should mention that steps (i) and (ii) may be inverted without affecting the final result. However, in this paper, we maintain the order given above for several reasons. First, the other solution (i.e. proceeding by computing the minimal cover and then transformation of overlapping rules) requires an additional step to compute the minimal cover after the transformation operation. Indeed, the latter may lead to new redundant rules. Second, as a consequence of the first point, the computing time will automatically increase. The only shortcoming of the solution adopted in this paper is that in step (i) both redundant and non-redundant rules are considered. This may have minor effects on the overall computing time.

The aggregation algorithm takes the set $$\varPi$$ of all decision rules and generates a minimal set of non-redundant decision rules.

### 4.1 Step 1: Transformation of Overlapping Decision Rules

Case 1. $${{\varvec{R}}}_i$$ and $${\varvec{R}}_j$$ Are Disjoint Decision Rules. This situation is graphically illustrated by Fig. 1. In this figure, we assume that all condition attributes have the same scale and that $$1<\alpha<\beta <n$$. As shown in Fig. 1, the constraints defined by the conditions of rules $$R_i$$ and $$R_j$$ are totally disjoint. For instance, condition $$C_2$$ says that $$f(x,A_s)\le r_\beta$$ and condition $$C_{2'}$$ says that $$f(x,A_s)\ge r_n$$. It is easy to see that there is no intersection between the two constraints defined by $$C_{2}$$ and $$C_{2'}$$. The same remark holds for the other conditions and for the decisions of $$R_i$$ and $$R_j$$.

In this situation, there is no overlap between decision rules $$R_i$$ and $$R_j$$ and it is reasonable to maintain both of them (if they are not overlapped by other rules).

Case 2. $${\varvec{R}}_i$$ and $${\varvec{R}}_j$$ Have Overlapped Conditions and Decisions. This situation is graphically illustrated by Fig. 2. In this figure, we assume that all condition attributes have the same scale and that $$1<\alpha<\beta <n$$. As shown in Fig. 2, the constraints defined by the conditions of rules $$R_i$$ and $$R_j$$ overlap. For instance, condition $$C_2$$ says that $$f(x,A_s)\le r_n$$ and condition $$C_{2'}$$ says that $$f(x,A_s)\ge r_\beta$$. It is easy to see that there is an intersection between the two constraints defined by $$C_{2}$$ and $$C_{2'}$$ (since $$\beta <n$$). The same remark holds for the other conditions and for the decisions of $$R_i$$ and $$R_j$$.

To reduce the interval-based assignments of decision objects, we propose to replace rules $$R_i$$ and $$R_j$$ by three fully disjoint decision rules as follows:

• $$R_a$$: with the same structure and type as rule $$R_i$$ but the Right Hand Side (RHS) of conditions and the decision are those of rule $$R_j$$;

• $$R_b$$: with the same structure and type as rule $$R_j$$ but the RHS of conditions and the decision are those of rule $$R_i$$;

• $$R_c$$: the RHS of conditions are of the form $$[R_j.C_k.H,R_i.C_k.H]$$ and the decision is of the form $$[R_j.D,R_i.D]$$.

The last decision rule is of a composite type since the RHS of its conditions and its decision are interval-based.
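The Case 2 transformation can be sketched as follows, with a deliberately simplified rule encoding (thresholds as attribute-value maps, decisions as class indices; this is an illustration under those assumptions, not the paper's data structure):

```python
def transform_case2(ri, rj):
    """Replace an overlapping pair (ri Type 1, rj Type 2) by Ra, Rb, Rc.

    ri = ('<=' thresholds, decision k for Cl_k^<=)
    rj = ('>=' thresholds, decision t for Cl_t^>=)
    Both rules are assumed to constrain the same attributes.
    """
    ri_h, ri_d = ri
    rj_h, rj_d = rj
    ra = (dict(rj_h), rj_d)  # Ri's structure, Rj's RHS and decision
    rb = (dict(ri_h), ri_d)  # Rj's structure, Ri's RHS and decision
    rc = ({q: (rj_h[q], ri_h[q]) for q in ri_h},  # interval-based conditions
          (rj_d, ri_d))                           # interval-based decision
    return ra, rb, rc
```

For instance, a Type 1 rule with threshold 5 and decision class 3 overlapping a Type 2 rule with threshold 2 and decision class 1 yields the composite rule with condition interval (2, 5) and decision interval (1, 3).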

Case 3. $${\varvec{R}}_i$$ and $${\varvec{R}}_j$$ Have Overlapped Conditions. This situation is graphically illustrated by Fig. 3. In this figure, we assume that all condition attributes have the same scale and that $$1<\alpha<\beta <n$$. As shown in Fig. 3, the constraints defined by the conditions of rules $$R_i$$ and $$R_j$$ overlap but the decisions do not. For instance, condition $$C_2$$ says that $$f(x,A_s)\le r_n$$ and condition $$C_{2'}$$ says that $$f(x,A_s)\ge r_\beta$$. It is easy to see that there is an intersection between the two constraints defined by $$C_{2}$$ and $$C_{2'}$$ (since $$\beta <n$$). The same remark holds for the other conditions. In contrast, the decision parts of rules $$R_i$$ and $$R_j$$ are totally disjoint.

To reduce the interval-based assignments of decision objects, we propose to replace rules $$R_i$$ and $$R_j$$ with two certain and more precise decision rules as follows:

• $$R_a$$: with the same structure and type as rule $$R_i$$ but the RHS of conditions and the decision are those of rule $$R_j$$;

• $$R_b$$: with the same structure and type as rule $$R_j$$ but the RHS of conditions and the decision are those of rule $$R_i$$;

We mention that a third situation may arise concerning the assignment of objects whose values for the different conditions lie in the range $$[r_\alpha ,r_\beta ]$$ (see Fig. 3). In this case, the two initial rules yield contradictory assignments:

• by $$R_i$$, objects with RHS in $$[r_\alpha ,r_\beta ]$$ should be assigned to $$Cl_k^\le$$, and

• by $$R_j$$, objects with RHS in $$[r_\alpha ,r_\beta ]$$ should be assigned to $$Cl_t^\ge$$.

Since $$k<t$$, objects would be assigned either to $$Cl_k^\le$$ or to $$Cl_t^\ge$$. However, to avoid conflicting assignments, we opted not to introduce an additional rule as in the previous case.

Case 4. $${\varvec{R}}_i$$ and $${\varvec{R}}_j$$ Have Overlapped Decisions. This situation is graphically illustrated by Fig. 4. In this figure, we assume that all condition attributes have the same scale and that $$1<\alpha<\beta <n$$. As shown in Fig. 4, the constraints defined by the conditions of rules $$R_i$$ and $$R_j$$ are totally disjoint while the decisions overlap.

To reduce the interval-based assignments of decision objects, we propose to replace rules $$R_i$$ and $$R_j$$ with two certain and more precise decision rules as follows:

• $$R_a$$: like rule $$R_i$$ but the decision is as rule $$R_j$$;

• $$R_b$$: like rule $$R_j$$ but the decision is as rule $$R_i$$;

### 4.2 Step 2: Elimination of Redundant Decision Rules

The objective of this step is to eliminate (i) redundant decision rules; and (ii) rules fully included in other rules. In the second case, two options are possible: either we remove the more general rule or the less general rule. Both solutions may lead to preferential information loss. To minimize the loss of preferential information, we can rely on some measures. Let $$R_a$$ and $$R_b$$ be two redundant decision rules. Let $$[[R_a]]$$ and $$[[R_b]]$$ be the sets of decision objects supporting decision rules $$R_a$$ and $$R_b$$, respectively. Then, we define the following two measures:

• Information loss:

\begin{aligned} IL(R_a,R_b)= & {} {\left\{ \begin{array}{ll} 0,&{} \hbox {if }R_b\subseteq R_a, \\ \frac{|[[R_b]]\setminus [[R_a]]|}{|[[R_b]]|}, &{} \hbox {otherwise.} \\ \end{array}\right. } \end{aligned}
(2)

$$IL(R_a,R_b)$$ measures the information loss when decision rule $$R_b$$ is removed.

• Precision loss:

\begin{aligned} PL(R_a,R_b)= & {} 1 - IL(R_a,R_b). \end{aligned}
(3)

These two measures vary in opposite directions and can be used to decide which of the decision rules $$R_a$$ and $$R_b$$ should be removed. The choice is a tradeoff between information loss and precision loss.
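The two measures can be computed directly from the support sets; a minimal sketch in which supports are plain sets of object identifiers (the zero case falls out of set inclusion, since a rule included in another has its support contained in the other's):

```python
def information_loss(supp_a, supp_b):
    """IL(Ra, Rb), Eq. (2): fraction of Rb's support lost if Rb is removed."""
    if supp_b <= supp_a:        # nothing is lost: Ra already covers Rb's objects
        return 0.0
    return len(supp_b - supp_a) / len(supp_b)

def precision_loss(supp_a, supp_b):
    """PL(Ra, Rb), Eq. (3): complement of the information loss."""
    return 1.0 - information_loss(supp_a, supp_b)
```

For example, if $$[[R_a]]=\{1,2\}$$ and $$[[R_b]]=\{2,3,4\}$$, removing $$R_b$$ loses two of its three supporting objects, so IL is 2/3 and PL is 1/3.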

## 5 Application

To partially illustrate the proposed algorithm, we consider real-world data relative to a business school admission where two decision makers (denoted DM1 and DM2 in the rest of the paper) are involved. The learning set is composed of 175 objects (students in this case). A randomly selected extract from the learning set is given in Table 1. In this table, the decisions ‘A’ and ‘R’ stand for ‘accepted’ and ‘rejected’, respectively. The comparison of the individual assignments shows that the decision makers disagree on 40 (22.86%) students.

We then applied the DRSA twice to approximate this learning set, using the assignments given by DM1 and by DM2. Applying the rule induction algorithm DOMLEM to the obtained approximations yields two collections of decision rules, given in Table 2 (for DM1) and Table 3 (for DM2).

This illustrative example uses only two decision classes. Accordingly, there is no overlap between decision rules of different types, and only the second step is applied to aggregate the decision rules. A careful examination of Table 2 and Table 3 shows that there are three cases of redundancy: (i) Rule 1.9 and Rule 2.18; (ii) Rule 1.12 and Rule 2.11; and (iii) Rule 1.13 and Rule 2.16. The result of applying Equations (2) and (3) to these pairs of decision rules is summarized in Table 4. Based on these results, and to reduce information loss, decision rules 2.18, 2.11 and 1.13 should be removed.

We then applied the remaining decision rules to classify all the students. Results show that the obtained collective assignments match the initial assignments of DM1 for about 96.2% of students and the initial assignments of DM2 for about 92.3% of students. Thus, DM1 and DM2 need to discuss only a very limited number of conflicting situations (instead of the 40 conflicting situations identified initially).

## 6 Conclusion

We proposed an output-oriented aggregation strategy to coherently combine different sets of decision rules obtained from different decision makers. The proposed aggregation algorithm is illustrated using real-world data relative to business school admission. An important aspect of the proposed approach is that the consensus between decision makers [13] is computed using objective preference information. In the future, we intend first to apply the proposed aggregation algorithm to other datasets, especially those with non-binary decision attributes and with more decision makers. We also intend to study the behavior of the aggregation algorithm with large datasets. Finally, we intend to design new measures to evaluate information loss, precision loss and information redundancy.