Abstract
In multi-label classification, an instance is associated with multiple relevant labels, and the goal is to predict these labels simultaneously. Many real-world applications of multi-label classification come with different performance evaluation criteria. It is thus important to design general multi-label classification methods that can flexibly take different criteria into account. Such methods tackle the problem of cost-sensitive multi-label classification (CSMLC). Most existing CSMLC methods either suffer from high computational complexity or focus on only certain specific criteria. In this work, we propose a novel CSMLC method, named progressive random k-labelsets (PRAkEL), to resolve the two issues above. The method is extended from a popular multi-label classification method, random k-labelsets, and hence inherits its efficiency. Furthermore, the proposed method can handle arbitrary example-based evaluation criteria by progressively transforming the CSMLC problem into a series of cost-sensitive multi-class classification problems. Experimental results demonstrate that PRAkEL is competitive with existing methods under the specific criteria they can optimize, and is superior under other criteria.
Introduction
Multi-label classification (MLC) extends traditional multi-class classification by allowing each instance to be associated with a set of relevant labels. For example, in text classification, a document (instance) can belong to several topics (labels). Given a set of instances as well as their relevant labels, the goal of an MLC method is to predict the relevant labels of a new instance. Recently, MLC has attracted much research attention with a wide range of applications including music tag annotation (Trohidis et al. 2008; Lo et al. 2011), image classification (Boutell et al. 2004), and video classification (Qi et al. 2007).
In contrast to multi-class classification, one important characteristic of MLC is the possible correlations between different labels. Many approaches have been proposed to exploit the correlations. Chaining methods learn a label by treating other labels as features (Read et al. 2011; Dembczynski et al. 2010). Labelset-based methods learn several labels jointly (Tsoumakas et al. 2010; Tsoumakas and Vlahavas 2007; Lo et al. 2014; Lo 2013). Other methods transform the space of labels to capture the correlations (Hsu et al. 2009; Tai and Lin 2012; Hardoon et al. 2004).
A key challenge of MLC is to automatically adapt a method to the evaluation criterion of interest. In real-world applications, different criteria are often required to evaluate the performance of an MLC method. For example, Hamming loss measures the proportion of the misclassified labels to the total number of labels; the F1 score, originating from information retrieval, is the harmonic mean of the precision and recall; subset 0/1 loss requires all labels to be correctly predicted. Because of the different natures of those criteria, a method that performs well under one criterion may not be well-suited for other criteria. It is therefore important to design general MLC methods that take the evaluation criterion into account, either in the training or prediction stage. Since the evaluation criterion, or metric, determines the cost for misclassifying an instance, this type of problem is generally called cost-sensitive multi-label classification (CSMLC) (Lo et al. 2014; Li and Lin 2014), which is formally defined in Sect. 2.
We shall explain in Sect. 3 that most existing MLC methods either aim at optimizing a certain evaluation metric or require extra effort to be adapted to each metric. For example, binary relevance (BR) (Tsoumakas et al. 2010) minimizes Hamming loss by learning each label independently. Label powerset (LP) (Tsoumakas et al. 2010) minimizes subset 0/1 loss by transforming the MLC problem into a multi-class classification problem with a huge number of hyper-classes. The well-known random \(k\)-labelsets (RA\(k\)EL) (Tsoumakas and Vlahavas 2007) method focuses on many smaller multi-class classification problems for computational efficiency, but it is only loosely connected to subset 0/1 loss (Ferng and Lin 2013).
There are currently few methods for dealing with general CSMLC problems (Dembczynski et al. 2010; Tsochantaridis et al. 2005; Li and Lin 2014; Doppa et al. 2014). RA\(k\)EL has been extended to cost-sensitive random k-labelsets (CSRA\(k\)EL) (Lo 2013) and generalized k-labelsets ensemble (GLE) (Lo et al. 2014) to handle a weighted version of Hamming loss, but not general metrics. Probabilistic classifier chain (Dembczynski et al. 2010) requires designing an efficient inference rule with respect to the metric, and covers many, but not all, of the metrics of interest (Li and Lin 2014). Condensed filter tree (Li and Lin 2014) is a chaining method that takes any evaluation metric into account during the training stage, but its training time is quadratic in the number of labels. The structured support vector machine (Tsochantaridis et al. 2005) can also handle arbitrary metrics, but it relies on solving a sophisticated optimization problem depending on the metric and is thus also inefficient. To the best of our knowledge, no existing CSMLC method is both general and efficient.
In this work, we design a general and efficient CSMLC method in Sect. 4. This novel method, named progressive random \(k\)-labelsets (PRA\(k\)EL), is extended from RA\(k\)EL and hence inherits its efficiency. In particular, PRA\(k\)EL practically enjoys training time linear in the number of labels. Moreover, PRA\(k\)EL is able to optimize any example-based metric by modifying the training stage of RA\(k\)EL. More specifically, RA\(k\)EL reduces the original problem to many regular multi-class problems and ignores the original cost information; PRA\(k\)EL reduces the CSMLC problem to many cost-sensitive multi-class ones by transferring the cost information to the subproblems. The transferring task is nontrivial, however, because each subproblem involves only a subset of the labels of the original problem. We therefore introduce the notion of reference labels to determine the costs in the subproblems. We carefully propose two strategies for defining the reference labels, which lead to different advantages and disadvantages in both theoretical and empirical aspects.
We conducted experiments on seven benchmark datasets with various sizes and domains. The experimental results in Sect. 5 show that PRA\(k\)EL is competitive with state-of-the-art MLC methods under the specific metrics associated with the methods. Furthermore, in terms of general metrics, PRA\(k\)EL usually outperforms other methods. The results demonstrate that the proposed method is indeed more general, and more suitable for solving real-world problems.
Problem setup
In CSMLC, we denote an instance by a vector \(\mathbf {x}\in \mathcal {X} = \mathbb {R}^d\) and the relevant labels of \(\mathbf {x}\) by a set \(Y \subseteq \{1, 2, \ldots , K\}\), where K is the total number of labels. Equivalently, this set of labels can be represented by a bit vector \(\mathbf {y}\in \mathcal {Y}=\{0, 1\}^K\), where the lth component \(\mathbf {y}[l]\) is 1 if and only if the lth label is relevant, i.e., \(l \in Y\). Here, \(\mathcal {X}\) and \(\mathcal {Y}\) are called the input space and label space, respectively; the pair \((\mathbf {x}, \mathbf {y})\) is called an example. In this work, we consider a particular CSMLC setup that allows each example to carry its own cost information. The example-based setup, which assumes example-dependent costs, is more general than the setup with label-dependent costs, in which all examples share the same cost functions. The more general setup makes it possible to express the importance of different instances easily through embedding the importance in the example-dependent costs, and has been considered in several studies of cost-sensitive learning (Fan et al. 1999; Zadrozny et al. 2003; Sun et al. 2007). Formally, given a training set \(\{(\mathbf {x}_n, \mathbf {y}_n, \mathbf {c}_n)\}_{n=1}^N\) consisting of N examples, where \(\mathbf {c}_n:\mathcal {Y}\rightarrow \mathbb {R}_{\ge 0}\) is a nonnegative cost function and each \((\mathbf {x}_n, \mathbf {y}_n, \mathbf {c}_n)\) is drawn independently from an unknown distribution \(\mathcal {D}\), the goal of CSMLC is to learn a classifier \(h:\mathcal {X}\rightarrow \mathcal {Y}\) such that the expected cost \(\mathrm {E}_{(\mathbf {x}, \mathbf {y}, \mathbf {c})\sim \mathcal {D}}[\mathbf {c}(h(\mathbf {x}))]\) is small.
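To make the setup concrete, the following minimal Python sketch (our own illustration, not part of the formal setup; all names are hypothetical) builds one example \((\mathbf {x}, \mathbf {y}, \mathbf {c})\) with an example-dependent Hamming-style cost, so that the cost is 0 at the true label vector:

```python
# One CSMLC training example (x, y, c); the concrete values are illustrative.
x = [0.5, -1.2, 3.0]          # instance x in R^d, here d = 3
y = (1, 0, 1, 0)              # label bit vector y in {0,1}^K, here K = 4

def c(y_hat, y_true=y):
    """Example-dependent cost; here simply Hamming loss against y_true,
    so the minimum cost 0 is attained at the true label vector."""
    return sum(a != b for a, b in zip(y_hat, y_true)) / len(y_true)
```

Different examples may carry different cost functions \(\mathbf {c}_n\), which is how the example-dependent setup can encode the varying importance of instances.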
Note that the example-based setup cannot cover all popular evaluation criteria in multi-label classification. For instance, the micro-F1 and macro-F1 criteria, which are defined on a set of \(\mathbf {y}\) rather than a single one, cannot be expressed as example-dependent cost functions. Nonetheless, as highlighted by earlier CSMLC works (Li and Lin 2014), studying the example-based setup can be viewed as an intermediate step toward those more complicated criteria.
Two remarks about this setup are in order. First, for a classifier h, since \(\mathbf {c}(h(\mathbf {x}))\) is being minimized, it is natural to assume \(\mathbf {c}\) has a minimum of 0 at \(\mathbf {y}\), the true label vector of \(\mathbf {x}\). With this assumption, although \(\mathbf {y}\) does not appear in the learning goal, its information is implicitly stored in the cost function. Second, we can similarly define the problem of cost-sensitive multi-class classification (CSMCC) by replacing the label space \(\mathcal {Y}\) with \(\{1, 2, \ldots , K\}\), which stands for K different classes. In fact, this setup is widely adopted in many existing works (Tu and Lin 2010; Zhou and Liu 2010; Abe et al. 2004).
Modern CSMCC works (Zhou and Liu 2010) allow flexibly taking any cost functions into account based on application needs. While the proposed method shares the same flexibility in its derivation, we consider a more realistic scenario of CSMLC in the experiments. In particular, many CSMLC problems are actually associated with a global, label-dependent cost \(L:\mathcal {Y}\times \mathcal {Y}\rightarrow \mathbb {R}\), typically called a loss function, where \(L(\mathbf {y}, \hat{\mathbf {y}})\) is the loss when predicting \(\mathbf {y}\) as \(\hat{\mathbf {y}}\). Those problems aim to learn a classifier \(h:\mathcal {X}\rightarrow \mathcal {Y}\) such that \(\mathrm {E}[L(\mathbf {y}, h(\mathbf {x}))]\) is small (Dembczynski et al. 2010; Li and Lin 2014). The aim can be easily expressed in our setup by assigning
$$\begin{aligned} \mathbf {c}(\hat{\mathbf {y}}) = L\left( \mathbf {y}, \hat{\mathbf {y}}\right) \end{aligned}$$for each example \((\mathbf {x}, \mathbf {y}, \mathbf {c})\).
We focus on CSMLC with such loss functions to demonstrate the applicability of the proposed method and to make a fair comparison with existing CSMLC methods (Li and Lin 2014; Dembczynski et al. 2010). Popular loss functions include

Hamming loss^{Footnote 1}
$$\begin{aligned} L_H\left( \mathbf {y}, \hat{\mathbf {y}}\right) = \frac{1}{K}\sum _{l=1}^K\llbracket {}\hat{\mathbf {y}}[l] \ne \mathbf {y}[l]\rrbracket {}; \end{aligned}$$ 
weighted Hamming loss with respect to the weight \(\mathbf {w}\in {\mathbb {R}_{\ge 0}}^K\)
$$\begin{aligned} L_{H,\mathbf {w}}\left( \mathbf {y}, \hat{\mathbf {y}}\right) = \sum _{l=1}^K\mathbf {w}[l]\cdot \llbracket {}\hat{\mathbf {y}}[l] \ne \mathbf {y}[l]\rrbracket {}; \end{aligned}$$ 
ranking loss
$$\begin{aligned} L_r\left( \mathbf {y}, \hat{\mathbf {y}}\right) = \frac{1}{|R(\mathbf {y})|}\sum _{(k,l):\mathbf {y}[k]<\mathbf {y}[l]}\left( \llbracket {}\hat{\mathbf {y}}[k]>\hat{\mathbf {y}}[l]\rrbracket {}+\frac{1}{2}\llbracket {}\hat{\mathbf {y}}[k]=\hat{\mathbf {y}}[l]\rrbracket {}\right) , \end{aligned}$$where \(|R(\mathbf {y})|\), the size of \(R(\mathbf {y}) = \{(k,l)\mid \mathbf {y}[k]<\mathbf {y}[l]\}\), is a normalizer;

F1 loss^{Footnote 2}
$$\begin{aligned} L_F\left( \mathbf {y}, \hat{\mathbf {y}}\right) = 1 - \frac{2\mathbf {y}\cdot \hat{\mathbf {y}}}{\Vert \mathbf {y}\Vert _1+\Vert \hat{\mathbf {y}}\Vert _1}, \end{aligned}$$which is one minus the F1 score;

subset 0/1 loss
$$\begin{aligned} L_s\left( \mathbf {y}, \hat{\mathbf {y}}\right) = \llbracket {}\hat{\mathbf {y}}\ne \mathbf {y}\rrbracket {}. \end{aligned}$$
For those loss functions defined above, we follow the convention that when the denominator is zero, the loss is defined as zero.
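For reference, the loss functions above translate directly into code; the following is a plain Python sketch over 0/1 tuples (our own illustration, with the zero-denominator convention handled explicitly):

```python
def hamming_loss(y, y_hat):
    """Hamming loss: fraction of the K labels predicted incorrectly."""
    return sum(a != b for a, b in zip(y, y_hat)) / len(y)

def f1_loss(y, y_hat):
    """F1 loss: one minus the F1 score; defined as 0 when the denominator is 0."""
    denom = sum(y) + sum(y_hat)
    if denom == 0:
        return 0.0
    return 1.0 - 2.0 * sum(a * b for a, b in zip(y, y_hat)) / denom

def ranking_loss(y, y_hat):
    """Ranking loss over pairs with y[k] < y[l]; ties count as 1/2."""
    pairs = [(k, l) for k in range(len(y)) for l in range(len(y)) if y[k] < y[l]]
    if not pairs:
        return 0.0
    total = sum(1.0 if y_hat[k] > y_hat[l] else 0.5 if y_hat[k] == y_hat[l] else 0.0
                for k, l in pairs)
    return total / len(pairs)

def subset01_loss(y, y_hat):
    """Subset 0/1 loss: 1 unless the whole label vector is predicted correctly."""
    return float(tuple(y) != tuple(y_hat))
```

For example, with \(\mathbf {y}= (1,0,1,0)\) and \(\hat{\mathbf {y}}= (1,1,1,0)\), Hamming loss is 0.25 while subset 0/1 loss is already 1.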
To simplify the explanations of the proposed method, we further introduce some terminology. We denote the set of K labels by \(\mathcal {L}_K{} = \{1, \ldots , K\}\). A subset S of \(\mathcal {L}_K{}\) with \(|S|=k\) is called a k-labelset. If \(S = \{s_1, \ldots , s_k\}\) is a k-labelset with \(s_1< \cdots < s_k\), then we denote \((\mathbf {y}[s_1], \ldots , \mathbf {y}[s_k]) \in \{0, 1\}^k\) by \(\mathbf {y}[S]\). When the number of labels, K, is clear in the context, we also use the notation \(S^c\) to represent the \((K-k)\)-labelset \(\mathcal {L}_K{} {\setminus } S = \{1\le l\le K\mid l \notin S\}\). We summarize the main notation used throughout the paper in Table 1.
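The labelset notation can be sketched in a few lines of Python (our own illustration, using 1-indexed labels as in the paper):

```python
K = 5
S = {1, 3, 4}                             # a 3-labelset of L_5 = {1, ..., 5}
Sc = set(range(1, K + 1)) - S             # its complement, a (K-k)-labelset
y = (1, 0, 1, 1, 0)                       # a bit vector in {0,1}^K
y_S = tuple(y[s - 1] for s in sorted(S))  # y[S] = (y[1], y[3], y[4])
```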
Related work
Multi-label classification methods can be divided into two main categories, namely, algorithm adaptation and problem transformation (Tsoumakas and Katakis 2007). Algorithm adaptation methods directly extend a specific learning algorithm to tackle MLC problems. Multi-label k-nearest neighbor (ML\(k\)NN) (Zhang and Zhou 2007) is adapted from the famous k-nearest neighbors algorithm. AdaBoost.MH and AdaBoost.MR (Schapire and Singer 2000) are two multi-label extensions of the AdaBoost algorithm (Freund and Schapire 1999). MLC4.5 (Clare and King 2001) is an adaptation of the popular C4.5 algorithm. BPMLL (Zhang and Zhou 2006) is derived from the backpropagation algorithm of neural networks.
Problem transformation methods transform MLC problems into other types of learning problems and solve them by existing algorithms. Such methods are general and can be coupled with any mature algorithms. Our proposed method in Sect. 4 belongs to this category.
Binary relevance (BR) (Tsoumakas et al. 2010) is arguably the simplest problem transformation method, which transforms the MLC problem into several binary classification problems by learning and predicting each label independently. Classifier chain (CC) (Read et al. 2011) iteratively learns a binary classifier to predict the lth label using \(\{(\mathbf {x}_n, \hat{\mathbf {y}}_n[1], \ldots , \hat{\mathbf {y}}_n[l-1])\}\) as the training set, where \(\hat{\mathbf {y}}_n\) contains the previously predicted labels. Although it considers the label dependencies, the order of labels becomes crucial to the performance of CC. Many approaches have been proposed to address this issue (Read et al. 2011, 2014; Goncalves et al. 2013). In particular, the ensemble of classifier chains (ECC) (Read et al. 2011) learns several CC classifiers, each with a random ordering of labels, and it averages the predictions from all the classifiers to classify a new instance.
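The chaining idea can be sketched as follows (our own minimal sketch, not the authors' implementation; `fit_binary` stands for any pluggable binary learner). For simplicity, this variant trains the classifier for the lth label on the true earlier labels, one common choice, while prediction feeds in the previously predicted ones:

```python
def cc_train(X, Y, K, fit_binary):
    """Classifier chain (sketch): the classifier for label l sees x plus the
    l-1 earlier labels; fit_binary(X, bits) returns a predictor x -> {0, 1}."""
    chain = []
    for l in range(K):
        Xl = [list(x) + [y[j] for j in range(l)] for x, y in zip(X, Y)]
        chain.append(fit_binary(Xl, [y[l] for y in Y]))
    return chain

def cc_predict(chain, x):
    """Predict labels in chain order, feeding earlier predictions forward."""
    y_hat = []
    for h in chain:
        y_hat.append(h(list(x) + y_hat))
    return tuple(y_hat)
```

Because each classifier conditions on the labels before it, permuting the chain changes the learned models, which is exactly why the label ordering matters.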
Instead of learning one binary classifier for each label, probabilistic classifier chain (PCC) (Dembczynski et al. 2010) learns probabilistic classifiers to estimate \(P(\mathbf {y}\mid \mathbf {x})\) by the chain rule
$$\begin{aligned} P(\mathbf {y}\mid \mathbf {x}) = \prod _{l=1}^K P\left( \mathbf {y}[l]\mid \mathbf {x}, \mathbf {y}[1], \ldots , \mathbf {y}[l-1]\right) , \end{aligned}$$
and then applies the Bayes-optimal inference rule designed for the evaluation metric to produce the final prediction. In principle, PCC can be adapted to any metric to tackle CSMLC problems by designing a proper inference rule for that metric. However, deriving efficient inference rules for different metrics is practically challenging. Inference rules for Hamming, ranking, F1 and subset 0/1 loss have been designed (Dembczynski et al. 2010, 2011), but rules for other metrics remain an open question. Similar to ECC, the ensembled probabilistic classifier chain (EPCC) (Dembczynski et al. 2010) resolves the issue of label ordering by using random orderings.
The Monte Carlo optimization for classifier chains (MCC) (Read et al. 2014) employs the Monte Carlo scheme to find a good label ordering in the training stage of PCC. A recently proposed method, the classifier trellis (CT) (Read et al. 2015), is extended from MCC to consider a trellis structure of labels rather than a chain to improve efficiency. During the prediction stage of both methods (Read et al. 2014, 2015), the Monte Carlo scheme is applied to generate samples from \(P(\mathbf {y}\mid \mathbf {x})\). A large number of samples may be required for Monte Carlo simulation, which results in possible computational challenges during prediction. While those samples can in principle be used to produce cost-sensitive predictions, this possibility has not been fully studied in either work. In fact, the original works consider only approximate inference for Hamming loss and subset 0/1 loss.
A group of methods take label dependencies into account by learning multiple labels jointly. Label powerset (LP) (Tsoumakas et al. 2010) transforms each label vector into a unique hyper-class and learns a multi-class classifier. If there are K labels in total, then the number of classes may be as large as \(2^K\). Hence, when the number of labels is large, LP suffers from computational issues and an insufficient number of training examples within each class.
To overcome the drawback, a method called random \(k\)-labelsets (RA\(k\)EL) (Tsoumakas and Vlahavas 2007) focuses on one labelset at a time. Recall that a \(k\)-labelset is a size-\(k\) subset of \(\{1, 2, \ldots , K\}\). RA\(k\)EL iteratively selects a random \(k\)-labelset \(S_m\) and learns an LP classifier \(h_m\) for the training set restricted to the labels within \(S_m\), i.e., \(\{(\mathbf {x}_n, \mathbf {y}_n[S_m])\}\). Each classifier \(h_m\) predicts the k labels within \(S_m\), and the final prediction of an instance is produced by a majority vote of all the classifiers. Because the number of classes in each LP classifier is decreased, RA\(k\)EL is more efficient than LP. In addition, it achieves better performance than LP in terms of Hamming and F1 loss.
Nonetheless, there is a noticeable issue with RA\(k\)EL. In each multi-class subproblem, a one-bit prediction error and a two-bit error are equally penalized. That is, the LP classifiers cannot distinguish between small and big errors. Because these classifiers are learned without considering the evaluation metric, RA\(k\)EL is not a cost-sensitive method.
Two extensions of RA\(k\)EL were proposed to address the above issue, but they both consider only the example-dependent weighted Hamming loss rather than general metrics. The cost-sensitive random k-labelsets (CSRA\(k\)EL) (Lo 2013) method reduces the CSMLC problem to several multi-class ones with instance weights. The weight of each instance is defined as the sum of the misclassification costs of the relevant labels. Despite the restriction, one advantage of CSRA\(k\)EL is that it only requires reweighting the instances and can hence be coupled with many traditional multi-class classification algorithms.
Generalized k-labelsets ensemble (GLE) (Lo et al. 2014) learns a set of LP classifiers and determines a linear combination of them by minimizing the average loss over the training examples. The minimization is formulated as an unconstrained quadratic optimization problem and hence can be solved efficiently. While both CSRA\(k\)EL and GLE are pioneering works on extending RA\(k\)EL for CSMLC, they focus on specific tagging applications. As a consequence, the two methods come with few theoretical guarantees, and it is nontrivial to extend them to handle other types of costs.
For the methods introduced above, BR and CC optimize Hamming loss; CSRA\(k\)EL and GLE deal with weighted Hamming loss; MCC and CT minimize Hamming and subset 0/1 loss currently, with the potential of handling general metrics yet to be studied; PCC is designed to deal with general metrics, but is computationally demanding for arbitrary metrics that come without efficient inference rules. Another method that deals with general metrics is the structured support vector machine (SSVM) (Tsochantaridis et al. 2005). The SSVM optimizes a metric by rescaling certain variables in the traditional SVM optimization problem based on the metric. However, the complexity of solving the problem depends on the metric and is usually too high for practical applications.
Condensed filter tree (CFT) (Li and Lin 2014) is a state-of-the-art CSMLC method, extended from the well-known filter tree algorithm (Beygelzimer et al. 2009) to handle multi-label data. Similarly, the divide-and-conquer tree algorithm (Beygelzimer et al. 2009) for multi-class problems can be directly adapted to CSMLC problems, resulting in the top-down tree (TT) method (Li and Lin 2014). Both CFT and TT can be viewed as cost-sensitive extensions of CC. CFT suffers from its training time, which is quadratic in the number of labels; TT suffers from weaker performance compared with CFT (Li and Lin 2014).
Multi-label search (MLS) (Doppa et al. 2014) optimizes a metric by adapting the \(\mathcal {HC}\)-search framework to multi-label problems. It learns a heuristic function and estimates the evaluation metric in the training stage. Then, during the prediction stage, MLS conducts a heuristic search towards minimizing the estimated cost. Despite its generality, MLS suffers from high computational complexity. To learn the heuristic function during training, it needs to solve a ranking problem consisting of \(O(\textit{NK})\) examples, where N is the number of training examples and K is the number of labels.
In summary, many existing MLC methods are not applicable to arbitrary example-based metrics of CSMLC (BR, CC, LP, RA\(k\)EL). Some extensions deal with restricted metrics of CSMLC (CSRA\(k\)EL, GLE). For general metrics, current methods suffer from computational issues (CFT, MLS, SSVM) or performance issues (TT), or require careful design of inference rules or further study to handle different metrics (PCC, MCC, CT). In the next section, we present a general yet efficient cost-sensitive multi-label method that is competitive with state-of-the-art CSMLC methods.
Proposed method
Recall that the LP method solves an MLC problem by transforming it into a single multi-class problem. Similarly, a CSMLC problem can be transformed into a cost-sensitive multi-class classification (CSMCC) problem, as illustrated in the CFT work (Li and Lin 2014). The resulting method, however, suffers from the same computational issue as LP, and hence is not feasible for large problems. CFT solves the computational issue by considering an efficient multi-class classification model, the filter tree.
In this work, we deal with the computational issue differently. We extend the idea of RA\(k\)EL and propose a novel labelset-based method, which iteratively transforms the CSMLC problem into a series of CSMCC problems. Different from RA\(k\)EL, the critical part of the proposed method is the transfer of the cost information to the subproblems in the training stage. This is not a trivial task, since each subproblem involves only a subset of labels and hence the costs in each subproblem cannot be easily connected to those in the original problem. Therefore, we introduce the notion of reference label vectors to determine the costs in the subproblems. While the overall idea sounds simple, it advances the study of CSMLC in several aspects:

Compared with traditional MLC methods such as RA\(k\)EL, the proposed method is sensitive to the evaluation metric and hence is able to optimize arbitrary example-based metrics.

Compared with CSRA\(k\)EL and GLE, the proposed method handles more general metrics and comes with solid theoretical analysis.

Compared with PCC, MCC and SSVMs, our method instead considers label dependencies through labelsets and requires no manual adaptation to each evaluation metric.

Compared with existing CSMLC methods such as CFT, our method is more efficient in terms of training time complexity while reaching a similar level of performance.
We first provide the framework of the proposed method. Then, we describe it in great detail and present its analysis.
Framework
Let \(\mathcal {T} = \{(\mathbf {x}_n, \mathbf {y}_n, \mathbf {c}_n)\}_{n=1}^N\) be the training set and M be the number of iterations. Inspired by RA\(k\)EL, in the mth iteration, our method selects a random \(k\)-labelset \(S_m\) and constructs a CSMCC training set \(\mathcal {T}_m^{\prime }=\{(\mathbf {x}_n, \mathbf {y}_n[S_m], \mathbf {c}_n^{\prime })\}_{n=1}^N\) of \(K^{\prime }=2^k\) classes, where \(\mathbf {c}_n^{\prime }:\{0,1\}^k\rightarrow \mathbb {R}\). The main difference between our method and RA\(k\)EL is that the multi-class subproblems defined here contain the costs \(\mathbf {c}_n^{\prime }\), and hence our method is able to carry the information of the evaluation metric. The two issues of RA\(k\)EL discussed in Sect. 3 can also be resolved by properly defining these \(\mathbf {c}_n^{\prime }\). Although in our problem setup described in Sect. 2, the label space of a CSMCC problem should be \(\mathcal {L}_{K'}{}\), by considering a bijection between \(\mathcal {L}_{K'}{}\) and \(\{0,1\}^k\), we may treat \(\mathbf {y}_n[S_m]\) as an element of \(\mathcal {L}_{K'}{}\) and assume \(\mathbf {c}_n^{\prime }:\mathcal {L}_{K'}{}\rightarrow \mathbb {R}\). Then, any CSMCC algorithm can be employed to learn a multi-class classifier \(h_m^{\prime }:\mathcal {X}\rightarrow \{0, 1\}^k\) for \(\mathcal {T}_m^{\prime }\). Similar to RA\(k\)EL, the final prediction of a new instance \(\mathbf {x}\) is produced by a majority vote of all the classifiers \(h_m^{\prime }\). More precisely, if we define \(h_m:\mathcal {X}\rightarrow \{-1, 0, 1\}^K\) by
$$\begin{aligned} h_m(\mathbf {x})[l] = {\left\{ \begin{array}{ll} 2h_m^{\prime }(\mathbf {x})[l] - 1 &{} \text {if } l \in S_m,\\ 0 &{} \text {otherwise}, \end{array}\right. } \end{aligned}$$where \(h_m^{\prime }(\mathbf {x})[l]\) denotes the bit of \(h_m^{\prime }(\mathbf {x})\) that corresponds to label l,
then the final prediction \(\hat{\mathbf {y}}\in \mathcal {Y}\) can be obtained by setting \(\hat{\mathbf {y}}[l] = 1\) if and only if \(\sum \nolimits _{m=1}^Mh_m(\mathbf {x})[l] > 0\).
Cost transformation
Having described the framework, we now turn our attention to the multi-class cost functions \(\mathbf {c}_n^{\prime }\) in the subproblems, which must be defined in each iteration. At this point, notice that if we define \(\mathbf {c}_n^{\prime }(\hat{\mathbf {y}}^{\prime }) = \llbracket {}\hat{\mathbf {y}}^{\prime } \ne \mathbf {y}_n[S_m]\rrbracket {}\), then the proposed method degenerates into RA\(k\)EL. Since this \(\mathbf {c}_n^{\prime }\) is independent of the original cost function \(\mathbf {c}_n\), this assignment also shows that RA\(k\)EL is not a cost-sensitive method.
To establish the connection between these two cost functions, \(\mathbf {c}_n^{\prime }\) must carry a certain amount of information about \(\mathbf {c}_n\). Note that \(\mathbf {c}_n^{\prime }\) is defined on \(\{0,1\}^k\) while \(\mathbf {c}_n\) is defined on \(\mathcal {Y} = \{0, 1\}^K\). To bridge the two domains, we propose considering a reference label vector \(\tilde{\mathbf {y}}_n \in \mathcal {Y}\) and setting the value of \(\mathbf {c}_n^{\prime }\) to be the cost \(\mathbf {c}_n\) as if the labels outside \(S_m\) were predicted the same as \(\tilde{\mathbf {y}}_n\). Mathematically,
$$\begin{aligned} \mathbf {c}_n^{\prime }\left( \hat{\mathbf {y}}^{\prime }\right) = \mathbf {c}_n\left( \hat{\mathbf {y}}^{\prime }\cup \tilde{\mathbf {y}}_n[S_m^c]\right) . \end{aligned}$$
Here, we treat \(\hat{\mathbf {y}}^{\prime }\) and \(\tilde{\mathbf {y}}_n[S_m^c]\) as subsets of \(S_m\) and \(S_m^c\), respectively, and therefore, their union is considered as a subset of \(\mathcal {L}_K\), or equivalently a bit vector in \(\{0, 1\}^K\).
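The transformation amounts to evaluating the original cost on a full label vector assembled from the labelset prediction and the reference vector; a minimal sketch (our own, with 0-indexed labels and a hypothetical helper name `induced_cost`):

```python
def induced_cost(c, y_ref, S):
    """Multi-class cost for a k-labelset S: predict y_sub on the labels in S
    and fill the labels outside S from the reference vector y_ref."""
    def c_prime(y_sub):
        full = list(y_ref)
        for i, l in enumerate(sorted(S)):
            full[l] = y_sub[i]
        return c(tuple(full))
    return c_prime
```

With c the Hamming loss against a true label vector and y_ref equal to that vector (the choice made by the first strategy below), a correct sub-prediction receives cost 0.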
It then remains to define these \(\tilde{\mathbf {y}}_n\) in each iteration to complete the transformation. We shall see in the next section that these reference vectors may depend on the classifiers learned in the previous iterations, and hence the multi-class cost functions are obtained progressively. As a consequence, the proposed method is called progressive random \(k\)-labelsets (PRA\(k\)EL). The training and prediction algorithms of PRA\(k\)EL are presented in Algorithms 1 and 2, where the weighting strategy mentioned in line 8 of Algorithm 1 is described in Sect. 4.4. For now, we simply assume \(\alpha _m = 1\) for \(1 \le m \le M\). Note also that we do not explicitly require selecting a labelset that has not been chosen before; in practice, however, we give higher priority to labels that were selected fewer times in the previous iterations. In particular, we guarantee that all labels are selected at least once whenever \(kM \ge K\).
Defining reference label vectors
We propose two strategies for defining the reference label vectors. The first, and also the most intuitive, is to let \(\tilde{\mathbf {y}}_n = \mathbf {y}_n\) in every iteration. The proposed method with this assignment is denoted by \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\) to indicate the usage of the true label vectors. In this strategy, we implicitly assume that the labels outside the labelset can be perfectly predicted by the other classifiers.
In real-world situations, however, this is usually not the case. Therefore, in the second strategy, we define \(\tilde{\mathbf {y}}_n\) to be the predicted label vector of \(\mathbf {x}_n\) obtained thus far. Thus, the optimization in each subproblem no longer depends on the perfect predictions from the previous classifiers. Formally, let \(F_{m,n} = \sum \nolimits _{p=1}^mh_p(\mathbf {x}_n)\) for \(1 \le n \le N\) and define \(H_{m,n} \in \mathcal {Y}\) by \(H_{m,n}[l] =\llbracket {}F_{m,n}[l] > 0\rrbracket {}\). That is, \(H_{m,n}\) is the prediction of \(\mathbf {x}_n\) by a majority vote of the first m classifiers. We then define \(\tilde{\mathbf {y}}_n\) in the mth iteration to be \(H_{m-1,n}\) for \(m \ge 2\), and let \(\tilde{\mathbf {y}}_n = \mathbf {y}_n\) in the first iteration. Since the reference label vectors as well as the multi-class subproblems are obtained progressively, the proposed method coupled with this strategy is denoted simply by PRA\(k\)EL.
Recall that in our problem setup we assume the minimum of each \(\mathbf {c}_n\) is 0. Therefore, for \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\) we have \(\min _{\hat{\mathbf {y}}^{\prime }\in \{0, 1\}^k}\mathbf {c}_n^{\prime }(\hat{\mathbf {y}}^{\prime }) = \min _{\hat{\mathbf {y}}\in \mathcal {Y}}\mathbf {c}_n(\hat{\mathbf {y}}[S_m]\cup \mathbf {y}_n[S^c_m]) = \mathbf {c}_n(\mathbf {y}_n) = 0\). In other words, the minimum cost for every example in each subproblem is 0, which is a consequence of \(\tilde{\mathbf {y}}_n=\mathbf {y}_n\). For PRA\(k\)EL, however, this identity may not hold. Since the predicted labels outside \(S_m\) cannot be altered in the mth iteration, it is natural to add a constant to each of the functions \(\mathbf {c}_n^{\prime }\) such that \(\min _{\hat{\mathbf {y}}^{\prime }\in \{0, 1\}^k}\mathbf {c}_n^{\prime }(\hat{\mathbf {y}}^{\prime }) = 0\). Therefore, the transformed cost functions for PRA\(k\)EL are all shifted to satisfy this equality:
$$\begin{aligned} \mathbf {c}_n^{\prime }\left( \hat{\mathbf {y}}^{\prime }\right) = \mathbf {c}_n\left( \hat{\mathbf {y}}^{\prime }\cup \tilde{\mathbf {y}}_n[S_m^c]\right) - \min _{\hat{\mathbf {y}}^{\prime \prime }\in \{0, 1\}^k}\mathbf {c}_n\left( \hat{\mathbf {y}}^{\prime \prime }\cup \tilde{\mathbf {y}}_n[S_m^c]\right) . \end{aligned}$$
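Putting the pieces together, the training loop with progressive reference vectors and cost shifting can be sketched as follows (our own illustration with 0-indexed labels and hypothetical names; `fit_csmcc` stands for any cost-sensitive multi-class learner):

```python
import random
from itertools import product

def prakel_train(X, Y, costs, K, k, M, fit_csmcc, progressive=True, seed=0):
    """PRAkEL training loop (sketch). costs[n] maps a full label tuple in
    {0,1}^K to example n's cost; fit_csmcc(X, cost_rows, classes) is any
    cost-sensitive multi-class learner returning a predictor x -> class."""
    rng = random.Random(seed)
    classes = list(product([0, 1], repeat=k))     # the 2^k LP hyper-classes
    votes = [[0] * K for _ in Y]                  # running vote totals F_{m,n}
    ref = [tuple(y) for y in Y]                   # iteration 1 uses true labels
    models = []
    for _ in range(M):
        S = sorted(rng.sample(range(K), k))       # random k-labelset
        cost_rows = []
        for n in range(len(Y)):
            row = []
            for y_sub in classes:                 # cost of predicting y_sub on S,
                full = list(ref[n])               # filling the rest from ref[n]
                for i, l in enumerate(S):
                    full[l] = y_sub[i]
                row.append(costs[n](tuple(full)))
            lo = min(row)                         # shift so the minimum cost is 0
            cost_rows.append([v - lo for v in row])
        h = fit_csmcc(X, cost_rows, classes)
        models.append((S, h))
        for n, x in enumerate(X):                 # update votes; the progressive
            pred = h(x)                           # variant also updates ref[n]
            for i, l in enumerate(S):
                votes[n][l] += 2 * pred[i] - 1
            if progressive:
                ref[n] = tuple(1 if v > 0 else 0 for v in votes[n])
    return models
```

Setting `progressive=False` keeps the true labels as references throughout, which corresponds to the \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\) strategy; the weights \(\alpha _m\) are fixed to 1 in this sketch.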
Interestingly, after shifting the costs, \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\) and PRA\(k\)EL become equivalent under Hamming loss and ranking loss. To show this, we first present two lemmas.
Lemma 1
Let \(L_r\) be the function of ranking loss and \(\mathbf {y}\in \mathcal {Y}=\{0, 1\}^K\). Then, there exists a unique \(\mathbf {w}\in {\mathbb {R}_{\ge 0}}^K\) such that \(L_r(\mathbf {y}, \cdot ) = L_{H,\mathbf {w}}(\mathbf {y}, \cdot )\), where \(L_{H,\mathbf {w}}\) is the function of weighted Hamming loss with respect to \(\mathbf {w}\).
Proof
See Appendix. \(\square \)
Lemma 2
Let \(L_{H,\mathbf {w}}\) be the function of weighted Hamming loss and S be a \(k\)-labelset. For any subsets \(\mathbf {y}_0^{\prime }\) and \(\mathbf {y}_1^{\prime }\) of S, \(L_{H,\mathbf {w}}(\mathbf {y}, \mathbf {y}_0^{\prime }\cup \tilde{\mathbf {y}}[S^c]) - L_{H,\mathbf {w}}(\mathbf {y}, \mathbf {y}_1^{\prime }\cup \tilde{\mathbf {y}}[S^c])\) is independent of \(\tilde{\mathbf {y}}\in \{0, 1\}^K\).
Proof
See Appendix. \(\square \)
Theorem 3
Under Hamming loss and ranking loss, \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\) and PRA\(k\)EL are equivalent.
Proof
Let L be the loss function of interest and consider the mth iteration. For any instance \(\mathbf {x}\), let \(\mathbf {b}^{\prime }\) and \(\mathbf {c}^{\prime }\) be the cost functions of \(\mathbf {x}\) in the mth multi-class subproblem, in the training of \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\) and PRA\(k\)EL, respectively. We show that \(\mathbf {b}^{\prime }(\mathbf {y}^{\prime }) = \mathbf {c}^{\prime }(\mathbf {y}^{\prime }) - \min \mathbf {c}^{\prime }\). Let \(\tilde{\mathbf {y}}\) be the reference label vector of \(\mathbf {x}\) for PRA\(k\)EL. Since we are considering a single instance, by Lemma 1, we may assume L is the function of weighted Hamming loss. Let S be the \(k\)-labelset in the current iteration and \(\mathbf {y}\) be the true label vector of \(\mathbf {x}\).
If \(\mathbf {y}^{\prime } \subseteq S\), then by definition,
In addition, by Lemma 2, \(L(\mathbf {y}, \mathbf {y}^{\prime }\cup \tilde{\mathbf {y}}[S^c]) - L(\mathbf {y}, \hat{\mathbf {y}}^{\prime }\cup \tilde{\mathbf {y}}[S^c])\) is independent of \(\tilde{\mathbf {y}}[S^c]\) for all \(\hat{\mathbf {y}}^{\prime }\subseteq S\). Therefore, we have
\(\square \)
Moreover, for these two loss functions, it is easy to derive an upper bound on the training cost. Consider a training example \((\mathbf {x}, \mathbf {y}, \mathbf {c})\). Let \(e_m\) be the training cost of \(\mathbf {x}\) in the mth CSMCC subproblem. We now bound the overall multi-label training cost of \(\mathbf {x}\) in terms of these \(e_m\).
By Lemma 1, again, it suffices to consider weighted Hamming loss. Recall that K is the number of labels, k is the size of the labelsets, and M is the number of iterations. For simplicity, assume kM is a multiple of K. In addition, we assume that each label appears in exactly \(r=kM/K\) labelsets; that is, the labelsets are selected uniformly. Let \(h_m \in \{-1, 0, 1\}^K\) be the prediction of \(\mathbf {x}\) in the mth iteration as defined in Sect. 4.1 and \(\hat{\mathbf {y}}\in \mathcal {Y}\) be the final prediction, which is obtained by averaging these \(h_m\). Now, focus on the lth label. If \(\hat{\mathbf {y}}[l] \ne \mathbf {y}[l]\), then \(h_m[l]\) must be predicted incorrectly for at least half of those m with \(l \in S_m\). Hence, the part of the overall training cost contributed by the lth label cannot exceed 2 / r times the part of \(\sum \nolimits _{m=1}^Me_m\) contributed by the lth label. As a result, by the property of weighted Hamming loss, the training cost is no more than \(\sum \nolimits _{m=1}^M2e_m/r = (2K/k)\bar{e}\), where \(\bar{e}= \sum \nolimits _{m=1}^M{e_m/M}\). By the above arguments, we have the following theorem.
Theorem 4
Let \(E_m\) be the multiclass training cost of the training set in the mth iteration. Then, under Hamming loss and ranking loss, the overall CSMLC training cost for both \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\) and PRA\(k\)EL is no more than \((2K/k)\bar{E}\), where \(\bar{E}\) is the mean of \(E_m\).
Proof
Since the statement is true for each example, the proof is straightforward. \(\square \)
Despite the equivalence between \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\) and PRA\(k\)EL for Hamming and ranking loss, they are not the same for arbitrary cost functions. In the experiment section, we demonstrate that PRA\(k\)EL is more effective under F1 loss. For now, we present an explanation by restricting ourselves to the case where the labelsets are disjoint. In this case, \(K/k = M\), and the upper bound in Theorem 4 can be improved to \((K/k)\bar{E} = M\bar{E}\) because the final prediction of each label is determined by a single LP classifier. Under this restriction, we have a similar result for PRA\(k\)EL. Before stating the next theorem, we make a mild monotonicity assumption about the cost functions. For a label vector \(\mathbf {y}\) and its corresponding cost function \(\mathbf {c}\), we assume that if \(\hat{\mathbf {y}}^{\prime }\in \mathcal {Y}\) is one bit closer to \(\mathbf {y}\) than \(\hat{\mathbf {y}}^{\prime \prime }\in \mathcal {Y}\), then \(\mathbf {c}(\hat{\mathbf {y}}^{\prime }) \le \mathbf {c}(\hat{\mathbf {y}}^{\prime \prime })\). That is, a more correct prediction does not result in a larger cost. In fact, this simple assumption has been implicitly made by many MLC methods such as BR, CC and RA\(k\)EL.
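For instance, F1 loss satisfies this assumption, which can be confirmed by brute force for a small K; the sketch below uses a hypothetical helper `f1_loss` implementing \(1 - 2\Vert \mathbf {y}\wedge \hat{\mathbf {y}}\Vert _1/(\Vert \mathbf {y}\Vert _1+\Vert \hat{\mathbf {y}}\Vert _1)\):

```python
# Brute-force check (a sketch, not the authors' code) that F1 loss is
# monotone: flipping one bit of the prediction toward y never
# increases the loss.
from itertools import product

def f1_loss(y, yhat):
    inter = sum(a and b for a, b in zip(y, yhat))
    total = sum(y) + sum(yhat)
    return 0.0 if total == 0 else 1.0 - 2.0 * inter / total

K = 4
for y in product((0, 1), repeat=K):
    for yhat in product((0, 1), repeat=K):
        for l in range(K):
            if yhat[l] != y[l]:
                # flip bit l of yhat so that it matches y
                closer = yhat[:l] + (y[l],) + yhat[l + 1:]
                assert f1_loss(y, closer) <= f1_loss(y, yhat) + 1e-12
```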
Theorem 5
Assume the labelsets are disjoint. Then, for any cost function satisfying the above assumption, the overall training cost for PRA\(k\)EL is no more than \(M\bar{E}\).
Proof
We may assume there is only one training example \((\mathbf {x}, \mathbf {y}, \mathbf {c})\), where the subscript n is dropped for simplicity. Recall that the reference label vector of \(\mathbf {x}\) in the mth iteration, denoted by \(\tilde{\mathbf {y}}^{(m)}\), is defined to be \(H_{m-1}\) for \(m \ge 2\). Then, for \(m \ge 2\),
where the third equality is by definition of \(E_m\), and the inequality follows from the assumption we just made. Hence, by induction, the overall training cost is \(\mathbf {c}(H_M) \le \mathbf {c}(\tilde{\mathbf {y}}^{(1)}) + \sum _{m=1}^ME_m = \mathbf {c}(\mathbf {y}) + M\bar{E} = M\bar{E}\). \(\square \)
Note that this bound cannot be improved since all inequalities in the proof become equalities under Hamming loss. Nonetheless, there is no analogous result for \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\), as shown in the following theorem.
Theorem 6
Assume \(k < K\). For \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\), there is no constant \(B>0\) such that the overall training cost is bounded by \(B\bar{E}\) for every cost function.
Proof
Again, assume the labelsets are disjoint and there is only one instance \(\mathbf {x}\). Consider the special case where the true label vector of \(\mathbf {x}\) is \(\mathbf {y}= (1, \ldots , 1) \in \mathcal {Y}\), and assume \(h_m[l] = -1\) for all \(l \in S_m\) and all m. In this case, \(\hat{\mathbf {y}}= (0, \ldots , 0) \in \mathcal {Y}\), and therefore, its F1 loss is \(L_F(\mathbf {y}, \hat{\mathbf {y}}) = 1\). In addition, if we define \(\hat{\mathbf {y}}_m = \hat{\mathbf {y}}[S_m]\cup \mathbf {y}[S_m^c]\), then
Hence, we have \(L_F(\mathbf {y}, \hat{\mathbf {y}}) = 1 = ((2K-k)/k)\bar{E}\). Note that if the factor 2 in (7) is replaced by a larger constant, then the bound needs to be larger. Moreover, we can freely define a loss function L similar to \(L_F\) by replacing the constant 2 in (6) with an arbitrary positive one. Letting the constant tend to infinity, the proof is complete. \(\square \)
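The arithmetic of this counterexample can be checked directly; the snippet below is a sketch of the stated setup (disjoint labelsets, all labels relevant, every base classifier wrong on its entire labelset), with `f1_loss` a hypothetical helper:

```python
# Numeric check of the counterexample (a sketch of the setup, not the
# authors' code): disjoint labelsets, y = (1, ..., 1), and every base
# classifier predicting all zeros on its own labelset.
def f1_loss(y, yhat):
    inter = sum(a and b for a, b in zip(y, yhat))
    total = sum(y) + sum(yhat)
    return 0.0 if total == 0 else 1.0 - 2.0 * inter / total

K, k = 6, 2
M = K // k                      # number of disjoint labelsets
y = [1] * K
yhat = [0] * K                  # final prediction

# m-th subproblem cost: labelset S_m predicted 0, true labels elsewhere
E = [f1_loss(y, [0] * k + [1] * (K - k)) for _ in range(M)]
E_bar = sum(E) / M
assert abs(E_bar - k / (2 * K - k)) < 1e-12
# the multi-label cost is exactly ((2K - k)/k) * E_bar = 1
assert abs(f1_loss(y, yhat) - ((2 * K - k) / k) * E_bar) < 1e-12
```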
Theorems 5 and 6 suggest defining the reference label vectors to be the predicted label vectors rather than the true ones. Empirical results in the experiment section also support this finding. In fact, a previous study on multi-target regression has already revealed the problem of treating true targets as additional input variables (Spyromitros-Xioufis et al. 2016). Moreover, those authors showed that in-sample estimates of the target variables remain problematic, and proposed an out-of-sample estimation approach to tackle the issue. Although we do not consider these kinds of estimates in this paper, a similar approach for PRA\(k\)EL could be considered in future work.
One disadvantage of employing the predicted labels is that the subproblems must be learned iteratively, whereas the training of the LP classifiers of RA\(k\)EL can be parallelized. The two cost-sensitive extensions of RA\(k\)EL, CS-RA\(k\)EL and GLE, as well as \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\), do not share this drawback. There is thus a trade-off between performance and efficiency.
Weighting of base classifiers
In general, some subproblems of PRA\(k\)EL are easier to solve than others. Thus, the performance of the individual LP classifiers within PRA\(k\)EL can differ, and a plain majority vote of these classifiers may be suboptimal. Inspired by GLE (Lo et al. 2014), we can further assign different weights to the LP classifiers to represent their importance. To achieve this, a linear combination of the classifiers is learned by minimizing the training cost.
Formally, given a new instance \(\mathbf {x}\), its prediction \(\hat{\mathbf {y}}\in \mathcal {Y}\) is produced by setting \(\hat{\mathbf {y}}[l]=1\) if and only if \(\sum \nolimits _{m=1}^M\alpha _mh_m(\mathbf {x})[l] > 0\), where these \(\alpha _m > 0\) are called the weights of the base classifiers. Accordingly, the assignment \(F_{m,n} = \sum \nolimits _{p=1}^mh_p(\mathbf {x}_n)\) in the previous section should be changed to \(F_{m,n} = \sum \nolimits _{p=1}^m\alpha _ph_p(\mathbf {x}_n)\).
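The weighted vote can be sketched as follows; `weighted_vote` is a hypothetical helper (not the authors' implementation), and each \(h_m(\mathbf {x})\) is encoded as a vector in \(\{-1, 0, +1\}^K\) that is zero outside the mth labelset:

```python
import numpy as np

def weighted_vote(h_preds, alphas):
    """Combine base-classifier predictions h_m(x) in {-1, 0, +1}^K:
    label l is predicted relevant iff sum_m alpha_m * h_m(x)[l] > 0.
    Illustrative sketch only."""
    score = sum(a * np.asarray(h) for a, h in zip(alphas, h_preds))
    return (score > 0).astype(int)

# Three classifiers over K = 4 labels; labels outside a classifier's
# labelset are voted 0 in this sketch.
h_preds = [[+1, -1, 0, 0], [0, +1, -1, 0], [+1, 0, 0, -1]]
print(weighted_vote(h_preds, alphas=[1.0, 1.0, 1.0]))  # [1 0 0 0]
```

Note how changing the weights can change the prediction: with `alphas=[1.0, 3.0, 1.0]`, the second classifier overrules the first on label 1.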
One approach for determining these weights is to solve an optimization problem after all the \(h_m\) are learned, just as GLE does. However, this overall optimization ignores the iterative nature of PRA\(k\)EL, where the value of \(F_{m,n}\) depends on \(\alpha _p\) for \(1 \le p < m\) in the mth iteration. We therefore iteratively determine \(\alpha _m\) by greedily minimizing the training cost. More precisely, let \(\alpha _1 = 1\) for simplicity, and for \(m \ge 2\), by regarding \(H_{m,n}\) as a function of \(\alpha _m\), we solve the following singlevariable optimization problem and define \(\alpha _m\) to be an optimal solution.
It is not easy to solve this type of problem in general. Nevertheless, since the objective function is piecewise constant, the optimization problem (8) can be solved by considering only finitely many \(\alpha \), and the remaining task is to obtain these candidate \(\alpha \). It suffices to find the discontinuities of the objective function, and therefore the zeros of each component of the function \(F_{m,n}(\alpha )\) for all n, denoted by a set \(E_{m, n}\subseteq \mathbb {R}\). Since \(F_{m,n}(\alpha ) = F_{m-1,n} + \alpha h_m(\mathbf {x}_n)\), we have \(E_{m,n} \subseteq \{\alpha \mid F_{m,n}(\alpha )[l]=0 \text{ for } \text{ some } l\in S_m\} = \{-F_{m-1,n}[l]/h_m(\mathbf {x}_n)[l]\mid l\in S_m\}\), implying \(|E_{m,n}| \le |S_m| = k\). If \((\cup _nE_{m,n})\cap \mathbb {R}_{>0} = \{a_1, \ldots , a_P\}\) with \(0< a_1< \cdots < a_P\), then clearly \(P \le Nk\), and the set of candidate \(\alpha \) can be chosen to be \(\{(a_i+a_{i+1})/2\mid 1 \le i < P\}\cup \{a_1/2, a_P+1\}\). This weighting strategy is called greedy weighting (GW).
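The candidate enumeration for GW can be sketched as follows, assuming the per-example score vectors \(F_{m-1,n}\) are stored as arrays; all helper names are hypothetical:

```python
import numpy as np

def candidate_alphas(F_prev, h_m, labelset):
    """Enumerate the finitely many candidate weights for the m-th
    base classifier (greedy weighting).  The objective is piecewise
    constant in alpha; its discontinuities lie at the zeros
    alpha = -F_{m-1,n}[l] / h_m(x_n)[l] for l in S_m, so midpoints
    between consecutive positive zeros suffice.  Sketch only."""
    zeros = set()
    for F, h in zip(F_prev, h_m):        # one entry per example n
        for l in labelset:
            if h[l] != 0:
                a = -F[l] / h[l]
                if a > 0:
                    zeros.add(a)
    a_sorted = sorted(zeros)
    if not a_sorted:
        return [1.0]
    cands = [a_sorted[0] / 2, a_sorted[-1] + 1]
    cands += [(a + b) / 2 for a, b in zip(a_sorted, a_sorted[1:])]
    return sorted(cands)

F_prev = [np.array([0.5, -1.0, 2.0])]    # one example, K = 3
h_m = [np.array([-1, +1, 0])]
print(candidate_alphas(F_prev, h_m, labelset=[0, 1]))
```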
Certainly, one can simplify the process of solving (8) by minimizing it over a fixed finite set, E, the candidate set of \(\alpha \), to ease the burden of computation and decrease the possibility of overfitting. For example, let \(E = \{i/P\mid 1 \le i \le P\}\cup \{\epsilon \}\) for some \(P \in \mathbb {N}\), where \(0<\epsilon <1/PM\) is a small number for tie breaking. This weighting strategy is called simple weighting (SW).
Analysis of time complexity
First, we analyze the training time complexity of PRA\(k\)EL without the weighting of the base classifiers. The trivial steps of Algorithm 1 that form the subproblems take time at most O(N) multiplied by the time needed to calculate the reference label \(\tilde{\mathbf {y}}_n\) and the cost \(\mathbf {c}_n\). The more time-consuming step of PRA\(k\)EL, similar to RA\(k\)EL, depends on the time spent by the CSMCC base classifier, denoted \(T_0(N, d, K^{\prime })\) for N examples, d features, and \(K^{\prime }\) classes. The empirical results in the next section demonstrate that it suffices to let each label appear in a fixed number of labelsets on average. That is, only \(M = O(K/k)\) iterations are needed, and hence the practical training time of PRA\(k\)EL is \(T_0(N, d, 2^k)\cdot O(K/k)\), which is linear in K. In contrast, as discussed in Sect. 3, the training time of CFT (Li and Lin 2014) is \(O(NK^2)\) multiplied by the time needed to calculate the cost \(\mathbf {c}_n\), plus O(K) calls to the base classifier. The complexity analysis reveals the asymptotic efficiency of PRA\(k\)EL over CFT.
When the weighting is included, in each iteration GW (which is generally more time-consuming than SW) needs O(k) time to determine the zeros of each \(F_{m, n}\), and evaluating the goodness of all candidate \(\alpha \) can be done within O(Nk), multiplied by the time needed to calculate \(\mathbf {c}_n\). That is, PRA\(k\)EL-GW with \(M = O(K/k)\) iterations needs an additional \(O(NK)\) multiplied by the time needed to calculate the cost \(\mathbf {c}_n\). This additional time is still asymptotically smaller than the training time of CFT.
Experiment
Experimental setup
The experiments were conducted on seven benchmark datasets (Tsoumakas et al. 2011).^{Footnote 3} These datasets were chosen for their diversity of domains and their popularity in the multi-label research community. Their basic statistics are provided in Table 2.
For statistical significance, all results reported in Sect. 5.2 were averaged over 30 independent runs. For each run, we randomly sampled 75% of the dataset for training and used the remaining data for testing. One third of the training set was reserved for validation.
We compared four variants of the proposed method, namely \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\), PRA\(k\)EL, PRA\(k\)EL-GW and PRA\(k\)EL-SW, with three types of methods: (a) labelset-related methods, including RA\(k\)EL (Tsoumakas and Vlahavas 2007) and CS-RA\(k\)EL (Lo 2013); (b) state-of-the-art CSMLC methods, including EPCC (Dembczynski et al. 2010, 2011, 2012) and CFT (Li and Lin 2014); and (c) a state-of-the-art cost-insensitive MLC method, ML-\(k\)NN (Zhang and Zhou 2007). All hyperparameters of the compared methods and the base classifiers were selected by grid search on the validation set. For our method and the labelset-related methods, the parameter k was selected from \(\{2, \ldots , 9\}\), and for each k, the maximum M was fixed to 10K/k. The ensemble size of EPCC was selected from \(\{1, \ldots , 7\}\) for efficiency, and on datasets with more than 20 labels, the Monte Carlo sampling technique was employed with a sample size of 200 (Dembczynski et al. 2012). For CFT, the number of internal iterations was selected from \(\{2, \ldots , 8\}\), as suggested by the original authors.^{Footnote 4}
For the base classifier of EPCC, we employed the logistic regression implemented in LIBLINEAR (Fan et al. 2008). For the methods requiring a regular binary or multiclass classifier, we used linear one-versus-all support vector machines (SVMs) implemented in LIBLINEAR. Our method was coupled with linear RED-OSSVR (Tu and Lin 2010).^{Footnote 5} The regularization parameter in the linear SVMs and RED-OSSVR was also selected by grid search on the validation set. The cost functions considered in the experiments are all derived from loss functions, as explained in Sect. 2.
Results and discussion
Tables 3, 4 and 5 present the results of the four variants of our method, EPCC, CFT, RA\(k\)EL and ML-\(k\)NN in terms of Hamming, ranking and F1 loss. The best result for each dataset is marked in bold.
Comparison of variants of PRA\(k\)EL
In this subsection, we compare the four variants of the proposed method, namely \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\), PRA\(k\)EL, PRA\(k\)EL-GW and PRA\(k\)EL-SW. We first compare \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\) and PRA\(k\)EL to understand the difference between using the true and the predicted label vectors as the references. Recall that \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\) and PRA\(k\)EL are theoretically equivalent under Hamming and ranking loss; it is therefore no coincidence that the results of these two variants in Tables 3 and 4 are exactly the same. Table 5 shows that PRA\(k\)EL achieves lower costs than \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\) in terms of F1 loss on all the datasets. We also present in Table 6 the results of the Student's t-test at a significance level of 0.05 on two pairs of variants. The comparison of PRA\(k\)EL and \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\) under F1 loss reveals that PRA\(k\)EL is significantly superior on five datasets. This demonstrates the benefit of exploiting previous predictions, and is consistent with the theoretical results in Theorems 5 and 6. Thus, for the remaining experiments, the results of \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\) are omitted.
Next, we compare the three weighting strategies, i.e., uniform, greedy and simple weighting. From Table 6, PRA\(k\)EL is overall competitive with PRA\(k\)EL-GW, although under ranking loss the performance of PRA\(k\)EL-GW is slightly better. In addition, from the last comparison we see that PRA\(k\)EL-SW is never outperformed by PRA\(k\)EL under these three loss functions. For Hamming loss, there is no significant difference between PRA\(k\)EL and PRA\(k\)EL-SW; for ranking loss and F1 loss, however, PRA\(k\)EL-SW performs slightly better.
Since the last two variants greedily minimize the training costs in every iteration, their training costs are expected to be much lower than PRA\(k\)EL's. Table 7 and Figure 1, which show the training costs in terms of F1 loss, verify this deduction; under other loss functions we observe similar behavior. The difference between the two weighted variants stems from the fact that, for PRA\(k\)EL-GW, the weights of the classifiers are determined by an unconstrained optimization problem, while for PRA\(k\)EL-SW, the weights are restricted to the candidate set. From a holistic point of view, the candidate set acts as a regularizer, which prevents PRA\(k\)EL-SW from excessively overfitting the training set. In conclusion, among the four variants of our method, PRA\(k\)EL-SW is the most stable.
Finally, we demonstrate the effectiveness of the ensemble. Figure 2 shows the training and test costs versus the number of iterations M on the yeast dataset. All the costs decrease as M grows, and the behavior on the other datasets is similar.
Comparison with state-of-the-art methods
We compare our method with EPCC, CFT, RA\(k\)EL and ML-\(k\)NN in terms of Hamming, ranking and F1 loss. Table 3 shows the performance of each method under Hamming loss. RA\(k\)EL and ML-\(k\)NN each achieve the best performance on one dataset; on the other datasets, the method with the lowest cost is either PRA\(k\)EL or EPCC. Overall, all the methods perform fairly well under Hamming loss.
The results for the other two loss functions are shown in Tables 4 and 5. In terms of ranking loss, EPCC is the most stable method, outperforming the others on five datasets, while the proposed method reaches the lowest cost on the remaining two. Under F1 loss, our method is superior to the others on half of the datasets, and EPCC has the best performance on two datasets. In addition, under these two loss functions, the two cost-insensitive methods, RA\(k\)EL and ML-\(k\)NN, are not competitive with any of the cost-sensitive methods. This observation demonstrates the effectiveness of cost sensitivity.
To compare all the classifiers over multiple datasets, we conducted the Friedman test with the corresponding Nemenyi post-hoc test (Demšar 2006). For the three loss functions, the p-values of the Friedman test were \(6.6 \times 10^{-3}\), \(3.6 \times 10^{-5}\) and \(8.7 \times 10^{-6}\), respectively. Therefore, the null hypothesis was rejected at \(\alpha = 0.05\), and the post-hoc test was performed afterwards. The results of the Nemenyi test, shown in Table 8, agree with the discussion in the last paragraph: the proposed method and EPCC outperform the two cost-insensitive methods, RA\(k\)EL and ML-\(k\)NN. However, according to the Nemenyi test, the differences among the three cost-sensitive methods are not significant. To compare our method, EPCC and CFT further, we conducted the pairwise Student's t-test at a significance level of 0.05 for each dataset. Table 9 shows the number of datasets on which PRA\(k\)EL is statistically superior, comparable, or inferior to each of the other methods. We conclude that under these three metrics, PRA\(k\)EL performs significantly better than both RA\(k\)EL and ML-\(k\)NN and generally better than CFT. Compared with EPCC, PRA\(k\)EL is competitive under Hamming and F1 loss, but performs slightly worse under ranking loss.
Comparison with EPCC and CFT under composite loss
To demonstrate our method's capability to optimize general metrics, we defined the composite loss as \(L_c = 0.8 L_H + 0.2 L_F\), where \(L_H\) and \(L_F\) are the functions of Hamming and F1 loss, respectively. A similar loss function was used in one experiment on CFT (Li and Lin 2014).
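Such a composite criterion requires no special inference rule on our side, since PRA\(k\)EL only needs to evaluate it; a minimal sketch of the loss itself (hypothetical helper names):

```python
def hamming_loss(y, yhat):
    return sum(a != b for a, b in zip(y, yhat)) / len(y)

def f1_loss(y, yhat):
    inter = sum(a and b for a, b in zip(y, yhat))
    total = sum(y) + sum(yhat)
    return 0.0 if total == 0 else 1.0 - 2.0 * inter / total

def composite_loss(y, yhat):
    # L_c = 0.8 * L_H + 0.2 * L_F, as in this experiment
    return 0.8 * hamming_loss(y, yhat) + 0.2 * f1_loss(y, yhat)

print(composite_loss([1, 1, 0, 0], [1, 0, 0, 0]))
```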
Because there is no inference rule for EPCC under this composite loss yet, we used the rules for both Hamming and F1 loss. The results are shown in Table 10, where EPCC-Ham denotes EPCC with the inference rule for Hamming loss, and EPCC-F1 the one with the rule for F1 loss. Since CFT is a general cost-sensitive method, we also included it in this experiment. The null hypothesis of the Friedman test was rejected at \(\alpha = 0.05\) with a p-value of \(4.6 \times 10^{-4}\), and the average rank of PRA\(k\)EL is 1.14. According to the Nemenyi test, the performance of PRA\(k\)EL is significantly better than that of both EPCC-Ham and EPCC-F1. In addition, according to the results of the t-test in Table 11, the proposed method is superior to CFT except on the enron dataset.
Comparison with CS-RA\(k\)EL and GLE
Recall that under their problem setups, CS-RA\(k\)EL and GLE can handle only weighted Hamming loss, as defined in Sect. 2. Therefore, in this subsection we compare PRA\(k\)EL with these two methods in terms of Hamming loss and weighted Hamming loss. For each dataset, each component of the weight, \(\mathbf {w}[l]\), was drawn independently from the uniform distribution over [0, 1], and the weight \(\mathbf {w}\) was then normalized so that \(\sum _{l=1}^K\mathbf {w}[l] = 1\). The results are shown in Tables 12 and 13. PRA\(k\)EL reaches lower costs than both CS-RA\(k\)EL and GLE on nearly all the datasets, and it is clear from these two tables that PRA\(k\)EL performs significantly better than the other methods in terms of both loss functions. For completeness, we also provide the results of the Student's t-test in Table 14. The reason for such a significant improvement is that our method considers not only the differences between misclassification costs within each example, but also the varying costs among all the examples, whereas CS-RA\(k\)EL and GLE take only the latter into account.
Conclusion
We proposed an efficient cost-sensitive extension of RA\(k\)EL, named PRA\(k\)EL, which meets the needs of different MLC applications by taking the evaluation metric into account. Experimental results demonstrate that PRA\(k\)EL is competitive with existing methods designed for certain specific metrics, and frequently outperforms them under general loss functions. The generality of PRA\(k\)EL allows it to optimize arbitrary example-based evaluation metrics without additional knowledge, inference rules, or approximation, and it is thus more suitable for solving real-world problems.
Notes
1. \(\llbracket {}\cdot \rrbracket {}\) is the indicator function.
2. \(\Vert \cdot \Vert _1\) is the \(\ell _1\) norm.
3. They were obtained from http://mulan.sourceforge.net/datasets-mlc.html.
4. Because of its efficiency issues, we restricted the maximum number of iterations to 4 on datasets with \(K > 20\).
5. RED-OSSVR can be shown to be equivalent to one-versus-all SVMs for cost functions \(\mathbf {c}_n(\hat{\mathbf {y}}) = \llbracket {}\hat{\mathbf {y}}\ne \mathbf {y}_n\rrbracket {}\).
6. The normalizer of ranking loss was defined in Sect. 2.
References
Abe, N., Zadrozny, B., & Langford, J. (2004). An iterative method for multi-class cost-sensitive learning. In Proceedings of the 10th ACM SIGKDD international conference on knowledge discovery and data mining (pp. 3–11).
Beygelzimer, A., Langford, J., & Ravikumar, P. (2009). Error-correcting tournaments. In Proceedings of the 20th international conference on algorithmic learning theory (pp. 247–262).
Boutell, M. R., Luo, J., Shen, X., & Brown, C. M. (2004). Learning multi-label scene classification. Pattern Recognition, 37(9), 1757–1771.
Clare, A., & King, R. D. (2001). Knowledge discovery in multi-label phenotype data. In L. De Raedt & A. Siebes (Eds.), Principles of data mining and knowledge discovery (pp. 42–53). Berlin, Heidelberg: Springer.
Dembczynski, K., Cheng, W., & Hüllermeier, E. (2010). Bayes optimal multilabel classification via probabilistic classifier chains. In Proceedings of the 27th international conference on machine learning (pp. 279–286).
Dembczynski, K., Waegeman, W., & Hüllermeier, E. (2012). An analysis of chaining in multi-label classification. In Proceedings of the 21st European conference on artificial intelligence (pp. 294–299).
Dembczynski, K. J., Waegeman, W., Cheng, W., & Hüllermeier, E. (2011). An exact algorithm for F-measure maximization. In Advances in neural information processing systems (pp. 1404–1412).
Demšar, J. (2006). Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research, 7, 1–30.
Doppa, J. R., Yu, J., Ma, C., Fern, A., & Tadepalli, P. (2014). HC-search for multi-label prediction: An empirical study. In Proceedings of the 28th AAAI conference on artificial intelligence (pp. 1795–1801).
Fan, R.-E., Chang, K.-W., Hsieh, C.-J., Wang, X.-R., & Lin, C.-J. (2008). LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9, 1871–1874.
Fan, W., Stolfo, S. J., Zhang, J., & Chan, P. K. (1999). AdaCost: Misclassification cost-sensitive boosting. In Proceedings of the 16th international conference on machine learning (pp. 97–105).
Ferng, C.-S., & Lin, H.-T. (2013). Multi-label classification using error-correcting codes of hard or soft bits. IEEE Transactions on Neural Networks and Learning Systems, 24(11), 1888–1900.
Freund, Y., & Schapire, R. E. (1999). A short introduction to boosting. Journal of Japanese Society for Artificial Intelligence, 14(5), 771–780.
Goncalves, E. C., Plastino, A., & Freitas, A. A. (2013). A genetic algorithm for optimizing the label ordering in multi-label classifier chains. In Proceedings of the 25th international conference on tools with artificial intelligence (pp. 469–476).
Hardoon, D. R., Szedmak, S., & Shawe-Taylor, J. (2004). Canonical correlation analysis: An overview with application to learning methods. Neural Computation, 16(12), 2639–2664.
Hsu, D., Kakade, S., Langford, J., & Zhang, T. (2009). Multi-label prediction via compressed sensing. In Y. Bengio, D. Schuurmans, J. D. Lafferty, C. K. I. Williams, & A. Culotta (Eds.), Advances in neural information processing systems (pp. 772–780). New York: Curran Associates Inc.
Li, C.-L., & Lin, H.-T. (2014). Condensed filter tree for cost-sensitive multi-label classification. In Proceedings of the 31st international conference on machine learning (pp. 423–431).
Lo, H.-Y. (2013). Cost-sensitive multi-label classification with applications. Ph.D. thesis, National Taiwan University.
Lo, H.-Y., Wang, J.-C., Wang, H.-M., & Lin, S.-D. (2011). Cost-sensitive multi-label learning for audio tag annotation and retrieval. IEEE Transactions on Multimedia, 13(3), 518–529.
Lo, H.-Y., Lin, S.-D., & Wang, H.-M. (2014). Generalized k-labelsets ensemble for multi-label and cost-sensitive classification. IEEE Transactions on Knowledge and Data Engineering, 26(7), 1679–1691.
Qi, G.-J., Hua, X.-S., Rui, Y., Tang, J., Mei, T., & Zhang, H.-J. (2007). Correlative multi-label video annotation. In Proceedings of the 15th international conference on multimedia (pp. 17–26).
Read, J., Pfahringer, B., Holmes, G., & Frank, E. (2011). Classifier chains for multi-label classification. Machine Learning, 85(3), 333–359.
Read, J., Martino, L., & Luengo, D. (2014). Efficient Monte Carlo methods for multi-dimensional learning with classifier chains. Pattern Recognition, 47(3), 1535–1546.
Read, J., Martino, L., Olmos, P. M., & Luengo, D. (2015). Scalable multi-output label prediction: From classifier chains to classifier trellises. Pattern Recognition, 48(6), 2096–2109.
Schapire, R. E., & Singer, Y. (2000). BoosTexter: A boosting-based system for text categorization. Machine Learning, 39(2), 135–168.
Spyromitros-Xioufis, E., Tsoumakas, G., Groves, W., & Vlahavas, I. (2016). Multi-target regression via input space expansion: Treating targets as inputs. Machine Learning, 104(1), 55–98.
Sun, Y., Kamel, M. S., Wong, A. K. C., & Wang, Y. (2007). Cost-sensitive boosting for classification of imbalanced data. Pattern Recognition, 40(12), 3358–3378.
Tai, F., & Lin, H.-T. (2012). Multilabel classification with principal label space transformation. Neural Computation, 24(9), 2508–2542.
Trohidis, K., Tsoumakas, G., Kalliris, G., & Vlahavas, I. P. (2008). Multi-label classification of music into emotions. In Proceedings of the 9th international conference on music information retrieval (pp. 325–330).
Tsochantaridis, I., Joachims, T., Hofmann, T., & Altun, Y. (2005). Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 6, 1453–1484.
Tsoumakas, G., & Katakis, I. (2007). Multi-label classification: An overview. International Journal of Data Warehousing and Mining, 3(3), 1–13.
Tsoumakas, G., & Vlahavas, I. (2007). Random k-labelsets: An ensemble method for multilabel classification. European Conference on Machine Learning, 2007, 406–417.
Tsoumakas, G., Katakis, I., & Vlahavas, I. (2010). Mining multi-label data. In O. Maimon & L. Rokach (Eds.), Data mining and knowledge discovery handbook (pp. 667–685). Springer US.
Tsoumakas, G., Spyromitros-Xioufis, E., Vilcek, J., & Vlahavas, I. (2011). MULAN: A Java library for multi-label learning. Journal of Machine Learning Research, 12, 2411–2414.
Tu, H.-H., & Lin, H.-T. (2010). One-sided support vector regression for multiclass cost-sensitive classification. In Proceedings of the 27th international conference on machine learning (pp. 1095–1102).
Zadrozny, B., Langford, J., & Abe, N. (2003). Cost-sensitive learning by cost-proportionate example weighting. In Proceedings of the 3rd IEEE international conference on data mining (pp. 435–442).
Zhang, M.-L., & Zhou, Z.-H. (2006). Multilabel neural networks with applications to functional genomics and text categorization. IEEE Transactions on Knowledge and Data Engineering, 18(10), 1338–1351.
Zhang, M.-L., & Zhou, Z.-H. (2007). ML-KNN: A lazy learning approach to multi-label learning. Pattern Recognition, 40(7), 2038–2048.
Zhou, Z.-H., & Liu, X.-Y. (2010). On multi-class cost-sensitive learning. Computational Intelligence, 26(3), 232–257.
Editors: Bob Durrant, KeeEung Kim, Geoff Holmes, Stephen Marsland, ZhiHua Zhou and Masashi Sugiyama.
Appendix: Proof
Lemma 1
Let \(L_r\) be the function of ranking loss and \(\mathbf {y}\in \mathcal {Y}=\{0, 1\}^K\). Then, there exists a unique \(\mathbf {w}\in {\mathbb {R}_{\ge 0}}^K\) such that \(L_r(\mathbf {y}, \cdot ) = L_{H,\mathbf {w}}(\mathbf {y}, \cdot )\), where \(L_{H,\mathbf {w}}\) is the function of weighted Hamming loss with respect to \(\mathbf {w}\).
Proof
Let \(p = |\{1 \le k \le K\mid \mathbf {y}[k] = 0\}|\). If \(\mathbf {y}= \mathbf {0}\) or \(\mathbf {1}\), then \(L_r(\mathbf {y}, \hat{\mathbf {y}}) = 0\) for all \(\hat{\mathbf {y}}\), and hence the proof is trivial. Therefore, for now we assume \(0< p < K\). Since the normalizer^{Footnote 6} \(|R(\mathbf {y})| = |\{(k,l)\mid \mathbf {y}[k]<\mathbf {y}[l]\}| = p(K-p)\), we write
Because \(\mathbf {y}[k]\) and \(\hat{\mathbf {y}}[k]\) are either 0 or 1, the second term in the right hand side of the last equation is
and similarly, the third term is
In addition, by interchanging k and l, the first term can be written as
Hence, combining the first term of (11) with (10), and the second term with (9), we have
where \(\mathbf {w}[k] = \frac{1}{2}\sum _l\llbracket {}\mathbf {y}[l]\ne \mathbf {y}[k]\rrbracket {}/(p(K-p))\). The uniqueness follows immediately from the above argument.\(\square \)
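As a sanity check, the identity of Lemma 1 can be verified by exhaustive enumeration for a small K (a sketch, not the authors' code, assuming tied pairs in the ranking contribute 1/2 each):

```python
from itertools import product

def ranking_loss(y, yhat):
    """Ranking loss of a binary prediction, with tied pairs counted
    1/2 (the convention under which the identity holds)."""
    K = len(y)
    p = sum(1 for v in y if v == 0)
    total = 0.0
    for k in range(K):
        for l in range(K):
            if y[k] < y[l]:                 # pair (k, l) with y[k] < y[l]
                if yhat[k] > yhat[l]:
                    total += 1.0
                elif yhat[k] == yhat[l]:
                    total += 0.5
    return total / (p * (K - p))

def weighted_hamming(y, yhat):
    """Weighted Hamming loss with w[k] from the proof of Lemma 1."""
    K = len(y)
    p = sum(1 for v in y if v == 0)
    w = [0.5 * sum(y[l] != y[k] for l in range(K)) / (p * (K - p))
         for k in range(K)]
    return sum(w[k] * (yhat[k] != y[k]) for k in range(K))

y = (1, 0, 0, 1)
for yhat in product((0, 1), repeat=len(y)):
    assert abs(ranking_loss(y, yhat) - weighted_hamming(y, yhat)) < 1e-12
```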
Lemma 2
Let \(L_{H,\mathbf {w}}\) be the function of weighted Hamming loss and S be a k-labelset. For any subsets \(\mathbf {y}_0^{\prime }\) and \(\mathbf {y}_1^{\prime }\) of S, \(L_{H,\mathbf {w}}(\mathbf {y}, \mathbf {y}_0^{\prime }\cup \tilde{\mathbf {y}}[S^c]) - L_{H,\mathbf {w}}(\mathbf {y}, \mathbf {y}_1^{\prime }\cup \tilde{\mathbf {y}}[S^c])\) is independent of \(\tilde{\mathbf {y}}\in \{0, 1\}^K\).
Proof
By induction, we may assume that \(\mathbf {y}_0^{\prime }\) and \(\mathbf {y}_1^{\prime }\) differ by only the jth bit. That is, \(\mathbf {y}_0^{\prime }[j] = 0\), \(\mathbf {y}_1^{\prime }[j] = 1\), and \(\mathbf {y}_0^{\prime }[l] = \mathbf {y}_1^{\prime }[l]\) for all \(l \ne j\). It then suffices to prove the case where \(k = 1\).
Since \(\mathbf {y}_0^{\prime }[j] = 0\), \(L_{H,\mathbf {w}}(\mathbf {y}, \mathbf {y}_0^{\prime }\cup \tilde{\mathbf {y}}[S^c]) = \sum _{l\ne j}\mathbf {w}[l]\cdot \llbracket {}\tilde{\mathbf {y}}[l] \ne \mathbf {y}[l]\rrbracket {} + \mathbf {w}[j]\cdot \llbracket {}\mathbf {y}[j] \ne 0\rrbracket {}\), and similarly \(L_{H,\mathbf {w}}(\mathbf {y}, \mathbf {y}_1^{\prime }\cup \tilde{\mathbf {y}}[S^c]) = \sum _{l\ne j}\mathbf {w}[l]\cdot \llbracket {}\tilde{\mathbf {y}}[l] \ne \mathbf {y}[l]\rrbracket {} + \mathbf {w}[j]\cdot \llbracket {}\mathbf {y}[j] \ne 1\rrbracket {}\). Therefore, the difference is \(\mathbf {w}[j]\cdot (\llbracket {}\mathbf {y}[j] \ne 0\rrbracket {} - \llbracket {}\mathbf {y}[j] \ne 1\rrbracket {}) = \pm \mathbf {w}[j]\), which is clearly independent of \(\tilde{\mathbf {y}}\). \(\square \)
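Lemma 2 can likewise be confirmed numerically for a small case (a sketch with hypothetical values for the weights, label vector, and labelset):

```python
from itertools import product

def weighted_hamming(y, yhat, w):
    return sum(wi * (a != b) for wi, a, b in zip(w, y, yhat))

K = 4
S = [0, 1]                       # the labelset (first two labels)
w = [0.4, 0.3, 0.2, 0.1]
y = [1, 0, 1, 1]
y0, y1 = [0, 0], [1, 1]          # two assignments on S

diffs = set()
for rest in product((0, 1), repeat=K - len(S)):
    a = weighted_hamming(y, y0 + list(rest), w)
    b = weighted_hamming(y, y1 + list(rest), w)
    diffs.add(round(a - b, 12))
assert len(diffs) == 1           # independent of the bits outside S
```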
Wu, Y.-P., Lin, H.-T. Progressive random k-labelsets for cost-sensitive multi-label classification. Mach Learn 106, 671–694 (2017). https://doi.org/10.1007/s10994-016-5600-x
Keywords
 Machine learning
 Multi-label classification
 Loss function
 Cost-sensitive learning
 Labelset
 Ensemble method