1 Introduction

Multi-label classification (MLC) extends traditional multi-class classification by allowing each instance to be associated with a set of relevant labels. For example, in text classification, a document (instance) can belong to several topics (labels). Given a set of instances as well as their relevant labels, the goal of an MLC method is to predict the relevant labels of a new instance. Recently, MLC has attracted much research attention with a wide range of applications including music tag annotation (Trohidis et al. 2008; Lo et al. 2011), image classification (Boutell et al. 2004), and video classification (Qi et al. 2007).

In contrast to multi-class classification, one important characteristic of MLC is the possible correlations between different labels. Many approaches have been proposed to exploit the correlations. Chaining methods learn a label by treating other labels as features (Read et al. 2011; Dembczynski et al. 2010). Labelset-based methods learn several labels jointly (Tsoumakas et al. 2010; Tsoumakas and Vlahavas 2007; Lo et al. 2014; Lo 2013). Other methods transform the space of labels to capture the correlations (Hsu et al. 2009; Tai and Lin 2012; Hardoon et al. 2004).

A key challenge of MLC is to automatically adapt a method to the evaluation criterion of interest. In real-world applications, different criteria are often required to evaluate the performance of an MLC method. For example, Hamming loss measures the proportion of the misclassified labels to the total number of labels; the F1 score, originating from information retrieval, is the harmonic mean of the precision and recall; subset 0/1 loss requires all labels to be correctly predicted. Because of the different natures of those criteria, a method that performs well under one criterion may not be well-suited for other criteria. It is therefore important to design general MLC methods that take the evaluation criterion into account, either in the training or prediction stage. Since the evaluation criterion, or metric, determines the cost for misclassifying an instance, this type of problem is generally called cost-sensitive multi-label classification (CSMLC) (Lo et al. 2014; Li and Lin 2014), which is formally defined in Sect. 2.

We shall explain in Sect. 3 that most existing MLC methods either aim to optimize one specific evaluation metric or require extra effort to be adapted to each metric. For example, binary relevance (BR) (Tsoumakas et al. 2010) minimizes Hamming loss by learning each label independently. Label powerset (LP) (Tsoumakas et al. 2010) minimizes subset 0/1 loss by transforming the MLC problem into a multi-class classification problem with a huge number of hyper-classes. The well-known random k-labelsets (RA\(k\)EL) (Tsoumakas and Vlahavas 2007) method instead solves many smaller multi-class classification problems for computational efficiency, but it is only loosely connected to subset 0/1 loss (Ferng and Lin 2013).

There are currently few methods for dealing with general CSMLC problems (Dembczynski et al. 2010; Tsochantaridis et al. 2005; Li and Lin 2014; Doppa et al. 2014). RA\(k\)EL has been extended to cost-sensitive random k-labelsets (CS-RA\(k\)EL) (Lo 2013) and generalized k-labelsets ensemble (GLE) (Lo et al. 2014) to handle a weighted version of Hamming loss, but not general metrics. Probabilistic classifier chain (Dembczynski et al. 2010) requires designing an efficient inference rule with respect to the metric, and covers many, but not all, of the metrics of interest (Li and Lin 2014). Condensed filter tree (Li and Lin 2014) is a chaining method that takes any evaluation metric into account during the training stage, but its training time is quadratic in the number of labels. The structured support vector machine (Tsochantaridis et al. 2005) can also handle arbitrary metrics, but it relies on solving a sophisticated optimization problem depending on the metric and is thus also inefficient. To the best of our knowledge, no existing CSMLC methods are both general and efficient.

In this work, we design a general and efficient CSMLC method in Sect. 4. This novel method, named progressive random \(k\)-labelsets (PRA\(k\)EL), is extended from RA\(k\)EL and hence inherits its efficiency. In particular, PRA\(k\)EL in practice enjoys training time that is linear in the number of labels. Moreover, PRA\(k\)EL is able to optimize any example-based metric by modifying the training stage of RA\(k\)EL. More specifically, RA\(k\)EL reduces the original problem to many regular multi-class problems and ignores the original cost information; PRA\(k\)EL reduces the CSMLC problem to many cost-sensitive multi-class ones by transferring the cost information to the sub-problems. The transferring task is non-trivial, however, because each sub-problem involves only a subset of the labels of the original problem. We therefore introduce the notion of reference labels to determine the costs in the sub-problems. We propose two strategies for defining the reference labels, each with its own theoretical and empirical advantages and disadvantages.

We conducted experiments on seven benchmark datasets with various sizes and domains. The experimental results in Sect. 5 show that PRA\(k\)EL is competitive with state-of-the-art MLC methods under the specific metrics associated with the methods. Furthermore, in terms of general metrics, PRA\(k\)EL usually outperforms other methods. The results demonstrate that the proposed method is indeed more general, and more suitable for solving real-world problems.

2 Problem setup

In CSMLC, we denote an instance by a vector \(\mathbf {x}\in \mathcal {X} = \mathbb {R}^d\) and the relevant labels of \(\mathbf {x}\) by a set \(Y \subseteq \{1, 2, \ldots , K\}\), where K is the total number of labels. Equivalently, this set of labels can be represented by a bit vector \(\mathbf {y}\in \mathcal {Y}=\{0, 1\}^K\), where the l-th component \(\mathbf {y}[l]\) is 1 if and only if the l-th label is relevant, i.e., \(l \in Y\). Here, \(\mathcal {X}\) and \(\mathcal {Y}\) are called the input space and label space, respectively; the pair \((\mathbf {x}, \mathbf {y})\) is called an example. In this work, we consider a particular CSMLC setup that allows each example to carry its own cost information. The example-based setup, which assumes example-dependent costs, is more general than the setup with label-dependent costs, in which all examples share the same cost functions. The more general setup makes it possible to express the importance of different instances easily through embedding the importance in the example-dependent costs, and has been considered in several studies of cost-sensitive learning (Fan et al. 1999; Zadrozny et al. 2003; Sun et al. 2007). Formally, given a training set \(\{(\mathbf {x}_n, \mathbf {y}_n, \mathbf {c}_n)\}_{n=1}^N\) consisting of N examples, where \(\mathbf {c}_n:\mathcal {Y}\rightarrow \mathbb {R}_{\ge 0}\) is a non-negative cost function and each \((\mathbf {x}_n, \mathbf {y}_n, \mathbf {c}_n)\) is drawn independently from an unknown distribution \(\mathcal {D}\), the goal of CSMLC is to learn a classifier \(h:\mathcal {X}\rightarrow \mathcal {Y}\) such that the expected cost \(\mathrm {E}_{(\mathbf {x}, \mathbf {y}, \mathbf {c})\sim \mathcal {D}}[\mathbf {c}(h(\mathbf {x}))]\) is small.
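For concreteness, the following Python sketch (our own illustrative notation, not part of any library) represents a CSMLC example as a triple and estimates the learning objective empirically:

```python
import numpy as np

# Illustrative sketch of the CSMLC setup: an example is a triple
# (x, y, c), where x is a feature vector, y is a 0/1 label vector of
# length K, and c is a callable mapping any vector in {0,1}^K to a
# non-negative cost.

def empirical_cost(h, examples):
    """Empirical estimate of E[c(h(x))] for a classifier h."""
    return float(np.mean([c(h(x)) for x, _y, c in examples]))
```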

Note that the example-based setup cannot cover all popular evaluation criteria in multi-label classification. For instance, the micro-F1 and macro-F1 criteria, which are defined over a set of label vectors rather than a single \(\mathbf {y}\), cannot be expressed as example-dependent cost functions. Nonetheless, as highlighted by earlier CSMLC works (Li and Lin 2014), studying the example-based setup can be viewed as an intermediate step toward those more complicated criteria.

Two remarks about this setup are in order. First, for a classifier h, since \(\mathbf {c}(h(\mathbf {x}))\) is being minimized, it is natural to assume \(\mathbf {c}\) has a minimum of 0 at \(\mathbf {y}\), the true label vector of \(\mathbf {x}\). With this assumption, although \(\mathbf {y}\) does not appear in the learning goal, its information is implicitly stored in the cost function. Second, we can similarly define the problem of cost-sensitive multi-class classification (CSMCC) by replacing the label space \(\mathcal {Y}\) with \(\{1, 2, \ldots , K\}\), which stands for K different classes. In fact, this setup is widely adopted in many existing works (Tu and Lin 2010; Zhou and Liu 2010; Abe et al. 2004).

Modern CSMCC works (Zhou and Liu 2010) allow any cost function to be taken into account flexibly, based on application needs. While the proposed method shares the same flexibility in its derivation, we consider a more realistic CSMLC scenario in the experiments. In particular, many CSMLC problems are actually associated with a global, label-dependent cost \(L:\mathcal {Y}\times \mathcal {Y}\rightarrow \mathbb {R}\), typically called a loss function, where \(L(\mathbf {y}, \hat{\mathbf {y}})\) is the loss of predicting \(\mathbf {y}\) as \(\hat{\mathbf {y}}\). Those problems aim to learn a classifier \(h:\mathcal {X}\rightarrow \mathcal {Y}\) such that \(\mathrm {E}[L(\mathbf {y}, h(\mathbf {x}))]\) is small (Dembczynski et al. 2010; Li and Lin 2014). The aim can be easily expressed in our setup by assigning

$$\begin{aligned} \mathbf {c}_n\left( \hat{\mathbf {y}}\right) = L\left( \mathbf {y}_n, \hat{\mathbf {y}}\right) . \end{aligned}$$
(1)

We focus on CSMLC with such loss functions to demonstrate the applicability of the proposed method and to make a fair comparison with existing CSMLC methods (Li and Lin 2014; Dembczynski et al. 2010). Popular loss functions include

  • Hamming loss

    $$\begin{aligned} L_H\left( \mathbf {y}, \hat{\mathbf {y}}\right) = \frac{1}{K}\sum _{l=1}^K\llbracket {}\hat{\mathbf {y}}[l] \ne \mathbf {y}[l]\rrbracket {}; \end{aligned}$$
  • weighted Hamming loss with respect to the weight \(\mathbf {w}\in {\mathbb {R}_{\ge 0}}^K\)

    $$\begin{aligned} L_{H,\mathbf {w}}\left( \mathbf {y}, \hat{\mathbf {y}}\right) = \sum _{l=1}^K\mathbf {w}[l]\cdot \llbracket {}\hat{\mathbf {y}}[l] \ne \mathbf {y}[l]\rrbracket {}; \end{aligned}$$
  • ranking loss

    $$\begin{aligned} L_r\left( \mathbf {y}, \hat{\mathbf {y}}\right) = \frac{1}{R(\mathbf {y})}\sum _{(k,l):\mathbf {y}[k]<\mathbf {y}[l]}\left( \llbracket {}\hat{\mathbf {y}}[k]>\hat{\mathbf {y}}[l]\rrbracket {}+\frac{1}{2}\llbracket {}\hat{\mathbf {y}}[k]=\hat{\mathbf {y}}[l]\rrbracket {}\right) , \end{aligned}$$

    where \(R(\mathbf {y}) = |\{(k,l)\mid \mathbf {y}[k]<\mathbf {y}[l]\}|\) is a normalizer;

  • F1 loss

    $$\begin{aligned} L_F\left( \mathbf {y}, \hat{\mathbf {y}}\right) = 1 - \frac{2\mathbf {y}\cdot \hat{\mathbf {y}}}{\Vert \mathbf {y}\Vert _1+\Vert \hat{\mathbf {y}}\Vert _1}, \end{aligned}$$

    which is one minus the F1 score;

  • subset 0/1 loss

    $$\begin{aligned} L_s\left( \mathbf {y}, \hat{\mathbf {y}}\right) = \llbracket {}\hat{\mathbf {y}}\ne \mathbf {y}\rrbracket {}. \end{aligned}$$

For those loss functions defined above, we follow the convention that when the denominator is zero, the loss is defined as zero.
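To make these definitions concrete, the following Python sketch (our own illustrative code; `y` and `y_hat` are 0/1 NumPy arrays) implements the loss functions above, including the zero-denominator convention:

```python
import numpy as np

def hamming_loss(y, y_hat):
    return float(np.mean(y != y_hat))

def weighted_hamming_loss(y, y_hat, w):
    return float(np.sum(w * (y != y_hat)))

def ranking_loss(y, y_hat):
    # Pairs (k, l) with y[k] < y[l]; a reversed pair costs 1, a tie 1/2.
    pairs = [(k, l) for k in range(len(y)) for l in range(len(y))
             if y[k] < y[l]]
    if not pairs:  # R(y) = 0: loss defined as 0 by convention
        return 0.0
    loss = sum(1.0 if y_hat[k] > y_hat[l]
               else 0.5 if y_hat[k] == y_hat[l] else 0.0
               for k, l in pairs)
    return loss / len(pairs)

def f1_loss(y, y_hat):
    denom = np.sum(y) + np.sum(y_hat)
    if denom == 0:  # zero denominator: loss defined as 0 by convention
        return 0.0
    return 1.0 - 2.0 * float(np.dot(y, y_hat)) / denom

def subset_01_loss(y, y_hat):
    return float(not np.array_equal(y, y_hat))
```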

To simplify the explanations of the proposed method, we further introduce some terminology. We denote the set of K labels by \(\mathcal {L}_K{} = \{1, \ldots , K\}\). A subset S of \(\mathcal {L}_K{}\) with \(|S|=k\) is called a k-labelset. If \(S = \{s_1, \ldots , s_k\}\) is a k-labelset with \(s_1< \cdots < s_k\), then we denote \((\mathbf {y}[s_1], \ldots , \mathbf {y}[s_k]) \in \{0, 1\}^k\) by \(\mathbf {y}[S]\). When the number of labels, K, is clear in the context, we also use the notation \(S^c\) to represent the \((K-k)\)-labelset \(\mathcal {L}_K{} {\setminus } S = \{1\le l\le K\mid l \notin S\}\). We summarize the main notation used throughout the paper in Table 1.
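In code, the labelset operations used throughout the paper amount to simple indexing; an illustrative sketch (with 0-indexed labels, unlike the 1-indexed notation above):

```python
import numpy as np

def restrict(y, S):
    """y[S]: the bits of y at the sorted positions of labelset S."""
    return y[np.array(sorted(S))]

def complement(S, K):
    """S^c: the (K - k)-labelset of labels outside S."""
    return [l for l in range(K) if l not in set(S)]
```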

Table 1 Main notation used in the paper

3 Related work

Multi-label classification methods can be divided into two main categories, namely, algorithm adaptation and problem transformation (Tsoumakas and Katakis 2007). Algorithm adaptation methods directly extend a specific learning algorithm to tackle MLC problems. Multi-label k-nearest neighbor (ML-\(k\)NN) (Zhang and Zhou 2007) is adapted from the famous k-nearest neighbors algorithm. AdaBoost.MH and AdaBoost.MR (Schapire and Singer 2000) are two multi-label extensions of the AdaBoost algorithm (Freund and Schapire 1999). ML-C4.5 (Clare and King 2001) is an adaptation of the popular C4.5 algorithm. BP-MLL (Zhang and Zhou 2006) is derived from the back-propagation algorithm of neural networks.

Problem transformation methods transform MLC problems into other types of learning problems and solve them by existing algorithms. Such methods are general and can be coupled with any mature algorithms. Our proposed method in Sect. 4 belongs to this category.

Binary relevance (BR) (Tsoumakas et al. 2010) is arguably the simplest problem transformation method, which transforms the MLC problem into several binary classification problems by learning and predicting each label independently. Classifier chain (CC) (Read et al. 2011) iteratively learns a binary classifier to predict the l-th label using \(\{(\mathbf {x}_n, \hat{\mathbf {y}}_n[1], \ldots , \hat{\mathbf {y}}_n[l-1])\}\) as the training set, where \(\hat{\mathbf {y}}_n\) contains the previously predicted labels. Although CC considers label dependencies, the order of the labels is crucial to its performance. Many approaches have been proposed to address this issue (Read et al. 2011, 2014; Goncalves et al. 2013). In particular, the ensemble of classifier chains (ECC) (Read et al. 2011) learns several CC classifiers, each with a random ordering of labels, and averages the predictions from all the classifiers to classify a new instance.

Instead of learning one binary classifier for each label, probabilistic classifier chain (PCC) (Dembczynski et al. 2010) learns probabilistic classifiers to estimate \(P(\mathbf {y}\mid \mathbf {x})\) by the chain rule

$$\begin{aligned} P(\mathbf {y}\mid \mathbf {x}) = P(\mathbf {y}[1]\mid \mathbf {x})\cdot \prod _{l=2}^KP\left( \mathbf {y}[l]\mid \mathbf {x}, \mathbf {y}[1], \ldots , \mathbf {y}[l-1]\right) \end{aligned}$$

and then applies the Bayes-optimal inference rule designed for the evaluation metric to produce the final prediction. In principle, PCC can be adapted to any metric to tackle CSMLC problems by designing a proper inference rule for the metric. In practice, however, deriving efficient inference rules for different metrics is challenging. Inference rules for Hamming, ranking, F1 and subset 0/1 loss have been designed (Dembczynski et al. 2010, 2011), but rules for other metrics remain an open question. Similar to ECC, the ensembled probabilistic classifier chain (EPCC) (Dembczynski et al. 2010) resolves the issue of label ordering by averaging over random orderings.

The Monte Carlo optimization for classifier chains (MCC) (Read et al. 2014) employs the Monte Carlo scheme to find a good label ordering in the training stage of PCC. A recently proposed method, the classifier trellis (CT) (Read et al. 2015), is extended from MCC to consider a trellis structure of labels rather than a chain to improve efficiency. During the prediction stage of both methods (Read et al. 2014, 2015), the Monte Carlo scheme is applied to generate samples from \(P(\mathbf {y}\mid \mathbf {x})\). A large number of samples may be required, which poses computational challenges during prediction. While those samples can in principle be used to produce cost-sensitive predictions, this possibility has not been fully studied in either work. In fact, the original works consider only approximate inference for Hamming loss and subset 0/1 loss.

A group of methods take label dependencies into account by learning multiple labels jointly. Label powerset (LP) (Tsoumakas et al. 2010) transforms each label vector into a unique hyper-class and learns a multi-class classifier. If there are K labels in total, then the number of classes may be as large as \(2^K\). Hence, when the number of labels is large, LP suffers from computational issues and an insufficient number of training examples within each class.

To overcome the drawback, a method called random k-labelsets (RA\(k\)EL) (Tsoumakas and Vlahavas 2007) focuses on one labelset at a time. Recall that a k-labelset is a size-k subset of \(\{1, 2, \ldots , K\}\). RA\(k\)EL iteratively selects a random k-labelset \(S_m\) and learns an LP classifier \(h_m\) for the training set restricted to the labels within \(S_m\), i.e., \(\{(\mathbf {x}_n, \mathbf {y}_n[S_m])\}\). Each classifier \(h_m\) predicts the k labels within \(S_m\), and the final prediction of an instance is produced by a majority vote of all the classifiers. Because the number of classes in each LP classifier is decreased, RA\(k\)EL is more efficient than LP. In addition, it achieves better performance than LP in terms of Hamming and F1 loss.

Nonetheless, there is a noticeable issue with RA\(k\)EL. In each multi-class sub-problem, a one-bit prediction error and a two-bit error are equally penalized. That is, the LP classifiers cannot distinguish between small and big errors. Because these classifiers are learned without considering the evaluation metric, RA\(k\)EL is not a cost-sensitive method.

Two extensions of RA\(k\)EL were proposed to address the above issue, but they both consider only the example-dependent weighted Hamming loss rather than general metrics. The cost-sensitive random k-labelsets (CS-RA\(k\)EL) (Lo 2013) method reduces the CSMLC problem to several multi-class ones with instance weights. The weight of each instance is defined as the sum of the misclassification costs of its relevant labels. Despite the restriction, one advantage of CS-RA\(k\)EL is that it only requires re-weighting the instances and can hence be coupled with many traditional multi-class classification algorithms.

Generalized k-labelsets ensemble (GLE) (Lo et al. 2014) learns a set of LP classifiers and determines a linear combination of them by minimizing the average loss over the training examples. The minimization is formulated as an unconstrained quadratic optimization problem and hence can be solved efficiently. While both CS-RA\(k\)EL and GLE are pioneering works on extending RA\(k\)EL for CSMLC, they focus on specific tagging applications. As a consequence, the two methods come with few theoretical guarantees, and it is non-trivial to extend them to handle other types of costs.

For the methods introduced above, BR and CC optimize Hamming loss; CS-RA\(k\)EL and GLE deal with weighted Hamming loss; MCC and CT minimize Hamming and subset 0/1 loss currently, with the potential of handling general metrics yet to be studied; PCC is designed to deal with general metrics, but is computationally demanding for arbitrary metrics that come without efficient inference rules. Another method that deals with general metrics is the structured support vector machine (SSVM) (Tsochantaridis et al. 2005). The SSVM optimizes a metric by re-scaling certain variables in the traditional SVM optimization problem based on the metric. However, the complexity of solving the problem depends on the metric and is usually too high for practical applications.

Condensed filter tree (CFT) (Li and Lin 2014) is a state-of-the-art CSMLC method, extended from the well-known filter tree algorithm (Beygelzimer et al. 2009) to handle multi-label data. Similarly, the divide-and-conquer tree algorithm (Beygelzimer et al. 2009) for multi-class problems can be directly adapted to CSMLC problems, resulting in the top-down tree (TT) method (Li and Lin 2014). Both CFT and TT can be viewed as cost-sensitive extensions of CC. CFT suffers from its training time, which is quadratic in the number of labels; TT suffers from weaker performance compared with CFT (Li and Lin 2014).

Multi-label search (MLS) (Doppa et al. 2014) optimizes a metric by adapting the \(\mathcal {HC}\)-search framework to multi-label problems. It learns a heuristic function and estimates the evaluation metric in the training stage. Then, during the prediction stage, MLS conducts a heuristic search towards minimizing the estimated cost. Despite its generality, MLS suffers from high computational complexity. To learn the heuristic function during training, it needs to solve a ranking problem consisting of \(O(\textit{NK})\) examples, where N is the number of training examples and K is the number of labels.

In summary, many existing MLC methods are not applicable to arbitrary example-based metrics of CSMLC (BR, CC, LP, RA\(k\)EL). Some extensions deal with restricted metrics of CSMLC (CS-RA\(k\)EL, GLE). For general metrics, current methods suffer from computational issues (CFT, MLS, SSVM), performance issues (TT), or require careful design of inference rules or further study to handle different metrics (PCC, MCC, CT). In the next section, we present a general yet efficient cost-sensitive multi-label method that is competitive with state-of-the-art CSMLC methods.

4 Proposed method

Recall that the LP method solves an MLC problem by transforming it into a single multi-class problem. Similarly, a CSMLC problem can be transformed into a cost-sensitive multi-class classification (CSMCC) problem, as illustrated in the CFT work (Li and Lin 2014). The resulting method, however, suffers from the same computational issue as LP, and hence is not feasible for large problems. CFT solves the computational issue by considering an efficient multi-class classification model—the filter tree.

In this work, we deal with the computational issue differently. We extend the idea of RA\(k\)EL and propose a novel labelset-based method, which iteratively transforms the CSMLC problem into a series of CSMCC problems. Different from RA\(k\)EL, the critical part of the proposed method is the transfer of the cost information to the sub-problems in the training stage. This is not a trivial task, since each sub-problem involves only a subset of labels and hence the costs in each sub-problem cannot be easily connected to those in the original problem. Therefore, we introduce the notion of reference label vectors to determine the costs in the sub-problems. While the overall idea sounds simple, it advances the study of CSMLC in several aspects:

  • Compared with traditional MLC methods such as RA\(k\)EL, the proposed method is sensitive to the evaluation metric and hence is able to optimize arbitrary example-based metrics.

  • Compared with CS-RA\(k\)EL and GLE, the proposed method handles more general metrics and comes with solid theoretical analysis.

  • Compared with PCC, MCC and SSVMs, our method instead captures label dependencies through labelsets and requires no manual adaptation to each evaluation metric.

  • Compared with existing CSMLC methods such as CFT, our method is more efficient in terms of training time complexity while reaching a similar level of performance.

We first provide the framework of the proposed method, then describe it in detail and present its analysis.

4.1 Framework

Let \(\mathcal {T} = \{(\mathbf {x}_n, \mathbf {y}_n, \mathbf {c}_n)\}_{n=1}^N\) be the training set and M be the number of iterations. Inspired by RA\(k\)EL, in the m-th iteration, our method selects a random k-labelset \(S_m\) and constructs a CSMCC training set \(\mathcal {T}_m^{\prime }=\{(\mathbf {x}_n, \mathbf {y}_n[S_m], \mathbf {c}_n^{\prime })\}_{n=1}^N\) of \(K^{\prime }=2^k\) classes, where \(\mathbf {c}_n^{\prime }:\{0,1\}^k\rightarrow \mathbb {R}\). The main difference between our method and RA\(k\)EL is that the multi-class sub-problems defined here contain the costs \(\mathbf {c}_n^{\prime }\), and hence our method is able to carry the information of the evaluation metric. The issues of RA\(k\)EL discussed in Sect. 3 can also be resolved by properly defining these \(\mathbf {c}_n^{\prime }\). Although in the problem setup described in Sect. 2 the label space of a CSMCC problem should be \(\mathcal {L}_{K'}{}\), by considering a bijection between \(\mathcal {L}_{K'}{}\) and \(\{0,1\}^k\), we may treat \(\mathbf {y}_n[S_m]\) as an element of \(\mathcal {L}_{K'}{}\) and assume \(\mathbf {c}_n^{\prime }:\mathcal {L}_{K'}{}\rightarrow \mathbb {R}\). Then, any CSMCC algorithm can be employed to learn a multi-class classifier \(h_m^{\prime }:\mathcal {X}\rightarrow \{0, 1\}^k\) for \(\mathcal {T}_m^{\prime }\). Similar to RA\(k\)EL, the final prediction of a new instance \(\mathbf {x}\) is produced by a majority vote of all the classifiers \(h_m^{\prime }\). More precisely, if we define \(h_m:\mathcal {X}\rightarrow \{-1, 0, 1\}^K\) by

$$\begin{aligned} \left\{ \begin{array}{l} h_m(\mathbf {x})[S_m] = 2\cdot h_m^{\prime }(\mathbf {x})-1 \in \{-1, 1\}^k\\ h_m(\mathbf {x})[S_m^c] = \mathbf {0}\in \{0\}^{K-k}, \end{array} \right. \end{aligned}$$
(2)

then the final prediction \(\hat{\mathbf {y}}\in \mathcal {Y}\) can be obtained by setting \(\hat{\mathbf {y}}[l] = 1\) if and only if \(\sum \nolimits _{m=1}^Mh_m(\mathbf {x})[l] > 0\).

4.2 Cost transformation

Having described the framework, we now turn our attention to the multi-class cost functions \(\mathbf {c}_n^{\prime }\) in the sub-problems, which must be defined in each iteration. At this point, notice that if we define \(\mathbf {c}_n^{\prime }(\hat{\mathbf {y}}^{\prime }) = \llbracket {}\hat{\mathbf {y}}^{\prime } \ne \mathbf {y}_n[S_m]\rrbracket {}\), then the proposed method degenerates into RA\(k\)EL. Since this \(\mathbf {c}_n^{\prime }\) is independent of the original cost function \(\mathbf {c}_n\), it can also be seen from this assignment that RA\(k\)EL is not a cost-sensitive method.

To establish the connections between these two cost functions, \(\mathbf {c}_n^{\prime }\) must carry a certain amount of information of \(\mathbf {c}_n\). Note that the domain of \(\mathbf {c}_n^{\prime }\) is \(\{0,1\}^k\) and \(\mathbf {c}_n\) is defined on \(\mathcal {Y} = \{0, 1\}^K\). To extend \(\mathbf {c}_n^{\prime }\) to the domain of \(\mathbf {c}_n\), we propose considering a reference label vector \(\tilde{\mathbf {y}}_n \in \mathcal {Y}\) and setting the value of \(\mathbf {c}_n^{\prime }\) to be the cost \(\mathbf {c}_n\) assuming the labels outside \(S_m\) were predicted the same as \(\tilde{\mathbf {y}}_n\). Mathematically,

$$\begin{aligned} \mathbf {c}_n^{\prime }\left( \hat{\mathbf {y}}^{\prime }\right) = \mathbf {c}_n\left( \hat{\mathbf {y}}^{\prime }\cup \tilde{\mathbf {y}}_n\left[ S_m^c\right] \right) . \end{aligned}$$
(3)

Here, we treat \(\hat{\mathbf {y}}^{\prime }\) and \(\tilde{\mathbf {y}}_n[S_m^c]\) as subsets of \(S_m\) and \(S_m^c\), respectively, and therefore, their union is considered as a subset of \(\mathcal {L}_K\), or equivalently a bit vector in \(\{0, 1\}^K\).
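A minimal sketch of this transformation (illustrative Python, 0-indexed labels): given the original cost function, the labelset, and the reference vector, the induced multi-class cost of Eq. (3) evaluates the original cost on the completed prediction.

```python
import numpy as np

def transformed_cost(c_n, S_m, y_ref):
    """Build c'_n of Eq. (3): the cost of a sub-prediction on S_m,
    with the labels outside S_m completed by the reference vector."""
    S_m = np.array(sorted(S_m))
    def c_prime(y_sub):            # y_sub is a 0/1 vector of length k
        full = y_ref.copy()        # labels outside S_m: reference bits
        full[S_m] = y_sub          # labels inside S_m: the prediction
        return c_n(full)
    return c_prime
```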

It then remains to define these \(\tilde{\mathbf {y}}_n\) in each iteration to complete the transformation. We shall see in the next section that these reference vectors may depend on the classifiers learned in the previous iterations, and hence, the multi-class cost functions would be obtained progressively. As a consequence, the proposed method is called progressive random \(k\)-labelsets (PRA\(k\)EL). The training and prediction algorithms of PRA\(k\)EL are presented in Algorithms 1 and 2, where the weighting strategy mentioned in line 8 of Algorithm 1 is described in Sect. 4.4. For now, we simply assume \(\alpha _m = 1\) for \(1 \le m \le M\). Another thing to note is that we do not explicitly require selecting a labelset that has not been chosen before. However, in practice we give higher priority to labels that were selected fewer times in the previous iterations. In particular, we guarantee that all labels are selected at least once if \(kM \ge K\).

Algorithm 1 The training algorithm of PRA\(k\)EL
Algorithm 2 The prediction algorithm of PRA\(k\)EL
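The following Python sketch summarizes Algorithms 1 and 2 under the simplifying assumptions stated above: uniform weights \(\alpha _m = 1\), purely uniform labelset sampling, and the predicted-reference strategy of Sect. 4.3 with the shifted costs of Eq. (4). The cost-sensitive multi-class learner `csmcc_learn` is a stand-in for any CSMCC algorithm, not a fixed API.

```python
import numpy as np

def prakel_train(X, Y, costs, K, k, M, csmcc_learn, rng):
    """X: (N, d) features; Y: (N, K) 0/1 labels; costs[n]: the cost
    function of example n on {0,1}^K. Returns labelsets and classifiers."""
    N = len(X)
    F = np.zeros((N, K))          # running vote totals F_{m,n}
    y_ref = Y.copy()              # first iteration: true label vectors
    labelsets, classifiers = [], []
    for m in range(M):
        # Uniform sampling for simplicity; in practice, labels selected
        # fewer times in previous iterations are given higher priority.
        S = np.array(sorted(rng.choice(K, size=k, replace=False)))
        sub_costs = []            # transformed, shifted costs, Eqs. (3)-(4)
        for n in range(N):
            full = y_ref[n].copy()
            def c_prime(y_sub, n=n, full=full, S=S):
                v = full.copy()
                v[S] = y_sub
                return costs[n](v)
            base = c_prime(Y[n][S])   # cost when S_m is predicted perfectly
            sub_costs.append(lambda y_sub, c=c_prime, b=base: c(y_sub) - b)
        h = csmcc_learn(X, Y[:, S], sub_costs)  # solve the CSMCC sub-problem
        labelsets.append(S)
        classifiers.append(h)
        for n in range(N):            # update votes, Eq. (2)
            F[n, S] += 2 * h(X[n]) - 1
        y_ref = (F > 0).astype(int)   # reference for iteration m + 1: H_{m,n}
    return labelsets, classifiers

def prakel_predict(x, labelsets, classifiers, K):
    """Majority vote of the base classifiers (Algorithm 2)."""
    F = np.zeros(K)
    for S, h in zip(labelsets, classifiers):
        F[S] += 2 * h(x) - 1
    return (F > 0).astype(int)
```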

4.3 Defining reference label vectors

We propose two strategies for defining the reference label vectors. The first, and also the most intuitive, is to let \(\tilde{\mathbf {y}}_n = \mathbf {y}_n\) in every iteration. The proposed method with this assignment is denoted by \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\) to indicate the usage of the true label vectors. In this strategy, we implicitly assume that the labels outside the labelset can be perfectly predicted by the other classifiers.

In real-world situations, however, this is usually not the case. Therefore, in the second strategy, we define \(\tilde{\mathbf {y}}_n\) to be the predicted label vector of \(\mathbf {x}_n\) obtained thus far. Thus, the optimization in each sub-problem no longer depends on the perfect predictions from the previous classifiers. Formally, let \(F_{m,n} = \sum \nolimits _{p=1}^mh_p(\mathbf {x}_n)\) for \(1 \le n \le N\) and define \(H_{m,n} \in \mathcal {Y}\) by \(H_{m,n}[l] =\llbracket {}F_{m,n}[l] > 0\rrbracket {}\). That is, \(H_{m,n}\) is the prediction of \(\mathbf {x}_n\) by a majority vote of the first m classifiers. We then define \(\tilde{\mathbf {y}}_n\) in the m-th iteration to be \(H_{m-1,n}\) for \(m \ge 2\), and let \(\tilde{\mathbf {y}}_n = \mathbf {y}_n\) in the first iteration. Since the reference label vectors as well as the multi-class sub-problems are obtained progressively, the proposed method coupled with this strategy is denoted simply by PRA\(k\)EL.

Recall that in our problem setup we assume the minimum of each \(\mathbf {c}_n\) is 0. Therefore, for \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\) we have \(\min _{\hat{\mathbf {y}}^{\prime }\in \{0, 1\}^k}\mathbf {c}_n^{\prime }(\hat{\mathbf {y}}^{\prime }) = \min _{\hat{\mathbf {y}}\in \mathcal {Y}}\mathbf {c}_n(\hat{\mathbf {y}}[S_m]\cup \mathbf {y}_n[S^c_m]) = \mathbf {c}_n(\mathbf {y}_n) = 0\). In other words, the minimum cost for every example in each sub-problem is 0, which is a consequence of \(\tilde{\mathbf {y}}_n=\mathbf {y}_n\). For PRA\(k\)EL, however, this identity may not hold. Since the predicted labels outside \(S_m\) cannot be altered in the m-th iteration, it is natural to add a constant to each of the functions \(\mathbf {c}_n^{\prime }\) such that \(\min _{\hat{\mathbf {y}}^{\prime }\in \{0, 1\}^k}\mathbf {c}_n^{\prime }(\hat{\mathbf {y}}^{\prime }) = 0\). Therefore, the transformed cost functions for PRA\(k\)EL are all shifted to satisfy this equality by the following formula:

$$\begin{aligned} \mathbf {c}_n^{\prime }\left( \hat{\mathbf {y}}^{\prime }\right) =\mathbf {c}_n\left( \hat{\mathbf {y}}^{\prime }\cup \tilde{\mathbf {y}}_n\left[ S_m^c\right] \right) - \mathbf {c}_n\left( \mathbf {y}_n[S_m]\cup \tilde{\mathbf {y}}_n\left[ S_m^c\right] \right) \end{aligned}$$
(4)

Interestingly, after shifting the costs, \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\) and PRA\(k\)EL become equivalent under Hamming loss and ranking loss. To show this, we first present two lemmas.

Lemma 1

Let \(L_r\) be the function of ranking loss and \(\mathbf {y}\in \mathcal {Y}=\{0, 1\}^K\). Then, there exists a unique \(\mathbf {w}\in {\mathbb {R}_{\ge 0}}^K\) such that \(L_r(\mathbf {y}, \cdot ) = L_{H,\mathbf {w}}(\mathbf {y}, \cdot )\), where \(L_{H,\mathbf {w}}\) is the function of weighted Hamming loss with respect to \(\mathbf {w}\).

Proof

See Appendix. \(\square \)

Lemma 2

Let \(L_{H,\mathbf {w}}\) be the function of weighted Hamming loss and S be a k-labelset. For any subsets \(\mathbf {y}_0^{\prime }\) and \(\mathbf {y}_1^{\prime }\) of S, \(L_{H,\mathbf {w}}(\mathbf {y}, \mathbf {y}_0^{\prime }\cup \tilde{\mathbf {y}}[S^c])-L_{H,\mathbf {w}}(\mathbf {y}, \mathbf {y}_1^{\prime }\cup \tilde{\mathbf {y}}[S^c])\) is independent of \(\tilde{\mathbf {y}}\in \{0, 1\}^K\).

Proof

See Appendix. \(\square \)

Theorem 3

Under Hamming loss and ranking loss, \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\) and PRA\(k\)EL are equivalent.

Proof

Let L be the loss function of interest and consider the m-th iteration. For any instance \(\mathbf {x}\), let \(\mathbf {b}^{\prime }\) and \(\mathbf {c}^{\prime }\) be the cost functions of \(\mathbf {x}\) in the m-th multi-class sub-problem, in the training of \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\) and PRA\(k\)EL, respectively. We show that \(\mathbf {b}^{\prime }(\mathbf {y}^{\prime }) = \mathbf {c}^{\prime }(\mathbf {y}^{\prime }) - \min \mathbf {c}^{\prime }\). Let \(\tilde{\mathbf {y}}\) be the reference label vector of \(\mathbf {x}\) for PRA\(k\)EL. Since we are considering a single instance, by Lemma 1, we may assume L is the function of weighted Hamming loss. Let S be the k-labelset in the current iteration and \(\mathbf {y}\) be the true label vector of \(\mathbf {x}\).

If \(\mathbf {y}^{\prime } \subseteq S\), then by definition,

$$\begin{aligned} \mathbf {c}^{\prime }\left( \mathbf {y}^{\prime }\right) - \min \mathbf {c}^{\prime }&= \mathbf {c}\left( \mathbf {y}^{\prime }\cup \tilde{\mathbf {y}}\left[ S^c\right] \right) - \min _{\hat{\mathbf {y}}:\hat{\mathbf {y}}\left[ S^c\right] =\tilde{\mathbf {y}}\left[ S^c\right] }\mathbf {c}\left( \hat{\mathbf {y}}\right) \\&= L\left( \mathbf {y}, \mathbf {y}^{\prime }\cup \tilde{\mathbf {y}}\left[ S^c\right] \right) - \min _{\hat{\mathbf {y}}:\hat{\mathbf {y}}\left[ S^c\right] =\tilde{\mathbf {y}}\left[ S^c\right] }L\left( \mathbf {y}, \hat{\mathbf {y}}\right) \\&= L\left( \mathbf {y}, \mathbf {y}^{\prime }\cup \tilde{\mathbf {y}}\left[ S^c\right] \right) - \min _{\hat{\mathbf {y}}^{\prime }\subseteq S}L\left( \mathbf {y}, \hat{\mathbf {y}}^{\prime }\cup \tilde{\mathbf {y}}\left[ S^c\right] \right) \\&= \max _{\hat{\mathbf {y}}^{\prime }\subseteq S} \left( L\left( \mathbf {y}, \mathbf {y}^{\prime }\cup \tilde{\mathbf {y}}\left[ S^c\right] \right) - L\left( \mathbf {y}, \hat{\mathbf {y}}^{\prime }\cup \tilde{\mathbf {y}}\left[ S^c\right] \right) \right) . \end{aligned}$$

In addition, by Lemma 2, \(L(\mathbf {y}, \mathbf {y}^{\prime }\cup \tilde{\mathbf {y}}[S^c]) - L(\mathbf {y}, \hat{\mathbf {y}}^{\prime }\cup \tilde{\mathbf {y}}[S^c])\) is independent of \(\tilde{\mathbf {y}}[S^c]\) for all \(\hat{\mathbf {y}}^{\prime }\subseteq S\). Therefore, we have

$$\begin{aligned} \mathbf {c}^{\prime }\left( \mathbf {y}^{\prime }\right) -\min \mathbf {c}^{\prime }&= \max _{\hat{\mathbf {y}}^{\prime }\subseteq S} \left( L\left( \mathbf {y}, \mathbf {y}^{\prime }\cup \mathbf {y}\left[ S^c\right] \right) - L\left( \mathbf {y}, \hat{\mathbf {y}}^{\prime }\cup \mathbf {y}\left[ S^c\right] \right) \right) \\&= L\left( \mathbf {y}, \mathbf {y}^{\prime }\cup \mathbf {y}\left[ S^c\right] \right) - L\left( \mathbf {y}, \mathbf {y}[S]\cup \mathbf {y}\left[ S^c\right] \right) \\&= L\left( \mathbf {y}, \mathbf {y}^{\prime }\cup \mathbf {y}\left[ S^c\right] \right) \\&= \mathbf {c}\left( \mathbf {y}^{\prime }\cup \mathbf {y}\left[ S^c\right] \right) \\&= \mathbf {b}^{\prime }\left( \mathbf {y}^{\prime }\right) . \end{aligned}$$

\(\square \)

Moreover, for these two loss functions, it is easy to derive an upper bound of the training cost. Consider a training example \((\mathbf {x}, \mathbf {y}, \mathbf {c})\). Let \(e_m\) be the training cost of \(\mathbf {x}\) in the m-th CSMCC sub-problem. We hope to bound the overall multi-label training cost of \(\mathbf {x}\) in terms of these \(e_m\).

By Lemma 1, again, it suffices to consider weighted Hamming loss. Recall that K is the number of labels, k is the size of the labelsets, and M is the number of iterations. For simplicity, assume kM is a multiple of K. In addition, we assume that each label appears in exactly \(r=kM/K\) labelsets. That is, the labelsets are selected uniformly. Let \(h_m \in \{-1, 0, 1\}^K\) be the prediction of \(\mathbf {x}\) in the m-th iteration as defined in Sect. 4.1 and \(\hat{\mathbf {y}}\in \mathcal {Y}\) be the final prediction, which is obtained by the majority vote over these \(h_m\). Now, focus on the l-th label with weight \(\mathbf {w}[l]\). If \(\hat{\mathbf {y}}[l] \ne \mathbf {y}[l]\), then at least half of those m with \(l \in S_m\) must have predicted \(h_m[l]\) incorrectly, and each such mistake contributes at least \(\mathbf {w}[l]\) to the corresponding \(e_m\). Hence, the part of the overall training cost contributed by the l-th label cannot exceed \(2/r\) times the total sub-problem cost incurred on that label. Summing over all labels, the training cost is no more than \(\sum \nolimits _{m=1}^M2e_m/r = (2K/k)\bar{e}\), where \(\bar{e}= \sum \nolimits _{m=1}^M{e_m/M}\). By the above arguments, we have the following theorem.

Theorem 4

Let \(E_m\) be the multi-class training cost of the training set in the m-th iteration. Then, under Hamming loss and ranking loss, the overall CSMLC training cost for both \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\) and PRA\(k\)EL is no more than \((2K/k)\bar{E}\), where \(\bar{E}\) is the mean of \(E_m\).

Proof

Since the statement is true for each example, the proof is straightforward. \(\square \)

Despite the equivalence between \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\) and PRA\(k\)EL for Hamming and ranking loss, they are not the same for arbitrary cost functions. In the experiment section, we demonstrate that PRA\(k\)EL is more effective under F1 loss. For now, we present an explanation by restricting ourselves to the case where the labelsets are disjoint. In this case, \(K/k = M\), and the upper bound in Theorem 4 can be improved to \((K/k)\bar{E} = M\bar{E}\) because the final prediction of each label is determined by a single LP classifier. Under this restriction, we have a similar result for PRA\(k\)EL. Before stating the next theorem, we make a mild assumption about the cost functions. For a label vector \(\mathbf {y}\) and its corresponding cost function \(\mathbf {c}\), we assume that if \(\hat{\mathbf {y}}^{\prime }\in \mathcal {Y}\) is one bit closer to \(\mathbf {y}\) than \(\hat{\mathbf {y}}^{\prime \prime }\in \mathcal {Y}\), then \(\mathbf {c}(\hat{\mathbf {y}}^{\prime }) \le \mathbf {c}(\hat{\mathbf {y}}^{\prime \prime })\). That is, a more correct prediction does not result in a larger cost. In fact, this simple assumption has been implicitly made by many MLC methods such as BR, CC and RA\(k\)EL.

Theorem 5

Assume the labelsets are disjoint. Then, for any cost function satisfying the above assumption, the overall training cost for PRA\(k\)EL is no more than \(M\bar{E}\).

Proof

We may assume there is only one training example \((\mathbf {x}, \mathbf {y}, \mathbf {c})\), where the subscript n is dropped here for simplicity. Recall that the reference label vector of \(\mathbf {x}\) in the m-th iteration, denoted by \(\tilde{\mathbf {y}}^{(m)}\), is defined to be \(H_{m-1}\) for \(m \ge 2\). Then, for \(m \ge 2\),

$$\begin{aligned} \mathbf {c}(H_m)&= \mathbf {c}\left( H_m[S_m]\cup H_m\left[ S_m^c\right] \right) \\&= \mathbf {c}\left( h_m'(\mathbf {x})\cup H_{m-1}\left[ S_m^c\right] \right) \\&= E_m + \mathbf {c}\left( \mathbf {y}[S_m]\cup H_{m-1}\left[ S_m^c\right] \right) \\&\le E_m + \mathbf {c}(H_{m-1}), \end{aligned}$$

where the third equality is by definition of \(E_m\), and the inequality follows from the assumption we just made. Hence, by induction, the overall training cost is \(\mathbf {c}(H_M) \le \mathbf {c}(\tilde{\mathbf {y}}^{(1)}) + \sum _{m=1}^ME_m = \mathbf {c}(\mathbf {y}) + M\bar{E} = M\bar{E}\). \(\square \)

Note that this bound cannot be improved since all inequalities in the proof become equalities under Hamming loss. Nonetheless, there is no analogous result for \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\), as shown in the following theorem.

Theorem 6

Assume \(k < K\). For \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\), there is no constant \(B>0\) such that the bound \(B\bar{E}\) on the overall training cost holds for all cost functions.

Proof

Again, assume the labelsets are disjoint and there is only one instance \(\mathbf {x}\). Consider the special case where the true label vector of \(\mathbf {x}\) is \(\mathbf {y}= (1, \ldots , 1) \in \mathcal {Y}\), and assume \(h_m[l] = -1\) for all \(l \in S_m\) and all m. In this case, \(\hat{\mathbf {y}}= (0, \ldots , 0) \in \mathcal {Y}\), and therefore, its F1 loss is \(L_F(\mathbf {y}, \hat{\mathbf {y}}) = 1\). In addition, if we define \(\hat{\mathbf {y}}_m = \hat{\mathbf {y}}[S_m]\cup \mathbf {y}[S_m^c]\), then

$$\begin{aligned} E_m&= L_F(\mathbf {y}, \hat{\mathbf {y}}_m) \end{aligned}$$
(5)
$$\begin{aligned}&=\frac{\sum _l\llbracket {}\mathbf {y}[l]\ne \hat{\mathbf {y}}_m[l]\rrbracket {}}{\sum _l\llbracket {}\mathbf {y}[l]\ne \hat{\mathbf {y}}_m[l]\rrbracket {} + 2\sum _l\llbracket {}\mathbf {y}[l]=\hat{\mathbf {y}}_m[l]=1\rrbracket {}} \end{aligned}$$
(6)
$$\begin{aligned}&= \frac{k}{k+2(K-k)}. \end{aligned}$$
(7)

Hence, we have \(L_F(\mathbf {y}, \hat{\mathbf {y}}) = 1 = ((2K-k)/k)\bar{E}\). Note that if the factor 2 in (7) is replaced by a larger constant, then the bound needs to be larger. Moreover, we can freely define a loss function L similar to \(L_F\) by replacing the constant 2 in (6) with an arbitrary positive one. Letting the constant tend to infinity, the proof is complete. \(\square \)

Theorems 5 and 6 suggest defining the reference label vectors to be the predicted label vectors instead of the true ones. Empirical results in the experiment section also support this finding. In fact, a previous study on multi-target regression has already revealed the problem of treating true targets as additional input variables (Spyromitros-Xioufis et al. 2016). Moreover, its authors showed that in-sample estimates of the target variables are still problematic, and proposed an out-of-sample estimation approach to tackle the issue. Although we do not consider these kinds of estimates in this paper, a similar approach for PRA\(k\)EL could be explored in future work.

One disadvantage of employing the predicted labels is that the sub-problems must be learned sequentially, whereas the training of the LP classifiers of RA\(k\)EL can be parallelized. The two cost-sensitive extensions of RA\(k\)EL, CS-RA\(k\)EL and GLE, as well as \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\), do not have this drawback. There is thus a tradeoff between performance and efficiency.

4.4 Weighting of base classifiers

In general, some sub-problems of PRA\(k\)EL are easier to solve, while others are more difficult. Thus, the performance of the LP classifiers within PRA\(k\)EL can differ, and a plain majority vote of these classifiers may be sub-optimal. Inspired by GLE (Lo et al. 2014), we can further assign different weights to the LP classifiers to represent their importance. To achieve this, a linear combination of the classifiers is learned by minimizing the training cost.

Formally, given a new instance \(\mathbf {x}\), its prediction \(\hat{\mathbf {y}}\in \mathcal {Y}\) is produced by setting \(\hat{\mathbf {y}}[l]=1\) if and only if \(\sum \nolimits _{m=1}^M\alpha _mh_m(\mathbf {x})[l] > 0\), where these \(\alpha _m > 0\) are called the weights of the base classifiers. Accordingly, the assignment \(F_{m,n} = \sum \nolimits _{p=1}^mh_p(\mathbf {x}_n)\) in the previous section should be changed to \(F_{m,n} = \sum \nolimits _{p=1}^m\alpha _ph_p(\mathbf {x}_n)\).

One approach for determining these weights is to solve an optimization problem after all the \(h_m\) are learned, just as GLE does. However, this overall optimization ignores the iterative nature of PRA\(k\)EL, where the value of \(F_{m,n}\) in the m-th iteration depends on \(\alpha _p\) for \(1 \le p < m\). We therefore determine each \(\alpha _m\) iteratively by greedily minimizing the training cost. More precisely, let \(\alpha _1 = 1\) for simplicity; for \(m \ge 2\), regarding \(H_{m,n}\) as a function of \(\alpha _m\), we solve the following single-variable optimization problem and define \(\alpha _m\) to be an optimal solution.

$$\begin{aligned} \min _{\alpha \in \mathbb {R}}\frac{1}{N}\sum _{n=1}^N\mathbf {c}_n(H_{m,n}(\alpha )) \end{aligned}$$
(8)

It is not easy to solve this type of problem in general. Nevertheless, since the objective function is piecewise constant, the optimization problem (8) can be solved by considering only finitely many \(\alpha \), and the remaining task is to obtain these candidate \(\alpha \). It then suffices to find the discontinuities of the objective function, and therefore the zeros of each component of \(F_{m,n}(\alpha )\) for all n, which we collect in a set \(E_{m, n}\subseteq \mathbb {R}\). Since \(F_{m,n}(\alpha ) = F_{m-1,n} + \alpha h_m(\mathbf {x}_n)\), we have \(E_{m,n} \subseteq \{\alpha \mid F_{m,n}(\alpha )[l]=0 \text{ for } \text{ some } l\in S_m\} = \{-F_{m-1,n}[l]/h_m(\mathbf {x}_n)[l]\mid l\in S_m\}\), implying \(|E_{m,n}| \le |S_m| = k\). If \((\cup _nE_{m,n})\cap \mathbb {R}_{>0} = \{a_1, \ldots , a_P\}\) with \(0< a_1< \cdots < a_P\), then clearly \(P \le Nk\), and the set of candidate \(\alpha \) can be chosen to be \(\{(a_i+a_{i+1})/2\mid 1 \le i < P\}\cup \{a_1/2, a_P+1\}\). This weighting strategy is called greedy weighting (GW).

Certainly, one can simplify the process of solving (8) by minimizing it over a fixed finite set, E, the candidate set of \(\alpha \), to ease the burden of computation and decrease the possibility of overfitting. For example, let \(E = \{i/P\mid 1 \le i \le P\}\cup \{\epsilon \}\) for some \(P \in \mathbb {N}\), where \(0<\epsilon <1/PM\) is a small number for tie breaking. This weighting strategy is called simple weighting (SW).
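Both strategies reduce (8) to a minimization over a finite candidate set, as the following illustrative sketch shows; `F_prev` holds the vote totals \(F_{m-1,n}\), and `h_m` holds the predictions \(h_m(\mathbf {x}_n)\) in \(\{-1, 0, 1\}^K\).

```python
import numpy as np

def candidates_gw(F_prev, h_m, S_m):
    """GW: the zeros of F_prev[n, l] + alpha * h_m[n, l] for l in S_m
    are the discontinuities of the objective in (8)."""
    zeros = set()
    for n in range(len(F_prev)):
        for l in S_m:
            if h_m[n, l] != 0:
                a = -F_prev[n, l] / h_m[n, l]
                if a > 0:
                    zeros.add(a)
    a = sorted(zeros)
    if not a:          # no discontinuities: any positive alpha is equivalent
        return [1.0]
    mids = [(a[i] + a[i + 1]) / 2 for i in range(len(a) - 1)]
    return [a[0] / 2] + mids + [a[-1] + 1]

def candidates_sw(P, M):
    """SW: a fixed grid {i/P} plus a small epsilon for tie breaking."""
    eps = 1.0 / (2 * P * M)       # any 0 < eps < 1/(P*M) works
    return [i / P for i in range(1, P + 1)] + [eps]

def pick_alpha(cands, F_prev, h_m, costs):
    """Choose the candidate minimizing the average training cost (8)."""
    def avg_cost(alpha):
        H = ((F_prev + alpha * h_m) > 0).astype(int)   # H_{m,n}(alpha)
        return np.mean([costs[n](H[n]) for n in range(len(H))])
    return min(cands, key=avg_cost)
```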

4.5 Analysis of time complexity

First, we analyze the training time complexity of PRA\(k\)EL without considering the weighting of the base classifiers. The trivial steps of Algorithm 1 to form the sub-problems are of time complexity at most O(N) multiplied by the time needed to calculate the reference label \(\tilde{\mathbf {y}}_n\) and the cost \(\mathbf {c}_n\). The more time-consuming step of PRA\(k\)EL, similar to RA\(k\)EL, depends on the time spent on the CSMCC base classifier, which is denoted as \(T_0(N, d, K^{\prime })\) for N examples, d features, and \(K'\) classes. The empirical results of PRA\(k\)EL in the next section demonstrate that it suffices to let each label appear in a fixed number of labelsets on average. That is, only \(M = O(K/k)\) iterations are needed, and hence, the practical training time of PRA\(k\)EL is \(T_0(N, d, 2^k)\cdot O(K/k)\), which is linear in K. In contrast, as discussed in Sect. 3, the training time of CFT (Li and Lin 2014) is \(O(\textit{NK}^2)\) multiplied by the time needed to calculate the cost \(\mathbf {c}_n\), and summed with O(K) calls to the base classifier. The complexity analysis reveals the asymptotic efficiency of PRA\(k\)EL over CFT.

When considering the weighting, in each iteration, GW (which is generally more time consuming than SW) needs O(k) to determine the zeros of each \(F_{m, n}\), and evaluating the goodness of all candidate \(\alpha \) can be done within O(Nk), multiplied by the time needed to calculate \(\mathbf {c}_n\). That is, the running time of PRA\(k\)EL-GW with \(M = O(K/k)\) iterations needs an additional \(O(\textit{NK})\) multiplied by the time needed to calculate the cost \(\mathbf {c}_n\). The additional time of PRA\(k\)EL-GW is still asymptotically more efficient than the training time of CFT.

5 Experiment

5.1 Experimental setup

The experiments were conducted on seven benchmark datasets (Tsoumakas et al. 2011). These datasets were chosen for their diversity of domains and their popularity in the multi-label research community. Their basic statistics are provided in Table 2.

Table 2 Statistics of the datasets

For statistical significance, all results reported in Sect. 5.2 were averaged over 30 independent runs. For each run, we randomly sampled 75% of the dataset for training and used the remaining data for testing. One third of the training set was reserved for validation.

We compared four variants of the proposed method, namely, \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\), PRA\(k\)EL, PRA\(k\)EL-GW and PRA\(k\)EL-SW, with three types of methods: (a) labelset-related methods, including RA\(k\)EL (Tsoumakas and Vlahavas 2007) and CS-RA\(k\)EL (Lo 2013); (b) state-of-the-art CSMLC methods, including EPCC (Dembczynski et al. 2010, 2011, 2012) and CFT (Li and Lin 2014); (c) a state-of-the-art cost-insensitive MLC method, ML-\(k\)NN (Zhang and Zhou 2007). All hyper-parameters of the compared methods and the base classifiers were selected by grid search on the validation set. For our method and the labelset-related methods, the parameter k was selected from \(\{2, \ldots , 9\}\), and for each k, the maximum M was fixed to 10K/k. The ensemble size of EPCC was selected from \(\{1, \ldots , 7\}\) for efficiency, and on datasets with more than 20 labels, the Monte Carlo sampling technique was employed with a sample size of 200 (Dembczynski et al. 2012). For CFT, the number of internal iterations was selected from \(\{2, \ldots , 8\}\), as suggested by the original authors.

For the base classifier of EPCC, we employed logistic regression implemented in LIBLINEAR (Fan et al. 2008). For the methods requiring a regular binary or multi-class classifier, we used linear one-versus-all support vector machines (SVMs) implemented in LIBLINEAR. Our method was coupled with linear RED-OSSVR (Tu and Lin 2010). The regularization parameter in linear SVMs and RED-OSSVR was also selected by grid search on the validation set. The cost functions we considered in the experiments are all derived from loss functions, as explained in Sect. 2.

Table 3 Performance of each method in terms of Hamming loss (mean ± SE)
Table 4 Performance of each method in terms of ranking loss (mean ± SE)
Table 5 Performance of each method in terms of F1 loss (mean ± SE)

5.2 Results and discussion

Tables 3, 4 and 5 present the results of the four variants of our method, EPCC, CFT, RA\(k\)EL and ML-\(k\)NN in terms of Hamming, ranking and F1 loss. The best results for each dataset are marked in bold.

5.3 Comparison of variants of PRA\(k\)EL

In this subsection, we compare the four variants of the proposed method, namely, \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\), PRA\(k\)EL, PRA\(k\)EL-GW and PRA\(k\)EL-SW. We first compare \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\) and PRA\(k\)EL to understand the difference between using the true and the predicted label vectors as references. Recall that \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\) and PRA\(k\)EL are theoretically equivalent under Hamming and ranking loss, and therefore, it is not a coincidence that the results of these two variants in Tables 3 and 4 are exactly the same. Table 5 shows that PRA\(k\)EL achieves lower costs than \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\) in terms of F1 loss on all the datasets. We also present in Table 6 the results of the Student's t-test at a significance level of 0.05 on two pairs of variants. The comparison of PRA\(k\)EL and \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\) under F1 loss reveals that PRA\(k\)EL is significantly superior on five datasets. This demonstrates the benefit of exploiting previous predictions, and is consistent with the theoretical results in Theorems 5 and 6. Thus, for the remaining experiments, the results of \({\hbox {PRA}k\hbox {EL}_{\mathrm{t}}}\) are not presented.

Table 6 Variants of PRA\(k\)EL versus other variants by the Student’s t-test at a significance level of 0.05 (superior/comparable/inferior)
Table 7 Training costs of PRA\(k\)EL, PRA\(k\)EL-GW and PRA\(k\)EL-SW in terms of F1 loss (mean ± SE)

Next, we compare the three weighting strategies, i.e., uniform, greedy and simple weighting. From Table 6, overall PRA\(k\)EL is competitive with PRA\(k\)EL-GW, although under ranking loss the performance of PRA\(k\)EL-GW is slightly better. In addition, from the last comparison we see that PRA\(k\)EL-SW is never outperformed by PRA\(k\)EL under these three loss functions. For Hamming loss, there is no significant difference between the performance of PRA\(k\)EL and PRA\(k\)EL-SW. For ranking loss and F1 loss, however, PRA\(k\)EL-SW performs slightly better than PRA\(k\)EL.

Since the last two variants greedily minimize the training cost in every iteration, their training costs are expected to be much lower than PRA\(k\)EL's. Table 7 and Fig. 1, which show the training costs in terms of F1 loss, verify this deduction; we observe similar behavior under the other loss functions. Between the two, PRA\(k\)EL-GW fits the training set more aggressively because its weights are determined from an unconstrained optimization problem, while the weights of PRA\(k\)EL-SW are restricted to the candidate set. From a holistic point of view, the candidate set acts as a regularizer, which prevents PRA\(k\)EL-SW from excessively overfitting the training set. In conclusion, among the four variants of our method, PRA\(k\)EL-SW is the most stable.

Finally, we demonstrate the effectiveness of the ensemble. Figure 2 shows the training and test costs versus the number of iterations M on the yeast dataset. All the costs decrease as M grows, and the behavior on the other datasets is similar.

Fig. 1 Training costs of PRA\(k\)EL, PRA\(k\)EL-GW and PRA\(k\)EL-SW in terms of F1 loss with the standard errors

Fig. 2 Training and test costs of PRA\(k\)EL versus the number of iterations (M) on the yeast dataset. a Hamming loss, b ranking loss, c F1 loss

5.4 Comparison with state-of-the-art methods

We compare our method with EPCC, CFT, RA\(k\)EL and ML-\(k\)NN in terms of Hamming, ranking and F1 loss. Table 3 shows the performance of each method under Hamming loss. RA\(k\)EL and ML-\(k\)NN each achieve the best performance on one dataset. On the other datasets, the method with the lowest cost is either PRA\(k\)EL or EPCC. Overall, all the methods perform fairly well under Hamming loss.

The results for the other two loss functions are shown in Tables 4 and 5. In terms of ranking loss, EPCC is the most stable method, outperforming the others on five datasets, and the proposed method reaches the lowest cost on the remaining two datasets. Under F1 loss, our method is superior to the others on half of the datasets, and EPCC has the best performance on two datasets. In addition, under these two loss functions, the two cost-insensitive methods, RA\(k\)EL and ML-\(k\)NN, are clearly not competitive with any of the cost-sensitive methods. This observation demonstrates the effectiveness of cost sensitivity.

To compare all the classifiers over multiple datasets, we conducted the Friedman test with the corresponding Nemenyi post-hoc test (Demšar 2006). For the three loss functions, the p-values of the Friedman test were \(6.6 \times 10^{-3}\), \(3.6 \times 10^{-5}\) and \(8.7 \times 10^{-6}\), respectively. Therefore, the null hypothesis was rejected at \(\alpha = 0.05\), and the post-hoc test was performed afterwards. The results of the Nemenyi test, shown in Table 8, agree with the discussion in the last paragraph: the proposed method and EPCC outperform the two cost-insensitive methods, RA\(k\)EL and ML-\(k\)NN. However, according to the Nemenyi test, the performances of the three cost-sensitive methods do not differ significantly. To further compare our method, EPCC and CFT, we conducted the pairwise Student's t-test at a significance level of 0.05 for each dataset. Table 9 shows the number of datasets on which PRA\(k\)EL is statistically superior, comparable, or inferior to each of the other methods. We conclude that under these three metrics, PRA\(k\)EL performs significantly better than both RA\(k\)EL and ML-\(k\)NN and generally better than CFT. Compared with EPCC, PRA\(k\)EL is competitive under Hamming and F1 loss, but performs slightly worse under ranking loss.

Table 8 Significance indicated by the Nemenyi test at a significance level of 0.05 (\(\succ \) means significantly better than)
Table 9 PRA\(k\)EL versus each method by the Student’s t-test at a significance level of 0.05 (superior/comparable/inferior)
Table 10 Performance of each method in terms of composite loss (mean ± SE)

5.5 Comparison with EPCC and CFT under composite loss

To demonstrate our method’s capability to optimize general metrics, we defined the function of composite loss as \(L_c = 0.8 L_H + 0.2 L_F\), where \(L_H\) and \(L_F\) are the functions of Hamming and F1 loss, respectively. This loss function was similarly defined in one experiment on CFT (Li and Lin 2014).
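Since PRA\(k\)EL treats the cost function as a black box, optimizing this composite metric requires nothing beyond defining it; a minimal sketch reusing the illustrative `hamming_loss` and `f1_loss` from Sect. 2:

```python
def composite_loss(y, y_hat):
    # L_c = 0.8 * L_H + 0.2 * L_F, used directly as c_n via Eq. (1)
    return 0.8 * hamming_loss(y, y_hat) + 0.2 * f1_loss(y, y_hat)
```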

Because there is no known inference rule for EPCC under the composite loss, we used the rules for both Hamming and F1 loss. The results are shown in Table 10, where EPCC-Ham is EPCC with the inference rule for Hamming loss, and EPCC-F1 is the one with the rule for F1 loss. Since CFT is a general cost-sensitive method, we also included it in this experiment. The null hypothesis of the Friedman test was rejected at \(\alpha = 0.05\) with a p-value of \(4.6 \times 10^{-4}\), and the average rank of PRA\(k\)EL is 1.14. According to the Nemenyi test, the performance of PRA\(k\)EL is significantly better than that of both EPCC-Ham and EPCC-F1. In addition, according to the results of the t-test in Table 11, the proposed method is superior to CFT except on the enron dataset.

Table 11 PRA\(k\)EL versus EPCC and CFT under composite loss by the Student’s t-test at a significance level of 0.05 (superior/comparable/inferior)
Table 12 Performance of PRA\(k\)EL and CS-RA\(k\)EL in terms of Hamming loss (mean ± SE)
Table 13 Performance of PRA\(k\)EL and CS-RA\(k\)EL in terms of weighted Hamming loss (mean ± SE)

5.6 Comparison with CS-RA\(k\)EL and GLE

Recall that under the problem setup of CS-RA\(k\)EL and GLE, they can handle only weighted Hamming loss, as defined in Sect. 2. Therefore, in this subsection we compare PRA\(k\)EL with these two methods in terms of Hamming loss and weighted Hamming loss. For each dataset, each component of the weight, \(\mathbf {w}[l]\), was drawn independently from the uniform distribution over [0, 1], and then the weight \(\mathbf {w}\) was normalized such that \(\sum _{l=1}^K\mathbf {w}[l] = 1\). The results are shown in Tables 12 and 13. We see that PRA\(k\)EL reaches lower costs than both CS-RA\(k\)EL and GLE on nearly all the datasets. It is clear from these two tables that PRA\(k\)EL performs significantly better than the other methods in terms of both loss functions. For completeness, we also provide the results of the Student's t-test in Table 14. The reason for such a significant improvement is that our method considers not only the differences between the misclassification costs within each example, but also the varying costs across examples. In contrast, CS-RA\(k\)EL and GLE take only the latter into account.

Table 14 PRA\(k\)EL versus CS-RA\(k\)EL by the Student’s t-test at a significance level of 0.05 (superior/comparable/inferior)

6 Conclusion

We proposed an efficient cost-sensitive extension of RA\(k\)EL, named PRA\(k\)EL, which meets the needs of different MLC applications by taking the evaluation metric into account. Experimental results demonstrate that PRA\(k\)EL is competitive with other methods designed for certain specific metrics, and frequently outperforms the others under general loss functions. The generality of PRA\(k\)EL allows it to optimize arbitrary example-based evaluation metrics without additional knowledge, inference rules, or approximations, and thus, it is more suitable for solving real-world problems.