1 Introduction

The multi-label classification problem (MLC), which allows multiple labels to be associated with each example, is an extension of the multi-class classification problem. The MLC problem satisfies the demands of many real-world applications (Carneiro et al. 2007; Trohidis et al. 2008; Barutçuoglu et al. 2006). Different applications usually need different criteria to evaluate the prediction performance of MLC algorithms. Some popular criteria are Hamming loss, Rank loss, F1 score, and Accuracy score (Tsoumakas et al. 2010; Madjarov et al. 2012).

Label embedding (LE) is an important family of MLC algorithms that jointly extract the information of all labels to improve the prediction performance. LE algorithms automatically transform the original labels to an embedded space, which represents the hidden structure of the labels. After conducting learning within the embedded space, LE algorithms make more accurate predictions with the help of the hidden structure.

Existing LE algorithms can be grouped into two categories based on the dimension of the embedded space: label space dimension reduction (LSDR) and label space dimension expansion (LSDE). LSDR algorithms (Hsu et al. 2009; Kapoor et al. 2012; Tai and Lin 2012; Sun et al. 2011; Chen and Lin 2012; Yu et al. 2014; Lin et al. 2014; Balasubramanian and Lebanon 2012; Bi and Kwok 2013; Bhatia et al. 2015; Yeh et al. 2017) consider a low-dimensional embedded space for digesting the information between labels and conduct more effective learning. In contrast to LSDR algorithms, LSDE algorithms (Zhang and Schneider 2011; Ferng and Lin 2013; Tsoumakas et al. 2011a) focus on a high-dimensional embedded space. The additional dimensions can then be used to represent different angles of joint information between the labels to reach better performance.

While LE algorithms have become major tools for tackling the MLC problem, most existing LE algorithms are designed to optimize only one or a few specific criteria. The algorithms may then suffer from bad performance with respect to other criteria. Given that different applications demand different criteria, it is thus important to achieve cost (criterion) sensitivity to make MLC algorithms more realistic. Cost-sensitive MLC (CSMLC) algorithms consider the criterion as an additional input, and take it into account either in the training or the predicting stage. The additional input can then be used to guide the algorithm towards more realistic predictions. CSMLC algorithms have been attracting research attention in recent years (Lo et al. 2011, 2014; Dembczynski et al. 2010, 2011; Li and Lin 2014), but to the best of our knowledge, there is no work on cost-sensitive label embedding (CSLE) algorithms yet.

In this paper, we study the design of CSLE algorithms, which take the intended criterion into account in the training stage to locate a cost-sensitive hidden structure in the embedded space. The cost-sensitive hidden structure can then be used for more effective learning and more accurate predictions with respect to the criterion of interest. Inspired by the finding that many of the existing LSDR algorithms can be viewed as linear manifold learning approaches, we propose to adopt manifold learning for CSLE. Nevertheless, to embed any general and possibly complicated criterion, linear manifold learning may not be sophisticated enough. We thus start with multidimensional scaling (MDS), a well-known non-linear manifold learning approach, to propose a novel CSLE algorithm. The proposed cost-sensitive label embedding with multidimensional scaling (CLEMS) algorithm embeds the cost information within the distance measure of the embedded space. We further design a mirroring trick for CLEMS to properly embed the possibly asymmetric criterion information within the symmetric distance measure. We also design an efficient procedure that conquers the difficulty of making predictions through the non-linear cost-sensitive hidden structure. Theoretical results justify that CLEMS achieves cost-sensitivity through learning in the MDS-embedded space. Extensive empirical results demonstrate that CLEMS usually reaches better performance than leading LE algorithms across different criteria. In addition, CLEMS also performs better than the state-of-the-art CSMLC algorithms (Li and Lin 2014; Dembczynski et al. 2010, 2011). The results suggest that CLEMS is a promising algorithm for CSMLC.

This paper is organized as follows. Section 2 formalizes the CSLE problem and Sect. 3 illustrates the proposed algorithm along with theoretical justifications. We discuss the experimental results in Sect. 4 and conclude in Sect. 5.

2 Cost-sensitive label embedding

In multi-label classification (MLC), we denote the feature vector of an instance by \({\mathbf {x}} \in {\mathcal {X}} \subseteq {\mathbb {R}}^{d}\) and denote the label vector by \({\mathbf {y}} \in {\mathcal {Y}} \subseteq \{ 0,1 \}^{K}\) where \({\mathbf {y}}[i]=1\) if and only if the instance is associated with the i-th label. Given the training instances \(\mathcal {D} = \{({\mathbf {x}}^{(n)}, {\mathbf {y}}^{(n)}) \}_{n=1}^{N}\), the goal of MLC algorithms is to train a predictor \(h:{\mathcal {X}} \rightarrow {\mathcal {Y}}\) from \(\mathcal {D}\) in the training stage, with the expectation that for any unseen testing instance \(({\mathbf {x}}, {\mathbf {y}})\), the prediction \(\tilde{{\mathbf {y}}} = h({\mathbf {x}})\) can be close to the ground truth \({\mathbf {y}}\).

A simple criterion for evaluating the closeness between \({\mathbf {y}}\) and \(\tilde{{\mathbf {y}}}\) is Hamming loss \(({\mathbf {y}}, \tilde{{\mathbf {y}}}) = \frac{1}{K}\sum _{i=1}^{K} \llbracket {\mathbf {y}}[i] \ne \tilde{{\mathbf {y}}}[i] \rrbracket \). It is worth noting that Hamming loss separately evaluates each label component of \(\tilde{{\mathbf {y}}}\). There are other criteria that jointly evaluate all the label components of \(\tilde{{\mathbf {y}}}\), such as F1 score, Rank loss, 0/1 loss, and Accuracy score (Tsoumakas et al. 2010; Madjarov et al. 2012).
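
For concreteness, the short NumPy snippet below computes Hamming loss for one \(({\mathbf {y}}, \tilde{{\mathbf {y}}})\) pair. It merely illustrates the definition above and is not tied to any particular MLC package.

```python
import numpy as np

def hamming_loss(y, y_tilde):
    """Fraction of label components on which the prediction disagrees with the truth."""
    y, y_tilde = np.asarray(y), np.asarray(y_tilde)
    return np.mean(y != y_tilde)

# toy example with K = 4 labels
y       = np.array([1, 0, 1, 0])
y_tilde = np.array([1, 1, 1, 0])
print(hamming_loss(y, y_tilde))  # 0.25: one of the four labels is predicted wrongly
```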

Arguably the simplest algorithm for MLC is binary relevance (BR) (Tsoumakas and Katakis 2007). BR separately trains a binary classifier for each label without considering the information of other labels. In contrast to BR, label embedding (LE) is an important family of MLC algorithms that jointly use the information of all labels to achieve better prediction performance. LE algorithms try to identify the hidden structure behind the labels. In the training stage, instead of training a predictor h directly, LE algorithms first embed each K-dimensional label vector \({\mathbf {y}}^{(n)}\) as an M-dimensional embedded vector \(\mathbf {z}^{(n)} \in {\mathcal {Z}}\subseteq {\mathbb {R}}^{M}\) by an embedding function \(\varPhi :{\mathcal {Y}} \rightarrow {\mathcal {Z}}\). The embedded vector \(\mathbf {z}^{(n)}\) can be viewed as the hidden structure that contains the information pertaining to all labels. Then, the algorithms train an internal predictor \(g:{\mathcal {X}} \rightarrow {\mathcal {Z}}\) from \(\{({\mathbf {x}}^{(n)}, \mathbf {z}^{(n)}) \}_{n=1}^{N}\). In the predicting stage, for the testing instance \({\mathbf {x}}\), LE algorithms obtain the predicted embedded vector \({\tilde{\mathbf {z}}} = g({\mathbf {x}})\) and use a decoding function \(\varPsi :{\mathcal {Z}} \rightarrow {\mathcal {Y}}\) to get the prediction \(\tilde{{\mathbf {y}}}\). In other words, LE algorithms learn the predictor by \(h = \varPsi \circ g\). Figure 1 illustrates the flow of LE algorithms.

Fig. 1 Flow of label embedding

Existing LE algorithms can be grouped into two categories based on M (the dimension of \({\mathcal {Z}}\)) and K (the dimension of \({\mathcal {Y}}\)). LE algorithms that work with \(M \le K\) are termed as label space dimension reduction (LSDR) algorithms. They consider a low-dimensional embedded space for digesting the information between labels and utilize different pairs of \((\varPhi , \varPsi )\) to conduct more effective learning. Compressed sensing (Hsu et al. 2009) and Bayesian compressed sensing (Kapoor et al. 2012) consider a random projection as \(\varPhi \) and obtain \(\varPsi \) by solving an optimization problem per test instance. Principal label space transformation (Tai and Lin 2012) considers \(\varPhi \) calculated from an optimal linear projection of the label vectors and derives \(\varPsi \) accordingly. Some other works also consider optimal linear projections as \(\varPhi \) but take feature vectors into account in the optimality criterion, including canonical-correlation-analysis methods (Sun et al. 2011), conditional principal label space transformation (Chen and Lin 2012), low-rank empirical risk minimization for multi-label learning (Yu et al. 2014), and feature-aware implicit label space encoding (Lin et al. 2014). Canonical-correlated autoencoder (Yeh et al. 2017) extends the linear projection works by using neural networks instead. Landmark selection method (Balasubramanian and Lebanon 2012) and column subset selection (Bi and Kwok 2013) design \(\varPhi \) to select a subset of labels as embedded vectors and derive the corresponding \(\varPsi \). Sparse local embeddings for extreme classification (Bhatia et al. 2015) trains a locally-linear projection as \(\varPhi \) and constructs \(\varPsi \) by nearest neighbors. The smaller M in LSDR algorithms allows the internal predictor g to be learned more efficiently and effectively.

Other LE algorithms work with \(M > K\), which are called label space dimension expansion (LSDE) algorithms. Canonical-correlation-analysis output codes (Zhang and Schneider 2011) design \(\varPhi \) based on canonical correlation analysis to generate additional output codes to enhance the performance. Error-correcting-code (ECC) algorithms (Ferng and Lin 2013) utilize the encoding and decoding functions of standard error-correcting codes for communication as \(\varPhi \) and \(\varPsi \), respectively. Random k-labelsets (Tsoumakas et al. 2011a), a popular algorithm for MLC, can be considered as an ECC-based algorithm with the repetition code (Ferng and Lin 2013). LSDE algorithms use additional dimensions to represent different angles of joint information between the labels to reach better performance.

To the best of our knowledge, all the existing LE algorithms above are designed for one or a few specific criteria and may suffer from bad performance with respect to other criteria. For example, the optimality criterion within principal label space transformation (Tai and Lin 2012) is closely related to Hamming loss. For MLC data with very few non-zero \({\mathbf {y}}[i]\), which are commonly encountered in real-world applications, optimizing Hamming loss can easily result in all-zero predictions \(\tilde{{\mathbf {y}}}\), which suffer from a bad F1 score.

MLC algorithms that take the evaluation criterion into account are called cost-sensitive MLC (CSMLC) algorithms and have been attracting research attention in recent years. CSMLC algorithms take the criterion as an additional input and consider it either in the training or the predicting stage. For any given criterion, CSMLC algorithms can ideally make cost-sensitive predictions with respect to the criterion without extra efforts in algorithm design. Generalized k-labelsets ensemble (Lo et al. 2011, 2014) is extended from random k-labelsets (Tsoumakas et al. 2011a) and digests the criterion by giving appropriate weights to labels. The ensemble algorithm performs well for any weighted Hamming loss but cannot tackle more general criteria that jointly evaluate all the label components, such as F1 score. Two CSMLC algorithms for arbitrary criteria are probabilistic classifier chain (PCC) (Dembczynski et al. 2010, 2011) and condensed filter tree (CFT) (Li and Lin 2014). PCC is based on estimating the probability of each label and making a Bayes-optimal inference for the evaluation criterion. While PCC can in principle be used for any criterion, it may suffer from computational difficulty unless an efficient inference rule for the criterion is designed first. CFT is based on converting the criterion as weights when learning each label. CFT conducts the weight-assignment in a more sophisticated manner than generalized k-labelsets ensemble does, and can hence work with arbitrary criteria. Both PCC and CFT are extended from classifier chain (CC) (Read et al. 2011) and form a chain of labels to utilize the information of the earlier labels in the chain, but they cannot globally find the hidden structure of all labels like LE algorithms.

In this paper, we study the design of cost-sensitive label embedding (CSLE) algorithms that respect the criterion when calculating the embedding function \(\varPhi \) and the decoding function \(\varPsi \). We take an initiative of studying CSLE algorithms, with the hope of achieving cost-sensitivity and finding the hidden structure at the same time. More precisely, we take the following CSMLC setting (Li and Lin 2014). Consider a cost function \(c( {\mathbf {y}}, \tilde{{\mathbf {y}}})\) which represents the penalty when the ground truth is \({\mathbf {y}}\) and the prediction is \(\tilde{{\mathbf {y}}}\). We naturally assume that \(c( {\mathbf {y}}, \tilde{{\mathbf {y}}}) \ge 0\), with value 0 attained if and only if \({\mathbf {y}}\) and \(\tilde{{\mathbf {y}}}\) are the same. Given training instances \(\mathcal {D} = \{({\mathbf {x}}^{(n)}, {\mathbf {y}}^{(n)}) \}_{n=1}^{N}\) and the cost function \(c(\cdot , \cdot )\), CSLE algorithms learn an embedding function \(\varPhi \), a decoding function \(\varPsi \), and an internal predictor g, based on both the training instances \(\mathcal {D}\) and the cost function \(c(\cdot , \cdot )\). The objective of CSLE algorithms is to minimize the expected cost \(c({\mathbf {y}}, h({\mathbf {x}}))\) for any unseen testing instance \(({\mathbf {x}}, {\mathbf {y}})\), where \(h = \varPsi \circ g\).

3 Proposed algorithm

We first discuss the difficulties of directly extending state-of-the-art LE algorithms for CSLE. In particular, the decoding function \(\varPsi \) of many existing algorithms, such as conditional principal label space transformation (Chen and Lin 2012) and feature-aware implicit label space encoding (Lin et al. 2014), is derived from \(\varPhi \) and can be divided into two steps. The first step is using some \(\psi :{\mathcal {Z}} \rightarrow {\mathbb {R}}^K\) that corresponds to \(\varPhi \) to decode the embedded vector \(\mathbf {z}\) to a real-valued vector \(\hat{{\mathbf {y}}} \in {\mathbb {R}}^K\); the second step is choosing a threshold to transform \(\hat{{\mathbf {y}}}\) to \(\tilde{{\mathbf {y}}} \in \{0, 1\}^{K}\). If the embedding function \(\varPhi \) is a linear function, the corresponding \(\psi \) can be efficiently computed by pseudo-inverse. However, for complicated cost functions, a linear function may not be sufficient to properly embed the cost information. On the other hand, if the embedding function \(\varPhi \) is a non-linear function, such as those within kernel principal component analysis (Schölkopf et al. 1998) and kernel dependency estimation (Weston et al. 2002), \(\psi \) is often difficult to derive or time-consuming in calculation, which then makes \(\varPsi \) practically infeasible to compute.

To resolve the difficulties, we do not consider the two-step decoding function \(\varPsi \) that depends on deriving \(\psi \) from \(\varPhi \). Instead, we first fix a decent decoding function \(\varPsi \) and then derive the corresponding embedding function \(\varPhi \). We realize that the goal of \(\varPsi \) is simply to locate the most probable label vector \(\tilde{{\mathbf {y}}}\) from \({\mathcal {Y}}\), which is of a finite cardinality, based on the predicted embedded vector \({\tilde{\mathbf {z}}} = g({\mathbf {x}}) \in {\mathcal {Z}}\). If all the embedded vectors are sufficiently far away from each other in \({\mathcal {Z}}\), one natural decoding function is to calculate the nearest neighbor \(\mathbf {z}_q\) of \({\tilde{\mathbf {z}}}\) and return the corresponding \({\mathbf {y}}_q\) as \(\tilde{{\mathbf {y}}}\). Such a nearest-neighbor decoding function \(\varPsi \) is behind some ECC-based LSDE algorithms (Ferng and Lin 2013) and will be adopted.

The nearest-neighbor decoding function \(\varPsi \) is based on the distance measure of \({\mathcal {Z}}\), which matches our primary need of representing the cost information. In particular, if \({\mathbf {y}}_i\) is a lower-cost prediction than \({\mathbf {y}}_j\) with respect to the ground truth \({\mathbf {y}}_t\), we hope that the corresponding embedded vector \(\mathbf {z}_i\) would be closer to \(\mathbf {z}_t\) than \(\mathbf {z}_j\). Then, even if g makes a small error such that \({\tilde{\mathbf {z}}} = g({\mathbf {x}})\) deviates from the desired \(\mathbf {z}_t\), the nearest-neighbor decoding function \(\varPsi \) can still decode to the lower-cost \({\mathbf {y}}_i\) as \(\tilde{{\mathbf {y}}}\) instead of \({\mathbf {y}}_j\). In other words, for any two label vectors \({\mathbf {y}}_i, {\mathbf {y}}_j \in {\mathcal {Y}}\) and the corresponding embedded vectors \(\mathbf {z}_i, \mathbf {z}_j \in {\mathcal {Z}}\), we want the Euclidean distance between \(\mathbf {z}_i\) and \(\mathbf {z}_j\), which is denoted by \(d( \mathbf {z}_i, \mathbf {z}_j)\), to preserve the magnitude-relationship of the cost \(c({\mathbf {y}}_i, {\mathbf {y}}_j)\).

Based on this objective, the framework of the proposed algorithm is as follows. In the training stage, for each label vector \({\mathbf {y}}_i \in {\mathcal {Y}}\), the proposed algorithm determines an embedded vector \(\mathbf {z}_i\) such that the distance between two embedded vectors \(d( \mathbf {z}_i, \mathbf {z}_j)\) in \({\mathcal {Z}}\) approximates the transformed cost \(\delta (c( {\mathbf {y}}_i, {\mathbf {y}}_j))\), where \(\delta (\cdot )\) is a monotonic transform function to preserve the magnitude-relationship and will be discussed later. We let the embedding function \(\varPhi \) be the mapping \({\mathbf {y}}_i \rightarrow \mathbf {z}_i\) and use \(\mathcal {Q}\) to represent the embedded vector set \(\{\varPhi ({\mathbf {y}}_i) \, | \, {\mathbf {y}}_i \in {\mathcal {Y}} \}\). Then the algorithm trains a regressor \(g:{\mathcal {X}} \rightarrow {\mathcal {Z}}\) as the internal predictor.

In the predicting stage, when receiving a testing instance \({\mathbf {x}}\), the algorithm obtains the predicted embedded vector \({\tilde{\mathbf {z}}} = g({\mathbf {x}})\). Given that the cost information is embedded in the distance, for each \(\mathbf {z}_i \in \mathcal {Q}\), the distance \(d( \mathbf {z}_i, {\tilde{\mathbf {z}}})\) can be viewed as the estimated cost if the underlying truth is \({\mathbf {y}}_i\). Hence the algorithm finds \(\mathbf {z}_q \in \mathcal {Q}\) such that the distance \(d( \mathbf {z}_q, {\tilde{\mathbf {z}}})\) is the smallest (the smallest estimated cost), and lets the corresponding \({\mathbf {y}}_q = \varPhi ^{-1}(\mathbf {z}_q) = \tilde{{\mathbf {y}}}\) be the final prediction for \({\mathbf {x}}\). In other words, we have a nearest-neighbor-based \(\varPsi \), with the first step being the determination of the nearest-neighbor of  \({\tilde{\mathbf {z}}}\) and the second step being the utilization of \(\varPhi ^{-1}\) to obtain the prediction \(\tilde{{\mathbf {y}}}\).
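
The nearest-neighbor decoding step can be written in a few lines of plain NumPy. In the sketch below, the matrices Q (embedded vectors) and S (their label vectors, in matching row order) are assumed to be given; the code simply realizes the two steps described above.

```python
import numpy as np

def nearest_neighbor_decode(z_tilde, Q, S):
    """Nearest-neighbor decoding function Psi.

    z_tilde : (M,)   predicted embedded vector g(x)
    Q       : (L, M) embedded vectors z_i (one row per candidate)
    S       : (L, K) corresponding label vectors y_i
    """
    dists = np.linalg.norm(Q - z_tilde, axis=1)  # d(z_i, z_tilde) for every candidate
    q = int(np.argmin(dists))                    # index of the nearest embedded vector
    return S[q]                                  # Phi^{-1}(z_q) = y_q as the prediction
```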

Three key issues of the framework above are yet to be addressed. The first issue is the determination of the embedded vectors \(\mathbf {z}_i\). The second issue is using the symmetric distance measure to embed the possibly asymmetric cost functions where \(c( {\mathbf {y}}_i, {\mathbf {y}}_j) \ne c( {\mathbf {y}}_j, {\mathbf {y}}_i)\). The last issue is the choice of a proper monotonic transform function \(\delta (\cdot )\). The issues will be discussed in the following sub-sections.

3.1 Calculating the embedded vectors by multidimensional scaling

As mentioned above, our objective is to determine embedded vectors \(\mathbf {z}_i\) such that the distance \(d( \mathbf {z}_i, \mathbf {z}_j)\) approximates the transformed cost \(\delta (c( {\mathbf {y}}_i, {\mathbf {y}}_j))\). The objective can be formally defined as minimizing the embedding error \((d(\mathbf {z}_i, \mathbf {z}_j) - \delta (c({\mathbf {y}}_i, {\mathbf {y}}_j)))^2\).

We observe that the transformed cost \(\delta (c( {\mathbf {y}}_i, {\mathbf {y}}_j))\) can be viewed as the dissimilarity between label vectors \({\mathbf {y}}_i\) and \({\mathbf {y}}_j\). Computing an embedding based on the dissimilarity information matches the task of manifold learning, which is able to preserve the information and discover the hidden structure. Based on our discussions above, any approach that solves the manifold learning task can then be taken to solve the CSLE problem. Nevertheless, for CSLE, different cost functions may need different M (the dimension of \({\mathcal {Z}}\)) to achieve a decent embedding. We thus consider manifold learning approaches that can flexibly take M as the parameter, and adopt a classic manifold learning approach called multidimensional scaling (MDS) (Kruskal 1964).

For a target dimension M, MDS attempts to discover the hidden structure of \(L_{\scriptscriptstyle MDS}\) objects by embedding their dissimilarities in an M-dimensional target space. The dissimilarity is represented by a symmetric, non-negative, and zero-diagonal dissimilarity matrix \({\varvec{\Delta }}\), which is an \(L_{\scriptscriptstyle MDS} \times L_{\scriptscriptstyle MDS}\) matrix with \({\varvec{\Delta }}_{i, j}\) being the dissimilarity between the i-th object and the j-th object. The objective of MDS is to determine target vectors \(\mathbf {u}_1, \mathbf {u}_2, \ldots , \mathbf {u}_{L_{\scriptscriptstyle MDS}}\) in the target space to minimize the stress, which is defined as \(\sum _{i, j} \mathbf {W}_{i, j}( d(\mathbf {u}_i, \mathbf {u}_j) - {\varvec{\Delta }}_{i, j})^2\), where d denotes the Euclidean distance in the target space, and \(\mathbf {W}\) is a symmetric, non-negative, and zero-diagonal matrix that carries the weight \(\mathbf {W}_{i, j}\) of each object pair. There are several algorithms available in the literature for solving MDS. A representative algorithm is Scaling by MAjorizing a COmplicated Function (SMACOF) (De Leeuw 1977), which can iteratively minimize stress. The complexity of SMACOF is generally \(\mathcal {O}((L_{\scriptscriptstyle MDS})^3)\), but there is often room for speeding up with special weight matrices \(\mathbf {W}\).
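
As a minimal illustration, scikit-learn's SMACOF routine can be run on a precomputed dissimilarity matrix as below. Note that this stock routine minimizes the unweighted stress; the weight matrix \(\mathbf {W}\) used later by CLEMS is not exposed by it, so the sketch should be read as a simplification with uniform weights.

```python
import numpy as np
from sklearn.manifold import smacof

# A symmetric, non-negative, zero-diagonal dissimilarity matrix for
# L_MDS = 4 toy objects (CLEMS builds Delta from transformed costs instead).
Delta = np.array([[0.0, 1.0, 2.0, 3.0],
                  [1.0, 0.0, 1.5, 2.5],
                  [2.0, 1.5, 0.0, 1.0],
                  [3.0, 2.5, 1.0, 0.0]])

M = 2  # target dimension
U, stress = smacof(Delta, n_components=M, metric=True, random_state=0)
print(U.shape, stress)  # (4, 2) target vectors and the remaining (unweighted) stress
```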

The embedding error \((d(\mathbf {z}_i, \mathbf {z}_j) - \delta (c({\mathbf {y}}_i, {\mathbf {y}}_j)))^2\) and the stress \(( d(\mathbf {u}_i, \mathbf {u}_j) - {\varvec{\Delta }}_{i, j})^2\) are of very similar form. Therefore, we can view the transformed costs as the dissimilarities of embedded vectors and feed MDS with specific values of \({\varvec{\Delta }}\) and \(\mathbf {W}\) to calculate the embedded vectors to reduce the embedding error. Specifically, the relation between MDS and our objective can be described as in Table 1.

Table 1 Relation between MDS and our objective
Fig. 2 Embedding cost in distance

The most complete embedding would convert all label vectors \({\mathbf {y}} \in {\mathcal {Y}} \subseteq \{0, 1\}^K\) to the embedded vectors. Nevertheless, the number of all label vectors is \(2^{K}\), which makes solving MDS infeasible. Therefore, we do not consider embedding the entire \({\mathcal {Y}}\). Instead, we select some representative label vectors as a candidate set \({\mathcal {S}} \subseteq {\mathcal {Y}}\), and only embed the label vectors in \({\mathcal {S}}\). While the use of \({\mathcal {S}}\) instead of \({\mathcal {Y}}\) restricts the nearest-neighbor decoding function to only predict from \({\mathcal {S}}\), it can reduce the computational burden. One reasonable choice of \({\mathcal {S}}\) is the set of label vectors that appear in the training instances \(\mathcal {D}\), which is denoted as \({\mathcal {S}}_{tr}\). We will show that using \({\mathcal {S}}_{tr}\) as \({\mathcal {S}}\) readily leads to promising performance and discuss more about the choice of the candidate set in Sect. 4.
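
A minimal sketch of building the candidate set from the training labels follows; here Y_train is a toy binary label matrix, and the frequencies f returned alongside the distinct rows will later serve as the weights in \(\mathbf {W}\).

```python
import numpy as np

# toy (N, K) binary label matrix of the training instances
Y_train = np.array([[1, 0, 1],
                    [1, 0, 1],
                    [0, 1, 0],
                    [1, 1, 0]])

# S_tr: the distinct label vectors appearing in the training set (L x K rows);
# f[i]: how many times S_tr[i] appears in the training set
S_tr, f = np.unique(Y_train, axis=0, return_counts=True)
print(S_tr)
print(f)
```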

After choosing \({\mathcal {S}}\), we can construct \({\varvec{\Delta }}\) and \(\mathbf {W}\) for solving MDS. Let L denote the number of elements in \({\mathcal {S}}\) and let \(\mathbf {C}({\mathcal {S}})\) be the transformed cost matrix of \({\mathcal {S}}\), which is an \(L \times L\) matrix with \(\mathbf {C}({\mathcal {S}})_{i, j} = \delta (c({\mathbf {y}}_i, {\mathbf {y}}_j))\) for \({\mathbf {y}}_i, {\mathbf {y}}_j \in {\mathcal {S}}\). Unfortunately, \(\mathbf {C}({\mathcal {S}})\) cannot be directly used as the symmetric dissimilarity matrix \({\varvec{\Delta }}\) because the cost function \(c(\cdot , \cdot )\) may be asymmetric (\(c({\mathbf {y}}_i, {\mathbf {y}}_j) \ne c({\mathbf {y}}_j, {\mathbf {y}}_i)\)). To resolve this difficulty, we propose a mirroring trick to construct a symmetric \({\varvec{\Delta }}\) from \(\mathbf {C}({\mathcal {S}})\).

3.2 Mirroring trick for asymmetric cost function

The asymmetric cost function implies that each label vector \({\mathbf {y}}_i\) serves two roles: as the ground truth, or as the prediction. When \({\mathbf {y}}_i\) serves as the ground truth, we should use \(c({\mathbf {y}}_i, \cdot )\) to describe the cost behavior. When \({\mathbf {y}}_i\) serves as the prediction, we should use \(c(\cdot , {\mathbf {y}}_i)\) to describe the cost behavior. This motivates us to view these two roles separately.

For each \({\mathbf {y}}_i \in {\mathcal {S}}\), we mirror it as \({\mathbf {y}}^{(t)}_i\) and \({\mathbf {y}}^{(p)}_i\) to denote viewing \({\mathbf {y}}_i\) as the ground truth and the prediction, respectively. Note that the two mirrored label vectors \({\mathbf {y}}^{(t)}_i\) and \({\mathbf {y}}^{(p)}_i\) are in fact the same, but carry different meanings. Now, we have two roles of the candidate sets \({\mathcal {S}}^{(t)} = \{{\mathbf {y}}^{(t)}_i\}_{i=1}^{L}\) and \({\mathcal {S}}^{(p)} = \{{\mathbf {y}}^{(p)}_i\}_{i=1}^{L}\). Then, as illustrated by Fig. 2, \(\delta (c({\mathbf {y}}_i, {\mathbf {y}}_j))\), the transformed cost when \({\mathbf {y}}_i\) is the ground truth and \({\mathbf {y}}_j\) is the prediction, can be viewed as the dissimilarity between the ground truth role \({\mathbf {y}}^{(t)}_i\) and the prediction role \({\mathbf {y}}^{(p)}_j\), which is symmetric between the two roles. Similarly, \(\delta (c({\mathbf {y}}_j, {\mathbf {y}}_i))\) can be viewed as the dissimilarity between the prediction role \({\mathbf {y}}^{(p)}_i\) and the ground truth role \({\mathbf {y}}^{(t)}_j\). That is, all the asymmetric transformed costs can be viewed as the dissimilarities between the label vectors in \({\mathcal {S}}^{(t)}\) and \({\mathcal {S}}^{(p)}\).

Fig. 3 Constructions of (a) \({\varvec{\Delta }}\) and (b) \(\mathbf {W}\)

Based on this view, instead of embedding \({\mathcal {S}}\) by MDS, we embed both \({\mathcal {S}}^{(t)}\) and \({\mathcal {S}}^{(p)}\) by considering 2L objects, the first L objects being the elements in \({\mathcal {S}}^{(t)}\) and the last L objects being the elements in \({\mathcal {S}}^{(p)}\). Following the mirroring step above, we construct symmetric \({\varvec{\Delta }}\) and \(\mathbf {W}\) as \(2 L \times 2 L\) matrices by the following equations and illustrate the constructions by Fig. 3.

$$\begin{aligned} {\varvec{\Delta }}_{i,j} = {\left\{ \begin{array}{ll} \delta (c({\mathbf {y}}_i, {\mathbf {y}}_{j-L})) &{}\quad \text {if }(i,j)\text { in the top-right part} \\ \delta (c({\mathbf {y}}_j, {\mathbf {y}}_{i-L})) &{}\quad \text {if }(i,j)\text { in the bottom-left part} \\ 0 &{}\quad \text {otherwise} \\ \end{array}\right. } \end{aligned}$$
(1)
$$\begin{aligned} \mathbf {W}_{i,j} = {\left\{ \begin{array}{ll} f_i &{}\quad \text {if }(i,j)\text { in the top-right part} \\ f_j &{}\quad \text {if }(i,j)\text { in the bottom-left part} \\ 0 &{}\quad \text {otherwise} \\ \end{array}\right. } \end{aligned}$$
(2)

We explain the constructions and the new notations \(f_i\) as follows. Given that we are concerned only about the dissimilarities between the elements in \({\mathcal {S}}^{(t)}\) and \({\mathcal {S}}^{(p)}\), we set the top-left and the bottom-right parts of \(\mathbf {W}\) to zeros (and set the corresponding parts of \({\varvec{\Delta }}\) conveniently to zeros as well). Then, we set the top-right part and the bottom-left part of \({\varvec{\Delta }}\) to be the transformed costs to reflect the dissimilarities. The top-right part and the bottom-left part of \({\varvec{\Delta }}\) are in fact \(\mathbf {C}({\mathcal {S}})\) and \(\mathbf {C}({\mathcal {S}})^\top \) respectively, as illustrated by Fig. 3. Considering that every label vector may have different importance, to reflect this difference, we set the top-right part of weight \(\mathbf {W}_{i,j}\) to be \(f_i\), the frequency of \({\mathbf {y}}_i\) in \(\mathcal {D}\), and set the bottom-left part of weight \(\mathbf {W}_{i,j}\) to be \(f_j\).
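
The construction of \({\varvec{\Delta }}\) and \(\mathbf {W}\) can be written down directly from Eqs. (1) and (2). The sketch below assumes a candidate set S with frequencies f (e.g., obtained as in Sect. 3.1), uses the square-root transform suggested in Sect. 3.3 as the default \(\delta \), and illustrates the cost function with an F1-based cost; all function names here are ours and only illustrative.

```python
import numpy as np

def build_delta_and_weight(S, f, cost, delta=np.sqrt):
    """Construct the 2L x 2L matrices Delta and W of Eqs. (1) and (2).

    S     : (L, K) candidate label vectors
    f     : (L,)   frequency of each candidate in the training set
    cost  : cost function c(y_truth, y_pred), possibly asymmetric
    delta : monotonic transform (sqrt is the choice suggested in Sect. 3.3)
    """
    L = len(S)
    # C[i, j] = delta(c(y_i, y_j)): y_i as the ground truth, y_j as the prediction
    C = np.array([[delta(cost(S[i], S[j])) for j in range(L)] for i in range(L)])

    Delta = np.zeros((2 * L, 2 * L))
    Delta[:L, L:] = C      # top-right part: ground-truth roles vs. prediction roles
    Delta[L:, :L] = C.T    # bottom-left part: the mirrored block C^T

    W = np.zeros((2 * L, 2 * L))
    W[:L, L:] = np.tile(f.reshape(-1, 1), (1, L))  # weight f_i for ground truth y_i
    W[L:, :L] = W[:L, L:].T                        # mirrored weights
    return Delta, W

# example cost: c(y, y') = 1 - F1(y, y'), which happens to be symmetric
def f1_cost(y_truth, y_pred):
    inter = np.sum(np.minimum(y_truth, y_pred))
    denom = np.sum(y_truth) + np.sum(y_pred)
    return 1.0 - (2.0 * inter / denom if denom > 0 else 1.0)
```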

By solving MDS with the above-mentioned \({\varvec{\Delta }}\) and \(\mathbf {W}\), we can obtain the target vector \(\mathbf {u}^{(t)}_i\) and \(\mathbf {u}^{(p)}_i\) corresponding to \({\mathbf {y}}^{(t)}_i\) and \({\mathbf {y}}^{(p)}_i\). We take \(\mathcal {U}^{(t)}\) and \(\mathcal {U}^{(p)}\) to denote the target vector sets \(\{\mathbf {u}^{(t)}_i\}_{i=1}^{L}\) and \(\{\mathbf {u}^{(p)}_i\}_{i=1}^{L}\), respectively. Those target vectors minimize \(\sum _{i, j} \mathbf {W}_{i,j}(d(\mathbf {u}^{(t)}_i, \mathbf {u}^{(p)}_j) - \delta (c({\mathbf {y}}_i, {\mathbf {y}}_j)) )^2\). That is, the cost information is embedded in the distances between the elements in \(\mathcal {U}^{(t)}\) and \(\mathcal {U}^{(p)}\).

Since we mirror each label vector \({\mathbf {y}}_i\) as two roles (\({\mathbf {y}}^{(t)}_i\) and \({\mathbf {y}}^{(p)}_i\)), we need to decide which target vector (\(\mathbf {u}^{(t)}_i\) or \(\mathbf {u}^{(p)}_i\)) is the embedded vector \(\mathbf {z}_i\) of \({\mathbf {y}}_i\). Recall that the goal of the embedded vectors is to train an internal predictor g and obtain \({\tilde{\mathbf {z}}}\), the “predicted” embedded vector. Therefore, we take the elements in \(\mathcal {U}^{(p)}\), which serve the role of the prediction, as the embedded vectors of the elements in \({\mathcal {S}}\), as illustrated by Fig. 4a. Accordingly, the nearest embedded vector \(\mathbf {z}_q\) should be the role of the ground truth because the cost information is embedded in the distance between these two roles of target vectors. Hence, we take \(\mathcal {U}^{(t)}\) as \(\mathcal {Q}\), the embedded vector set in the first step of nearest-neighbor decoding, and find the nearest embedded vector \(\mathbf {z}_q\) from \(\mathcal {Q}\), as illustrated by Fig. 4b. The final cost-sensitive prediction \(\tilde{{\mathbf {y}}} = {\mathbf {y}}_q\) is the label vector corresponding to \(\mathbf {z}_q\), which carries the cost information through nearest-neighbor decoding.

Algorithm 1 Training process of CLEMS
Algorithm 2 Predicting process of CLEMS

Fig. 4 Different use of two roles of embedded vectors: (a) learning g from \(\mathcal {U}^{(p)}\), (b) making predictions from \(\mathcal {U}^{(t)}\)

With the embedding function \(\varPhi \) using \(\mathcal {U}^{(p)}\) and the nearest-neighbor decoding function \(\varPsi \) using \(\mathcal {Q} = \mathcal {U}^{(t)}\), we have now designed a novel CSLE algorithm. We name it cost-sensitive label embedding with multidimensional scaling (CLEMS). Algorithms 1 and 2 respectively list the training process and the predicting process of CLEMS.
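
To make the two stages concrete, the following rough sketch trains the internal regressor on the prediction-role embeddings (cf. Algorithm 1) and decodes against the ground-truth-role embeddings (cf. Algorithm 2). It assumes the candidate set is \({\mathcal {S}}_{tr}\) (so every training label vector has a row in S), that U_p and U_t come from a weighted MDS solver as described above, and that the random-forest hyperparameters are illustrative rather than the tuned values of Sect. 4.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_internal_regressor(X_train, Y_train, S, U_p):
    """Train g : X -> Z on the prediction-role embedded vectors."""
    # map each training label vector to its row in the candidate set S
    idx = np.array([int(np.flatnonzero((S == y).all(axis=1))[0]) for y in Y_train])
    Z_train = U_p[idx]                                   # Phi(y^(n)) for every instance
    g = RandomForestRegressor(n_estimators=100, max_depth=10, random_state=0)
    g.fit(X_train, Z_train)                              # multi-output regression
    return g

def predict_cost_sensitive(g, X_test, S, U_t):
    """Nearest-neighbor decoding against the ground-truth-role embeddings U^(t)."""
    Z_tilde = g.predict(X_test)                          # predicted embedded vectors
    # pairwise distances between each prediction and every ground-truth-role vector
    dists = np.linalg.norm(Z_tilde[:, None, :] - U_t[None, :, :], axis=2)
    return S[np.argmin(dists, axis=1)]                   # decoded label vectors
```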

3.3 Theoretical guarantee and monotonic function

The last issue is how to choose the monotonic transform function \(\delta (\cdot )\). We suggest a proper monotonic function \(\delta (\cdot )\) based on the following theoretical results.

Theorem 1

For any instance \(({\mathbf {x}}, {\mathbf {y}})\), let \(\mathbf {z}\) be the embedded vector of \({\mathbf {y}}\), \({\tilde{\mathbf {z}}} = g({\mathbf {x}})\) be the predicted embedded vector, \(\mathbf {z}_q\) be the nearest embedded vector of \({\tilde{\mathbf {z}}}\), and \({\mathbf {y}}_q\) be the corresponding label vector of \(\mathbf {z}_q\). In other words, \({\mathbf {y}}_q\) is the outcome of the nearest-neighbor decoding function \(\varPsi \). Then,

$$\begin{aligned} \delta (c({\mathbf {y}}, {\mathbf {y}}_q))^2 \le 5 \Bigl (\underbrace{(d(\mathbf {z}, \mathbf {z}_q) - \delta (c({\mathbf {y}}, {\mathbf {y}}_q)))^2}_{\text{ embedding } \text{ error }} + \underbrace{d(\mathbf {z}, {\tilde{\mathbf {z}}})^2}_{\text{ regression } \text{ error }}\Bigr ). \end{aligned}$$

Proof

Since \(\mathbf {z}_q\) is the nearest neighbor of \({\tilde{\mathbf {z}}}\), we have \(d(\mathbf {z}, {\tilde{\mathbf {z}}}) \ge \frac{1}{2} d(\mathbf {z}, \mathbf {z}_q)\). Hence,

$$\begin{aligned} \textit{embedding error} + \textit{regression error}&= (d(\mathbf {z}, \mathbf {z}_q) - \delta (c({\mathbf {y}}, {\mathbf {y}}_q)))^2 + d(\mathbf {z}, {\tilde{\mathbf {z}}})^2 \\&\ge (d(\mathbf {z}, \mathbf {z}_q) - \delta (c({\mathbf {y}}, {\mathbf {y}}_q)))^2 + \frac{1}{4} d(\mathbf {z}, \mathbf {z}_q)^2 \\&= \frac{5}{4} (d(\mathbf {z}, \mathbf {z}_q) - \frac{4}{5} \delta (c({\mathbf {y}}, {\mathbf {y}}_q)))^2 + \frac{1}{5} \delta (c({\mathbf {y}}, {\mathbf {y}}_q))^2 \\&\ge \frac{1}{5} \delta (c({\mathbf {y}}, {\mathbf {y}}_q))^2 . \end{aligned}$$

This implies the theorem. \(\square \)

Theorem 1 implies that the cost of the prediction can be bounded by the sum of the embedding error and the regression error. In our framework, the embedding error can be reduced by multidimensional scaling and the regression error can be reduced by learning a good regressor g. Theorem 1 thus provides a theoretical explanation of how our framework achieves cost-sensitivity.

In general, any monotonic function \(\delta (\cdot )\) can be used in the proposed framework. Based on Theorem 1, we suggest \(\delta (\cdot ) = (\cdot )^{1/2}\) to directly bound the cost by \(c({\mathbf {y}}, {\mathbf {y}}_q) \le 5 (\textit{embedding error} + \textit{regression error})\). We will show that the suggested monotonic function leads to promising practical performance in Sect. 4.

4 Experiments

We conduct the experiments on nine real-world datasets (Tsoumakas et al. 2011b; Read et al. 2016) to validate the proposed algorithm, CLEMS. The details of the datasets are shown by Table 2. We evaluate the algorithms in our cost-sensitive setting with three commonly-used evaluation criteria, namely F1 score \(({\mathbf {y}}, \tilde{{\mathbf {y}}}) = \frac{2 \Vert {\mathbf {y}} \cap \tilde{{\mathbf {y}}} \Vert _1 }{\Vert {\mathbf {y}} \Vert _1 + \Vert \tilde{{\mathbf {y}}} \Vert _1}\), Accuracy score \(({\mathbf {y}}, \tilde{{\mathbf {y}}}) = \frac{\Vert {\mathbf {y}} \cap \tilde{{\mathbf {y}}} \Vert _1 }{\Vert {\mathbf {y}} \cup \tilde{{\mathbf {y}}} \Vert _1}\), and Rank loss \(({\mathbf {y}}, \tilde{{\mathbf {y}}}) = \sum \limits _{{\mathbf {y}}[i]>{\mathbf {y}}[j]} ( \llbracket \tilde{{\mathbf {y}}}[i] < \tilde{{\mathbf {y}}}[j] \rrbracket + \frac{1}{2}\llbracket \tilde{{\mathbf {y}}}[i] = \tilde{{\mathbf {y}}}[j] \rrbracket )\). Note that F1 score and Accuracy score are symmetric while Rank loss is asymmetric. For CLEMS, the input cost function is set as the corresponding evaluation criterion.
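
For reference, the three criteria can be computed for a single \(({\mathbf {y}}, \tilde{{\mathbf {y}}})\) pair as below; the conventions for the degenerate all-zero cases are one common choice and are not prescribed by the definitions above.

```python
import numpy as np

def f1_score(y, y_tilde):
    inter = np.sum(np.minimum(y, y_tilde))
    denom = np.sum(y) + np.sum(y_tilde)
    return 2.0 * inter / denom if denom > 0 else 1.0

def accuracy_score(y, y_tilde):
    inter = np.sum(np.minimum(y, y_tilde))
    union = np.sum(np.maximum(y, y_tilde))
    return inter / union if union > 0 else 1.0

def rank_loss(y, y_tilde):
    # sum over label pairs (i, j) with y[i] > y[j] of the mis-ranking penalty
    loss = 0.0
    for i in np.flatnonzero(y == 1):
        for j in np.flatnonzero(y == 0):
            if y_tilde[i] < y_tilde[j]:
                loss += 1.0
            elif y_tilde[i] == y_tilde[j]:
                loss += 0.5
    return loss

y       = np.array([1, 0, 1, 0])
y_tilde = np.array([1, 1, 0, 0])
print(f1_score(y, y_tilde), accuracy_score(y, y_tilde), rank_loss(y, y_tilde))
```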

All the following experimental results are averaged over 20 runs of experiments. In each run, we randomly split 50, 25, and 25% of the dataset for training, validation, and testing. We use the validation part to select the best parameters for all the algorithms and report the corresponding testing results. For all the algorithms, the internal predictors are set as random forest (Breiman 2001) implemented by scikit-learn (Pedregosa et al. 2011) and the maximum depth of the trees is selected from \(\{ 5, 10, \ldots , 35 \}\). For CLEMS, we use the scikit-learn implementation of the SMACOF algorithm to obtain the MDS-based embedding, with the parameters of SMACOF set to the scikit-learn defaults. For the other algorithms, the remaining parameters are set to the default values suggested in their original papers. In the following figures and tables, we use the notation \(\uparrow (\downarrow )\) to highlight whether a higher (lower) value indicates better performance for the evaluation criterion.

Table 2 Properties of datasets

4.1 Comparing CLEMS with LSDR algorithms

In the first experiment, we compare CLEMS with four LSDR algorithms introduced in Sect. 2: principal label space transformation (PLST) (Tai and Lin 2012), conditional principal label space transformation (CPLST) (Chen and Lin 2012), feature-aware implicit label space encoding (FaIE) (Lin et al. 2014), and sparse local embeddings for extreme classification (SLEEC) (Bhatia et al. 2015).

Since the prediction of SLEEC is a real-valued vector rather than a binary one, we choose the best threshold for quantizing the vector according to the given criterion during training. Thus, our modified SLEEC can be viewed as a "semi-cost-sensitive" algorithm that learns the threshold according to the criterion.

Fig. 5 F1 score (\(\uparrow \)) with the 95% confidence interval of CLEMS and LSDR algorithms

Fig. 6 Accuracy score (\(\uparrow \)) with the 95% confidence interval of CLEMS and LSDR algorithms

Fig. 7 Rank loss (\(\downarrow \)) with the 95% confidence interval of CLEMS and LSDR algorithms

Figures 5 and 6 show the results of F1 score and Accuracy score across different embedded dimensions M. As M increases, all the algorithms reach better performance because of the better preservation of label information. CLEMS outperforms the non-cost-sensitive algorithms (PLST, CPLST, and FaIE) in most of the cases, which verifies the importance of constructing a cost-sensitive embedding. CLEMS also exhibits considerably better performance than SLEEC on most of the datasets, which demonstrates the usefulness of considering the cost information during the embedding (CLEMS) rather than after the embedding (SLEEC). The results of Rank loss are shown by Fig. 7. CLEMS again reaches the best performance in most of the cases, which justifies its validity for asymmetric criteria through the mirroring trick.

4.2 Comparing CLEMS with LSDE algorithms

We compare CLEMS with ECC-based LSDE algorithms (Ferng and Lin 2013). We consider two promising error-correcting codes from the original work: the repetition code (ECC-RREP) and the Hamming on repetition code (ECC-HAMR). The former is equivalent to the famous Random k-labelsets (RAkEL) algorithm (Tsoumakas et al. 2011a).

Fig. 8 F1 score (\(\uparrow \)) with the 95% confidence interval of CLEMS and LSDE algorithms

Fig. 9 Accuracy score (\(\uparrow \)) with the 95% confidence interval of CLEMS and LSDE algorithms

Fig. 10 Rank loss (\(\downarrow \)) with the 95% confidence interval of CLEMS and LSDE algorithms

Figure 8 shows the results of F1 score. Note that in the figure, the scales of M / K for CLEMS and the other LSDE algorithms are different: the scale for CLEMS is \(\{1.2, 1.4, 1.6, 1.8, 2.0\}\) while the scale for the other LSDE algorithms is \(\{2, 4, 6, 8, 10 \}\). Although we give the LSDE algorithms more dimensions to embed the label information, CLEMS is still superior to those LSDE algorithms in most cases. Similar results hold for Accuracy score and Rank loss (Figs. 9, 10). The results again justify the superiority of CLEMS.

4.3 Candidate set and embedded dimension

Now, we discuss the influence of the candidate set \({\mathcal {S}}\). In Sect. 3, we proposed to embed \({\mathcal {S}}_{tr}\) instead of \({\mathcal {Y}}\). To verify the goodness of this choice, we compare CLEMS with different candidate sets. We consider sets sub-sampled at different percentages from \({\mathcal {S}}_{tr}\) to evaluate the importance of the label vectors in \({\mathcal {S}}_{tr}\). Furthermore, to examine whether or not a larger candidate set leads to better performance, we also randomly sample different percentages of additional label vectors from \({\mathcal {Y}} \setminus {\mathcal {S}}_{tr}\) and merge them with \({\mathcal {S}}_{tr}\) as the candidate sets. The results on the three largest datasets are shown by Figs. 11, 12, and 13. From the figures, we observe that sub-sampling from \({\mathcal {S}}_{tr}\) generally leads to worse performance; adding more candidates from \({\mathcal {Y}} \setminus {\mathcal {S}}_{tr}\), on the other hand, does not lead to significantly better performance. The two findings suggest that using \({\mathcal {S}}_{tr}\) as the candidate set is necessary and sufficient for decent performance.

Fig. 11 F1 score (\(\uparrow \)) of CLEMS with different size of candidate sets

Fig. 12 Accuracy score (\(\uparrow \)) of CLEMS with different size of candidate sets

Fig. 13 Rank loss (\(\downarrow \)) of CLEMS with different size of candidate sets

We conduct another experiment about the candidate set. Instead of random sampling, we consider \({\mathcal {S}}_{all}\), which denotes the set of label vectors that appear in the training instances and the testing instances, to estimate the benefit of “peeping” the testing label vectors and embedding them in advance. We show the results of CLEMS with \({\mathcal {S}}_{tr}\) (CLEMS-train) and \({\mathcal {S}}_{all}\) (CLEMS-all) versus different embedded dimensions by Figs. 14, 15, and 16. From the figures, we see that the improvement of CLEMS-all over CLEMS-train is small and insignificant. The results imply again that \({\mathcal {S}}_{tr}\) readily allows nearest-neighbor decoding to make sufficiently good choices.

Fig. 14 F1 score (\(\uparrow \)) with the 95% confidence interval of CLEMS-train and CLEMS-all

Fig. 15 Accuracy score (\(\uparrow \)) with the 95% confidence interval of CLEMS-train and CLEMS-all

Fig. 16 Rank loss (\(\downarrow \)) with the 95% confidence interval of CLEMS-train and CLEMS-all

Now, we discuss the embedded dimension M. From Figs. 14, 15, and 16, CLEMS reaches better performance as M increases. For LSDR, M plays an important role since it decides how much information can be preserved in the embedded space. Nevertheless, for LSDE, the improvement becomes marginal as M increases. The results suggest that for LSDE, the influence of the additional dimensions is not large, and setting the embedded dimension \(M = K\) is sufficiently good in practice. One possible reason for the sufficiency is that the criteria of interest are generally not complicated enough and thus do not need more dimensions to preserve the cost information.

4.4 Comparing CLEMS with cost-sensitive algorithms

In this section, we compare CLEMS with two state-of-the-art cost-sensitive algorithms, probabilistic classifier chain (PCC) (Dembczynski et al. 2010, 2011) and condensed filter tree (CFT) (Li and Lin 2014). Both CLEMS and CFT can handle arbitrary criteria while PCC can handle only those criteria with efficient inference rules. In addition, we also report the results of some baseline algorithms, such as binary relevance (BR) (Tsoumakas and Katakis 2007) and classifier chain (CC) (Read et al. 2011). Similar to previous experiments, the internal predictors of all algorithms are set as random forest (Breiman 2001) implemented by scikit-learn (Pedregosa et al. 2011) with the same parameter selection process.

Fig. 17 Average running time when taking F1 score as the cost function: (a) average training time, (b) average predicting time, (c) average total running time

Running time. Figure 17 illustrates the average training, predicting, and total running time when taking F1 score as the intended criterion for the six largest datasets. The running time is normalized by the running time of BR. For training time, CFT is the slowest, because it needs to iteratively estimate the importance of each label and re-train internal predictors. CLEMS, which consumes time for MDS calculations, is intuitively slower than baseline algorithms and PCC during training, but still much faster than CFT. For prediction time, all algorithms, including PCC (using inference calculation) and CLEMS (using nearest-neighbor calculation) are similarly fast. The results suggest that for CSMLC, CLEMS is superior to CFT and competitive to PCC for the overall efficiency.

Performance. We compare the performance of CLEMS and other algorithms across different criteria. To demonstrate the full ability of CLEMS, in addition to F1 score, Accuracy score, and Rank loss, we further consider one additional criterion, Composition loss \(= 1 + 5 \times \) Hamming loss \(-\) F1 score, as used by Li and Lin (2014). We also consider three more datasets (arts, flags, and language-log) that come from other MLC works (Tsoumakas et al. 2011b; Read et al. 2016).

Table 3 Performance across different criteria (mean ±ste (rank))

The results are shown by Table 3. The Accuracy score and Composition loss entries for PCC are left blank since there are no efficient inference rules for these criteria. The first finding is that the cost-sensitive algorithms (CLEMS, PCC, and CFT) generally perform better than the non-cost-sensitive algorithms (BR and CC) across different criteria. This validates the usefulness of cost-sensitivity for MLC algorithms.

For F1 score, Accuracy score, and Composition loss, CLEMS outperforms PCC and CFT in most cases. The reason is that these criteria evaluate all the labels jointly, and CLEMS can globally locate the hidden structure of labels to facilitate more effective learning, while PCC and CFT are chain-based algorithms and only locally discover the relation between labels. For Rank loss, PCC performs the best in most cases. One possible reason is that Rank loss can be expressed as a special weighted Hamming loss that does not require globally locating the hidden structure. Thus, chaining algorithms like PCC can still perform decently. Note, however, that CLEMS is often the second best for Rank loss as well.

In summary, we identify two merits of CLEMS. The first is that while PCC performs better on Rank loss, CLEMS is competitive for general cost-sensitivity and can be coupled with arbitrary criteria. The second is that although CFT also shoots for general cost-sensitivity, CLEMS outperforms CFT in most cases for all criteria. The results make CLEMS a decent first-hand choice for general CSMLC.

Performance on other criteria. So far, we have justified the benefits of CLEMS for directly optimizing towards the criterion of interest. Next, we discuss whether CLEMS can be used to indirectly optimize other criteria of interest, particularly when the criterion cannot be meaningfully expressed as the input to CLEMS. CLEMS follows the setting in Sect. 2 and accepts example-based criteria, which work on one label vector at a time. A more general type of criteria, called label-based criteria, considers multiple or all of the label vectors at the same time. Two representative label-based criteria are Micro F1 and Macro F1 (Madjarov et al. 2012), which will be studied next. The former calculates the F1 score over all the label components of all testing examples, and the latter calculates the F1 score of each label separately and averages these scores across labels. To the best of our knowledge, there are no cost-sensitive algorithms that can handle arbitrary label-based criteria.

Another criterion that we will study is subset accuracy (Madjarov et al. 2012). It can be expressed as an example-based criterion with two possible values: whether or not the label vector is completely correct. The criterion is very strict and does not trade off between big and small prediction errors. Thus, it is generally not meaningful to feed the criterion directly to CLEMS or other CSMLC algorithms.

Next, we demonstrate how CLEMS can indirectly optimize Micro/Macro F1 score and subset accuracy when fed with other criteria as inputs. We consider 6 pre-divided datasets (emotions, scene, yeast, medical, enron, and Corel5k) as used by Madjarov et al. (2012). We consider two baseline algorithms (BR and CC), CLEMS with three different input criteria (F1 score, Accuracy score, Rank loss), and PCC with two different criteria (F1 score and Rank loss) that come with efficient inference rules. The results are shown in Table 4.

Table 4 Comparison for other criteria

From the table, we observe that when a proper criterion is selected as the input of the CSMLC algorithms (CLEMS or PCC), they readily perform better than the baseline algorithms. The results justify the value of CSMLC algorithms beyond handling example-based criteria. In particular, the cost input to CSMLC algorithms acts as a tunable parameter for optimizing the true criteria of interest. We also observe that CLEMS, especially CLEMS-Acc, performs better on the three criteria than PCC on most datasets, which again validates the usefulness of CLEMS. An interesting future direction is whether CLEMS can be further extended to achieve cost-sensitivity for label-based criteria.

5 Conclusion

We propose a novel cost-sensitive label embedding algorithm called cost-sensitive label embedding with multidimensional scaling (CLEMS). CLEMS successfully embeds the label information and cost information into an arbitrary-dimensional hidden structure by the classic multidimensional scaling approach for manifold learning, and handles asymmetric cost functions with our careful design of the mirroring trick. With the embedding, CLEMS can make cost-sensitive predictions efficiently and effectively by decoding to the nearest neighbor within a proper candidate set. The empirical results demonstrate that CLEMS is superior to state-of-the-art label embedding algorithms across different cost functions. To the best of our knowledge, CLEMS is the very first algorithm that achieves cost-sensitivity within label embedding, and opens a promising future research direction of designing cost-sensitive label embedding algorithms using manifold learning approaches.